Tuesday, January 31, 2012

Honey Helps Heal Wounds



Honey soothes a sore throat. Now research suggests that it could also help fight serious skin infections.
People have used honey's antibacterial properties for centuries. Now, scientists are discovering just how it works—and that it might be even better than antibiotics.
After surgery or a skin injury, many otherwise harmless bacteria that live on the skin can infect the wound site. One type of strep is particularly common and can lead to stubborn wounds that refuse to heal. But researchers found that honey—in particular that made from bees foraging on manuka flowers—stopped this strep in its tracks. The study is in the journal Microbiology. [Sarah Maddocks et al, Manuka Honey Inhibits the Development of Streptococcus pyogenes Biofilms and Causes Reduced Expression of Two Fibronectin Binding Proteins]
In lab tests, just a bit of the honey killed off the majority of bacterial cells—and cut down dramatically on the stubborn biofilms they formed.
It could also be used to prevent wounds from becoming infected in the first place. Hospital-acquired infections are all too common, with more and more strains developing resistance to standard antibiotic treatments. So if the honey works in clinical trials, too, this sweet news will be all the buzz.
—Katherine Harmon

What a Yawn Says about Your Relationship

Nothing says “I love you” like a yawn? Image: Alex Gumerov/iStock

You can tell a lot about a person from their body. And I don’t just mean how many hours they spend at the gym, or how easy it is for them to sweet-talk their way out of speeding tickets. For the past several decades researchers have been studying the ways in which the body reveals properties of the mind. An important subset of this work has taken this idea a step further: do the ways our bodies relate to one another tell us about the ways in which our minds relate to one another? Consider behavioral mimicry. Many studies have found that we quite readily mimic the nonverbal behavior of those with whom we interact. Furthermore, the degree to which we mimic others is predicted by both our personality traits and our relationships to those around us. In short, the more empathetic we are, the more we mimic, and the more we like the people we’re interacting with, the more we mimic. The relationship between our bodies reveals something about the relationship between our minds.

The bulk of this research has made use of clever experimental manipulations involving research assistant actors. The actor crosses his legs and then waits to see if the participant crosses his legs, too. If so, we’ve found mimicry, and can now compare the presence of mimicry with self-reports of, say, liking and interpersonal closeness to see if there is a relationship. More naturalistic evidence for this phenomenon has been much harder to come by. That is, to what extent do we see this kind of nonverbal back and forth in the real world and to what extent does it reveal the same properties of minds that seem to hold true in the lab?

A recent study conducted by Ivan Norscia and Elisabetta Palagi and published in the journal PLoS ONE has found such evidence in the unlikeliest of places: yawns. More specifically, yawn contagion, or that annoyingly inevitable phenomenon that follows seeing, hearing (and even reading) about another yawn. You’ve certainly experienced this, but perhaps you have not considered what it might reveal to others (beyond a lack of sleep or your interest level in their conversation). Past work has demonstrated that, similar to behavioral mimicry, contagious yawners tend to be higher in dispositional empathy. That is, they tend to be the type of people who are better at, and more interested in, understanding other people’s internal states. Not only that, but contagious yawning seems to emerge in children at the same time that they develop the cognitive capacities involved in empathizing with others. And children who lack this capacity, such as children with autism, also show deficits in their ability to catch others’ yawns. In short, the link between yawning and empathizing appears strong.

Given that regions of the brain involved in empathizing with others can be influenced by the degree of psychological closeness to those others, Norscia and Palagi wanted to know whether contagious yawning might also reveal information about how we relate to those around us. Specifically, are we more likely to catch the yawns of people to whom we are emotionally closer? Can we deduce something about the quality of the relationships between individuals based solely on their pattern of yawning? Yawning might tell us the degree to which we empathize with, and by extension care about, the people around us.



To test this hypothesis the researchers observed the yawns of 109 adults in their natural environments over the course of a year. When a subject yawned the researchers recorded the time of the yawn, the identity of the yawner, the identities of all the people who could see or hear the yawner (strangers, acquaintances, friends, or kin), the frequency of yawns by these people within 3 minutes after the original yawn, and the time elapsed between these yawns and the original yawn. In order to rule out alternative explanations for any contagion the researchers also recorded the position of the observers relative to the yawner (whether they could see or only hear the yawn), the individuals’ gender, the social context, and their nationality.
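To make that observation protocol concrete, here is a minimal sketch in Python of how each yawn event might be recorded and how responses within the three-minute window could be tallied. The field names and structure are my own illustration, not the authors' actual data or analysis code.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Relationship categories used in the study, ordered by emotional closeness.
BONDS = ("stranger", "acquaintance", "friend", "kin")

@dataclass
class YawnEvent:
    """One observed yawn (field names are illustrative, not from the paper)."""
    time: datetime
    yawner: str
    observers: dict   # observer name -> relationship to the yawner, e.g. "kin"

def contagious_responses(trigger, later_yawns, window=timedelta(minutes=3)):
    """Yawns by people who saw or heard `trigger` within the 3-minute window,
    returned as (observer, bond, latency in seconds) tuples."""
    responses = []
    for y in later_yawns:
        delay = y.time - trigger.time
        if timedelta(0) < delay <= window and y.yawner in trigger.observers:
            responses.append((y.yawner, trigger.observers[y.yawner],
                              delay.total_seconds()))
    return responses
```

Aggregating the occurrence, frequency and latency of these responses by relationship category is then what allows strangers, acquaintances, friends and kin to be compared.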

Sure enough, yawn contagion was predicted by emotional closeness. Family members showed the greatest contagion, in terms of both occurrence of yawning and frequency of yawns, and strangers and acquaintances showed a longer delay in the yawn response compared to friends and kin. No other variable predicted yawn contagion. It seems that this reflexive, subtle cue exposes deep and meaningful information about our relationship to others. Many studies have shown that we preferentially direct our nobler tendencies towards those with whom we empathize and away from those with whom we do not. The ability and motivation to share other people’s experiences and internal states is crucial for cooperation and altruism and seems to be the defining deficiency when we dehumanize and behave aggressively. Remember this the next time you let out a big one at lunch and your friend continues to calmly chew his sandwich.

Sunday, January 29, 2012

Forget global warming - it's Cycle 25 we need to worry about


Forget global warming - it's Cycle 25 we need to worry about (and if NASA scientists are right the Thames will be freezing over again)

  • Met Office releases new figures which show no warming in 15 years

The supposed ‘consensus’ on man-made global warming is facing an inconvenient challenge after the release of new temperature data showing the planet has not warmed for the past 15 years.
The figures suggest that we could even be heading for a mini ice age to rival the 70-year temperature drop that saw frost fairs held on the Thames in the 17th Century.
Based on readings from more than 30,000 measuring stations, the data was issued last week without fanfare by the Met Office and the University of East Anglia Climatic Research Unit. It confirms that the rising trend in world temperatures ended in 1997.
A painting, dated 1684, by Abraham Hondius depicts one of many frost fairs on the River Thames during the mini ice age
Meanwhile, leading climate scientists yesterday told The Mail on Sunday that, after emitting unusually high levels of energy throughout the 20th Century, the sun is now heading towards a ‘grand minimum’ in its output, threatening cold summers, bitter winters and a shortening of the season available for growing food.
Solar output goes through 11-year cycles, with high numbers of sunspots seen at their peak.
We are now at what should be the peak of what scientists call ‘Cycle 24’ – which is why last week’s solar storm resulted in sightings of the aurora borealis further south than usual. But sunspot numbers are running at less than half those seen during cycle peaks in the 20th Century.
Analysis by experts at NASA and the University of Arizona – derived from magnetic-field measurements 120,000 miles beneath the sun’s surface – suggests that Cycle 25, whose peak is due in 2022, will be a great deal weaker still.
 
According to a paper issued last week by the Met Office, there is a 92 per cent chance that both Cycle 25 and those taking place in the following decades will be as weak as, or weaker than, the ‘Dalton minimum’ of 1790 to 1830. In this period, named after the meteorologist John Dalton, average temperatures in parts of Europe fell by 2C.
However, it is also possible that the new solar energy slump could be as deep as the ‘Maunder minimum’ (after astronomer Edward Maunder), between 1645 and 1715 in the coldest part of the ‘Little Ice Age’ when, as well as the Thames frost fairs, the canals of Holland froze solid.
The world average temperature from 1997 to 2012
Yet, in its paper, the Met Office claimed that the consequences now would be negligible – because the impact of the sun on climate is far less than man-made carbon dioxide. Although the sun’s output is likely to decrease until 2100, ‘This would only cause a reduction in global temperatures of 0.08C.’ Peter Stott, one of the authors, said: ‘Our findings suggest a reduction of solar activity to levels not seen in hundreds of years would be insufficient to offset the dominant influence of greenhouse gases.’
These findings are fiercely disputed by other solar experts.
‘World temperatures may end up a lot cooler than now for 50 years or more,’ said Henrik Svensmark, director of the Center for Sun-Climate Research at Denmark’s National Space Institute. ‘It will take a long battle to convince some climate scientists that the sun is important. It may well be that the sun is going to demonstrate this on its own, without the need for their help.’
He pointed out that, in claiming the effect of the solar minimum would be small, the Met Office was relying on the same computer models that are being undermined by the current pause in global warming.
CO2 levels have continued to rise without interruption and, in 2007, the Met Office claimed that global warming was about to ‘come roaring back’. It said that between 2004 and 2014 there would be an overall increase of 0.3C. In 2009, it predicted that at least three of the years 2009 to 2014 would break the previous temperature record set in 1998.
World solar activity cycles from 1749 to 2040
So far there is no sign of any of this happening. But yesterday a Met Office spokesman insisted its models were still valid.
‘The ten-year projection remains groundbreaking science. The period for the original projection is not over yet,’ he said.
Dr Nicola Scafetta, of Duke University in North Carolina, is the author of several papers that argue the Met Office climate models show there should have been ‘steady warming from 2000 until now’.
‘If temperatures continue to stay flat or start to cool again, the divergence between the models and recorded data will eventually become so great that the whole scientific community will question the current theories,’ he said.
He believes that as the Met Office model attaches much greater significance to CO2 than to the sun, it was bound to conclude that there would not be cooling. ‘The real issue is whether the model itself is accurate,’ Dr Scafetta said. Meanwhile, one of America’s most eminent climate experts, Professor Judith Curry of the Georgia Institute of Technology, said she found the Met Office’s confident prediction of a ‘negligible’ impact difficult to understand.
‘The responsible thing to do would be to accept the fact that the models may have severe shortcomings when it comes to the influence of the sun,’ said Professor Curry. As for the warming pause, she said that many scientists ‘are not surprised’.
Four hundred years of sunspot observations
She argued it is becoming evident that factors other than CO2 play an important role in rising or falling warmth, such as the 60-year water temperature cycles in the Pacific and Atlantic oceans.
‘They have insufficiently been appreciated in terms of global climate,’ said Prof Curry. When both oceans were cold in the past, such as from 1940 to 1970, the climate cooled. The Pacific cycle ‘flipped’ back from warm to cold mode in 2008 and the Atlantic is also thought likely to flip in the next few years.
Pal Brekke, senior adviser at the Norwegian Space Centre, said some scientists found the importance of water cycles difficult to accept, because doing so means admitting that the oceans – not CO2 – caused much of the global warming between 1970 and 1997.
The same goes for the impact of the sun – which was highly active for much of the 20th Century.
‘Nature is about to carry out a very interesting experiment,’ he said. ‘Ten or 15 years from now, we will be able to determine much better whether the warming of the late 20th Century really was caused by man-made CO2, or by natural variability.’
Meanwhile, since the end of last year, world temperatures have fallen by more than half a degree, as the cold ‘La Nina’ effect has re-emerged in the South Pacific.
‘We’re now well into the second decade of the pause,’ said Benny Peiser, director of the Global Warming Policy Foundation. ‘If we don’t see convincing evidence of global warming by 2015, it will start to become clear whether the models are bunk. And, if they are, the implications for some scientists could be very serious.’




Read more: http://www.dailymail.co.uk/sciencetech/article-2093264/Forget-global-warming--Cycle-25-need-worry-NASA-scientists-right-Thames-freezing-again.html#ixzz1kvLxO5Iv

Neanderthals had differently organised brains


Homo neanderthalensis is not a species to be dismissed lightly. They weren’t especially dumb, nor especially weak. Indeed, they actually had larger brains and denser muscles than we did.
On top of that, their technology was so well adapted to their environment that they were able to flourish without drastically altering it for hundreds of thousands of years. It was just that good.
So it would seem we have no clear advantage over them, which makes the fact we survived but they did not especially puzzling.
Recent research argues this might have been because their brain, despite being bigger, ultimately had a more primitive shape. Our frontal and temporal lobes are a different shape to theirs and our olfactory bulb is larger. Could our brain shape have given us an advantage?
Now, new information presented at the HOBET conference I recently attended lends further credibility to that hypothesis.
Admitting I went to an EvoAnth conference sounded a lot cooler in my head...
Earlier research has identified that there is a link between eye-socket size and eyeball size (no duh) and that there is in turn a link between eyeball size and visual cortex size.
Whilst this might seem like a bit of a “captain obvious” moment, this work also identified something rather interesting: eye-socket size is correlated with latitude.
The further away from the equator, the larger the eye-socket of an individual was. This likely has something to do with the fact that the amount of light from the sun gets lower the further north/south one travels, thus an increase in eyeball size would help maintain good vision in the darkening environment.
Positing light levels as the cause of this variation in eyeball size instead of say, neutral mutations, gains support from the fact that visual acuity remains the same across latitudes, instead of decreasing with the lower light levels as one would expect if these enlarged eyeballs were simply neutral variants.
In other words, they caught evolution in action.
I want a picture of Darwin with a sly smile for these occasions
The presentation at HOBET built upon this information by including the eye sockets of Homo neanderthalensis. As a species that lived in the north for a longer period than Homo sapiens, neanderthals would be expected to have had larger eyeballs than us. This means that they should have also had a larger visual cortex than us.
So using the statistics the earlier research had gathered, they estimated the size of the neanderthal visual cortex and subtracted it from the total brain size, leaving them with the size of the bits of the brain relevant to intelligence.
Surprisingly, this figure was smaller than that of members of Homo sapiens from the same period (after they too had been corrected for visual cortex size), suggesting that the neanderthals’ bigger brain gave them no intellectual advantage over us.
From the original research, establishing a link between latitude and eye-socket size (orbital volume)
Indeed, their reduced brain size might well have put them at a disadvantage. When the group size of Homo neanderthalensis was calculated from this new figure and the correlation between group size and brain size established by the social brain hypothesis, it was found that they would’ve lived in smaller groups than Homo sapiens.
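To make the arithmetic of that correction concrete, here is a minimal sketch, assuming made-up linear regression coefficients rather than the published ones: estimate visual cortex volume from orbital volume, subtract it from total brain volume, then feed the remainder into a social-brain-style regression for group size.

```python
# Illustrative only: the regression coefficients below are made-up placeholders,
# not the values from Pearce & Dunbar or the conference presentation.

VC_SLOPE, VC_INTERCEPT = 5.0, 50.0    # visual cortex (cc) predicted from orbital volume (cc)
GS_SLOPE, GS_INTERCEPT = 0.08, 10.0   # group size predicted from "non-visual" brain volume (cc)

def corrected_brain_cc(endocranial_cc, orbital_cc):
    """Subtract an orbit-predicted visual cortex estimate from total brain volume."""
    visual_cortex_cc = VC_SLOPE * orbital_cc + VC_INTERCEPT
    return endocranial_cc - visual_cortex_cc

def predicted_group_size(corrected_cc):
    """Social-brain-style linear prediction of group size from the corrected volume."""
    return GS_SLOPE * corrected_cc + GS_INTERCEPT

# A bigger brain paired with bigger orbits can yield a smaller corrected volume,
# and hence a smaller predicted group size, than a smaller-brained Homo sapiens.
neanderthal = predicted_group_size(corrected_brain_cc(1500, 62))
sapiens = predicted_group_size(corrected_brain_cc(1450, 48))
print(round(neanderthal), round(sapiens))
```

The point of the sketch is only the shape of the calculation: once the visual cortex is stripped out, a nominally larger brain can end up with less "social" volume to work with.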
Further, when compared to the correlation between “levels of intentionality” and brain size it was found that neanderthals would’ve only been able to reach 4th level intentionality!
Now, despite it being followed by an exclamation point, you probably don’t understand the significance of levels of intentionality, so let me explain. Each level of intentionality is understanding that an additional person thinks something.
When writing Othello, Shakespeare had to understand [1] the audience would think [2] that Iago intended [3] that Othello would believe [4] that Desdemona wanted [5] to love another for his plot to work.
So Homo neanderthalensis could’ve only reached part 4 of Othello. Ultimately what this means is that they would’ve had less complex social groups, a further disadvantage on top of their smaller group size.
Also, there could’ve been no neanderthal Shakespeare.
Which is a shame because I think he would look quite fetching in mammoth skin
However, the data on which the intentionality/brain size correlation is based are rather poor, having been gathered from only three animal species. Further, neanderthals seem to be very close to the threshold for 5th level intentionality, so refining these statistics with more data may well push them over that limit.
Also – as I said earlier – humans had larger olfactory bulbs. Perhaps if one were to also control for this (and any other "irrelevant" parts of the brain) our brain sizes might be brought back into alignment after all.
On top of that, the visual cortex isn’t completely divorced from intelligence; it appears to be associated with various mathematical abilities. So concluding that neanderthals were intellectually below us by removing the visual cortex might well be an incorrect conclusion.
That said, the correlation between brain size (without visual cortex) and group size is well established so concluding they did live in smaller groups would likely be correct.
Another piece in the neanderthal puzzle has been discovered.
Pearce, E., & Dunbar, R. (2011). Latitudinal variation in light levels drives human visual system size. Biology Letters, 8(1), 90–93. DOI: 10.1098/rsbl.2011.0570
The neanderthal data has yet to be published and will be included here when it is/when I find it

The unity of memory is an illusion


For centuries, neuroscience attempted to neatly assign labels to the various parts of the brain: this is the area for language, this one for morality, this for tool use, color detection, face recognition, and so on. This search for an orderly brain map started off as a viable endeavor, but turned out to be misguided.
The deep and beautiful trick of the brain is more interesting: it possesses multiple, overlapping ways of dealing with the world. It is a machine built of conflicting parts. It is a representative democracy that functions by competition among parties who all believe they know the right way to solve the problem.
As a result, we can get mad at ourselves, argue with ourselves, curse at ourselves and contract with ourselves. We can feel conflicted. These sorts of neural battles lie behind marital infidelity, relapses into addiction, cheating on diets, breaking of New Year’s resolutions—all situations in which some parts of a person want one thing and other parts another.
These are things which modern machines simply do not do. Your car cannot be conflicted about which way to turn: it has one steering wheel commanded by only one driver, and it follows directions without complaint. Brains, on the other hand, can be of two minds, and often many more. We don’t know whether to turn toward the cake or away from it, because there are several sets of hands on the steering wheel of behavior.
Take memory. Under normal circumstances, memories of daily events are consolidated by an area of the brain called the hippocampus. But in frightening situations—such as a car accident or a robbery—another area, the amygdala, also lays down memories along an independent, secondary memory track. Amygdala memories have a different quality to them: they are difficult to erase and they can return in “flash-bulb” fashion—a common description of rape victims and war veterans. In other words, there is more than one way to lay down memory. We’re not talking about memories of different events, but different memories of the same event. The unfolding story appears to be that there may be even more than two factions involved, all writing down information and later competing to tell the story. The unity of memory is an illusion.
And consider the different systems involved in decision making: some are fast, automatic and below the surface of conscious awareness; others are slow, cognitive, and conscious. And there’s no reason to assume there are only two systems; there may well be a spectrum. Some networks in the brain are implicated in long-term decisions, others in short-term impulses (and there may be a fleet of medium-term biases as well).
Attention, too, has recently come to be understood as the end result of multiple, competing networks, some for focused, dedicated attention to a specific task, and others for monitoring broadly (vigilance). They are always locked in competition to steer the actions of the organism.
Even basic sensory functions—like the detection of motion—appear now to have been reinvented multiple times by evolution. This provides the perfect substrate for a neural democracy.
On a larger anatomical scale, the two hemispheres of the brain, left and right, can be understood as overlapping systems that compete. We know this from patients whose hemispheres are disconnected: they essentially function with two independent brains. For example, put a pencil in each hand, and they can simultaneously draw incompatible figures such as a circle and a triangle. The two hemispheres function differently in the domains of language, abstract thinking, story construction, inference, memory, gambling strategies, and so on. The two halves constitute a team of rivals: agents with the same goals but slightly different ways of going about it.
To my mind, this elegant solution to the mysteries of the brain should change the goal for aspiring neuroscientists. Instead of spending years advocating for one’s favorite solution, the mission should evolve into elucidating the different overlapping solutions: how they compete, how the union is held together, and what happens when things fall apart.
Part of the importance of discovering elegant solutions is capitalizing on them. The neural democracy model may be just the thing to dislodge artificial intelligence. We human programmers still approach a problem by assuming there’s a best way to solve it, or that there’s a way it should be solved. But evolution does not solve a problem and then check it off the list. Instead, it ceaselessly reinvents programs, each with overlapping and competing approaches. The lesson is to abandon the question “what’s the most clever way to solve that problem?” in favor of “are there multiple, overlapping ways to solve that problem?” This will be the starting point in ushering in a fruitful new age of elegantly inelegant computational devices.
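As a toy illustration of that "team of rivals" idea (my own sketch, not code from the essay), here is how an action might be chosen by several overlapping, competing strategies casting votes rather than by a single canonical solver.

```python
import random
from collections import Counter

# Three overlapping "rival" strategies answering the same toy question:
# should the agent approach or avoid a stimulus of a given intensity (0 to 1)?

def fast_reflex(intensity):
    # crude, automatic rule: flee anything sufficiently intense
    return "avoid" if intensity > 0.7 else "approach"

def slow_deliberation(intensity):
    # slower, noisier cost-benefit style guess
    return "approach" if intensity + random.gauss(0, 0.1) < 0.6 else "avoid"

def habit(intensity, history=("approach", "approach", "avoid")):
    # ignore the stimulus and repeat whatever has been done most often before
    return Counter(history).most_common(1)[0][0]

def decide(intensity):
    """Let the rivals vote; the majority steers the organism."""
    votes = [strategy(intensity) for strategy in (fast_reflex, slow_deliberation, habit)]
    winner, _ = Counter(votes).most_common(1)[0]
    return winner, votes

action, votes = decide(0.65)
print(action, votes)  # the winning action, plus the possibly conflicting votes behind it
```

None of the individual strategies is "the" solution; the behavior emerges from their competition, which is the design lesson the paragraph above draws for computing.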

Friday, January 27, 2012

Why Is Type 1 Diabetes Rising Worldwide?


We’ve gotten sadly accustomed by now to warnings about obesity and its effect on health: joint damage, heart disease, stroke, diabetes and its complications such as blindness and amputation. We almost take for granted that as obesity increases worldwide, diabetes will also increase, and it is. That is, type 2 diabetes — the kind that is linked to obesity and used to be called adult-onset diabetes — is rising as obesity does.
But here’s a puzzle: Type 1 diabetes — the autoimmune disease that begins in childhood and used to be called juvenile-onset diabetes — is rising too, around the globe, at 3 percent to 5 percent per year. And at this point, no one can quite say why.
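For a sense of what that rate means (a simple compounding calculation of my own, not a figure from the column), an incidence growing at 3 to 5 percent a year doubles roughly every 14 to 23 years:

```python
import math

def doubling_time(annual_growth_rate):
    """Years for incidence to double at a constant annual percentage increase."""
    return math.log(2) / math.log(1 + annual_growth_rate)

for rate in (0.03, 0.05):
    print(f"{rate:.0%} per year -> doubles in about {doubling_time(rate):.0f} years")
# 3% per year -> doubles in about 23 years
# 5% per year -> doubles in about 14 years
```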
I have a column in the February Scientific American, on newsstands now and live on the web, exploring this conundrum. There is a raft of researchers exploring the issue, but so far there is only one thing they can say for sure: The increase, which began in the 1950s and accelerated in about the 1980s, is happening too fast to be due solely to genetic change. Something in the environment is driving the increase. But what?
The challenge for explaining the rising trend in type 1 diabetes is that if the increases are occurring worldwide, the causes must also be. So investigators have had to look for influences that stretch globally and consider the possibility that different factors may be more important in some regions than in others.
The list of possible culprits is long. Researchers have, for example, suggested that gluten, the protein in wheat, may play a role because type 1 patients seem to be at higher risk for celiac disease and the amount of gluten most people consume (in highly processed foods) has grown over the decades. Scientists have also inquired into how soon infants are fed root vegetables. Stored tubers can be contaminated with microscopic fungi that seem to promote the development of diabetes in mice.
None of those lines of research, though, have returned results that are solid enough to motivate other scientists to stake their careers on studying them. So far, in fact, the search for a culprit resembles the next-to-last scene in an Agatha Christie mystery — the one in which the detective explains which of the many suspects could not possibly have committed the crime.
One of the best-elaborated hypotheses suggests that lack of exposure to infections in childhood keeps the various components of the immune system from learning how to hold themselves in balance. If this sounds familiar, it’s because it’s a version of the “hygiene hypothesis” (past posts here, here and here), which says that a too-clean childhood can lead to allergies later in life.
The diabetes version of this hypothesis explores whether conditions that are a proxy for exposure to infections — not having older siblings in the house, not attending day care, being born by Caesarean — can have an effect on the occurrence of diabetes. No clear culprit has been found yet.
Some researchers say it is possible that obesity may play a role. In type 2 diabetes, tissues in the body that receive the hormone insulin, which regulates blood sugar, become insensitive to it. In type 1, the body destroys the insulin-producing cells. But an “overload” hypothesis is now suggesting that if a child is obese to begin with, that could prime the insulin-producing cells for failure, with the autoimmune attack pushing them over the edge.
If obesity is an explanation, it’s not a comforting one. As the CDC’s National Center for Health Statistics noted today, a whopping percentage of United States adults — 36 percent — are obese. And the trend is not reversing. By 2048, according to Johns Hopkins researchers whose work is discussed in my story, every adult in America will be at least overweight if the current trend continues.
That’s a lot of potential diabetes cases: a lot of glucose monitors, syringe jabs and inevitable blood sugar swings, if you care for it well, and a lot of kidney disease, heart disease, amputations and blindness if you don’t. (Not to mention effects like this image of insulin lipohypertrophy published in the New England Journal of Medicine this week, from years of administering insulin injections.) Let’s hope we find, if not a cure, at least a cause for rising type 1, before the trend gets out of control.
For more on this, here’s an interview I did this week with Virginia Prescott at New Hampshire Public Radio.