Sunday, October 31, 2010

Protein 'perforin' may be cancer breakthrough

November 1, 2010 - 3:09AM
AAP
SYDNEY, Nov 1 AAP - A protein called perforin that kills rogue cells in the human body may be a breakthrough in the fight against cancer and diabetes, researchers say.
A team of Melbourne and London researchers has shown that perforin punches holes in, and kills, rogue cells in our bodies.
Their discovery of the mechanism is published on Monday in the science journal Nature.
"Perforin is our body's weapon of cleansing and death," Monash University professor James Whisstock said.
"It breaks into cells that have been hijacked by viruses or turned into cancer cells and allows toxic enzymes in, to destroy the cell from within.
"Without it our immune system can't destroy these cells. Now we know how it works, we can start to fine tune it to fight cancer, malaria and diabetes."
The first observations that the human immune system could punch holes in target cells were made by the Nobel laureate Jules Bordet more than 110 years ago.
The researchers from Monash University and the Peter MacCallum Cancer Centre in Melbourne, and Birkbeck College in London, collaborated on the ten-year study to unravel the molecular structure and function of perforin, the protein responsible.
The structure was revealed with the help of the Australian Synchrotron, and with powerful electron microscopes at Birkbeck.
Combining the detailed structure of a single perforin molecule with the electron microscopy reconstruction of a ring of perforins forming a hole in a model membrane reveals how this protein assembles to punch holes in cell membranes.
The new research has confirmed that the important parts of the perforin molecule are quite similar to those in toxins deployed by bacteria such as anthrax, listeria and streptococcus.
"The molecular structure has survived for close to two billion years, we think," head of cancer immunology at the Peter MacCallum Cancer Centre, Professor Joe Trapani, said.
Perforin is also the culprit when the wrong cells are marked for elimination, either in autoimmune disease conditions, such as early onset diabetes, or in tissue rejection following bone marrow transplantation.
So the researchers are now investigating ways to boost perforin for more effective cancer protection, and therapy for acute diseases such as cerebral malaria.
With the help of a $1 million grant from the Wellcome Trust they are working on potential inhibitors to suppress perforin and counter tissue rejection.

The Most Amazing Halloween Costume Ever


This Halloween I hope to see someone wander through my neighbourhood in this particular costume. I don’t think there could possibly be anything that’d make those who dutifully dispense candy smile more. [Flickr via Reddit]

Saturday, October 30, 2010

BG to Invest $15 Billion in Australia LNG Project

October 31, 2010, 12:40 AM EDT

(Updates with comments by BG’s Australia managing director from third paragraph.) By Nichola Saminather and James Paton

Oct. 31 (Bloomberg) -- BG Group Plc, the U.K.’s third-largest oil and gas producer, will invest $15 billion in the Curtis liquefied natural gas project in Australia over the next four years, its single biggest investment.
The project in Queensland state, which is scheduled to start fuel shipments in 2014, is among more than a dozen proposed LNG developments in Australia seeking to tap Asian demand for cleaner-burning fuel to curb emissions. BG and rival Santos Ltd. won Australian government approval for their Queensland LNG projects on Oct. 22 and must comply with more than 300 conditions to protect the environment and underground water supplies.
The Queensland Curtis LNG Project involves building an LNG plant on Curtis Island, some 450 kilometers (282 miles) north of Brisbane, a 540-kilometer underground pipeline network, and expanding production in gas fields in the state’s Surat Basin, Catherine Tanna, managing director of the company’s Australian unit, said on a conference call today.
BG has reached agreements to sell LNG to customers in China, Japan, Singapore and Chile and will also provide gas to markets in eastern Australia, the Berkshire, U.K.-based company said in a statement today. The project, which will have an operating life of at least 20 years, will likely create 5,000 construction jobs over the next four years, according to the statement.
First Investment Commitment
The BG development is the first coal-seam gas-to-LNG project in Queensland to reach a final investment decision, while Adelaide-based Santos has said it plans to commit to the first phase of its Gladstone LNG venture by the end of the year. That project, with two production units, may cost A$20 billion ($19.7 billion), estimates James Bullen, a Sydney-based analyst at Bank of America Merrill Lynch.
The Curtis plant will be the most efficient in Australia and the second-most efficient in the world in terms of greenhouse gas emissions, generating about 35 percent less than other fossil fuels, Tanna said.
--Editors: Jonathan Annells, Jim McDonald
To contact the reporters on this story: Nichola Saminather in Sydney at nsaminather1@bloomberg.net; James Paton in Sydney at jpaton4@bloomberg.net.
To contact the editor responsible for this story: Paul Tighe at ptighe@bloomberg.net

Friday, October 29, 2010

NASA’s ‘Solar Shield’ Will Protect Power Grids From Space Weather



They’re out there, biding their time. Waiting patiently. And when you least expect it, they’re going to plunge you and everyone you care about into total darkness.

Luckily, we can see solar storms coming from about 150 million kilometres away, and NASA is now in the process of creating a “Solar Shield” that should be able to minimise the damage to power grids from electromagnetic disturbances in the atmosphere and ground that are triggered by foul weather on the sun.

The threat to power grids during bad solar weather is known as GIC, or geomagnetically induced current. When the sun ejects a huge coronal mass in our direction, the impact with our atmosphere shakes up Earth’s magnetic field. That generates electric currents from the upper atmosphere all the way down to the ground. These can cripple power grids, overloading circuits and in some cases melting heavy-duty transformers.

Those transformers are essential to keeping the power flowing. They’re also expensive, irreparable in the field, and can take a year to replace. Meaning that a massive coronal ejection could knock down entire power grids for long stretches of time, grinding economies to a halt and making life more than a little inconvenient.

But NASA has a plan to battle these blackouts with blackouts. If transformers are offline at the time the storm hits, they will not be affected, so the trick is to figure out where and when a storm is going to hit before it reaches the atmosphere. To do that, NASA’s SOHO and two STEREO spacecraft identify a coronal mass ejection (CME) heading toward Earth and create a 3D image of it, allowing researchers to characterise its strength and determine when it will hit.

Depending on the intensity of the CME, the trip from sun to Earth can take 24-48 hours. NASA would track the CME across the sky, with the pivotal moment coming about 30 minutes prior to impact when the storm comes screaming past the ACE spacecraft, something like 1.5 million kilometres from Earth. Sensors aboard ACE gather more data on the storm’s speed, magnetic field and density that is fed into computer models at NASA’s Goddard Space Flight Center.
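A rough back-of-the-envelope check on those figures (the speeds below are illustrative assumptions, not values from NASA’s models): the warning time is simply ACE’s distance upstream divided by the storm’s speed.

```python
# Rough estimate of CME travel and warning times (illustrative assumptions only).

SUN_EARTH_KM = 150e6      # approximate Sun-Earth distance quoted above
ACE_DISTANCE_KM = 1.5e6   # ACE sits roughly 1.5 million km upstream of Earth

def travel_hours(speed_km_s, distance_km):
    """Time for a CME moving at a constant speed to cover a distance."""
    return distance_km / speed_km_s / 3600.0

# A fast CME (~1,700 km/s) vs. a slower one (~870 km/s)
for speed in (1700, 870):
    sun_to_earth = travel_hours(speed, SUN_EARTH_KM)
    ace_to_earth = travel_hours(speed, ACE_DISTANCE_KM)
    print(f"{speed} km/s: ~{sun_to_earth:.0f} h Sun-to-Earth, "
          f"~{ace_to_earth * 60:.0f} min warning after passing ACE")
```

A storm covering 150 million kilometres in 24-48 hours is moving at roughly 870-1,700 km/s, which is why the final 1.5 million kilometres past ACE buys only about 15-30 minutes of warning.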

With less than 30 minutes until impact, NASA’s models calculate the places most likely to be impacted with dangerous GIC and utilities are notified so they can pull their grids offline. This will cause a blackout in the region, but only a temporary one. When the storm ends, the grids come back online and life goes on.

Solar Shield is experimental at this point, and it’s hard to know how successful it will be, mainly because it hasn’t had the trial by fire it needs to see if it works. Solar weather has been fairly quiet this year, so the team hasn’t been able to gather the data it needs. But considering we’re going into a period of increased solar activity (solar weather ebbs and flows cyclically) that will peak in 2013, Solar Shield will likely get its chance soon enough. [NASA Science News]

China's Aggressive Push Toward Clean Energy Paying Off

A new report out of the Worldwatch Institute details the country's ambitions and growth.

China is on track to be the world leader in clean energy, as evidenced by both policy and technological investments, a report out of the Worldwatch Institute indicates. The new report details China's progress and plans for the next 10 years, all leading to outpacing most of the world in clean energy resource development.


“Governments and industries around the world are now struggling to keep pace with China,” said Worldwatch President Christopher Flavin. “China is succeeding precisely where the United States is failing--in implementing the ambitious policies and making the sustained investment that is needed to spur growth in clean energy. If China keeps on its current pace, it will be the undisputed global leader in clean energy within the next two years.”


Some of the highlights of the report include:


  • In 2009, China surpassed the United States to become the world’s largest market for wind power, housing nearly one-third of the total installed capacity.
  • China’s newly added wind power capacity has doubled every year for the past four years. The country added 13.8 gigawatts (GW) of new capacity in 2009.
  • In 2009, China’s solar photovoltaic (PV) companies held 40 percent of the global market, with most production being exported to Europe. More than 20 Chinese solar PV companies have successfully engaged in initial public offerings (IPOs), and five of these rank in the world’s top 10 in solar PV production.
  • China’s installed solar water heating capacity alone accounts for 80 percent of global installations.

The report is meant to educate those outside of China about the opportunities and challenges the country faces with regard to clean energy.
[Image: flickr user Patrick Finnegan]





"

Thursday, October 28, 2010

Solar efficiency, effectiveness - Finding the right tools for the right needs in solar components.

By engineering the tilt of the panel to match the latitude of the building location, systems can generate an optimal amount of energy throughout the year.
As solar technologies continue to evolve, new techniques and methodologies for implementing solar installations are making the process virtually seamless. As a result, more and more companies around the world are looking to this renewable energy source as a low-risk, high-return investment. As that demand grows, so does the demand for greater efficiency and effectiveness of the system as a whole. The answer to these demands is higher-efficiency, lower-cost solar technologies and better strategies for space planning, building integration, and component utilization.

When people think about solar, they think about solar panels. Along with the inverter, the solar panels are one of the most critical elements of any solar array – and certainly the most popular.

At this point, new developments in photovoltaic (PV) technology happen almost every week. Regardless of a building’s unique needs, there is a PV technology that can support the installation and deliver the required energy-savings. That said, one must gauge what the needs are in order to determine where to start so that the installation will be as efficient as possible.

Before a company can even begin to look at installing solar panels, it has to look at the solar radiation data for the location. The National Renewable Energy Laboratory (NREL) has a great set of tools and models freely available to use as a reference when comparing a specific location’s solar return against other renewable energy sources. In the end, determining whether solar is a cost-effective method for generating energy will come down to economics.

While solar can be installed at any location, it is simply not a cost-effective energy solution for some areas or situations. Once it is known how much solar energy can potentially be generated, what the current average cost of electricity is, and what incentives (government or private) can be obtained for the project, those numbers can be compared to determine whether a solar installation is the right fit for the building.
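As a sketch of what that comparison looks like in practice, here is a minimal back-of-the-envelope screen. Every number in it (system size, yield, electricity price, installed cost, incentive level) is a placeholder assumption for illustration, not data from NREL or any particular project.

```python
# Minimal sketch of the economic screen described above; every figure is a
# placeholder assumption, not data from any specific project.

system_size_kw = 50.0          # proposed array size (kW DC)
specific_yield = 1500          # kWh generated per kW per year (from NREL-style data)
electricity_price = 0.11       # $/kWh currently paid by the building
installed_cost_per_w = 4.50    # $/W installed (2010-era commercial pricing, assumed)
incentives = 0.30              # fraction of cost covered by grants/credits (assumed)

annual_kwh = system_size_kw * specific_yield
annual_savings = annual_kwh * electricity_price
net_cost = system_size_kw * 1000 * installed_cost_per_w * (1 - incentives)
simple_payback_years = net_cost / annual_savings

print(f"Annual generation: {annual_kwh:,.0f} kWh")
print(f"Annual savings:    ${annual_savings:,.0f}")
print(f"Net cost:          ${net_cost:,.0f}")
print(f"Simple payback:    {simple_payback_years:.1f} years")
```

A simple payback figure like this ignores panel degradation, electricity price escalation and financing costs, but it is usually enough to tell whether a site deserves a detailed design study.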

To determine the incentives, it might be beneficial to consult an industry expert. Some renewable energy solutions providers will provide information on the local, state, federal, and utility incentives the installation may qualify for. Having the system designer help with the incentives is generally a more fruitful strategy because incentives are dynamic and new ones are being offered all the time.

Structure, Environment
Once it has been determined that a solar installation will be a valuable asset to the building, one must examine the layout and structural capabilities of the building. This will dictate the type of solar panels used and how they will be mounted. For example, on a typical rooftop installation, real estate is at a premium and traditional mono- or multi-crystalline panels will need to be selected, as they offer the highest efficiency and energy production per square foot.
However, if it is a large roof, with a lot of unobstructed real estate, thin film panels or membranes, which are cheaper, but less efficient, may be a viable alternative.

The permanency of the array is a key determining factor. For example, one of the more popular installation options is a ballasted mounting system, which is a non-penetrating mounting system. Instead of bolting the system down into the roof structure, the installer simply places the system on top of the roof, using ballasts to hold the installation in place. This installation method is becoming increasingly attractive for building managers and owners due to the fact that it does not penetrate the roof membrane and can be removed or updated without any structural impacts. While a ballasted mounting system offers greater flexibility, attention must be paid to the structural integrity of the roof framing to ensure that the roof members can safely sustain the weight of the ballasted system. Solar providers use structural engineers to determine that the system layout and weighting are specifically designed to a meet a particular roof’s structure’s capabilities under the worst-case loading conditions.

Modern solar panels are engineered to withstand most environmental conditions. Panels are tested for their ability to withstand lead-weighted balls dropped directly onto their surface without damage. This test requirement ensures that manufacturers’ panels can withstand most hail and flying debris.

Wind load is another environmental factor that needs consideration. When there is a flat rectangle with such a large surface area, it is bound to pick up some wind turbulence. Most cities have implemented some kind of wind load requirements that the installer will have to ensure the system can withstand.

For AEG’s solar installation at its North American headquarters in Plano, TX, the city required that the installation be able to withstand the same wind loads (90 mph) as the roof structure. Since the installation utilized a ballasted mounting system, special precautions had to be taken to apply enough weight to each mounting bracket to ensure the system could hold up to the design wind conditions. Of course, the more weight you apply to the system, the more stress you place on the roof. In addition, it is important to tilt the panels to ensure optimal energy generation; however, when using a ballasted mounting system the degree of the tilt might be limited depending on the location and the wind requirements.
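For a sense of what a 90 mph design wind means in load terms, here is a textbook dynamic-pressure estimate (q = ½ρv²). This is only an illustration; it is not the ballast engineering actually performed for the AEG project, which would also apply code-specified exposure, uplift and safety factors.

```python
# Illustrative dynamic-pressure estimate for a 90 mph design wind.
# This is a textbook q = 1/2 * rho * v^2 calculation, not the ballast
# engineering actually performed for the AEG installation.

AIR_DENSITY = 1.225            # kg/m^3 at sea level
MPH_TO_MS = 0.44704

def dynamic_pressure_pa(wind_mph):
    v = wind_mph * MPH_TO_MS
    return 0.5 * AIR_DENSITY * v * v

q = dynamic_pressure_pa(90)
print(f"90 mph wind -> ~{q:.0f} Pa (~{q * 0.0208854:.1f} psf) of dynamic pressure")
```

That works out to roughly 20 pounds per square foot of dynamic pressure before any code factors are applied, which is why ballast weights add up quickly.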

Typically, installers select an engineered mounting system that considers the tilt and size of the solar panel in use. By engineering the tilt of the panel to match the latitude of the building location, the system can generate an optimal amount of energy throughout the year. For instance, in the AEG project mentioned earlier, Plano sits at roughly 33° latitude. To achieve optimal results, it would have been necessary to tilt the panels to a 33° angle. However, due to the design wind load of 90 mph, and the fact that the array used a ballasted mounting system to reduce the structural impact on the building, the roof framing and required ballast weight only allowed the panels to be tilted at 15°. Even with this limitation, properly tilting the panels produced a significant increase in energy yield.
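A crude model makes the tilt trade-off concrete. The sketch below compares the average noon-time incidence factor over a year for a south-facing panel at latitude tilt, 15°, and flat; it considers only the direct beam at solar noon, so treat it as a cartoon of the effect rather than a substitute for an hourly irradiance model such as NREL’s PVWatts.

```python
import math

# Crude, noon-only, direct-beam-only comparison of annual panel yield at
# different tilt angles. Purely illustrative; real designs use hourly
# irradiance models, not this simplification.

LATITUDE = 33.0   # roughly Plano, TX

def declination(day):
    """Solar declination (degrees) for a day of the year."""
    return 23.45 * math.sin(math.radians(360.0 / 365.0 * (284 + day)))

def relative_yield(tilt_deg):
    """Average noon-time incidence factor over a year for a south-facing panel."""
    total = 0.0
    for day in range(1, 366):
        incidence = abs(LATITUDE - declination(day) - tilt_deg)
        total += max(0.0, math.cos(math.radians(incidence)))
    return total / 365.0

for tilt in (LATITUDE, 15.0, 0.0):
    print(f"tilt {tilt:4.1f} deg -> relative yield {relative_yield(tilt):.3f}")
```

Even this toy model shows the pattern from the AEG example: latitude tilt is best, but dropping to 15° gives up only a few percent of annual yield, a trade-off that can be worth making when ballast weight and wind load are the binding constraints.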

Even after calculation of the mounting configuration and optimal tilt, one must still draw up plans for the placement. Commercial buildings require a lot more care and precision regarding rooftop array placement due to the extensive amount of equipment already on the roof. Air conditioning units, skylights, turbines, and other miscellaneous equipment not only create hurdles but also can cast shadows over larger portions of the roof that can limit the efficiency and effectiveness of the system. The system designers will need to assess the existing equipment on the roof and check the shadows created when determining the panel layout to ensure the system will continue to generate energy without losses.

Component Choices
Another key consideration when designing a solar energy system is the inverter. The inverter is the brain of the solar system, responsible for controlling the electricity flow between the panels, loads, and power grid. Inverters convert the electricity generated by the solar panels from direct current (DC) into alternating current (AC) that can then be fed directly into the building’s power source or into the power grid. Selecting the correct solar panel configuration to fit, electrically, with the inverter is critical. Each solar panel will have a standard test condition (STC) rating. This is the amount of energy the panel is expected to generate under optimal conditions. However, remember that the actual amount of energy the panel produces changes all the time due to varying weather conditions.
Solar panels are connected to the inverters in string configurations. A single inverter may take the parallel input of several strings, and the number of strings per inverter depends on the inverter’s load capabilities. In the case of the installation at AEG, the array ended up using 234 solar panels, each with a 210 W STC rating. The six inverters used on the building can produce 7,000 W each; therefore, it was determined that AEG could accommodate three strings of 13 panels per inverter, for a 49.14 kW STC total array rating.
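The arithmetic behind that configuration is easy to re-derive. The panel and inverter ratings below come from the example above; the sizing check itself is just a sketch.

```python
# Re-deriving the AEG string sizing quoted above. The panel and inverter
# ratings come from the article; the sizing check itself is just a sketch.

panel_stc_w = 210          # STC rating of each panel (W)
inverter_rating_w = 7000   # output rating of each inverter (W)
inverters = 6
strings_per_inverter = 3
panels_per_string = 13

panels_total = inverters * strings_per_inverter * panels_per_string
array_stc_kw = panels_total * panel_stc_w / 1000.0
dc_per_inverter_w = strings_per_inverter * panels_per_string * panel_stc_w

print(f"Total panels:        {panels_total}")                  # 234
print(f"Array STC rating:    {array_stc_kw:.2f} kW")           # 49.14 kW
print(f"DC input / inverter: {dc_per_inverter_w} W "
      f"(inverter rated {inverter_rating_w} W AC)")            # 8190 W vs 7000 W
```

Note that the DC rating per inverter (8,190 W) sits a little above the 7,000 W AC rating; that kind of modest oversizing is common precisely because, as noted above, panels rarely deliver their full STC output in real weather.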

In addition to providing the DC/AC conversion between the panels and the building, inverters are responsible for additional functions such as maximizing the power output, charging the battery (if you plan on using one), and protecting the circuit from potential damage due to failures or complications in the system. The inverter has a direct impact on the efficiency and effectiveness of the system as a whole. Since the inverter is directly responsible for providing the power generated by the solar panels to the building or grid, its overall efficiency is key to maximizing your investment. The lower the efficiency of the inverter, the more energy lost. Ideally, one wants to look for an inverter that offers 95% efficiency or greater. There are even some inverters capable of offering efficiencies greater than 98%. While these may cost a little more, if looking at a solar installation as a long-term investment, it may be worth considering the upgrade in order to maximize the system’s efficiency and overall ROI.
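To put the efficiency difference in rough dollar terms, here is a small comparison using the hypothetical 75,000 kWh-per-year array from the economics sketch earlier; the electricity price is again an assumption.

```python
# How much the inverter efficiency difference is worth, using the
# illustrative 75,000 kWh/year array from earlier; all numbers are assumptions.

annual_dc_kwh = 75_000
electricity_price = 0.11   # $/kWh, assumed

for eff in (0.95, 0.98):
    delivered = annual_dc_kwh * eff
    print(f"{eff:.0%} inverter -> {delivered:,.0f} kWh delivered, "
          f"~${annual_dc_kwh * (1 - eff) * electricity_price:,.0f}/yr lost in conversion")
```

Over a 20-25 year system life, a few hundred dollars per year of avoided conversion loss can easily justify a modest price premium for the higher-efficiency inverter, which is the ROI argument being made above.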

Finally, new technologies are helping to distinguish some inverters from the rest of the pack. While the solar system investment might be largely covered by government and utility incentives, one still wants to see what kind of energy and returns can be achieved. Some solar inverters are being outfitted with new communications capabilities so that they can actually provide performance data, energy production information, environmental conditions, and faults or alarms in the system. While some of the data may not be important for the organization as a whole, it is critical for the person in charge of the system. They can choose to receive email alerts when an alarm goes off or there is a fault in one of the solar panel strings. This is especially important because solar installations, for the most part, are self-sustaining. Once the initial installation is complete the system pretty much runs itself; however, if something does go wrong, or there is a problem, it is important to react quickly, because every hour the system is not running at full capacity is an hour of lost energy and money. For the most part, communications can be integrated easily into the existing local area network (LAN) to eliminate any additional costs.

Planning is Crucial
When installing a solar energy system, planning is a crucial part of the component selection process. It is more about determining what your specific needs are and what tools will meet those needs most efficiently than about declaring one inverter or solar panel the best on the market and insisting everyone use it. There are many external factors in the process that can have a major impact on the efficiency and effectiveness (from both a production and a cost perspective) of the system as a whole. When it comes down to it, those are the two most important characteristics of any solar installation – efficiency and effectiveness. It is the main reason people turn to solar in the first place: they are looking for a way to meet their energy needs that is more efficient and effective than traditional means.
AEG Power Solutions
Plano, TX

Video: Magnetic Twister Erupts on Sun



NASA’s Solar Dynamics Observatory caught an enormous plasma twister erupting on the surface of the sun Oct. 28.

The explosion was triggered by a tangled coil of magnetism that suddenly untwisted, acting like a loaded spring and hurling solar matter into space. At its peak, the twister towered more than 217,000 miles above the surface of the sun.

Luckily, the fragments of plasma flung into space were not headed toward Earth, where they could have caused a magnetic storm. Now that the twister has relaxed, it probably won’t erupt again — though other sunspots are gathering energy and could produce medium-sized solar flares.

Via spaceweather.com

Image: NASA/SDO

"

Wednesday, October 27, 2010

Does taking aspirin really reduce risk for prostate cancer-specific mortality?


Posted by Sitemaster
A presentation to be given next week at the upcoming annual meeting of the American Society for Radiation Oncology (ASTRO) will suggest that men initially diagnosed with localized prostate cancer and who take an anti-clotting agent are at significantly lower risk for prostate cancer-specific death at 10 years of follow-up.
A regular aspirin regimen has previously been linked to a reduction in risk of developing colon cancer, although this is still not definitive. In the study to be reported by Choe and colleagues next week, they will present data suggesting that men taking aspirin or some other anti-clotting agent (e.g., warfarin or enoxaparin [Lovenox] or clopidogrel [Plavix]) may have a > 50 percent reduction in relative risk for prostate cancer-specific mortality. However, even the research team emphasizes that these are preliminary findings from a retrospective analysis. Newly diagnosed prostate cancer patients are not being encouraged to start an immediate regimen of daily aspirin in the hope that it will extend their lives.
It is already well known that (a) people with cancer are more prone to blood clots; (b) people with blood clots have a higher risk of cancer; and (c) laboratory and animal studies have shown that anti-clotting agents can affect the growth and spread of cancer. So Choe and his colleagues decided to test the idea that anti-clotting drugs might affect the risk of death among men with prostate cancer.
According to a media release from ASTRO, their study was based on an analysis of data from 5,275 men initially diagnosed with localized cancer who were enrolled in the CaPSURE data registry. The data from these men have been reported (in an article on WebMD) to show the following:
  • 1,982/5,275 men (34.6 percent) were taking anti-clotting agents.
  • At 10 years of follow-up
    • 10 percent of men not taking an anti-clotting agent had died from prostate cancer.
    • 4 percent of men taking an anticlotting agent had died of prostate cancer.
    • 7 percent of men not taking an anti-clotting agent had metastasis.
    • 3 percent of men taking an anti-clotting agent had metastasis.
    • 43 percent of men not taking an anti-clotting agent had prostate cancer recurrence.
    • 33 percent of men taking an anti-clotting agent had prostate cancer recurrence.
While these data show a relative risk reduction of 60 percent in prostate cancer-specific mortality (from 10 percent down to 4 percent), the absolute risk reduction is only 6 percent.
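The distinction between relative and absolute risk is worth making explicit. Using the 10 percent and 4 percent mortality figures reported above (the number-needed-to-treat line is an extra illustration, not something reported by the study):

```python
# Relative vs. absolute risk reduction for the mortality figures quoted above.
# The 10% and 4% come from the ASTRO release; the NNT line is an extra
# illustration, not a number reported by the study.

risk_no_anticoag = 0.10   # prostate cancer deaths at 10 years, no anti-clotting agent
risk_anticoag = 0.04      # prostate cancer deaths at 10 years, with anti-clotting agent

arr = risk_no_anticoag - risk_anticoag            # absolute risk reduction
rrr = arr / risk_no_anticoag                      # relative risk reduction
nnt = 1 / arr                                     # number needed to treat

print(f"Absolute risk reduction: {arr:.0%}")      # 6%
print(f"Relative risk reduction: {rrr:.0%}")      # 60%
print(f"Number needed to treat:  ~{nnt:.0f}")     # ~17
```

Both framings describe the same data; the relative figure simply sounds more dramatic, which is why the absolute figure is worth keeping in view.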
Anthony Zietman, MD, the president of ASTRO, says that these data need to be confirmed, and that even if they are confirmed, the optimal dose, timing, and duration of use of the anti-clotting agent would still need to be worked out. However, Dr. Choe points out that those “patients who are taking aspirin for other reasons may see an added benefit.”
Aspirin and other anti-clotting agents are well known to carry risks for internal bleeding and related complications. Even a daily “mini-aspirin” (a dose of just 75 mg/day) is not without risk. Patients should not start to self-medicate with any anti-clotting agent without first discussing this with their doctor.

Great Article - Food for thought: Cooking in human evolution

Did cooking make us human by providing the foundation for the rapid growth of the human brain during evolution? If so, what does this tell us about the diet that we should be eating, and can we turn back the culinary clock to an evolutionarily ideal diet? A number of provocations over the last couple of weeks have me thinking about evolution and diet, especially what our teeth and guts tell us about how our ancestors got their food.


I did a post on this a while back at Neuroanthropology.net, putting up my slides for the then-current version of my ‘brain and diet’ lecture from ‘Human evolution and diversity,’ but I’m also thinking about food and evolution because I just watched Nestlé food scientist, Heribert Watzke’s TED talk, The Brain in Your Gut. Watzke combines two intriguing subjects: the enteric nervous system, or your gut’s ‘second brain,’ and the evolution of diet. I’ll deal with the diet, gastro-intestinal system and teeth today, and the enteric nervous system another day because it’s a great subject itself (if you can’t wait, check out Scientific American).



This piece is going to ramble a bit, as it will also include some thoughts on the subject of diet and brain evolution sparked by multiple conversations: with Prof. Marlene Zuk (of the University of California Riverside), with Paul Mason (about Terrence Deacon’s article that he and Daniel wrote about), and following my annual lecture on human brain evolution as well as conversations today with a documentary crew from SBS. So let’s begin the meander with Dr. Watzke’s opening bit on why he thinks humans should be classified as ‘coctivors,’ that is, animals that eat cooked food, rather than ‘omnivores.’




Although I generally liked the talk, I was struck by some things that didn’t ring quite right, including Dr. Watzke’s opening bit about teeth (from the online transcript):


So everyone of you turns to their neighbor please. Turn and face your neighbors. Please, also on the balcony. Smile. Smile. Open the mouths. Smile, friendly. (Laughter) Do you — Do you see any Canine teeth? (Laughter) Count Dracula teeth in the mouths of your neighbors? Of course not. Because our dental anatomy is actually made, not for tearing down raw meat from bones or chewing fibrous leaves for hours. It is made for a diet which is soft, mushy, which is reduced in fibers, which is very easily chewable and digestible. Sounds like fast food, doesn’t it.


Okay, let’s not be pedantic about it, because we know that humans, in fact, do have canines. Watzke’s point is that we don’t have extended canines, long fangs that we find in most carnivorous mammals or in our primate relatives like chimps or gorillas.


The problem is that the absence of projecting canines in humans is a bit more interesting than just, ‘eat plants=less canine development.’ In fact, gorillas are completely vegetarian, and the males, especially, have massive canines; chimpanzees eat a very small amount of animal protein (something like 2% of their caloric intake), and they too have formidable canines. Our cousins don’t have extended canines because they need them for eating – rather, all evidence suggests that they need big fangs for fighting, especially intraspecies brawling among the males in order to reproduce.


Teeth of human (left), Ar. ramidus (middle), and chimpanzee (right), all males.


The case of chimpanzee canines is especially intriguing because, with the remains of Ardipithecus ramidus now more extensively discussed, a species potentially close to the last common ancestor of humans and chimps, we know very old hominids didn’t have pronounced canines. If the remains are indicative of our common ancestor with chimpanzees (and there’s no guarantee of that), then it’s not so much human canine shrinkage alone that’s the recent evolutionary development but also the re-development of chimpanzee canines, probably due to sexual competition.


Even with all the possible points of disagreement, the basic point is that human teeth are quite small, likely due both to shifts in our patterns of reproduction and sexual selection and to changes in our diet. Over the last few million years, our ancestors seemed to have gotten more and more of their calories out of meat, one argument goes, at the same time that our ancestors’ teeth were getting less and less capable of processing food of all sorts (or, for that matter, being effectively used as a weapon).


Hungrier and hungrier, with weaker jaws and smaller teeth


As I always remind my students in my lecture on human brain evolution, if big brains are so great, why doesn’t every animal have one? The answer is that big brains also pose certain challenges for an organism (or, if you prefer, ‘mo’ neurons, mo’ problums’).


The first and most obvious is that brains are hungry organs, devouring energy very fast and relentlessly, especially as they grow. The statistic that we frequently throw around is that the brain constitutes 2% of human body mass and consumes 25% of the energy used by the body; or, to put it another way, brain tissue consumes nine times as many calories as muscle at rest. So, if evolution is going to grow the brain, an organism is going to have to come up with a lot of energy – a smaller brain means that an animal both can eat less and be more likely to survive calorie drought.
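To see where that 25 percent figure leaves us, here is the back-of-the-envelope version; the body mass and resting energy budget are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope version of the 2%-of-mass / 25%-of-energy statistic.
# The body mass and resting energy budget are illustrative assumptions.

body_mass_kg = 70.0
brain_mass_kg = 1.4                 # ~2% of body mass
resting_kcal_per_day = 1600.0       # assumed resting energy expenditure
brain_share = 0.25                  # fraction of resting energy used by the brain

brain_kcal = resting_kcal_per_day * brain_share
kcal_per_kg_brain = brain_kcal / brain_mass_kg
kcal_per_kg_body = resting_kcal_per_day / body_mass_kg

print(f"Brain mass share:   {brain_mass_kg / body_mass_kg:.0%}")
print(f"Brain energy use:   {brain_kcal:.0f} kcal/day")
print(f"Per kg, brain uses  {kcal_per_kg_brain / kcal_per_kg_body:.0f}x the body average")
```

The exact multiplier depends on the assumptions, but under any reasonable numbers the brain is roughly an order of magnitude hungrier per kilogram than the body average, which is the constraint the rest of this argument turns on.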


But hominin brain growth also presents a few other problems, which sometimes get underestimated in accounts of our species’ distinctiveness. For example, natural selection had to solve a problem of excess heat, especially if big-brained hominids were going to do things that their big brains should tell them are ill advised, like run around in the hot sun. As your brain chews up energy, it generates heat, and the brain can overheat, a serious problem with sunstroke. The good news is that somewhere along the line our hominin ancestors picked up a number of adaptations that made them very good at shedding heat, from a low-fur epidermis and facility to produce copious sweat to a system of veins that run from the brain, shunting away heat (for a much more extensive discussion, see Sharma, ed. 2007, or the work of anthropologist Dean Falk, including her 1990 article in BBS laying out the ‘radiator theory’).


Not only is our brain hungry and hot; our enlarged cranium also poses some distinctive challenges for our mothers, especially as bipedalism has narrowed the birth canal by slowly making the pelvis more and more basket-shaped (bringing the hips under our centre of gravity). The ‘obstetrical dilemma,’ the narrowing of the birth canal at the same time that the human brain was enlarging, led to a bit of a brain-birth canal logjam, if you’ll pardon the groan-worthy pun (see Rosenberg and Trevathan 1995).


Although frequently presented as a significant constraint on brain growth (and I’m sure all mothers would agree that they wouldn’t want those brains getting too much larger), the obstetrical dilemma likely also led to hominin infants being born more and more premature, relative to other primates. Humans possess only 23% of their mature brain size at birth. Other mammals are born with much more; macaques, for example, attain about 65% of their adult brain size at birth. Some comparative research suggests human infants are basically extra-uterine fetuses, relative to other mammals, until about 21 months of development (a year after birth).


The early birth eases labor, of course, but it’s also great for making humans teachable, extending the rapid rate of immature growth much longer. Humans go from having about normal brain-body weight ratios for great apes at birth, until our brains are approximately 3.5 times larger relative to our weight than for our cousins. But like so many evolutionary changes, the extraordinary helplessness of the infant for so long brings with it enormous logistical challenge. I like to joke with students that getting the sperm and egg together is the easy part of human reproduction, childbirth only a bit worse (that passes for humour in my lectures); the real trial is keeping this helpless little thing alive long enough to become viable on its own. Currently, in Australia, this appears to be approaching three decades after birth (… I will refrain from making any jokes about Generation ‘Why?’…).


But the real kicker, for a discussion of diet, is that the whole biological apparatus for getting energy (teeth, jaw muscles, stomach, guts) appears to be inversely related to the enlarging brain over the last few million years. As the hominin body routed more energy to the brain, one of the systems that lost out is the gut; this brain-gut trade-off was one of the basic points argued by Aiello and Wheeler (1995) in their original explanation of the ‘expensive tissue’ hypothesis. In fact, Aiello and Wheeler (1995: 205) calculate that the energy savings gained from the liver and gastro-intestinal tract being smaller than predicted for a human-sized mammal exactly offset the unusually high energy demands of the disproportionately large human brain.


Cooking up some brain evolution


So how, given the decreasing efficiency of jaw and gut, did our ancestors get enough energy? According to Heribert Watzke’s TED talk, our ancestors figured out fire and cooking, and thereby used these techniques to extract more energy with less effort, from a range of foods. Watzke doesn’t explicitly cite his work, but the leading advocate of the evolutionary significance of cooking is Richard Wrangham, and I’ve been thinking a lot about his work because I’ve just lectured on the topic, and because of a conversation I had with Prof. Marlene Zuk a couple of weeks ago.


Prof. Marlene Zuk (UC Riverside)


Prof. Zuk is a specialist in the study of sexual behaviour, sex selection, and parasites, and, although she’s done a lot of interesting work, I find some of her research (and her lab’s) on the possibility of conflict between natural and sexual selection to be especially intriguing, likely because in my own life, the sexual drive and survival instincts seem to be so at odds (but perhaps this is just too much personal information for our readers…).


I originally ‘met’ Prof. Zuk – in an online, virtual sort of way – after she wrote a great piece about contemporary attempts to follow ‘Paleolithic’ diets in The New York Times, and I responded on the old Neuroanthropology website. The piece has been one of my more popular ones, receiving periodic spasms of renewed interest when someone directs people’s attention to it on bulletin boards or discussions of ‘Stone Age fitness regimens’ and the like.


In that piece, I pointed out intersections between Prof. Zuk’s thoughts on ‘paleofantasies’ in diet and my own work on sports (the quote is from my earlier piece):


Zuk draws on Leslie Aiello’s concept of ‘paleofantasies,’ stories about our past spun from thin evidence, to label the nostalgia some people seem to express for prehistoric conditions that they see as somehow healthier. In my research on sports and masculinity, I frequently see paleofantasies come up around fight sports, the idea that, before civilization hemmed us in and blunted our instincts, we would just punch each other if we got angry, and somehow this was healthier, freer and more natural (the problems with this view being so many that I refuse to even begin to enumerate them). It’s an odd inversion on the usual Myth of Progress, the idea that things always get better and better; instead, paleofantasies are a kind of long range projection of Grumpy Old Man Syndrome (‘Things were so much better in MY day…’), spinning fantasies of ‘life before’ everything we have built up around us. (From Paleofantasies of the perfect diet – Marlene Zuk in NYTimes)


In that piece, I tried to point out that you can’t recreate a ‘Paleolithic diet’ by simply rolling your shopping trolley down the aisles of the supermarket, picking out nuts and fruit and lean meat – our ancestors ate on the move, seasonally, from hundreds of food sources, including stuff we likely wouldn’t have laid a hand on, let alone eaten raw. Think the Road Kill Café with a side of grubs and nuts, and then remember that even our meat is domesticated, all soft and flabby and buttery-easy to eat. Not the kind of dinner that was a fair fight.


I won’t re-write the first essay, but I do want to reflect on one thing that we talked about: the role of cooking in human brain evolution. Prof. Zuk and I specifically discussed, in our lunch meeting over cold finger sandwiches and fruit slices, Richard Wrangham’s thoughts on the subject, and I tried to describe why I felt uncomfortable with his specific theory, although I generally thought he was right about the overall pattern.


Richard Wrangham on cooking


Richard Wrangham, Harvard (Photo: Jim Harrison)


Richard Wrangham is Ruth Moore Professor of Biological Anthropology at Harvard University, with a strong background in the study of chimpanzees. Wrangham argues that hominins began cooking their food 1.8 million years ago (mya) even though the earliest evidence of controlling fire is, at best, half that old. He points to the diet of chimpanzees:


Richard Wrangham has tasted chimp food, and he doesn’t like it. “The typical fruit is very unpleasant,” the Harvard University biological anthropologist says of the hard, strangely shaped fruits endemic to the chimp diet, some of which look like cherries, others like cocktail sausages. “Fibrous, quite bitter. Not a tremendous amount of sugar. Some make your stomach heave.” (from Cooking Up Bigger Brains)


Wrangham has elaborated the argument in a number of his academic papers (for an excellent review, see Wrangham and Conklin-Brittain 2003). He’s also argued that the kind of raw bush meat that chimpanzees eat – usually smaller monkeys, and even then only the mature males get it – is tough and unpleasant, with large portions of it in skin and fur that don’t go down easy, by any stretch.


Wrangham’s theory is controversial in anthropology, and I don’t fully agree with him, but he does put his finger on the complexity of the brain-jaw trade-off in human evolution. Our ancestors were steadily growing larger brains, energy-hungry organs, while the on-board apparatus that they used to get energy out of food (teeth, jaws, guts) was diminishing in effectiveness. Our ancestors had to come up with some sort of better solution, either better food or stronger food processors.


Wrangham argues that the marked decrease in tooth size and gut length with the advent of Homo erectus, especially given that H. erectus had a large brain and a tall, lanky body, suggests that a profound change in diet must have occurred. The usual explanation, the one Wrangham reacted against, is that our hominin ancestors were steadily shifting to a diet heavier and heavier in animal protein as first scavenging and then hunting techniques improved. Stone tools were the key technological innovation, first because they could become a kind of prosthetic teeth, allowing us to ‘pre-chew’ food by pounding, cutting, grinding and butchering it with our tools.


Wrangham, however, thinks that only cooking could have unlocked the calories from food efficiently enough to make the growing hominin brain viable, that chimpanzee food – including the meat that chimpanzees get – is hard to chew, and not sufficiently energy dense. I’m trying to remember which of his publications it appears in, but I recall reading an estimate that I think Wrangham made: if we ate chimpanzee food (which would be unpleasant, as it’s lousy), we would need five kilograms a day to feed ourselves. This bulk of dry fruit would require something like six hours of chewing and demand that our bodies process outrageous amounts of fibre.


An epicurean revolution?


I think Wrangham is only partially right, just as the older theory of increasing carnivorousness in the human diet is probably accurate, to some degree, as well. To me, the evolutionary growth curve of the hominin brain doesn’t look like a single revolution, but a steady, accelerating growth that is more likely the result of multiple changes over time rather than a single innovation.


The steady, accelerating pattern of brain growth was likely supported by shifts in diet as new food-procuring and preparation techniques steadily lifted the energy constraint on the brain’s development. That is, when we look at the upward turn of brain growth over hominin evolution, I think we’re seeing the effects of a set of constraints that decreased: our ancestors, through biological, technological and social change, overcame a number of selective constraints on brain growth, so the organ steadily increased in size.


Rather than being contained by stabilizing selection, the lifted constraint model of selective shift suggests that directional selection could then push the brain toward greater size, selecting for any number of possible greater abilities: social intelligence, strategic foresight, problem solving, or, later, language ability. In other words, brain growth could be driven both by adaptive benefit and by decreasing biological ‘cost’; changing the diet could decrease the cost of the bigger brain just as having a more-and-more premature infant could decrease the obstetrical constraint on encephalization.


Of course, you’d still need a genetic mechanism that could generate variation toward greater brain size and selective advantages that would make the larger brain beneficial. In our case, a significant shift in the maturation pattern to extend the immature growth curve, probably from regulation of gene expression, could provide the mechanism, and any number of advantages have been offered to explain why a bigger brain would be a groovy thing to have. But the removal of constraint would upset a pattern of stabilizing selection, shifting our ancestors into a pattern of directional selection toward greater brain size.


What I like about Wrangham’s approach, then, is that he focuses on relaxed constraint, not simply on adaptive advantage (the idea of relaxed constraint has been on my mind especially since Paul and Daniel’s piece on Terrence Deacon). Just because a big brain is nice to have doesn’t mean a species gets one – the species has to develop ways of dealing with the downsides and limitations, lifting the selective pressures that suppress greater brain growth. In humans, these constraints would include, among others, energy demands, heat dissipation, childbirth, and anatomical remodeling, as I outlined.


I don’t think it was just cooking that overcame the energy constraint, although I think Wrangham is right (and Watzke, too) that using fire to process and concentrate the energy in food would have been a major breakthrough. But unlike Wrangham, I think cooking was part of a pattern of innovation in finding, exploiting and processing high-energy caches of food. The option wasn’t just a forced choice between six hours of chewing on a chimpanzee-like diet or Master Chef. Humans would have found numerous ways to exploit more animal protein, including a lot of invertebrates and aquatic sources, improved their ability to follow seasonal fluctuations in high energy food, pounded fibrous foods, and a number of other techniques.


For example, Watzke highlights how fire could have been used to process unfamiliar foods as hominin range expanded. While this is true, there are other techniques as well that can be just as effective, or even more effective for dealing with some foods, such as soaking, drying, or fermenting. Some of the cognitive traits necessary for effectively transforming foods through non-cooking methods also likely contributed to the ability to use fire effectively: strategic thinking, restraint, planning.


Not all contemporary societies (even our own) rely completely on fire for food preparation, finding other ways to soften, concentrate and prepare raw food so that we don’t wind up chewing all day. Meat can be prepared with acid, such as fruit juice, or cut into strips or small pieces to ease eating; many societies eat raw fish, shellfish, eggs or other foods; tough roots get pounded, ground, and soaked; the sun can be used to dry and preserve meat; liquids and even proteins can be fermented, letting decomposition partially process food; and other animals can be made to process food for us, as we intercept their milk, their honey, or other edible products, often intended for their own consumption. Cooking then was probably one of the most important innovations for concentrating and softening food, but it was not the only one.


I agree with Wrangham that cooking is extraordinarily important, that you can’t grow a human brain on a chimpanzee diet. But, ultimately, fire is not the only thing that makes the human diet different from a chimp diet; our pre-fire hominin ancestors were likely already becoming much more versatile, discerning, wide-ranging eaters, with a whole bag of food preparation tricks and a growing ability to find high-return energy-dense food sources.


One of the many interesting wrinkles in the evolutionary story of our species is that hominins went from being largely, perhaps almost entirely, herbivorous, to being staggeringly versatile eaters. Our hunting and foraging ancestors (and our contemporaries) found their calories all over the place, from sources that far exceed the variety that we can find today in a well-stocked grocery store. As I remind my undergraduate students, with their often chicken-beef-pork urban diet of constant repetition, our ancestors were getting their animal protein from such a variety of species that we can scarcely imagine the selection.


Links:


Heribert Watzke: The brain in your gut


Paleofantasies of the perfect diet – Marlene Zuk in NYTimes


Marlene Zuk in NYTimes


John Durant tries to explain the ‘Caveman Diet’ to Stephen Colbert — Thanks to the Wednesday Roundup (and Daniel) for the link! Offers some good advice on exercise and diet, but thankfully doesn’t attempt to get too deep into the paleoanthropological evidence. Don’t eat heavily processed simple carbohydrates and mix up your fitness regimens; that, I can agree with!



The Evolution of Cooking: A Talk With Richard Wrangham, at The Edge.


From Studying Chimps, a Theory on Cooking, a conversation with Richard Wrangham, by Claudia Dreifus at The New York Times.


Evolving Bigger Brains through Cooking: A Q&A with Richard Wrangham (Scientific American)


Richard Wrangham – Rediscovering Fire at Point of Inquiry. (Host: Chris Mooney)


An interview with Richard Wrangham by Veronique Greenwood at Seed Magazine.


Cooking Up Bigger Brains, by Rachel Moeller Gorman (Scientific American).


Images


Dental reconstruction and comparison from G. Suwa et al. 2009, authors’ summary at Science magazine (image here originally).


Photo of Marlene Zuk from ‘HBES 2006 Program Information.’


Photo of Richard Wrangham by Jim Harrison from Harvard Magazine, The Way We Eat Now (2004).


Graph of brain size from the webpage, The Evolution of Intelligence, by Prof. Renato M.E. Sabbatini.


References:


Aiello, Leslie C., and Peter Wheeler. 1995. The Expensive-Tissue Hypothesis: The Brain and the Digestive System in Human and Primate Evolution. Current Anthropology 36(2): 199-221.


Falk, Dean. 1990. Brain Evolution in Homo: The “radiator theory.” Behavioral and Brain Sciences 13: 333–381 (target article and responses).


Rosenberg, K., & Trevathan, W. (1995). Bipedalism and human birth: The obstetrical dilemma revisited. Evolutionary Anthropology: Issues, News, and Reviews, 4(5), 161-168. DOI: 10.1002/evan.1360040506


Sharma, Hari Shanker, ed. 2007. Neurobiology of Hyperthermia. Progress in Brain Research 162. Elsevier.


Suwa, G., Kono, R., Simpson, S., Asfaw, B., Lovejoy, C., & White, T. (2009). Paleobiological implications of the Ardipithecus ramidus dentition. Science, 326(5949), 69-69. DOI: 10.1126/science.1175824


Wrangham, R., & Conklin-Brittain, N. (2003). Cooking as a biological trait. Comparative Biochemistry and Physiology – Part A: Molecular & Integrative Physiology, 136(1), 35-46. DOI: 10.1016/S1095-6433(03)00020-5


"

Jill Rixon's Surprise Birthday Party


Tony and Carmen, aided by other family members, organised a unique event. Jill said it was the first time she had been the subject of a surprise birthday party. Look at this photo and see the uncertainty, embarrassment and terror that one feels when the surprise is sprung!


Jim was delighted that the family was so attentive in making her feel so special. Emily was disappointed that her Canberra commitments did not allow her to attend... but otherwise, Jill's brother, all her children, grandchildren, many nephews, nieces, second cousins and in-laws were in attendance.


Carmen was delighted that her mum and dad were able to attend. Jill was very pleased as well.


There was a DVD with pictures of Jill from 'infancy' to 'maturity' that included the family. As usual, we were all embarrassed by how 'gorky' we appeared in our youth.


The night was made possible only because Tony and Carmen put in so much work and resources to bring everyone together in a very fashionable restaurant with so much room to mix and talk. We were all impressed... and the grandchildren will have special vivid memories of Granny that will stay with them for the rest of their days. Tony and Carmen... thank you very much.



Tuesday, October 26, 2010

Solar Millennium Gets The Greenlight To Build The World’s Largest Solar Project In California


The US solar market took another step forward this week with the federal government’s approval of Solar Millennium’s plan to build a massive thermal power station in Blythe, California. Located between Phoenix and Los Angeles in the arid Palo Verde Valley, this thinly populated city will soon be home to the world’s largest solar project.

The Interior Department’s Bureau of Land Management delivered the final greenlight, wrapping up a (relatively swift) year-long approval process. Technically, it is the government’s first approval of a parabolic trough power plant, which uses curved mirrors to direct the sun’s heat towards a pipe that contains a heat transfer fluid. The heat from this fluid helps create steam which ultimately powers a turbine.

Solar Millennium, a German firm, plans to build four plants on the expansive property with a total capacity of 1,000 megawatts— which is roughly on par with the country’s current total solar capacity. With 1,000 MW at completion, the station would be able to power more than 300,000 homes.

The hope, the company says, is to start supplying the grid with electricity by 2013. In terms of regional economic impact, Solar Millennium predicts that the project will hire 1,000 people during the construction phase and 220 permanent workers (once it’s operational).

In the meantime, there’s quite a bit of construction to be done which will require significant financing. In a press release, the company said it has secured enough cash for the first wave of construction, which could begin as early as this year, but acknowledged that it is heavily dependent on government incentives and pending loans.

Speaking of the federal government’s approval, Solar Millennium’s CFO, Oliver Blamberger says, “This paves the way for the start of construction of the first two 242-MW plants before the end of the year…This is also good news for our advanced talks with the US Department of Energy on the loan guarantees for which we have applied. A successful conclusion of this process would secure more than two thirds of the financing volume of the first two planned power plants through the American Federal Financing Bank.”

As we mentioned in a post on Monday, the US solar market is ramping up significantly, with capacity expected to grow roughly 30x over the next 10 years to 44 GW. But the capital-intensive industry will need to continue to raise heaps of private capital (and benefit from generous government policies) to get there.





"

The Neanderthal Romeo and Human Juliet hypothesis

By Paul Mason


Diagram by Paul Mason

Scientists have had trouble reconciling data from analyses of human mitochondrial DNA and the male Y chromosome. Analyses of human mitochondrial DNA indicate that we all share a common female ancestor 170,000 years ago. Analyses of the Y chromosome indicate that we share a common male ancestor 59,000 years ago (Thomson et al. 2000). How can we account for the idea that our common grandmother is 111,000 years older than our common grandfather? Have we found evidence for the world’s oldest cougar, or is there a hypothesis (other than blaming it on statistical anomalies) that could potentially reconcile these two dates? Perhaps we are given a clue in recent findings that a small percentage of human DNA is Neanderthal. Contrary to popular belief (NOVA), Neanderthals did not go extinct without contributing somehow to the gene pool of modern humans.

Sexual reproduction is successful because the process of chromosomal exchange and gamete fusion provides genetic variability between individuals. Asexual reproduction is the kiss of death in the long run due to deleterious mutations. Strangely enough though, inside each cell of our bodies there is a tiny energy regulating organelle that reproduces asexually. This symbiotic bacterium is vital to cellular function and is called a mitochondrion. Both boys and girls inherit their mitochondrial DNA exclusively from their mother.



In female Homo sapiens, the oocyte remains dormant in dictyate from the moment of formation in late foetal life until just prior to ovulation, thereby protecting itself from mutations in both the mitochondrial and nuclear DNA. The male germ cells on the other hand are in a ferment of mitotic and meiotic activity from puberty onwards with most spontaneous DNA mutations occurring in the testis (Short, 1997). Sperm are dependent on maternal mitochondrial DNA in the midpiece sheath for their motility, but these mitochondria are destroyed by the oocyte immediately after fertilization, so the fertilized egg contains only maternal mitochondrial DNA.

From studies of mitochondrial DNA published in Nature (Cann, Stoneking, & Wilson, 1987), population geneticists discovered that people alive today share a common female ancestor anywhere up to 200,000 years ago (most estimates are somewhere between 150,000 and 170,000 years ago). Studies of mitochondrial DNA from Neanderthals and humans have shown no indication that humans have a female Neanderthal ancestor (Ovchinnikov & Goodwin 2001; National Geographic, 2008).

Just this year, researchers have estimated that gene flow from Neanderthals to humans occurred between 80,000 and 50,000 years ago (ScienceDaily May, 2010). Researchers have long wondered if Neanderthals were an entirely separate species, and recent evidence suggests that they probably weren’t. (Actually, one of the problems teaching human evolution is that we use a Linnaean system of classification with a Buffonian definition of species—two incompatible systems). However, even if Neanderthals were a separate species, speciation without any loss of hybrid fertility is possible.

Take the example given to me by Professor Roger Valentine Short: the Camelidae that originated in Florida (The Atlantic, 1999).

The little ones migrated into South America and up into the Andes to become the Llama, Alpaca, Vicuna and Guanaco—phenotypically quite different species, but all of which will produce fertile hybrids when crossbred. The big ones migrated up the Rockies, across the Bering Strait, through Mongolia and Northern China—where we find the two-humped Bactrian camel—and into India and from there into Persia and Saudi Arabia—where we find the one-humped Dromedary camel. The spread of the Camelidae from the Americas to the Middle East is an example of speciation in a sexually reproducing species as a result of reproductive isolation. However, there has been no loss of hybrid fertility. Researchers have been able to produce Camas by inseminating Alpacas with Dromedary semen. Interestingly, the reciprocal cross gave fetuses, but no liveborn young.

(For more information, please see Short 1997; Skidmore, Billah, Binns, Short, and Allen 1999; Skidmore, Billah, Short and Allen 2001.)

Modern humans may in fact be hybrids. Since Old World and New World camelids are some 10 – 12 million years apart yet can still produce live hybrid offspring, while Homo neanderthalensis and Homo sapiens diverged far more recently, we can be pretty certain that the two were able to hybridize. However, we must remember that studies have not shown any evidence of mitochondrial DNA from Neanderthals in humans (Potts & Short, 1999:59). Studies have shown, though, that modern humans share a common male ancestor who lived 59,000 years ago. Could this male ancestor have been Neanderthal? Indeed, the date of our closest common male ancestor correlates well with estimations of gene flow between Neanderthals and humans around 50,000 to 80,000 years ago. If H.neanderthalensis and H.sapiens were able to mate, then it is plausible that only the male H.neanderthalensis and the female H.sapiens were able to produce fertile offspring. The reciprocal cross may not have been successful.

According to Haldane’s law, the heterogametic offspring of interspecific hybrids are likely to be absent, rare, or sterile (Short, 1997). If Haldane’s Law applied to the offspring of H.neanderthalensis and H.sapiens, we would expect to find female hybrids quite commonly, but male hybrids much more rarely. The male hybrids would have carried a Y chromosome very similar to that of the original hybridizing male. The lack of Neanderthal mtDNA suggests that the initial hybridization involved a Neanderthal male, but there would probably have been few if any male hybrid offspring, so the Neanderthal Y chromosome and the mtDNA would have been quickly lost. Nonetheless, the Neanderthal autosomes would have happily mingled and interchanged with human autosomes, eventually losing their identity in the process.
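The inheritance logic of this argument can be sketched as a toy simulation: a single pulse of Neanderthal fathers mating into a sapiens population, with Haldane’s law (as assumed here) making their hybrid sons sterile. The parameters are arbitrary and this is a cartoon of the reasoning, not a population-genetic model.

```python
import random

# Toy illustration of the inheritance argument above: a single pulse of
# Neanderthal males mating into a sapiens population, with Haldane's rule
# making hybrid males sterile. Parameters are arbitrary; this is a cartoon
# of the logic, not a population-genetic model.

random.seed(1)
POP = 2000               # individuals per generation (roughly half female, half male)
NEANDERTHAL_DADS = 0.05  # fraction of generation-1 matings involving a Neanderthal male
GENERATIONS = 20

def make_person(sex, mtdna, y, autosome_frac, fertile=True):
    return {"sex": sex, "mt": mtdna, "y": y, "auto": autosome_frac, "fertile": fertile}

# Founding sapiens population
females = [make_person("F", "sapiens_mt", None, 0.0) for _ in range(POP // 2)]
males = [make_person("M", "sapiens_mt", "sapiens_Y", 0.0) for _ in range(POP // 2)]

for gen in range(1, GENERATIONS + 1):
    new_f, new_m = [], []
    fertile_males = [m for m in males if m["fertile"]]
    for _ in range(POP):
        mum = random.choice(females)
        if gen == 1 and random.random() < NEANDERTHAL_DADS:
            dad = make_person("M", "neand_mt", "neand_Y", 1.0)   # Neanderthal father
        else:
            dad = random.choice(fertile_males)
        child_auto = 0.5 * (mum["auto"] + dad["auto"])           # autosomes mix equally
        if random.random() < 0.5:
            new_f.append(make_person("F", mum["mt"], None, child_auto))
        else:
            # Haldane's rule (as assumed here): hybrid sons of a Neanderthal father are sterile
            sterile_hybrid = dad["y"] == "neand_Y"
            new_m.append(make_person("M", mum["mt"], dad["y"], child_auto,
                                     fertile=not sterile_hybrid))
    females, males = new_f, new_m

everyone = females + males
auto = sum(p["auto"] for p in everyone) / len(everyone)
neand_mt = sum(p["mt"] == "neand_mt" for p in everyone) / len(everyone)
neand_y = sum(p["y"] == "neand_Y" for p in males) / len(males)
print(f"After {GENERATIONS} generations: Neanderthal autosomal fraction ~{auto:.1%}, "
      f"Neanderthal mtDNA {neand_mt:.1%}, Neanderthal Y {neand_y:.1%}")
```

Under these assumptions the autosomal contribution settles at a small but non-zero percentage, while the Neanderthal mtDNA never enters the lineage at all (it only ever arrives via fathers and is not transmitted) and the Neanderthal Y disappears with the sterile hybrid sons, which is exactly the pattern the hypothesis is meant to explain.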

Could it be that Homo neanderthalensis males were able to mate with Homo sapiens females but that the reciprocal cross was unsuccessful? Alternatively, were male H.sapiens disastrously incapable of wooing the physically more powerful H.neanderthalensis females? Or were H.neanderthalensis females simply unable to give birth to hybrid offspring? Perhaps male H.neanderthalensis outcompeted early male H.sapiens and eventually the male Neanderthal genes gained dominance (and maybe H.sapiens females somehow out-competed H.neanderthalensis females for partners). All of these possibilities potentially explain how we share a common male ancestor 59,000 years ago, but a common female ancestor 170,000 years ago. Simultaneously, these hypotheses explain why comparisons of DNA sequences in mitochondrial DNA from Neanderthals and modern humans have indicated that there was no interbreeding between these two exceedingly similar species (Potts & Short, 1999:59). Mitochondrial DNA from Neanderthals simply may not have made it into the modern human lineage. The nuclear DNA of Neanderthal males, however, possibly did.

Paul Mason is a doctoral candidate in anthropology at Macquarie University. He is currently finishing his dissertation on the relation between music and movement, and the implications for cultural evolution, in fight dances in Indonesia and Brazil. When he is musing about evolution, he is not working on his dissertation. [Greg: PAUL! Get back to your grindstone!]

"