Wednesday, December 29, 2010

Honey before bedtime improves brain function, mental acuity

Sleep debt, or chronic partial sleep deprivation from poor-quality sleep, has been shown to have a detrimental effect on overall energy metabolism in the body. The toxic interaction of impaired energy metabolism and chronic partial sleep loss is the underlying cause of the reduction in melatonin and the loss of brain function as we age. Failure to provide sufficient energy for the brain during sleep has a significant adverse effect on brain metabolism and on memory and learning.
Ironically, the most severe negative effect on brain energy provision occurs from chronic increased food consumption and the resultant excessive insulin production that follows. Excess insulin in the central nervous system has a profound negative influence on brain metabolism and on memory and learning. Hyperinsulinism prevents glucose uptake into the brain causing partial brain starvation. Our brain is actually starving during periods of excess energy availability.
Insulin also inhibits an enzyme in the brain, allowing glutamate to accumulate in the synaptic space between brain neurons. This accumulation leads to irreversible damage to nerve cells in the brain and deterioration of brain function.
Consuming honey before bedtime reduces the release of stress hormones and maximizes the production and release of melatonin, a hormone also known as the “learning hormone.” Overproduction of stress hormones night after night inhibits the release of melatonin. When melatonin is produced normally, it inhibits the negative effects of too much insulin in the brain.
Quality sleep, which is critical for memory consolidation and vital in human learning, may therefore be achieved by a simple strategy: consuming a tablespoon of honey before bedtime. This strategy optimizes recovery physiology, reduces the chronic overproduction of adrenal stress hormones that inhibit melatonin, and produces the exact metabolic environment required for the release of melatonin, growth hormone and IGF-1, the key hormones of memory consolidation and learning.

More signs lung cancer screening could save lives

More research suggests that heavy smokers may benefit from screening for lung cancer to detect tumors at their earliest stages.
A new study finds that regular smokers who received three-dimensional X-rays to look for the presence of early tumors had a significantly lower risk of dying over a 10-year period.
The results are in keeping with those of a much larger study published last month, which showed that these 3-D X-rays, or CT scans, reduced the death rate among 53,000 current and former heavy smokers by 20 percent compared with screening using regular chest X-rays. That previous finding was "very good news in the field," said Dr. Bruce Johnson of the Dana Farber Cancer Institute, who treats lung cancer patients and reviewed the results.
This latest study, published in the journal Lung Cancer, looked at death rates in a different, smaller population of heavy smokers, and estimated that those who received up to two CT scans would have between a 36 and 64 percent lower risk of dying, compared to those who went unscreened.
The data are "consistent" with earlier studies but there are still many issues to resolve regarding lung cancer screening, Johnson said.
For one, scientists haven't yet worked out how often to screen people, and when to start. It is not clear when or how guidelines for lung cancer screening could be drawn up, and until they are, insurers including government programs such as Medicare are unlikely to pay the average $300 cost of a scan.
Furthermore, an April study showed that 21 percent of a patient's initial lung CT scans show suspicious lesions that turn out not to be cancer, but lead to needless invasive follow-up procedures and radiation exposure, as well as stress and anxiety for patients and their families.
The high so-called "false positive" rate is an issue, said Dr. James Hanley of McGill University, who also reviewed the findings, but many mammograms also find lesions that turn out to be benign. And for lung cancer, doctors know there is a high false-positive rate and have a set protocol to follow to determine which lesions are dangerous, added Johnson.

Lung cancer kills 1.2 million people a year globally and it will kill 157,000 people in the United States alone this year, according to the American Cancer Society.

Tobacco use accounts for some 85 percent of lung cancer cases in the U.S., and one estimate puts a smoker's lifetime absolute risk of developing lung cancer between 12 percent and 17 percent. Five-year survival rates for lung cancer are low.

In recent years, CT scans, in particular, have been promoted by some hospitals and advocacy groups for lung cancer screening, even though studies had not yet shown definitively whether such screening saves lives.

In 2006, Dr. Claudia Henschke, currently based at Mount Sinai School of Medicine and Arizona State University, caused a stir when she published a study concluding that 80 percent of lung-cancer deaths could be prevented through widespread use of spiral CT.

Her ideas were controversial to start with, especially when other researchers found her work had been paid for by a tobacco company.

In the current study, funded in part by manufacturers of CT scanners (along with government and other sources), Henschke and her colleagues compared outcomes for nearly 8,000 smokers and former smokers who volunteered to undergo CT scans to outcomes in two sets of people with smoking histories who were not scanned.

The three groups of people had some important differences, such as in average age and how long and heavily they had smoked, so the researchers had to use mathematical tools to try to eliminate the influence of those differences, said Hanley. For instance, to compare death rates, the researchers tracked how many people died among those who were screened, then pulled out all the people with similar underlying characteristics in the other two groups and looked at their death rates, Hanley explained.

A total of 64 people died in the screened population, the authors report -- but applying the death rate among people with the same underlying characteristics in one of the unscreened populations, they estimated that the number of deaths would have been 100. This translates into a 36 percent lower risk of dying among the screened population.
Applying the same methods to the other unscreened population, the authors estimated that screening was associated with a 64 percent lower risk of dying.
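The risk-reduction arithmetic behind these figures is simple to reproduce. A minimal sketch using the numbers quoted above (64 observed deaths versus 100 expected in the first comparison):

```python
def risk_reduction(observed_deaths, expected_deaths):
    """Percent lower risk of dying in the screened group: compare the
    deaths observed among screened people with the deaths expected if
    they had shared the death rate of a matched unscreened population."""
    return round(100 * (1 - observed_deaths / expected_deaths), 1)

# Figures quoted above: 64 deaths observed among the screened volunteers,
# versus an estimated 100 expected from the first unscreened comparison group.
print(risk_reduction(64, 100))  # → 36.0, the reported 36 percent lower risk
```

The 64 percent figure for the second comparison population follows from the same calculation with that population's (larger) expected death count.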
Overall, research is suggesting that CT scans of people at risk of lung cancer might make a dent in cancer mortality, and it's possible that more frequent screening might make an even bigger dent, Hanley noted. "If screening is going to work, you've got to keep at it."

Friday, December 24, 2010

'Un-Growth Hormone' Increases Longevity, Researchers Find

A compound that acts in the opposite way from growth hormone can reverse some of the signs of aging, a research team that includes a Saint Louis University physician has shown. The finding may seem counterintuitive to some older adults who take growth hormone, thinking it will help revitalize them.
Their research was published in the Dec. 6 online edition of the Proceedings of the National Academy of Sciences.
 The findings are significant, says John E. Morley, M.D., study co-investigator and director of the divisions of geriatric medicine and endocrinology at Saint Louis University School of Medicine, because people sometimes take growth hormone, believing it will be the fountain of youth.
"Many older people have been taking growth hormone to rejuvenate themselves," Morley said. "These results strongly suggest that growth hormone, when given to middle aged and older people, may be hazardous."
The scientists studied the compound MZ-5-156, a "growth hormone-releasing hormone (GHRH) antagonist." They conducted their research in the SAMP8 mouse model, a strain engineered for studies of the aging process. Overall, the researchers found that MZ-5-156 had positive effects on oxidative stress in the brain, improving cognition, telomerase activity (the actions of an enzyme which protects DNA material) and life span, while decreasing tumor activity.
MZ-5-156, like many GHRH antagonists, inhibited several human cancers, including prostate, breast, brain and lung cancers. It also had positive effects on learning, and is linked to improvements in short-term memory. The antioxidant actions led to less oxidative stress, reversing cognitive impairment in the aging mouse.
William A. Banks, M.D., lead study author and professor of internal medicine and geriatrics at the University of Washington School of Medicine in Seattle, said the results lead the team "to determine that antagonists of growth hormone-releasing hormone have beneficial effects on aging."
The study team included as its corresponding author Andrew V. Schally, M.D., Ph.D., a professor in the department of pathology and division of hematology/oncology at the University of Miami Miller School of Medicine.

Placebos Work -- Even Without Deception

For most of us, the "placebo effect" is synonymous with the power of positive thinking; it works because you believe you're taking a real drug. But a new study rattles this assumption.
Researchers at Harvard Medical School's Osher Research Center and Beth Israel Deaconess Medical Center (BIDMC) have found that placebos work even when administered without the seemingly requisite deception.
 The study is published December 22 in PLoS ONE.
Placebos -- or dummy pills -- are typically used in clinical trials as controls for potential new medications. Even though they contain no active ingredients, patients often respond to them. In fact, data on placebos is so compelling that many American physicians (one study estimates 50 percent) secretly give placebos to unsuspecting patients.
Because such "deception" is ethically questionable, HMS associate professor of medicine Ted Kaptchuk teamed up with colleagues at BIDMC to explore whether or not the power of placebos can be harnessed honestly and respectfully.
To test this, the researchers divided 80 patients suffering from irritable bowel syndrome (IBS) into two groups: one group, the controls, received no treatment, while the other received a regimen of placebos -- honestly described as "like sugar pills" -- which they were instructed to take twice daily.
"Not only did we make it absolutely clear that these pills had no active ingredient and were made from inert substances, but we actually had 'placebo' printed on the bottle," says Kaptchuk. "We told the patients that they didn't have to even believe in the placebo effect. Just take the pills."
For a three-week period, the patients were monitored. By the end of the trial, nearly twice as many patients treated with the placebo reported adequate symptom relief as compared to the control group (59 percent vs. 35 percent). Also, on other outcome measures, patients taking the placebo doubled their rates of improvement to a degree roughly equivalent to the effects of the most powerful IBS medications.
"I didn't think it would work," says senior author Anthony Lembo, HMS associate professor of medicine at BIDMC and an expert on IBS. "I felt awkward asking patients to literally take a placebo. But to my surprise, it seemed to work for many of them."
The authors caution that this study is small and limited in scope and simply opens the door to the notion that placebos are effective even for the fully informed patient -- a hypothesis that will need to be confirmed in larger trials.
"Nevertheless," says Kaptchuk, "these findings suggest that rather than mere positive thinking, there may be significant benefit to the very performance of medical ritual. I'm excited about studying this further. Placebo may work even if patients know it is a placebo."

Healing Ear Infections Faster: Otolaryngologists Have a New Device for Inserting Ear Tubes

Otolaryngologists now use a stainless steel device, inserted into the ear, that provides an easier, safer and faster treatment for a common problem associated with earaches: chronic otitis media with effusion. The tiny device consists of a hollow rod with a collar that holds the tube in place, allowing the surgeon to insert the tube with one motion and suction out any residual fluid that might be in the middle ear space.
 Three out of four children fall victim to an ear infection by the time they're three years old, many of them during winter when viruses abound. Treating the common problem can be a tedious procedure, but a new device makes healing ears simple.
For years, Nancy Mazurianic has watched her son Tristen suffer from painful ear infections.
"It was awful," Nancy said. "You hate to see your kid in pain like that."
 For some patients, doctors insert tiny tubes inside the ears to ease pressure and fluid buildup and relieve pain. Multiple medical instruments are normally used for the procedure, putting delicate ears at risk for injury. "The skin is so thin in the ear canal that if you just touch it with an instrument, it will start to bleed," said Bradley Kesser, M.D., an otolaryngologist at the University of Virginia in Charlottesville, Va.
Now, otolaryngologists -- or ear, nose and throat specialists -- have a new tiny device that makes the procedure safer, easier and faster so there's less risk of injury.
"With a single instrument, we're able to insert the tube and suction out any residual fluid that might be in the middle ear space," Dr. Kesser said.
Under general anesthesia, doctors make a small incision in the eardrum. Then the new device, a hollow rod holding a tiny tube, is inserted into this small opening with one motion. The tube lets air in and drains any fluid out. Eventually, the tube falls out.
"We've devised an instrument to increase the reliability of ear tube insertion, increase its safety and potentially increase its speed," Dr. Kesser said.
Tristen's procedure was a success, and his mom is thrilled to see him pain-free.
"It's been just amazing," Nancy said. "He's been a happy little camper [and] never complains about the ears at all anymore."
Happy is a good thing to hear.
ABOUT EAR INFECTIONS: There are three main parts to the human ear: outer, middle and inner ear. The outer ear is the part you can see and opens into the ear canal leading to the middle ear. The middle ear is a closed, air-filled chamber, separated from the outer ear by the ear drum, and ventilated by the Eustachian tube. Sometimes the pressure in the middle ear becomes higher or lower than that in the outer ear, causing hearing loss, severe pain, and the accumulation of fluid in the middle ear. The inner ear contains the hearing nerve that leads to the brain. It detects sound vibrations and turns them into electrical nerve impulses, which the brain then interprets as sound.
PREVENTING EAR INFECTIONS: Chronic middle ear fluid is a condition known as otitis media with effusion (OME). When this condition becomes persistent, and antibiotics aren't effective, it is often treated with surgical insertion of ear ventilation tubes. More than 700,000 children undergo this procedure each year. But the tubes often fall out within four to seven months, and the patients may have a recurrence of the condition.

Sunday, December 12, 2010

A Swarm of Ancient Stars

We know of about 150 of the rich collections of old stars called globular clusters that orbit our galaxy, the Milky Way. A sharp new image of Messier 107, captured by the Wide Field Imager on the 2.2-metre telescope at ESO's La Silla Observatory in Chile, displays the structure of one such globular cluster in exquisite detail. Studying these stellar swarms has revealed much about the history of our galaxy and how stars evolve.
The globular cluster Messier 107, also known as NGC 6171, is a compact and ancient family of stars that lies about 21 000 light-years away. Messier 107 is a bustling metropolis: thousands of stars in globular clusters like this one are concentrated into a space whose diameter is only about twenty times the distance between our Sun and its nearest stellar neighbour, Alpha Centauri. A significant number of these stars have already evolved into red giants, one of the last stages of a star's life, and have a yellowish colour in this image.

Globular clusters are among the oldest objects in the Universe. And since the stars within a globular cluster formed from the same cloud of interstellar matter at roughly the same time -- typically over 10 billion years ago -- they are all low-mass stars, as lightweights burn their hydrogen fuel supply much more slowly than stellar behemoths. Globular clusters formed during the earliest stages in the formation of their host galaxies and therefore studying these objects can give significant insights into how galaxies, and their component stars, evolve.
Messier 107 has undergone intensive observations, being one of the 160 stellar fields that was selected for the Pre-FLAMES Survey -- a preliminary survey conducted between 1999 and 2002 using the 2.2-metre telescope at ESO's La Silla Observatory in Chile, to find suitable stars for follow-up observations with the VLT's spectroscopic instrument FLAMES (Fibre Large Array Multi-Element Spectrograph). Using FLAMES, it is possible to observe up to 130 targets at the same time, making it particularly well suited to the spectroscopic study of densely populated stellar fields, such as globular clusters.
M107 is not visible to the naked eye, but, with an apparent magnitude of about eight, it can easily be observed from a dark site with binoculars or a small telescope. The globular cluster is about 13 arcminutes across, which corresponds to about 80 light-years at its distance, and it is found in the constellation of Ophiuchus, north of the pincers of Scorpius. Roughly half of the Milky Way's known globular clusters are actually found in the constellations of Sagittarius, Scorpius and Ophiuchus, in the general direction of the centre of the Milky Way. This is because they are all in elongated orbits around the central region and are on average most likely to be seen in this direction.
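The conversion from apparent size to physical size is the standard small-angle relation (physical extent = distance × angle in radians). A quick check of the quoted figures:

```python
import math

def physical_size(distance_ly, angular_size_arcmin):
    """Physical extent subtended by a small angle at a given distance
    (small-angle approximation: size = distance * angle in radians)."""
    angle_rad = math.radians(angular_size_arcmin / 60)  # arcmin -> degrees -> radians
    return distance_ly * angle_rad

# Messier 107: about 13 arcminutes across at roughly 21,000 light-years.
print(physical_size(21_000, 13))  # ≈ 79 light-years, matching the quoted ~80
```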
Messier 107 was discovered by Pierre Méchain in April 1782 and it was added to the list of seven Additional Messier Objects that were originally not included in the final version of Messier's catalogue, which was published the previous year. On 12 May 1793, it was independently rediscovered by William Herschel, who was able to resolve this globular cluster into stars for the first time. But it was not until 1947 that this globular cluster finally took its place in Messier's catalogue as M107, making it the most recent star cluster to be added to this famous list.
This image is composed from exposures taken through the blue, green and near-infrared filters by the Wide Field Camera (WFI) on the MPG/ESO 2.2-metre telescope at the La Silla Observatory in Chile.

New Clues to How Earth, Moon, and Mars Formed

New research reveals that the so-called highly siderophile, or metal-loving, elements like gold and platinum found in the mantles of Earth, the Moon and Mars were delivered by massive impactors during the final phase of planet formation over 4.5 billion years ago. The predicted sizes of the projectiles, which hit within tens of millions of years of the giant impact that produced our Moon, are consistent with current planet formation models as well as physical evidence such as the size distributions of asteroids and ancient Martian impact scars.
The researchers predict that the largest of the late impactors on Earth -- at 1,500 to 2,000 miles in diameter -- potentially modified Earth's obliquity by approximately 10 degrees, while those that hit the Moon, at approximately 150-200 miles, may have delivered water to its mantle.
The team that conducted this study comprises solar system dynamicists, such as Dr. William Bottke and Dr. David Nesvorny from the Southwest Research Institute, and geophysical-geochemical modelers, such as Prof. Richard J. Walker from the University of Maryland, Prof. James Day from the University of Maryland and Scripps Institution of Oceanography, and Prof. Linda Elkins-Tanton, from the Massachusetts Institute of Technology. Together, they represent three teams within the NASA Lunar Science Institute (NLSI).
A fundamental problem in planetary science is to determine how Earth, the Moon, and other inner solar system planets formed and evolved. This is a difficult question to answer given that billions of years of history have steadily erased evidence for these early events. Despite this, critical clues can still be found to help determine what happened, provided one knows where to look.
For instance, careful study of lunar samples brought back by the Apollo astronauts, combined with numerical modeling work, indicates that the Moon formed as a result of a collision between a Mars-sized body and the early Earth about 4.5 billion years ago. While the idea that the Earth-Moon system owes its existence to a single, random event was initially viewed as radical, it is now believed that such large impacts were commonplace during the end stages of planet formation. The giant impact is believed to have led to a final phase of core formation and global magma oceans on both the Earth and Moon.
For the giant impact hypothesis to be correct, one might expect samples from the Earth and Moon's mantle, brought to the surface by volcanic activity, to back it up. In particular, scientists have examined the abundance in these rocks of so-called highly siderophile, or metal-loving, elements: Re, Os, Ir, Ru, Pt, Rh, Pd, Au. These elements should have followed the iron and other metals to the core in the aftermath of the Moon-forming event, leaving the rocky crusts and mantles of these bodies void of these elements. Accordingly, their near-absence from mantle rocks should provide a key test of the giant impact model.
However, as described by team member Walker, "The big problem for the modelers is that these metals are not missing at all, but instead are modestly plentiful." Team member Day adds, "This is a good thing for anyone who likes their gold wedding rings or the cleaner air provided by the palladium in their car's catalytic convertors."
A proposed solution to this conundrum is that highly siderophile elements were indeed stripped from the mantle by the effects of the giant impact, but were then partially replenished by later impacts from the original building blocks of the planets, called planetesimals. This is not a surprise -- planet formation models predict such late impacts -- but their nature, numbers, and most especially size of the accreting bodies are unknown. Presumably, they could have represented the accretion of many small bodies or a few large events. To match observations, the late-arriving planetesimals need to deliver 0.5 percent of the Earth's mass to Earth's mantle, equivalent to one-third of the mass of the Moon, and about 1,200 times less mass to the Moon's mantle.
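The mass bookkeeping in this paragraph is easy to check against standard values (the Earth and Moon masses below are textbook figures, not quoted in the article):

```python
# Textbook masses (standard values, not from the article).
EARTH_MASS_KG = 5.97e24
MOON_MASS_KG = 7.35e22

# Late-arriving planetesimals deliver 0.5 percent of Earth's mass.
late_accretion_kg = 0.005 * EARTH_MASS_KG

# Compare that delivery to the Moon's total mass.
fraction_of_moon = late_accretion_kg / MOON_MASS_KG
print(fraction_of_moon)  # ≈ 0.41 -- close to the article's "one-third", given rounded inputs
```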
Using numerical models, the team showed that they could reproduce these amounts if the late accretion population was dominated by massive projectiles. Their results indicate the largest Earth impactor was 1,500-2,000 miles in diameter, roughly the size of Pluto, while those hitting the Moon were only 150-200 miles across. Lead author Bottke says, "These impactors are thought to be large enough to produce the observed enrichments in highly siderophile elements, but not so large that their fragmented cores joined with the planet's core. They probably represent the largest objects to hit those worlds since the giant impact that formed our Moon."
Intriguingly, the predicted distribution of projectile sizes, where most of the mass of the population is found among the largest objects, is consistent with other evidence.
  • New models describing how planetesimals form and evolve suggest the biggest ones efficiently gobble up the smaller ones and run away in terms of size, leaving behind a population of enormous objects largely resistant to collisional erosion.
  • The last surviving planetesimal populations in the inner solar system are the asteroids. In the inner asteroid belt, the asteroids Ceres, Pallas and Vesta, at 600, 300 and 300 miles across, respectively, dwarf the next largest asteroids at 150 miles across. No asteroids with "in-between" sizes are observed in this region.
  • The sizes of the oldest and largest craters on Mars, many of which are thousands of miles across, are consistent with it being bombarded by an inner asteroid belt-like population dominated by large bodies early in its history.
These results make it possible to make some interesting predictions about the evolution of the Earth, Mars and the Moon. For example:
  • The largest projectiles that struck Earth were capable of modifying its spin axis, on average, by approximately 10 degrees.
  • The largest impactor to strike Mars, according to this work and the abundance of highly siderophile elements found in Martian meteorites, was 900-1,100 miles across. This is approximately the projectile size needed to create the proposed Borealis basin that may have produced Mars' global hemispheric dichotomy.
  • For the Moon, the projectiles would have been large enough to have created the South-Pole-Aitkin basin or perhaps a comparable-sized early basin. Moreover, if they contained even a trace amount of volatiles, then the same processes that brought highly siderophile elements to the Moon's mantle may have also delivered its observed abundance of water.

Monday, November 29, 2010

Same Face May Look Male or Female, Depending on Where It Appears in a Person's Field of View

Neuroscientists at MIT and Harvard have made the surprising discovery that the brain sees some faces as male when they appear in one area of a person's field of view, but female when they appear in a different location.
The findings challenge a longstanding tenet of neuroscience -- that how the brain sees an object should not depend on where the object is located relative to the observer, says Arash Afraz, a postdoctoral associate at MIT's McGovern Institute for Brain Research and lead author of a new paper on the work.
"It's the kind of thing you would not predict -- that you would look at two identical faces and think they look different," says Afraz. He and two colleagues from Harvard, Patrick Cavanagh and Maryam Vaziri Pashkam, described their findings in the Nov. 24 online edition of the journal Current Biology.
In the real world, the brain's inconsistency in assigning gender to faces isn't noticeable, because there are so many other clues: hair and clothing, for example. But when people view computer-generated faces, stripped of all other gender-identifying features, a pattern of biases, based on location of the face, emerges.
The researchers showed subjects a random series of faces, ranging along a spectrum of very male to very female, and asked them to classify the faces by gender. For the more androgynous faces, subjects rated the same faces as male or female, depending on where they appeared.
Study participants were told to fix their gaze at the center of the screen, as faces were flashed elsewhere on the screen for 50 milliseconds each. Assuming that the subjects sat about 22 inches from the monitor, the faces appeared to be about three-quarters of an inch tall.
The patterns of male and female biases were different for different people. That is, some people judged androgynous faces as female every time they appeared in the upper right corner, while others judged faces in that same location as male. Subjects also showed biases when judging the age of faces, but the pattern for age bias was independent from the pattern for gender bias in each individual.
Sample size
Afraz believes this inconsistency in identifying genders is due to a sampling bias, which can also be seen in statistical tools such as polls. For example, if you surveyed 1,000 Bostonians, asking if they were Democrats or Republicans, you would probably get a fairly accurate representation of these percentages in the city as a whole, because the sample size is so large. However, if you took a much smaller sample, perhaps five people who live across the street from you, you might get 100 percent Democrats, or 100 percent Republicans. "You wouldn't have any consistency, because your sample is too small," says Afraz.
He believes the same thing happens in the brain. In the visual cortex, where images are processed, cells are grouped by which part of the visual scene they analyze. Within each of those groups, there is probably a relatively small number of neurons devoted to interpreting gender of faces. The smaller the image, the fewer cells are activated, so cells that respond to female faces may dominate. In a different part of the visual cortex, cells that respond to male faces may dominate.
"It's all a matter of undersampling," says Afraz.
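Afraz's polling analogy lends itself to a quick simulation. The sketch below is purely illustrative (it is not the study's actual analysis): it draws repeated small and large samples from a perfectly balanced 50/50 population and shows how only the small samples swing to extremes.

```python
import random

random.seed(1)  # fixed seed so the illustration is repeatable

def sample_estimate(population_fraction, sample_size):
    """Fraction of 'hits' observed in one random sample drawn from a
    population where the true fraction is population_fraction."""
    hits = sum(random.random() < population_fraction for _ in range(sample_size))
    return hits / sample_size

# An ambiguous (50/50) stimulus judged by pools of very different sizes.
small_pools = [sample_estimate(0.5, 5) for _ in range(4)]     # like polling 5 neighbors
large_pools = [sample_estimate(0.5, 1000) for _ in range(4)]  # like polling 1,000 Bostonians
print(small_pools)  # estimates can land anywhere from 0.0 to 1.0
print(large_pools)  # estimates cluster tightly around the true 0.5
```

A brain region with only a handful of gender-tuned neurons behaves like the small pools: its "vote" can be far from 50/50 purely by chance.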
Michael Tarr, codirector of the Center for the Neural Basis of Cognition at Carnegie Mellon University, says the findings add to the growing evidence that the brain is not always consistent in how it perceives objects under different circumstances. He adds that the study leaves unanswered the question of why each person develops different bias patterns. "Is it just noise within the system, or is some other kind of learning occurring that they haven't figured out yet?" asks Tarr, who was not involved in the research. "That's really the fascinating question."

Tuning an 'Ear' to the Music of Gravitational Waves

A team of scientists and engineers at NASA's Jet Propulsion Laboratory has brought the world one step closer to "hearing" gravitational waves -- ripples in space and time predicted by Albert Einstein in the early 20th century.
The research, performed in a lab at JPL in Pasadena, Calif., tested a system of lasers that would fly aboard the proposed space mission called Laser Interferometer Space Antenna, or LISA. The mission's goal is to detect the subtle, whisper-like signals of gravitational waves, which have yet to be directly observed. This is no easy task, and many challenges lie ahead.
The new JPL tests hit one significant milestone, demonstrating for the first time that noise, or random fluctuations, in LISA's laser beams can be hushed enough to hear the sweet sounds of the elusive waves.
"In order to detect gravitational waves, we have to make extremely precise measurements," said Bill Klipstein, a physicist at JPL. "Our lasers are much noisier than what we want to measure, so we have to remove that noise carefully to get a clear signal; it's a little like listening for a feather to drop in the middle of a heavy rainstorm." Klipstein is a co-author of a paper about the lab tests that appeared in a recent issue of Physical Review Letters.
The JPL team is one of many groups working on LISA, a joint European Space Agency and NASA mission proposal, which, if selected, would launch in 2020 or later. In August of this year, LISA was given a high recommendation by the 2010 U.S. National Research Council decadal report on astronomy and astrophysics.
One of LISA's primary goals is to detect gravitational waves directly. Studies of these cosmic waves began in earnest decades ago when, in 1974, researchers discovered a pair of orbiting dead stars -- a type called pulsars -- that were spiraling closer and closer together due to an unexplainable loss of energy. That energy was later shown to be in the form of gravitational waves. This was the first indirect proof of the waves, and ultimately earned the 1993 Nobel Prize in Physics.
LISA is expected to not only "hear" the waves, but also learn more about their sources -- massive objects such as black holes and dead stars, which sing the waves like melodies out to the universe as the objects accelerate through space and time. The mission would be able to detect gravitational waves from massive objects in our Milky Way galaxy as well as distant galaxies, allowing scientists to tune into an entirely new language of our universe.
The proposed mission would amount to a giant triangle of three distinct spacecraft, each connected by laser beams. These spacecraft would fly in formation around the sun, about 20 degrees behind Earth. Each one would hold a cube made of platinum and gold that floats freely in space. As gravitational waves pass by the spacecraft, they would cause the distance between the cubes, or test masses, to change by almost imperceptible amounts -- but enough for LISA's extremely sensitive instruments to be able to detect corresponding changes in the connecting laser beams.
"The gravitational waves will cause the 'corks' to bob around, but just by a tiny bit," said Glenn de Vine, a research scientist at JPL and co-author of the recent study. "My friend once said it's sort of like rubber duckies bouncing around in a bathtub."
The JPL team has spent the last six years working on aspects of this LISA technology, including instruments called phase meters, which are sophisticated laser beam detectors. The latest research accomplishes one of their main goals -- to reduce the laser noise detected by the phase meters by one billion times, or enough to detect the signal of gravitational waves.
The job is like trying to find a proton in a haystack. Gravitational waves would change the distance between two spacecraft -- flying 5 million kilometers (3.1 million miles) apart -- by about a picometer, which is about 100 million times smaller than the width of a human hair. In other words, the spacecraft are 5,000,000,000 meters apart, and LISA would detect changes in that distance on the order of .000000000001 meters!
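The scale of that measurement can be sanity-checked with a few lines of arithmetic. This is only a back-of-the-envelope sketch using the figures quoted in the article (the hair width is a typical textbook value, not from the source):

```python
# Back-of-the-envelope check of LISA's measurement challenge,
# using the numbers quoted in the article.

arm_length_m = 5_000_000_000   # 5 million km between spacecraft, in meters
displacement_m = 1e-12         # ~1 picometer change to be detected
hair_width_m = 1e-4            # typical human hair width, ~100 micrometers

# Fractional precision: the detectable change relative to the full arm length.
fractional_precision = displacement_m / arm_length_m
print(f"fractional precision: {fractional_precision:.0e}")   # ~2e-22

# How many times smaller than a human hair is a picometer?
hair_ratio = hair_width_m / displacement_m
print(f"hair width / picometer: {hair_ratio:.0e}")           # ~1e+08, i.e. 100 million
```

The ratio of roughly one part in 10^22 is why the laser noise, discussed next, has to be suppressed so aggressively.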
At the heart of the LISA laser technology is a process known as interferometry, which ultimately reveals if the distances traveled by the laser beams of light, and thus the distance between the three spacecraft, have changed due to gravitational waves. The process is like combining ocean waves -- sometimes they pile up and grow bigger, and sometimes they cancel each other out or diminish in size.
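The ocean-wave analogy can be made concrete with a toy superposition, illustrative only and not the actual LISA signal chain: two waves of equal amplitude reinforce each other when in phase and cancel when half a wavelength out of phase.

```python
import math

def combine(phase_difference, n=1000):
    """Superpose two unit-amplitude sine waves separated by a phase
    difference and return the peak amplitude of the combined wave."""
    peak = 0.0
    for i in range(n):
        t = 2 * math.pi * i / n
        combined = math.sin(t) + math.sin(t + phase_difference)
        peak = max(peak, abs(combined))
    return peak

print(combine(0.0))        # in phase: the waves pile up, peak amplitude ~2
print(combine(math.pi))    # half a wavelength apart: the waves cancel, peak ~0
```

A shift in the relative phase of LISA's beams, and hence in this interference, is what encodes a change in the distance between spacecraft.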
"We can't use a tape measure to get the distances between these spacecraft," said de Vine, "So we use lasers. The wavelengths of the lasers are like our tick marks on a tape measure."
On LISA, the laser light is detected by the phase meters and then sent to the ground, where it is "interfered" via data processing (the process is called time-delay interferometry for this reason -- there's a delay before the interferometry technique is applied). If the interference pattern between the laser beams is the same, then the spacecraft haven't moved relative to each other. If the interference pattern changes, then they did move. If all other reasons for spacecraft movement have been eliminated, then gravitational waves are the culprit.
That's the basic idea. In reality, there are a host of other factors that make this process more complex. For one thing, the spacecraft don't stay put. They naturally move around for reasons that have nothing to do with gravitational waves. Another challenge is the laser beam noise. How do you know if the spacecraft moved because of gravitational waves, or if noise in the laser is just making it seem as if the spacecraft moved?
This is the question the JPL team recently took to their laboratory, which mimics the LISA system. They introduced random, artificial noise into their lasers and then, through a complicated set of data processing actions, subtracted most of it back out. Their recent success demonstrated that they could see changes in the distances between mock spacecraft on the order of a picometer.
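The principle behind subtracting the noise back out can be sketched in a toy model. This is a deliberate simplification with invented numbers, not the team's time-delay interferometry: the same laser noise appears in two measurement channels, so differencing the channels cancels the common noise while a far smaller differential signal survives.

```python
import random

random.seed(1)

# Toy model: a displacement signal a billion times smaller than the laser noise.
noise_scale = 1.0    # arbitrary units of laser noise
signal = 1e-9        # tiny "picometer-scale" displacement to recover

n = 10_000
# The same laser noise appears in both channels; only channel_b carries the signal.
common_noise = [random.gauss(0.0, noise_scale) for _ in range(n)]
channel_a = list(common_noise)
channel_b = [noise + signal for noise in common_noise]

# Differencing the channels cancels the common-mode noise in this toy,
# leaving the buried differential signal.
residual = [b - a for a, b in zip(channel_a, channel_b)]
recovered = sum(residual) / n
print(recovered)   # ~1e-9: the signal emerges once the common noise is removed
```

The real analysis is far more involved, but the same idea applies: what the two channels share is noise, and what differs between them is signal.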
In essence, they hushed the roar of the laser beams, so that LISA, if selected for construction, will be able to hear the universe softly hum a tune of gravitational waves.
Other authors of the paper from JPL are Brent Ware; Kirk McKenzie; Robert E. Spero and Daniel A. Shaddock, who has a joint post with JPL and the Australian National University in Canberra.
LISA is a proposed joint NASA and European Space Agency mission. The NASA portion of the mission is managed by NASA's Goddard Space Flight Center, Greenbelt, Md. Some of the key instrumentation studies for the mission are being performed at JPL. The U.S. mission scientist is Tom Prince at the California Institute of Technology in Pasadena. JPL is managed by Caltech for NASA.

Saturday, November 27, 2010

How People Perceive Sour Flavors: Proton Current Drives Action Potentials in Taste Cells

This Thanksgiving, when the tartness of cranberry sauce smacks your tongue, consider the power of sour. Neurobiology researchers at the University of Southern California have made a surprising discovery about how some cells respond to sour tastes.
Of the five taste sensations -- sweet, bitter, sour, salty and umami -- sour is arguably the strongest yet the least understood. Sour is the sensation evoked by substances that are acidic, such as lemons and pickles. The more acidic the substance, the more sour the taste.
Acids release protons. How protons activate the taste system had not been understood. The USC team expected to find protons from acids binding to the outside of the cell and opening a pore in the membrane that would allow sodium to enter the cell. Sodium's entry would send an electrical response to the brain, announcing the sensation that we perceive as sour.
Instead, the researchers found that the protons were entering the cell and causing the electrical response directly.
The finding is to be published in the Proceedings of the National Academy of Sciences (PNAS).
"In order to understand how sour works, we need to understand how the cells that are responsive to sour detect the protons," said senior author Emily Liman, associate professor of neurobiology in the USC College of Letters, Arts and Sciences.
"In the past, it's been difficult to address this question because the taste buds on the tongue are heterogeneous. Among the 50 or so cells in each taste bud there are cells responding to each of the five tastes. But if we want to know how sour works, we need to measure activity specifically in the sour sensitive taste cells and determine what is special about them that allows them to respond to protons."
Liman and her team bred genetically modified mice and marked their sour cells with a yellow fluorescent protein. Then they recorded the electrical responses from just those cells to protons.
The ability to sense protons with a mechanism that does not rely on sodium has important implications for how different tastes interact, Liman speculates.
"This mechanism is very appropriate for the taste system because we can eat something that has a lot of protons and not much sodium or other ions, and the taste system will still be able to detect sour," she said. "It makes sense that nature would have built a taste cell like this, so as not to confuse salty with sour."
In the future, the research may have practical applications for cooks and the food industry.
"We're at the early stages of identifying the molecules that contribute to sour taste," Liman said. "Once we've understood the nature of the molecules that sense sour, we can start thinking about how they might be modified and how that might change the way things taste. We may also find that the number or function of these molecules changes during the course of development or during aging."

New Imaging Technique Accurately Finds Cancer Cells, Fast

The long, anxious wait for biopsy results could soon be over, thanks to a tissue-imaging technique developed at the University of Illinois.
The research team demonstrated the novel microscopy technique, called nonlinear interferometric vibrational imaging (NIVI), on rat breast-cancer cells and tissues. It produced easy-to-read, color-coded images of tissue, outlining clear tumor boundaries, with more than 99 percent confidence -- in less than five minutes.

Led by professor and physician Stephen A. Boppart, who holds appointments in electrical and computer engineering, bioengineering and medicine, the Illinois researchers will publish their findings on the cover of the Dec. 1 issue of the journal Cancer Research.
In addition to taking a day or more for results, current diagnostic methods are subjective, based on visual interpretations of cell shape and structure. A small sample of suspect tissue is taken from a patient, and a stain is added to make certain features of the cells easier to see. A pathologist looks at the sample under a microscope to see if the cells look unusual, often consulting other pathologists to confirm a diagnosis.
"The diagnosis is made based on very subjective interpretation -- how the cells are laid out, the structure, the morphology," said Boppart, who is also affiliated with the university's Beckman Institute for Advanced Science and Technology. "This is what we call the gold standard for diagnosis. We want to make the process of medical diagnostics more quantitative and more rapid."
Rather than focus on cell and tissue structure, NIVI assesses and constructs images based on molecular composition. Normal cells have high concentrations of lipids, but cancerous cells produce more protein. By identifying cells with abnormally high protein concentrations, the researchers could accurately differentiate between tumors and healthy tissue -- without waiting for stain to set in.
Each type of molecule has a unique vibrational state of energy in its bonds. When the resonance of that vibration is enhanced, it can produce a signal that can be used to identify cells with high concentrations of that molecule. NIVI uses two beams of light to excite molecules in a tissue sample.
"The analogy is like pushing someone on a swing. If you push at the right time point, the person on the swing will go higher and higher. If you don't push at the right point in the swing, the person stops," Boppart said. "If we use the right optical frequencies to excite these vibrational states, we can enhance the resonance and the signal."
One of NIVI's two beams of light acts as a reference, so that combining that beam with the signal produced by the excited sample cancels out background noise and isolates the molecular signal. Statistical analysis of the resulting spectrum produces a color-coded image at each point in the tissue: blue for normal cells, red for cancer.
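The color-coding step amounts to a per-point classification. As an illustration only (the ratio and threshold below are invented for the sketch, not taken from the study), one could map each tissue point's protein-to-lipid signal balance to a color:

```python
def classify_point(protein_signal, lipid_signal, threshold=1.0):
    """Toy classifier in the spirit of NIVI's color coding: points whose
    protein signal dominates the lipid signal are flagged as suspect.
    The threshold is an invented illustrative value, not from the study."""
    ratio = protein_signal / lipid_signal
    return "red (suspect)" if ratio > threshold else "blue (normal)"

# Hypothetical (protein, lipid) signal strengths at three tissue points.
points = [(0.4, 1.2), (1.5, 0.6), (0.9, 1.0)]
for protein, lipid in points:
    print(classify_point(protein, lipid))
```

The actual technique performs a statistical analysis of the full vibrational spectrum at each point, but the output is the same kind of per-point red/blue decision.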
Another advantage of the NIVI technique is more exact mapping of tumor boundaries, a murky area for many pathologists. The margin of uncertainty in visual diagnosis can be a wide area of tissue as pathologists struggle to discern where a tumor ends and normal tissue begins. The red-blue color coding shows an uncertain boundary zone of about 100 microns -- merely a cell or two.
"Sometimes it's very hard to tell visually whether a cell is normal or abnormal," Boppart said. "But molecularly, there are fairly clear signatures."
The researchers are working to improve and broaden the application of their technique. By tuning the frequency of the laser beams, they could test for other types of molecules. They are working to make it faster, for real-time imaging, and exploring new laser sources to make NIVI more compact or even portable. They also are developing new light delivery systems, such as catheters, probes or needles that can test tissue without removing samples.
"As we get better spectral resolution and broader spectral range, we can have more flexibility in identifying different molecules," Boppart said. "Once you get to that point, we think it will have many different applications for cancer diagnostics, for optical biopsies and other types of diagnostics."
The National Cancer Institute of the National Institutes of Health sponsored the study. Other co-authors were Beckman Institute researchers Praveen Chowdary, Zhi Jiang, Eric Chaney, Wladimir Benalcazar and Daniel Marks, and professor of chemistry and physics Martin Gruebele.

Polar Bears Unlikely to Survive in Warmer World, According to Biologists

Will polar bears survive in a warmer world? UCLA life scientists present new evidence that their numbers are likely to dwindle.
As polar bears lose habitat due to global warming, these biologists say, they will be forced southward in search of alternative sources of food, where they will increasingly come into competition with grizzly bears.

To test how this competition might unfold, the UCLA biologists constructed three-dimensional computer models of the skulls of polar bears and grizzly bears -- a subspecies of brown bears -- and simulated the process of biting. The models enabled them to compare the two species in terms of how hard they can bite and how strong their skulls are.
"What we found was striking," said Graham Slater, a National Science Foundation-funded UCLA postdoctoral scholar in ecology and evolutionary biology and lead author of the research. "The polar bear and brown bear can bite equally hard, but the polar bear's skull is a much weaker structure."
The implication is that polar bears are likely to lose out in competition for food to grizzlies as warmer temperatures bring them into the same environments, because grizzlies' stronger skulls are better suited to a plant-rich diet, said Slater and Blaire Van Valkenburgh, UCLA professor of ecology and evolutionary biology and senior author of the research.
"The result for polar bears may be lower weight, smaller and fewer litters, less reproductive success, fewer that would survive to adulthood, and dwindling populations," Van Valkenburgh said. "Then you can get into an extinction vortex, where a small population becomes even smaller in a downward spiral to extinction.
"To people who say polar bears can just change their diet, we are saying they will change their diet -- they will have to -- but it probably will not be sufficient for them, especially if they are co-existing with grizzly bears. Their skull is relatively weak and not suited to adapting its diet. We did not expect to find what we found."
"This is one additional piece of evidence that things look pretty bleak for the polar bear if current trends continue," Slater said.
The research, federally funded by the National Science Foundation, was published this month in the online journal PLoS ONE, a publication of the Public Library of Science.
Polar bears are a "marvelous example of rapid adaptation to an extreme environment," Slater said. "The fact that we can lose them equally as rapidly as a result of human-mediated climate change is rather striking. Polar bears are very well suited to do what they do, but they are highly specialized and not well suited to doing much else."
It could take quite some time for polar bears to go extinct, Van Valkenburgh said, but they are likely to become much rarer than they are today.
Polar bears are losing habitat as a result of global warming and the associated loss of arctic sea ice, which they use to hunt for seals, Van Valkenburgh and Slater said. But could they survive on an alternative food source?
"Our results suggest that this is not too likely," Slater said. "The polar bear's skull is a relatively weak structure that is not suited to diets consisting of a lot of plant material like that of the brown bear. As climate change continues, polar bears will be forced to move south in search of resources, while brown bears move north as their climate becomes more mild. When these two species meet, as they have already begun to, it seems that brown bears will easily out-compete polar bears. Our findings should serve as a warning that polar bears may not be flexible enough to survive if current trends continue.
"Chewing a lot of vegetables takes quite a lot of force to grind up," Slater said. "Grizzly bears are well suited to eating these kinds of food, but the polar bear is not well suited for it. The grizzly has a much more efficient skull for eating these kinds of foods."
In Canada, grizzly bears are moving north and are already in polar bear territory, Van Valkenburgh and Slater said.
The life scientists -- whose co-authors include UCLA undergraduates Leeann Louis and Paul Yang and graduate student Borja Figueirido from Spain's Universidad de Malaga, Campus Universitario de Teatinos -- studied two adult male skulls from museums, one of a polar bear from Canada, the other of a grizzly from Alaska. They built 3-D computer models of the skulls and then analyzed their biomechanics.
"We can apply muscle forces to the skull to simulate biting, and we can measure how hard the animal could bite. We can measure stress and strain in the skull as well," Slater said. "We found that while the stresses in the grizzly bear skull are relatively low, the same bites in the polar bear produce much more stress. Combined with other evidence from Blaire's laboratory, this tells us that the smaller teeth of polar bears are less suited to diets that consist of plants, grass, vegetation and berries."
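The stress comparison the team describes rests on the basic engineering relation stress = force / area. As a toy illustration only (the forces and cross-sections below are invented, not values from the skull models):

```python
def skull_stress(bite_force_n, load_bearing_area_m2):
    """Average stress (pascals) when a bite force is spread over the
    bone cross-section carrying the load: stress = force / area."""
    return bite_force_n / load_bearing_area_m2

bite_force = 1500.0   # newtons; the article says both species bite equally hard

# Invented cross-sections: suppose the grizzly skull carries the load
# over twice as much bone as the polar bear skull.
grizzly_stress = skull_stress(bite_force, 3.0e-3)
polar_stress = skull_stress(bite_force, 1.5e-3)

print(polar_stress / grizzly_stress)   # ~2: same bite, roughly twice the stress
```

The finite-element models do this over the whole 3-D skull rather than a single cross-section, but the intuition is the same: equal bite force over a weaker structure means higher stress.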
"Polar bears would not be able to break up the food as well in their mouths and would not digest it as well," Van Valkenburgh said.
In the timeline of evolution, polar bears evolved from the brown bear very recently, and the two are very closely related, Van Valkenburgh and Slater said. Genetic studies indicate that the split between polar bears and brown bears occurred only 500,000 to 800,000 years ago -- the most recent split between any of the eight bear species.
Despite the recentness of the split between these two species, their skulls and teeth are extremely different, probably as a result of where they live (arctic versus temperate regions) and the differences in their diets. Grizzly bears have very large molar teeth, while polar bears have teeth that are much smaller. Polar bears eat seal blubber, which is soft and does not require much chewing, while brown bears consume many plants.
The biologists investigated the rate at which skull shape has evolved in the bear family. They found that the rate of evolution in the branch of the bear family tree leading to the polar bear was twice as fast as the rates in other branches of the tree; it appears that skull shape evolved extremely rapidly in polar bears.
Polar bears probably evolved very rapidly in response to glacial climates during the ice ages, Slater said.
"You don't see many bears that look like polar bears, and the difference in skull shape evolved very rapidly," Slater said.



Monday, November 8, 2010

TB-Drugome Provides New Targets for Anti-Tuberculosis Drug Discovery

Researchers at the University of California, San Diego School of Medicine and the University of Leeds have linked hundreds of federally approved drugs to more than 1,000 proteins in Mycobacterium tuberculosis, the causative agent of tuberculosis (TB), opening new avenues to repurpose these drugs to treat TB.

The study was published Nov. 4 in PLoS Computational Biology.
"Tuberculosis is currently one of the most widely spread infectious diseases, with an estimated one-third of the world's population infected and between one and two million people dying each year from the disease," said Philip Bourne, PhD, professor of pharmacology at UCSD's Skaggs School of Pharmacy and Pharmaceutical Sciences. "The continuing emergence of M. tuberculosis strains resistant to all existing, affordable drug treatments requires the development of novel, effective and inexpensive drugs."
The newly developed TB-drugome may help that effort, Bourne said, by identifying new M. tuberculosis protein targets that can be perturbed by a variety of existing drugs prescribed for other purposes.
Sarah Kinnings at the University of Leeds and a team of scientists at UC San Diego, led by Bourne (who is also associate director of the RCSB Protein Data Bank) and research scientist Lei Xie, PhD, used a novel computational strategy to investigate whether any existing drugs were able to bind to any of the approximately 40 percent of proteins in the M. tuberculosis proteome with decipherable three-dimensional structures.
The researchers not only discovered that approximately one-third of the drugs examined may have the potential to be repurposed to treat tuberculosis, but also that many currently unexploited M. tuberculosis proteins could serve as novel anti-tubercular targets. This finding led the investigators to construct a complex network of drug-target interactions -- a TB-drugome available to all scientists.
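A drug-target interaction network of the kind described can be represented very simply. The sketch below uses made-up drug and protein names purely to show the data structure; it does not reproduce actual TB-drugome content.

```python
# Toy bipartite drug-target network; the names are invented placeholders,
# not entries from the actual TB-drugome.
drugome = {
    "drug_A": {"protein_1", "protein_2"},
    "drug_B": {"protein_2", "protein_3"},
    "drug_C": {"protein_3"},
}

# Invert the mapping to ask the repurposing question:
# which existing drugs might bind a given M. tuberculosis target?
targets = {}
for drug, proteins in drugome.items():
    for protein in proteins:
        targets.setdefault(protein, set()).add(drug)

print(sorted(targets["protein_2"]))   # ['drug_A', 'drug_B']
```

Looking up a protein in the inverted mapping is exactly the repurposing query the TB-drugome enables: a previously unexploited target on one side of the network surfaces every approved drug predicted to bind it on the other.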
While this new computational, high-throughput process of drug discovery is promising, Xie cautioned that "only experimentation can validate the most promising drug-target combinations, and there will be many failures along the way."
Kinnings added that any drugs subsequently confirmed to bind to M. tuberculosis proteins may need to be modified to increase their ability to penetrate the bacterial cell membrane, reduce their required dosage, and improve other pharmacological properties. The screening of a large collection of analogs to known drugs will be the next step towards anti-tuberculosis drug discovery.
Other authors of the study are Richard Jackson of the Institute of Molecular and Cellular Biology and Astbury Centre for Structural Molecular Biology at University of Leeds; Li Xie of the Skaggs School of Pharmacy and Pharmaceutical Sciences at UC San Diego and Kingston Fung of the UCSD's Bioinformatics Program.
Funding for this project came from the National Institutes of Health.

Tarantulas help scientists break down human fear

Scientists using tarantulas to unpick human fear have found that the brain responds differently to threats based on proximity, direction and how scary people expect something to be.
Researchers from the Cognition and Brain Sciences Unit in Cambridge, England used functional magnetic resonance imaging, or fMRI, to track brain activity in 20 volunteers as they watched a tarantula placed near their feet, and then moved closer.

Their results suggest that different components of the brain's fear network serve specific threat-response functions and could help scientists diagnose and treat patients who suffer from clinical phobias.
"We've shown that it's not just a single structure in the brain, it's a number of different parts of the fear network and they are working together to orchestrate the fear response," Dean Mobbs, who led the study, said in a telephone interview.
Mobbs's team assessed the volunteers' brain activity during three stages of the study: first when the tarantula sat in a segmented box near their foot; then when it was moved to nearer or more distant compartments of the box; and finally when the spider walked in different directions.
"It seems that when a spider that is further away moves closer to you, you see a switch from the anxiety regions of the brain to the panic regions," said Mobbs.
He said there was more activity in the brain's panic response center when the tarantula crept closer than when it retreated, regardless of how close it was in the first place.
He explained that the volunteers were actually watching an elaborately rigged video of a tarantula which they believed was near their foot, since getting the spider to do the same thing for each volunteer would have been impossible.
The scientists also asked volunteers beforehand how scared they thought they might be of the tarantula, and found that those who thought they would be most scared had a false impression afterwards of how large the spider was.
The scientists think it may be this so-called "expectation error" that could be the key to people developing a phobia -- an irrational, intense and persistent fear of certain things, people, animals or situations.
"This may be one cognitive mechanism by which people acquire phobias," said Mobbs. He said that since the expectation of great fear appeared to make a person exaggerate the size of the threat in their mind, this could trigger a "cascade effect," distorting the other processes in the brain to react to a larger threat and panic yet more as it came closer.

Beetle Study Suggests Genetic 'Battle of the Sexes' More Important to Evolution Than Thought

A new study of beetles shows a genetic 'battle of the sexes' could be much harder to resolve and even more important to evolution than previously thought.
This battle, observed across many species and known as intralocus sexual conflict, happens when the genes for a trait which is good for the breeding success of one sex are bad for the other -- sparking an 'evolutionary tug-o-war' between the sexes.
It has previously been thought these conflicts were only resolved when the trait in question evolves to become sex-specific in its development -- meaning the trait only develops in the sex it benefits and stops affecting the other. An example of this is the peacock's tail, used for mating displays, which is not present in peahens.
However, a new study by the universities of Exeter (UK), Okayama and Kyushu (both Japan) published Nov. 4 in Current Biology shows this doesn't always bring an end to conflict -- as even when the trait becomes sex-specific, knock-on effects can still disadvantage the other sex.
Professor Dave Hosken, from the Centre for Ecology & Conservation (Cornwall) at the University of Exeter, said: "This kind of genetic tussle is everywhere in biology. For example, in humans, male hips are optimised for physical activity, whereas female hips also need to allow child bearing. That's the sort of evolutionary conflict we're talking about, and these conflicts were previously thought to be resolved by sex-specific trait development.
"What we're seeing in this study is that this isn't always the end of the sexual conflict. This means it's no longer clear how or when, if ever, these conflicts get fully resolved and this means it could be more important to the evolutionary process than has generally been thought."
In this study, the researchers looked at broad-horned flour beetles, where males have massively enlarged mandibles used to fight other males for mating supremacy. The enlarged mandibles aren't present in the females at all -- meaning this is a sex-specific trait.
By selectively breeding the beetles for larger or smaller mandible size, the researchers were able to show that the bigger the mandibles, the more successful the males were in breeding. There was a corresponding counter-effect on females, however: females from larger-mandibled populations were less successful.
Professor Takahisa Miyatake, from the Graduate School of Environmental Science at Okayama University, said: "We looked at all the possible reasons for this and found that while the females did not develop the larger mandibles, they did inherit many of the other characteristics that made the enlarged mandibles possible in males. This included a reduced abdomen size, which could affect the number of eggs a female can carry -- giving a possible explanation for the disadvantage.
"So here we see a sex-specific trait which is still having a negative effect on the sex which doesn't show it. This means that even though it looks like this genetic conflict is over, it's still ongoing and there's no easy way to end it."
Kensuke Okada, also from Okayama University, said: "The view that sex-limited trait development resolves this kind of genetic battle of the sexes is based on the assumption that traits are genetically independent of each other, which is frequently not true.
"What we're seeing here is that genetic architecture can provide a general barrier to this kind of conflict resolution."