Monday, September 5, 2011

World's Smallest Electric Motor Made from a Single Molecule

The smallest electric motor on the planet, at least according to Guinness World Records, measures 200 nanometers across. Granted, that's a pretty small motor -- after all, a single strand of human hair is 60,000 nanometers wide -- but that tiny mark is about to be shattered in a big way.

Chemists at Tufts University's School of Arts and Sciences have developed the world's first single molecule electric motor, a development that may potentially create a new class of devices that could be used in applications ranging from medicine to engineering.
In research published online Sept. 4 in Nature Nanotechnology, the Tufts team reports an electric motor that measures a mere 1 nanometer across, groundbreaking work considering that the current world record is a 200 nanometer motor. A single strand of human hair is about 60,000 nanometers wide.
According to E. Charles H. Sykes, Ph.D., associate professor of chemistry at Tufts and senior author on the paper, the team plans to submit the Tufts-built electric motor to Guinness World Records.
"There has been significant progress in the construction of molecular motors powered by light and by chemical reactions, but this is the first time that electrically-driven molecular motors have been demonstrated, despite a few theoretical proposals," says Sykes. "We have been able to show that you can provide electricity to a single molecule and get it to do something that is not just random."
Sykes and his colleagues were able to control a molecular motor with electricity by using a state-of-the-art, low-temperature scanning tunneling microscope (LT-STM), one of only about 100 in the United States. The LT-STM uses electrons instead of light to "see" molecules.
The team used the metal tip on the microscope to provide an electrical charge to a butyl methyl sulfide molecule that had been placed on a conductive copper surface. This sulfur-containing molecule had carbon and hydrogen atoms radiating off to form what looked like two arms, with four carbons on one side and one on the other. These carbon chains were free to rotate around the sulfur-copper bond.
The team determined that by controlling the temperature of the molecule they could directly affect its rotation. Temperatures around 5 Kelvin (K), or about minus 450 degrees Fahrenheit (ºF), proved ideal for tracking the motor's motion. At this temperature, the Tufts researchers were able to track all of the rotations of the motor and analyze the data.
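As a quick check on that unit conversion, here is a minimal sketch in Python; the formula is just the standard Kelvin-to-Fahrenheit conversion and is not taken from the paper:

```python
# Standard Kelvin-to-Fahrenheit conversion, confirming the figure quoted above.
def kelvin_to_fahrenheit(kelvin: float) -> float:
    return (kelvin - 273.15) * 9.0 / 5.0 + 32.0

print(kelvin_to_fahrenheit(5.0))  # about -450.7 degrees Fahrenheit
```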
While practical applications for this electric motor are foreseeable, breakthroughs would first be needed in the temperatures at which electric molecular motors operate: the motor spins much faster at higher temperatures, making its rotation difficult to measure and control.
"Once we have a better grasp on the temperatures necessary to make these motors function, there could be real-world application in some sensing and medical devices which involve tiny pipes. Friction of the fluid against the pipe walls increases at these small scales, and covering the wall with motors could help drive fluids along," said Sykes. "Coupling molecular motion with electrical signals could also create miniature gears in nanoscale electrical circuits; these gears could be used in miniature delay lines, which are used in devices like cell phones."
The Changing Face of Chemistry
Students from the high school to the doctoral level played an integral role in the complex task of collecting and analyzing the movement of the tiny molecular motors.
"Involvement in this type of research can be an enlightening, and in some cases life changing, experience for students," said Sykes. "If we can get people interested in the sciences earlier, through projects like this, there is a greater chance we can impact the career they choose later in life."
As proof that gaining a scientific footing early can matter, one of the high school students involved in the research, Nikolai Klebanov, went on to enroll at Tufts; he is now a sophomore majoring in chemical engineering.
This work was supported by the National Science Foundation, the Beckman Foundation and the Research Corporation for Scientific Advancement.
Source: Science Daily

Tuesday, August 9, 2011

Scientist Develops Virus That Targets HIV: Using a Virus to Kill a Virus

In what represents an important step toward curing HIV, a USC scientist has created a virus that hunts down HIV-infected cells.
Dr. Pin Wang's lentiviral vector latches onto HIV-infected cells, flagging them with what is called "suicide gene therapy" -- allowing drugs to later target and destroy them.
"If you deplete all of the HIV-infected cells, you can at least partially solve the problem," said Wang, chemical engineering professor with the USC Viterbi School of Engineering.
The process is analogous to the military practice of "buddy lasing" -- that is, having a soldier on the ground illuminate a target with a laser to guide a precision bombing strike from an aircraft.
Like a precision bombing raid, the lentiviral vector approach to targeting HIV has the advantage of avoiding collateral damage, keeping cells that are not infected by HIV out of harm's way. Such accuracy has not been achieved by using drugs alone, Wang said.
So far, the lentiviral vector has only been tested in culture dishes, where it destroyed about 35 percent of existing HIV-infected cells. While that may not sound like a large percentage, if this treatment were used in humans, it would likely be repeated several times to maximize effectiveness.
Among the next steps will be to test the procedure in mice. While this is an important breakthrough, it is not yet a cure, Wang said.
"This is an early stage of research, but certainly it is one of the options in that direction," he said.
Wang's research, which was funded by the National Institutes of Health, appears in the July 23 issue of Virus Research.

Wednesday, May 25, 2011

Hips Take Walking in Stride, Ankles Put Best Foot Forward in Run

In a first-of-its-kind study comparing human walking and running motions -- and whether the hips, knees or ankles are the most important power sources for these motions -- researchers at North Carolina State University show that the hips generate more of the power when people walk, but the ankles generate more of the power when humans run. Knees provide approximately one-fifth or less of walking or running power.
The research could help inform the best ways of building assistive or prosthetic devices for humans, or constructing next-generation robotics, say NC State biomedical engineers Drs. Dominic Farris and Gregory Sawicki. Co-authors of a study on the mechanics of walking and running published in the Royal Society journal Interface, Sawicki and Farris are part of NC State's Human PoWeR (Physiology of Wearable Robotics) Lab.
A long history of previous studies has focused on the biomechanics of human locomotion from a whole-body or individual-limb perspective. But this study is the first to zoom in on the mechanical power generated by specific lower-limb joints in a single comprehensive study of walking and running across a range of speeds, Sawicki says.
The study shows that, overall, hips generate more power when people walk. That is, until humans get to the point at which they're speed walking -- walking so fast that it feels more comfortable to run -- at 2 meters per second. Hips generate 44 percent of the power when people walk at a rate of 2 meters per second, with ankles contributing 39 percent of the power.
When people start running at this 2-meter-per-second rate, the ankles really kick in, providing 47 percent of the power compared to 32 percent for the hips. Ankles continue to provide the most power of the three lower limb joints as running speeds increase, although the hips begin closing the distance at faster speeds.
"There seems to be a tradeoff in power generation from hips to ankles as you make the transition from walking to running," Sawicki says.
Both researchers are interested in how the study can help people who need assistance walking and running. Knowing which parts of the lower limbs provide more power during the different activities can help engineers figure out how mechanical power needs to be distributed, depending on the person's speed and gait.
"For example, assistive devices such as an exoskeleton or prosthesis may have motors near both the hip and ankle. If a person will be walking and then running, you'd need to redistribute energy from the hip to the ankle when the person makes that transition," Farris says.
In the study, ten people walked and ran at various speeds on a specially designed treadmill; a number of cameras captured their gait by tracking reflective markers attached to various parts of the participants' lower limbs, while the treadmill recorded the forces they applied.
The study examined walking and running on level ground in order to gauge the differences brought about by increased speed; walking and running on inclined ground is fundamentally different than walking and running on flat ground, the researchers say, and would likely skew the power generation results toward the hips and knees.
Source: Science Daily

Wednesday, May 11, 2011

Mitochondria: Body’s Power Stations Can Affect Aging

Mitochondria are the body's energy producers, the power stations inside our cells. Researchers at the University of Gothenburg, Sweden, have now identified a group of mitochondrial proteins, the absence of which allows other protein groups to stabilise the genome. This could delay the onset of age-related diseases and increase lifespan.
Some theories of human aging suggest that the power generators of the cell, the mitochondria, play a part in the process. In addition to supplying us with energy in a usable form, mitochondria also produce harmful by-products -- reactive oxyradicals that attack and damage various cell components. Eventually these injuries become too much for the cell to cope with, and it loses its capacity to maintain important functions, so the organism starts to age. That's the theory anyway. Oddly enough, several studies have shown that certain mitochondrial dysfunctions can actually delay aging, at least in fungi, worms and flies. The underlying mechanisms have yet to be determined.
In a study from the Department of Cell and Molecular Biology at the University of Gothenburg, published in the journal Molecular Cell, a research team has now identified a group of mitochondrial proteins that are involved in this type of aging regulation. The researchers found that a group of proteins called MTC proteins, which are normally needed for mitochondrial protein synthesis, also have other functions that influence genome stability and the cell's capacity to remove damaged and harmful proteins.
"When a certain MTC protein is lacking in the cell, e.g. because of a mutation in the corresponding gene, the other MTC proteins appear to adopt a new function. They then gain increased significance for the stabilisation of the genome and for combating protein damage, which leads to increased lifespan," says Thomas Nyström of the Department of Cell and Molecular Biology.
He adds, "These studies also show that this MTC-dependent regulation of the rate of aging uses the same signalling pathways that are activated in calorie restriction -- something that extends the lifespan of many different organisms, including yeasts, mice and primates. Some of the MTC proteins identified in this study can also be found in the human cell, raising the obvious question of whether they play a similar role in the regulation of our own aging processes. It is possible that modulating the activity of the MTC proteins could enable us to improve the capacity of the cell to delay the onset of age-related diseases. These include diseases related to instability of the genome, such as cancer, as well as those related to harmful proteins, such as Alzheimer's disease and Parkinson's disease. At the moment this is only speculation, and the precise mechanism underlying the role of the MTC proteins in the aging process is a fascinating question that remains to be answered."

Friday, April 22, 2011

Worm Studies Shed Light on Human Cancers

Research in the worm is shedding light on a protein associated with a number of different human cancers, and may point to a highly targeted way to treat them.
University of Wisconsin-Madison scientists were studying a worm protein called TFG-1, which is present in many cell types but whose exact role had never been understood. The scientists discovered that the protein controls key aspects of the movement, or secretion, of growth factors out of cells.
"TFG-1 has never been implicated in the secretory process before," says Dr. Anjon Audhya, an assistant professor of biomolecular chemistry in the School of Medicine and Public Health. "It turns out that humans carry a very similar protein, and we think it plays the same role in humans as in worms."
Reviewing the scientific literature, the researchers found that the gene encoding TFG in humans is fused to at least three other genes implicated in anaplastic large cell lymphoma, papillary thyroid carcinoma and extraskeletal chondrosarcoma. The fusions occur when two broken or rearranged pieces of DNA combine to form a "chimeric" gene with completely distinct properties.
Audhya's studies of TFG-1 in the worm led him to develop a model that explains how TFG fusions may stimulate cancer in humans. As reported in the current issue of Nature Cell Biology (Advanced Online Publication), he proposes that abnormal levels of growth factor secretion may produce a rich micro-environment that helps tumors form and thrive. "We think certain properties of TFG lead it to be a very effective precursor oncogene," he says.
Normally, a growth factor primed to leave a cell is encompassed by a sac, or vesicle, and then transported from one structure inside the cell to another -- endoplasmic reticulum (ER) to Golgi -- before it leaves the cell and discharges into the extracellular space.
Through their genetic studies, the Wisconsin researchers found that TFG-1 in the worm controls vesicle formation and secretion out of the ER.
"We found TFG-1 lies at the interface between the ER and the Golgi, in a scaffolding structure called the ER exit site, where it regulates the formation of vesicles carrying their critical cargo," Audhya says.
The research revealed the precise location where TFG-1 does its work and the mechanism by which it spurs unchecked activity.
The scientists demonstrated that human TFG also functions at ER exit sites, which contain a characterized scaffolding protein called Sec16, and likely regulates secretion of multiple cargoes out of cells.
"In the case of one fusion gene, TFG-NTRK-1, the concentrated non-stop activity of NTRK-1 at ER exit sites may cause the first steps that can transform a normal cell into a cancer cell," Audhya says.
The TFG fusions offer a direct target for future "designer" therapies.
"If you identified patients who have fusion genes that express chimeric proteins, you could create a drug that affects only those proteins," he says, adding that TFG fusions leading to chimeric proteins do not exist in healthy people.
Excited about the possibility that their basic science investigations may be applied to several areas of clinical medicine, the researchers have also begun studying TFG as it relates to B-cell development and the secretion of antibodies.

Functioning Synapse Created Using Carbon Nanotubes: Devices Might Be Used in Brain Prostheses or Synthetic Brains

Engineering researchers at the University of Southern California have made a significant breakthrough in the use of nanotechnologies for the construction of a synthetic brain. They have built a carbon nanotube synapse circuit whose behavior in tests reproduces the function of a neuron, the building block of the brain.
The team, which was led by Professor Alice Parker and Professor Chongwu Zhou in the USC Viterbi School of Engineering Ming Hsieh Department of Electrical Engineering, used an interdisciplinary approach combining circuit design with nanotechnology to address the complex problem of capturing brain function.
In a paper published in the proceedings of the IEEE/NIH 2011 Life Science Systems and Applications Workshop in April 2011, the Viterbi team detailed how they were able to use carbon nanotubes to create a synapse.
Carbon nanotubes are molecular carbon structures that are extremely small, with a diameter a million times smaller than a pencil point. These nanotubes can be used in electronic circuits, acting as metallic conductors or semiconductors.
"This is a necessary first step in the process," said Parker, who began the looking at the possibility of developing a synthetic brain in 2006. "We wanted to answer the question: Can you build a circuit that would act like a neuron? The next step is even more complex. How can we build structures out of these circuits that mimic the function of the brain, which has 100 billion neurons and 10,000 synapses per neuron?"
Parker emphasized that the actual development of a synthetic brain, or even a functional brain area, is decades away, and she said the next hurdle for the research centers on reproducing brain plasticity in the circuits.
The human brain continually produces new neurons, makes new connections and adapts throughout life, and creating this process through analog circuits will be a monumental task, according to Parker.
She believes the ongoing research of understanding the process of human intelligence could have long-term implications for everything from developing prosthetic nanotechnology that would heal traumatic brain injuries to developing intelligent, safe cars that would protect drivers in bold new ways.
Source: Science Daily

Monday, March 21, 2011

Pregnancy - 15 Things Women Wish They'd Known the First Time Around

After three pregnancies and three wonderful baby girls, I have (let's hope) learned a few things. 

Ok, I’ll admit it: growing up, I was one of those girls who used to stick a pillow under her shirt and look in the mirror, day dreaming of the day I’d become a mommy. I always knew that I wanted kids, and looked forward to the day when that dream would become a reality. When I was newly pregnant with our first daughter, I was on cloud nine. I loved thinking, reading, and talking about my pregnancy. Despite my euphoric haze, though, there are a few things I wish I had known at the time:

  1. Don’t worry so much. In general. This is a broad suggestion, but I really wish I had not worried so much. If you had a beer the night before you found out you’re pregnant, the baby is fine. If you ate three hot dogs and then read that pregnant women shouldn’t eat hot dogs, make a mental note and move on. And don’t worry about being a good mom – you’ll be just fine.
  2. Morning sickness will probably not be what you expect. I was shocked, and convinced I had the flu the first week (even though I knew I was pregnant). Just remember that if it hits you hard, it will pass. Also, you may be one of the lucky ones who don’t get it, or who feel mildly queasy and that’s it. Just don’t set up expectations, like expecting to only get sick in the morning, or thinking that it ends right at 12 weeks. Let your body do what it’s going to do, and just hang in there!
  3. Buy frozen foods and a lot of convenience food before you start feeling nauseous. I wish I had done this – we would have saved a ton of money on take-out and fast food! You may be fine and keep cooking as usual, but I was way too sick to stand the smell of raw meat, doing dishes, or anything else that triggered my gag reflex. Buy frozen lasagna, frozen dinners, and lots of snack stuff. Also stock up on paper plates. Having things on hand will be very helpful when you are either feeling too tired or too sick to cook.
  4. If you take everyone’s advice too seriously, you’ll make yourself miserable. Everyone has an opinion, and over the course of your pregnancy, you are going to hear tons of stories, lots of warnings, and plenty of advice. Take it all with a grain of salt – and don’t let it stress you out. Society feels a responsibility to educate and advise pregnant women on just about everything, but it often just causes more stress. Let it roll off your back.
  5. Don’t be in a rush to wear maternity clothes. I was so excited during my first pregnancy to finally “look pregnant,” I rushed into maternity clothes. I could have gone another month or so, but I was just too excited. Trust me – you will have plenty of time to wear those clothes (and you’ll get sick of them), so enjoy your regular clothes while you still can.
  6. Invest in a belly band. This will extend the life of your pre-pregnancy pants, and will help you with your clothing options. These wonderful things are nice, stretchy bands that enable you to walk around with your pants unzipped, while still held up in place with a nice band covering the zipper. (An added benefit of these bands – they help you get back into your old jeans after having the baby, when you are still carrying some baby weight in your middle.)
  7. Don’t obsess about your pregnancy. When people ask you how you are feeling, try not to go into a monologue about how you threw up yesterday, need to pee every hour, and then give them a long list of all the baby names you are considering. When it comes down to it, most people are asking to be polite. It’s completely normal to want to gush about your pregnancy, but just remember that non-pregnant people may not be as interested as you are in certain things. I was bad about that when I was pregnant with my first, so I can completely understand this – and I wish I had realized it at the time. It’s better to save the gory details for a pregnancy journal, your mom, or your best friend.
  8. A regular soda here and there is fine. Dr. Pepper helped me make it through the end of my first trimester – I wish I had lightened up earlier on. Sure, you aren’t supposed to have tons of caffeine – but a smidge here and there won’t hurt.
  9. Avoid saying, “I will never do that!” Before you actually become a parent, you just don’t know. You may end up co-sleeping with your baby, deciding to get the epidural, or stop nursing after a couple of months. Keep an open mind, and don’t set yourself up for a disappointment.
  10. Don’t feel bad about sleeping in. Sleep while you can. Trust me.
  11. Buy at least one or two fabulous nursing bras. I made the mistake of buying cheap nursing bras when I was still pregnant with my first baby, thinking it didn’t matter. Well, think again. You will need a lot of support during those first few months. I am in love with Bravado bras, because (a) they are crazy comfy, (b) you can sleep in them, and (c) these bras come in many patterns and colors. I have four of the “original nursing bras” and I love them. My favorite is the leopard print – just because you’re nursing doesn’t mean you can’t still be hot!
  12. Be clear about what you want before and after labor, but don’t come up with an elaborate birth plan that spells out exactly how you want it to go. Labor and delivery are full of surprises, so don’t set yourself up thinking it will go a certain way. Do be clear on what you want regarding pain meds, who is allowed in the room with you, the doc’s policy on episiotomies, etc.
  13. You don’t need as much as you think. I was so OCD when I was pregnant. I worried way too much about “getting ready” for the baby, and looking back, I realize now it was a bit overboard. When it comes down to it, Target will still exist after you arrive home from the hospital. Your husband can go out and buy a bouncy seat or some extra blankets while you are resting at home with the baby, so don’t worry about having everything just right.
  14. Let the hospital nursery take the baby overnight. They will still bring your baby in to nurse during the night, but at least you’ll get some sleep. We chose to “room in” with our first baby, because I was concerned that I’d look bad if I sent her to the nursery. Big mistake. Let the nurses take care of the baby while you have the opportunity – you will have plenty of sleepless nights once you arrive home.
  15. Above all, I wish I had known how much I’d love my kids. I know this sounds cheesy, but it’s true. You have absolutely no idea how much you are going to fall in love with your children until you are staring into their tiny faces at 5am, counting their eyelashes. Once you realize how much you love that little person you saw on the ultrasound screen, it blows your mind. Motherhood is out of this world. Sit back, relax, and enjoy the months leading up to it.

Thursday, March 17, 2011

Robots Swim Through Eyes To Give Treatment


The latest in eye treatment is just around the corner. A new tiny robot capable of being steered through your eye can deliver drugs or maybe even do micro-surgery. Thanks to Michael Kummer and his team at the Institute of Robotics and Intelligent Systems (IRIS), this tech may be available to the public in a short time.
Some time back, researchers at North Carolina State University were able to make micro-bots do U-turns in a fluid on command, and another group developed one capable of clearing blood clots in the blood vessels of the eye. Now, Kummer has taken similar technology even further. Kummer, a mechanical engineer from the Swiss Federal Institute of Technology Zurich (ETH), is a specialist in robotics and thermodynamics in emerging technologies, and his research involves the precision control of microbots using magnetic fields.
Kummer's robots are injected into the eye via needle and are electromagnetically controlled to eliminate the need for on-board fuel. The team hopes that the tiny robots will be able to help treat macular degeneration by injecting a drug slowly over a period of months. So far the robots have only been tested on pigs' eyes from cadavers, but the team plans to test them on living animals soon.
With any luck, the little robots will be able to help not only with macular degeneration, but also with other eye problems and surgeries. Perhaps they could even be used in other parts of the body, for example to remove a blood clot deep in the heart.

New Biochip Gives Blood Test Results in Minutes

A new breakthrough in microfluidics could lead to autonomous blood analysis chips that will be able to diagnose diseases in mere minutes.
The device, called SIMBAS, was developed by a team of researchers from Dublin City University in Ireland, Universidad de Valparaiso in Chile, and the Bay Area's own University of California, Berkeley. 'SIMBAS' stands for 'Self-powered Integrated Microfluidic Blood Analysis System', and the chip requires no extra tubing in order to diagnose diseases.
The lack of extra components is especially important, as it helps keep the chip "small, portable, and cheap," according to UC Berkeley post-doctoral researcher in bioengineering, Ivan Dimov. "The dream of a true lab-on-a-chip has been around for awhile, but most systems developed thus far have not been truly autonomous."
The chips could eventually be used by workers in the field to diagnose diseases such as HIV and tuberculosis in a matter of minutes. The biochip is made of plastic and features five "inlets" onto which the blood is dropped. The heavier red and white blood cells settle to the bottom of the trenches, and the blood moves through the chip in a "degas-driven" flow:
"For degas-driven flow, air molecules inside the porous polymeric device are removed by placing the device in a vacuum-sealed package. When the seal is broken, the device is brought to atmospheric conditions, and air molecules are reabsorbed into the device material. This generates a pressure difference, which drives the blood fluid flow in the chip."
According to the researchers, this method allowed them to capture more than 99 percent of the blood cells while separating the plasma from the blood. The team demonstrated its chip's ability by placing a 5-microliter sample of blood on the chip's inlets and receiving a readout of biotin levels in just 10 minutes.
"Imagine if you had something as cheap and as easy to use as a pregnancy test, but that could quickly diagnose HIV and TB," said UC Berkeley grad student Benjamin Ross.

Sunday, March 6, 2011

Scientists Create Cell Assembly Line: New Technology Synthesizes Cellular Structures from Simple Starting Materials

Borrowing a page from modern manufacturing, scientists from the Florida campus of The Scripps Research Institute have built a microscopic assembly line that mass produces synthetic cell-like compartments.
The new computer-controlled system represents a technological leap forward in the race to create the complex membrane structures of biological cells from simple chemical starting materials.
"Biology is full of synthetic targets that have inspired chemists for more than a century," said Brian Paegel, Scripps Research assistant professor and lead author of a new study published in the Journal of the American Chemical Society. "The lipid membrane assemblies of cells and their organelles pose a daunting challenge to the chemist who wants to synthesize these structures with the same rational approaches used in the preparation of small molecules."
While most cellular components such as genes or proteins are easily prepared in the laboratory, little has been done to develop a method of synthesizing cell membranes in a uniform, automated way. Current approaches are capricious in nature, yielding complex mixtures of products and inefficient cargo loading into the resultant cell-like structures.
The new technology transforms the previously difficult synthesis of cell membranes into a controlled process, customizable over a range of cell sizes, and highly efficient in terms of cargo encapsulation.
The membrane that surrounds all cells, organelles and vesicles -- small subcellular compartments -- consists of a phospholipid bilayer that serves as a barrier, separating an internal space from the external medium.
The new process creates a laboratory version of this bilayer that is formed into small, cell-sized compartments.
How It Works
"The assembly-line process is simple and, from a chemistry standpoint, mechanistically clear," said Sandro Matosevic, research associate and co-author of the study.
A microfluidic circuit generates water droplets in lipid-containing oil. The lipid-coated droplets travel down one branch of a Y-shaped circuit and merge with a second water stream at the Y-junction. The combined flows of droplets in oil and water travel in parallel streams toward a triangular guidepost.
Then, the triangular guide diverts the lipid-coated droplets into the parallel water stream as a wing dam might divert a line of small boats into another part of a river. As the droplets cross the oil-water interface, a second layer of lipids deposits on the droplet, forming a bilayer.
The end result is a continuous stream of uniformly shaped cell-like compartments.
The newly created vesicles range from 20 to 70 micrometers in diameter -- from about the size of a skin cell to that of a human hair. The entire circuit fits on a glass chip roughly the size of a poker chip.
The researchers also tested the synthetic bilayers for their ability to house a prototypical membrane protein. The proteins correctly inserted into the synthetic membrane, demonstrating that the bilayers resemble membranes found in biological cells.
"Membranes and compartmentalization are ubiquitous themes in biology," noted Paegel. "We are constructing these synthetic systems to understand why compartmentalized chemistry is a hallmark of life, and how it might be leveraged in therapeutic delivery."
Source: Science Daily

New Light-Sensing Mechanism Found in Neurons

A UC Irvine research team led by Todd C. Holmes has discovered a second form of phototransduction -- light sensing in cells -- that is based on a chemical derived from vitamin B2. This discovery may reveal new information about cellular processes controlled by light.
For more than 100 years, it had been believed that the phototransduction process was solely based on a chemical derived from vitamin A called retinal. Phototransduction is the conversion of light signals into electrical signals in photoreceptive neurons and underlies both image-forming and non-image-forming light sensing.
In discovering this new light-sensing phototransduction mechanism, the UCI scientists found that phototransduction can also be mediated by a protein called cryptochrome, which uses a B2 vitamin chemical derivative for light sensing. Cryptochromes are blue-light photoreceptors found in circadian and arousal neurons that regulate slow biochemical processes, but this is the first time they have been linked to rapid phototransduction.
Their work appears March 3 on the online Express site of the journal Science.
"This is totally novel mechanism that does not depend on retinal," said Holmes, a professor of physiology & biophysics. "This discovery opens whole new technology opportunities for adapting light-sensing proteins to drive medically relevant cellular activities."
This basic science breakthrough -- "which literally and figuratively came 'out of the blue,'" Holmes said -- has implications in the fast-growing field of optogenetics. Optogenetics combines optical and genetic research techniques to probe neural circuits at the high speeds needed to understand brain information processing. In one area, it is being used to understand how treatments such as deep brain stimulation can aid people with neurodegenerative diseases.
Holmes' team found that cryptochrome mediates phototransduction directly in fruit fly circadian and arousal neurons in response to blue-light wavelengths. The researchers also found that they could genetically express cryptochrome in neurons that are not ordinarily electrically responsive to light to make them light responsive.
Keri Fogel, Kelly Parson and Nicole Dahm of UCI contributed to the study, which received National Institutes of Health support.
Source: Science Daily

Tuesday, March 1, 2011

More Than 4,000 Components of Blood Chemistry Listed

After three years of exhaustive analysis led by a University of Alberta researcher, the list of known compounds in human blood has exploded from just a handful to more than 4,000.
"Right now a medical doctor analyzing the blood of an ailing patient looks at something like 10 to 20 chemicals," said University of Alberta biochemist David Wishart. "We've identified 4,229 blood chemicals that doctors can potentially look at to diagnose and treat health problems."

Blood chemicals, or metabolites, are routinely analyzed by doctors to diagnose conditions such as diabetes and kidney failure. Wishart says the new research opens up the possibility of diagnosing hundreds of other diseases that are characterized by an imbalance in blood chemistry.
Wishart led more than 20 researchers at six different institutions using modern technology to validate past research, and the team also conducted its own lab experiments to break new ground on the content of human-blood chemistry.
"This is the most complete chemical characterization of blood ever done," said Wishart. "We now know the normal values of all the detectable chemicals in blood. Doctors can use these measurements as a reference point for monitoring a patient's current and even future health."
Wishart says blood chemicals are the "canary in the coal mine" for catching the first signs of an oncoming medical problem. "The blood chemistry is the first thing to change when a person is developing a dangerous condition like high cholesterol."
The database created by Wishart and his team is open access, meaning anyone can log on and find the expanded list of blood chemicals. Wishart says doctors can now tap into the collected wisdom of hundreds of blood-research projects done in the past by researchers all over the world. "With this new database doctors can now link a specific abnormality in hundreds of different blood chemicals with a patient's specific medical problem," said Wishart.
Wishart believes the adoption of his research will happen slowly, with hospitals incorporating new search protocols and equipment for a few hundred of the more than 4,000 blood-chemistry markers identified by Wishart and his colleagues.
"People have being studying blood for more than 100 years," said Wishart. "By combining research from the past with our new findings we have moved the science of blood chemistry from a keyhole view of the world to a giant picture window."
The research was published this week in the journal PLoS ONE.

Parts of Brain Can Switch Functions: In People Born Blind, Brain Regions That Usually Process Vision Can Tackle Language

When your brain encounters sensory stimuli, such as the scent of your morning coffee or the sound of a honking car, that input gets shuttled to the appropriate brain region for analysis. The coffee aroma goes to the olfactory cortex, while sounds are processed in the auditory cortex.
That division of labor suggests that the brain's structure follows a predetermined, genetic blueprint. However, evidence is mounting that brain regions can take over functions they were not genetically destined to perform. In a landmark 1996 study of people blinded early in life, neuroscientists showed that the visual cortex could participate in a nonvisual function -- reading Braille.
Now, a study from MIT neuroscientists shows that in individuals born blind, parts of the visual cortex are recruited for language processing. The finding suggests that the visual cortex can dramatically change its function -- from visual processing to language -- and it also appears to overturn the idea that language processing can only occur in highly specialized brain regions that are genetically programmed for language tasks.
"Your brain is not a prepackaged kind of thing. It doesn't develop along a fixed trajectory, rather, it's a self-building toolkit. The building process is profoundly influenced by the experiences you have during your development," says Marina Bedny, an MIT postdoctoral associate in the Department of Brain and Cognitive Sciences and lead author of the study, which appears in the Proceedings of the National Academy of Sciences the week of Feb. 28.
Flexible connections
For more than a century, neuroscientists have known that two specialized brain regions -- called Broca's area and Wernicke's area -- are necessary to produce and understand language, respectively. Those areas are thought to have intrinsic properties, such as specific internal arrangement of cells and connectivity with other brain regions, which make them uniquely suited to process language.
Other functions -- including vision and hearing -- also have distinct processing centers in the sensory cortices. However, there appears to be some flexibility in assigning brain functions. Previous studies in animals (in the laboratory of Mriganka Sur, MIT professor of brain and cognitive sciences) have shown that sensory brain regions can process information from a different sense if input is rewired to them surgically early in life. For example, connecting the eyes to the auditory cortex can provoke that brain region to process images instead of sounds.
Until now, no such evidence existed for flexibility in language processing. Previous studies of congenitally blind people had shown some activity in the left visual cortex of blind subjects during some verbal tasks, such as reading Braille, but no one had shown that this might indicate full-fledged language processing.
Bedny and her colleagues, including senior author Rebecca Saxe, assistant professor of brain and cognitive sciences, and Alvaro Pascual-Leone, professor of neurology at Harvard Medical School, set out to investigate whether visual brain regions in blind people might be involved in more complex language tasks, such as processing sentence structure and analyzing word meanings.
To do that, the researchers scanned blind subjects (using functional magnetic resonance imaging) as they performed a sentence comprehension task. The researchers hypothesized that if the visual cortex was involved in language processing, those brain areas should show the same sensitivity to linguistic information as classic language areas such as Broca's and Wernicke's areas.
They found that was indeed the case -- visual brain regions were sensitive to sentence structure and word meanings in the same way as classic language regions, Bedny says. "The idea that these brain regions could go from vision to language is just crazy," she says. "It suggests that the intrinsic function of a brain area is constrained only loosely, and that experience can have a really big impact on the function of a piece of brain tissue."
Bedny notes that the research does not refute the idea that the human brain needs Broca's and Wernicke's areas for language. "We haven't shown that every possible part of language can be supported by this part of the brain [the visual cortex]. It just suggests that a part of the brain can participate in language processing without having evolved to do so," she says.
Redistribution
One unanswered question is why the visual cortex would be recruited for language processing, when the language processing areas of blind people already function normally. According to Bedny, it may be the result of a natural redistribution of tasks during brain development.
"As these brain functions are getting parceled out, the visual cortex isn't getting its typical function, which is to do vision. And so it enters this competitive game of who's going to do what. The whole developmental dynamic has changed," she says.
This study, combined with other studies of blind people, suggests that different parts of the visual cortex get divvied up for different functions during development, Bedny says. A subset of (left-brain) visual areas appears to be involved in language, including the left primary visual cortex.
It's possible that this redistribution gives blind people an advantage in language processing. The researchers are planning follow-up work in which they will study whether blind people perform better than sighted people in complex language tasks such as parsing complicated sentences or performing language tests while being distracted.
The researchers are also working to pinpoint more precisely the visual cortex's role in language processing, and they are studying blind children to figure out when during development the visual cortex starts processing language.

Wednesday, February 9, 2011

Turning Bacteria Against Themselves

Bacteria often attack with toxins designed to hijack or even kill host cells. To avoid self-destruction, bacteria have ways of protecting themselves from their own toxins.
Now, researchers at Washington University School of Medicine in St. Louis have described one of these protective mechanisms, potentially paving the way for new classes of antibiotics that cause the bacteria's toxins to turn on themselves.
Scientists determined the structures of a toxin and its antitoxin in Streptococcus pyogenes, common bacteria that cause infections ranging from strep throat to life-threatening conditions like rheumatic fever. In Strep, the antitoxin is bound to the toxin in a way that keeps the toxin inactive.
"Strep has to express this antidote, so to speak," says Craig L. Smith, PhD, a postdoctoral researcher and first author on the paper that appears Feb. 9 in the journal Structure. "If there were no antitoxin, the bacteria would kill itself."
With that in mind, Smith and colleagues may have found a way to make the antitoxin inactive. They discovered that when the antitoxin is not bound, it changes shape.
"That's the Achilles' heel that we would like to exploit," says Thomas E. Ellenberger, DVM, PhD, the Raymond H. Wittcoff Professor and head of the Department of Biochemistry and Molecular Biophysics at the School of Medicine. "A drug that would stabilize the inactive form of the immunity factor would liberate the toxin in the bacteria."
In this case, the toxin is known as Streptococcus pyogenes beta-NAD+ glycohydrolase, or SPN. Last year, coauthor Michael G. Caparon, PhD, professor of molecular microbiology, and his colleagues in the Center for Women's Infectious Disease Research showed that SPN's toxicity stems from its ability to use up all of a cell's stores of NAD+, an essential component in powering cell metabolism. The antitoxin, known as the immunity factor for SPN, or IFS, works by blocking SPN's access to NAD+, protecting the bacteria's energy supply system.
With the structures determined, researchers can now test possible drugs that might force the antitoxin to remain unbound to the toxin, thereby leaving the toxin free to attack its own bacteria.
"The most important aspect of the structure is that it tells us a lot about how the antitoxin blocks the toxin activity and spares the bacterium," says Ellenberger.
Understanding how these bacteria cause disease in humans is important in drug design.
"There is a war going on between bacteria and their hosts," Smith says. "Bacteria secrete toxins and we have ways to counterattack through our immune systems and with the help of antibiotics. But, as bacteria develop antibiotic resistance, we need to develop new generations of antibiotics."
Many types of bacteria have evolved this toxin-antitoxin method of attacking host cells while protecting themselves. But today, there are no classes of drugs that take aim at the protective action of the bacteria's antitoxin molecules.
"Obviously they could evolve resistance once you target the antitoxin," Ellenberger says. "But this would be a new target. Understanding structures is a keystone of drug design."

Brain's 'Radio Stations' Have Much to Tell Scientists

Like listeners adjusting a high-tech radio, scientists at Washington University School of Medicine in St. Louis have tuned in to precise frequencies of brain activity to unleash new insights into how the brain works.
"Analysis of brain function normally focuses on where brain activity happens and when," says Eric C. Leuthardt, MD. "What we've found is that the wavelength of the activity provides a third major branch of understanding brain physiology."
Researchers used electrocorticography, a technique for monitoring the brain with a grid of electrodes temporarily implanted directly on the brain's surface. Clinically, Leuthardt and other neurosurgeons use this approach to identify the source of persistent, medication-resistant seizures in patients and to map those regions for surgical removal. With the patient's permission, scientists can also use the electrode grid to experimentally monitor a much larger spectrum of brain activity than they can via conventional brainwave monitoring.
Scientists normally measure brainwaves with a process called electroencephalography (EEG), which places electrodes on the scalp. Brainwaves are produced by many neurons firing at the same time; how often that firing occurs determines the activity's frequency or wavelength, which is measured in hertz, or cycles per second. Neurologists have used EEG to monitor consciousness in patients with traumatic injuries, and in studies of epilepsy and sleep.
In contrast to EEG, electrocorticography records brainwave data directly from the brain's surface.
"We get better signals and can much more precisely determine where those signals come from, down to about one centimeter," Leuthardt, assistant professor of neurosurgery, of neurobiology and of biomedical engineering, says. "Also, EEG can only monitor frequencies up to 40 hertz, but with electrocorticography we can monitor activity up to 500 hertz. That really gives us a unique opportunity to study the complete physiology of brain activity."
Leuthardt and his colleagues have used the grids to watch consciousness fade under surgical anesthesia and return when the anesthesia wears off. They found each frequency gave different information on how different circuits changed with the loss of consciousness, according to Leuthardt.
"Certain networks of brain activity at very slow frequencies did not change at all regardless of how deep under anesthesia the patient was," Leuthardt says. "Certain relationships between high and low frequencies of brain activity also did not change, and we speculate that may be related to some of the memory circuits."
Their results also showed a series of changes that occurred in a specific order during loss of consciousness and then repeated in reverse order as consciousness returned. Activity in a frequency region known as the gamma band, which is thought to be a manifestation of neurons sending messages to other nearby neurons, dropped and returned as patients lost and regained consciousness.
The results appeared in December in the Proceedings of the National Academy of Sciences.
In another paper, to be published Feb. 9 in The Journal of Neuroscience, Leuthardt and his colleagues have shown that the wavelength of brain signals in a particular region can be used to determine what function that region is performing at that time. They analyzed brain activity by focusing on data from a single electrode positioned over a number of different regions involved in speech. Researchers could use higher-frequency bands of activity in this brain area (a band-power sketch follows the list below) to tell whether patients:
  • had heard a word or seen a word
  • were preparing to say a word they had heard or a word they had seen
  • were saying a word they had heard or a word they had seen.
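For a concrete sense of what working with "higher-frequency bands" involves, here is a minimal band-power sketch in Python; the sampling rate, band edges, and synthetic signal are illustrative assumptions, not the authors' actual analysis pipeline:

```python
# Minimal, illustrative band-power estimate for a single-electrode signal
# using Welch's method; none of the numbers below come from the study.
import numpy as np
from scipy.signal import welch

fs = 1000.0                                  # assumed sampling rate, in Hz
rng = np.random.default_rng(0)
signal = rng.standard_normal(int(2 * fs))    # stand-in for 2 s of recording

freqs, psd = welch(signal, fs=fs, nperseg=512)  # power spectral density
band = (freqs >= 70) & (freqs <= 150)           # an illustrative high-gamma band
band_power = np.trapz(psd[band], freqs[band])   # integrate the PSD over the band
print(f"high-band power: {band_power:.4f}")
```

Comparing such band-power values across task conditions is one common way to decode which function a region is performing at a given moment.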
"We've historically lumped the frequencies of brain activity that we used in this study into one phenomenon, but our findings show that there is true diversity and non-uniformity to these frequencies," he says. "We can obtain a much more powerful ability to decode brain activity and cognitive intention by using electrocorticography to analyze these frequencies."
Source: Science Daily

Sunday, January 23, 2011

Long-Distance Migration May Help Reduce Infectious Disease Risks for Many Animal Species

It's a common assumption that animal migration, like human travel across the globe, can transport pathogens long distances, in some cases increasing disease risks to humans. West Nile virus, for example, spread rapidly along the East Coast of the U.S., most likely due to the movements of migratory birds.
But in a paper just published in the journal Science, researchers in the University of Georgia Odum School of Ecology report that in some cases, animal migrations could actually help reduce the spread and prevalence of disease and may even promote the evolution of less-virulent disease strains.
Every year, billions of animals migrate, some taking months to travel thousands of miles across the globe. Along the way, they can encounter a broad range of pathogens while using different habitats and resources. Stopover points, where animals rest and refuel, are often shared by multiple species in large aggregations, allowing diseases to spread among them.
But, according to Odum School associate professor Sonia Altizer and her co-authors, Odum School postdoctoral associates Rebecca Bartel and Barbara Han, migration can also help limit the spread of some pathogens.
Some kinds of parasites have transmission stages that can build up in the environment where host animals live, and migration allows the hosts to periodically escape these parasite-laden habitats. While hosts are gone, parasite numbers become greatly reduced so that the migrating animals find a largely disease-free habitat when they return. Long migratory journeys can also weed infected animals from the population: imagine running a marathon with the flu. This not only prevents those individuals from spreading disease to others, it also helps to eliminate some of the most virulent strains of pathogens.
"By placing disease in an ecological context," said Odum School dean John Gittleman, "you not only see counterintuitive patterns but also understand advantages to disease transmission. This is a classic example of disease ecology at its best."
Altizer's long-term research on monarch butterflies and a protozoan parasite that infects them provides an excellent demonstration of migration's effects on the spread of infectious disease. Monarchs in eastern North America migrate long distances, from as far north as Canada, to central Mexico, where they spend the winter. Monarchs in other parts of the world migrate shorter distances. In locations with mild year-round climates, such as southern Florida and Hawaii, monarchs do not migrate at all. Work by Altizer and others in her lab showed that parasite prevalence is lowest in the eastern North American population, which migrates the farthest distance, and highest in non-migratory populations. This could be because infected monarchs do not migrate successfully, as suggested by tethered-flight experiments with captive butterflies, or because parasites build up in habitats where monarchs breed year-round. Other work showed that parasites isolated from monarchs that flew the longest were less virulent than those found in monarchs that flew shorter distances or didn't migrate at all, suggesting that monarchs with highly virulent parasites didn't survive the longest migrations.
"Taken together, these findings tell us that migration is important for keeping monarch populations healthy -- a result that could apply to many other migratory animal species," said Altizer.
But for monarchs, and many other species, migration is now considered an endangered phenomenon. Deforestation, urbanization and the spread of agriculture have eliminated many stopover sites, and artificial barriers such as dams and fences have blocked migration routes for other species. These changes can artificially elevate animal densities and facilitate contact between wildlife, livestock and humans, increasing the risk that pathogens will spread across species. As co-author Han noted, "A lot of migratory species are unfairly blamed for spreading infections to humans, but there are just as many examples suggesting the opposite -- that humans are responsible for creating conditions that increase disease in migratory species."
And as the climate warms, species like the monarch may no longer need to undertake the arduous migratory journey to their wintering grounds. With food resources available year-round, some species may shorten or give up their migrations altogether -- prolonging their exposure to parasites in the environment, raising the rates of infection and favoring the evolution of more virulent disease strains. "Migration is a strategy that has evolved over millions of years in response to selection pressures driven by resources, predators and lethal parasitic infections -- any changes to this strategy could translate to changes in disease dynamics," said Han.
"There is an urgent need for more study of pathogen dynamics in migratory species and how human activities affect those dynamics," Altizer said. The paper concludes with an outline of challenges and questions for future research. "We need to learn more in order to make decisions about the conservation and management of wildlife and to predict and mitigate the effects of future outbreaks of infectious diseases."

Friday, January 14, 2011

Painkillers 'cause kidney damage'

Be it a body pain, a headache or the pain of a wound, all we do is to pop in a Painkiller. Soon the pain subsides and you sign off for a peaceful sleep, unaware of the fact that the painkiller is playing tricks on your body organs.
Experts say that occasional intake of Painkillers does not cause harm but regular practice may lead to serious health conditions. Surveys have proved the fast growing practice of Painkiller addiction. Most of the addicts were not even aware that they were addicted to the painkiller.
Painkillers do not work on any specific body part. All they do is to reduce the pain messages sent to brain and relax the reaction. Painkillers do the job of suppressing the pain but they never cure it. If the pain is an outcome of a constant health condition, it will return after a gap of few hours.
The two major body organs damaged by painkillers are Kidney and Heart.
Heart – According to a recent survey, excessive consumption of painkillers can lead to cardiac arrest, the condition in which the heart stops circulating blood to the body. Excessive consumption of painkillers hampers normal breathing, which in turn leads to a drop in oxygen supply.
This low oxygen level can disturb the heart's rhythm, producing a state called ventricular fibrillation. In this condition, though the heart continues to work, it does not supply enough blood to the body.
Kidney – A medicine that reduces pain is called an analgesic. Some analgesics, such as aspirin and ibuprofen, are available without a prescription. Overuse of these drugs can lead to kidney damage: rather than being fully broken down by the liver or the digestive system, they are excreted through the kidneys, which bear the damage.
Analgesics can cause two types of kidney damage – acute renal failure, and a chronic kidney disease called analgesic nephropathy.

Other Side Effects Of Painkillers
1. Constipation – Painkillers can disturb your bowel system. If not addressed in time, the constipation can be very painful and lead to other major conditions.
2. Dizziness – Painkillers sedate your brain and generally make you feel sleepy. With constant use, that drowsiness can become a permanent trait, and constant heavy use can lead to mental dullness and depression.
3. Nausea – Some painkillers contain a dose of morphine, which some people do not tolerate. This may cause nausea that eventually subsides, but continuous use of these painkillers may cause serious problems.
In a few situations, such as after an operation or in times of unbearable pain, painkillers can be used, but if the pain persists it is better to seek medical advice. Ignoring these problems while continuing to rely on painkillers is sure to lead to serious health conditions.

Wednesday, January 12, 2011

Water on Moon Originated from Comets

Researchers at the University of Tennessee, Knoxville, continue to chip away at the mysterious existence of water on the moon -- this time by discovering the origin of lunar water.
Larry Taylor, a distinguished professor in the Department of Earth and Planetary Sciences, discovered trace amounts of water on the moon last year. This discovery debunked beliefs held since the return of the first Apollo rocks that the moon was bone-dry.
Then, he discovered water was actually fairly abundant and ubiquitous -- enough that a human settlement on the moon is not out of the question.
Now, Taylor and a team of researchers have determined the lunar water may have originated from comets smashing into the moon soon after it formed.
His findings will be posted online in the article "Extraterrestrial Hydrogen Isotope Composition of Water in Lunar Rocks" on the website of the scientific journal Nature Geoscience.
Taylor and his fellow researchers conducted their study by analyzing rocks brought back from the Apollo missions. Using secondary ion mass spectrometry, they measured the samples' "water signatures," which point to the possible origin of the water -- and made the surprising discovery that the water on the Earth and moon are different.
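The "water signature" in question is the hydrogen isotope ratio. As a rough illustration of the standard geochemical notation (the example values below are hypothetical, not the paper's measurements), deuterium enrichment is reported as delta-D, the sample's deuterium-to-hydrogen ratio relative to Vienna Standard Mean Ocean Water (VSMOW); cometary ices are markedly deuterium-rich compared with Earth's oceans:

    # Illustrative delta-D calculation in standard geochemical notation.
    # The sample ratios below are hypothetical, not values from the paper.
    R_VSMOW = 1.5576e-4           # D/H ratio of Vienna Standard Mean Ocean Water

    def delta_D(r_sample):
        """delta-D in permil: positive means deuterium-rich relative to Earth's oceans."""
        return (r_sample / R_VSMOW - 1.0) * 1000.0

    print(delta_D(3.1e-4))        # -> ~+990 permil, comet-like enrichment
    print(delta_D(1.56e-4))       # -> ~+2 permil, indistinguishable from Earth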
"This discovery forces us to go back to square one on the whole formation of the Earth and moon," said Taylor. "Before our research, we thought the Earth and moon had the same volatiles after the Giant Impact, just at greatly different quantities. Our work brings to light another component in the formation that we had not anticipated -- comets."
Scientists believe the moon formed in a giant impact between the nascent Earth and a Mars-sized object called Theia, an explosion that threw material outward to aggregate and create the moon. Taylor's article theorizes that at this time, there was a great flux of comets, or "dirty icebergs," hitting both the Earth and moon systems. The Earth, already holding lots of water and other volatiles, did not change much. However, the moon, being bone-dry, acquired much of its water supply from these comets.
Taylor's research shows that water has been present throughout all of the moon's history -- some water being supplied externally by solar winds and post-formation comets and the other internally during the moon's original formation.
"The water we are looking at is internal," said Taylor. "It was put into the moon during its initial formation, where it existed like a melting pot in space, where cometary materials were added in at small yet significant amounts."
To be precise, the lunar water he has found does not consist of "water" -- the molecule H2O -- as we know it on Earth. Rather, it contains the ingredients for water -- hydrogen and oxygen -- which are liberated to create water when the rocks are heated. The existence of hydrogen and oxygen -- water -- on the moon can literally serve as a launch pad for further space exploration.
"This water could allow the moon to be a gas station in the sky," said Taylor. "Spaceships use up to 85 percent of their fuel getting away from Earth's gravity. This means the moon can act as a stepping stone to other planets. Missions can fuel up at the moon, with liquid hydrogen and liquid oxygen from the water, as they head into deeper space, to other places such as Mars."
Taylor collaborated with James P. Greenwood at Wesleyan University in Middletown, Conn.; Shoichi Itoh, Naoya Sakamoto and Hisayoshi Yurimoto at Hokkaido University in Japan; and Paul Warren at the University of California in Los Angeles.

Couch Potatoes Beware: Too Much Time Spent Watching TV Is Harmful to Heart Health

Spending too much leisure time in front of a TV or computer screen appears to dramatically increase the risk for heart disease and premature death from any cause, perhaps regardless of how much exercise one gets, according to a new study published in the January 18, 2011, issue of the Journal of the American College of Cardiology.
Data show that compared to people who spend less than two hours each day on screen-based entertainment like watching TV, using the computer or playing video games, those who devote more than four hours to these activities are more than twice as likely to have a major cardiac event that involves hospitalization, death or both.
The study -- the first to examine the association between screen time and non-fatal as well as fatal cardiovascular events -- also suggests metabolic factors and inflammation may partly explain the link between prolonged sitting and the risks to heart health.
"People who spend excessive amounts of time in front of a screen -- primarily watching TV -- are more likely to die of any cause and suffer heart-related problems," said Emmanuel Stamatakis, PhD, MSc, Department of Epidemiology and Public Health, University College London, United Kingdom. "Our analysis suggests that two or more hours of screen time each day may place someone at greater risk for a cardiac event."
In fact, compared with those spending less than two hours a day on screen-based entertainment, there was a 48% increased risk of all-cause mortality in those spending four or more hours a day and an approximately 125% increase in risk of cardiovascular events in those spending two or more hours a day. These associations were independent of traditional risk factors such as smoking, hypertension, BMI, social class, as well as exercise.
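To unpack those percentages (a worked illustration of the arithmetic, not additional study data): a "48% increased risk" corresponds to a hazard ratio of 1.48, and a "125% increase" to a hazard ratio of 2.25 -- the "more than twice as likely" figure quoted above.

    # Converting "percent increased risk" to hazard ratios (illustration only).
    def hazard_ratio(percent_increase):
        return 1.0 + percent_increase / 100.0

    print(hazard_ratio(48))    # -> 1.48, all-cause mortality at 4+ hours/day
    print(hazard_ratio(125))   # -> 2.25, cardiovascular events at 2+ hours/day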
The findings have prompted the authors to advocate for public health guidelines that expressly address recreational sitting (defined as sitting during non-work hours), especially as a majority of working-age adults already spend long periods inactive while commuting or slouched over a desk or computer.
"It is all a matter of habit. Many of us have learned to go back home, turn the TV set on and sit down for several hours -- it's convenient and easy to do. But doing so is bad for the heart and our health in general," said Dr. Stamatakis. "And according to what we know so far, these health risks may not be mitigated by exercise, a finding that underscores the urgent need for public health recommendations to include guidelines for limiting recreational sitting and other sedentary behaviors, in addition to improving physical activity."
Biological mediators also appear to play a role. Data indicate that one fourth of the association between screen time and cardiovascular events was explained collectively by C-reactive protein (CRP), body mass index, and high-density lipoprotein cholesterol, suggesting that inflammation and dysregulation of lipids may be one pathway through which prolonged sitting increases the risk for cardiovascular events. CRP, a well-established marker of low-grade inflammation, was approximately two times higher in people spending more than four hours of screen time per day compared to those spending less than two hours a day.
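That "one fourth" figure reflects a mediation-style calculation. One common rough approach in epidemiology, sketched below with hypothetical hazard ratios rather than the study's actual estimates, compares the excess risk before and after adjusting for the candidate mediators:

    # Rough "proportion of association explained" calculation.
    # The hazard ratios below are hypothetical, not the study's estimates.
    def proportion_explained(hr_unadjusted, hr_adjusted):
        """Share of the excess risk removed by adjusting for mediators."""
        return (hr_unadjusted - hr_adjusted) / (hr_unadjusted - 1.0)

    # e.g. if the HR falls from 2.25 to 1.94 after adjusting for CRP, BMI and HDL:
    print(proportion_explained(2.25, 1.94))   # -> ~0.25, about one fourth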
Dr. Stamatakis says the next step will be to try to uncover what prolonged sitting does to the human body in the short- and long-term, whether and how exercise can mitigate these consequences, and how to alter lifestyles to reduce sitting and increase movement and exercise.
The present study included 4,512 adults who were respondents of the 2003 Scottish Health Survey, a representative, household-based survey. A total of 325 all-cause deaths and 215 cardiac events occurred during an average of 4.3 years of follow up.
Measurement of "screen time" included self-reported TV/DVD watching, video gaming, as well as leisure-time computer use. The authors also included multiple measures to rule out the possibility that ill people spend more time in front of the screen, rather than the other way around. They excluded those who reported a previous cardiovascular event (before baseline) and those who died during the first two years of follow-up, in case underlying disease might have forced them to stay indoors and watch TV more often. Dr. Stamatakis and his team also adjusted analyses for indicators of poor health (e.g., diabetes, hypertension).

Sunday, January 9, 2011

Emotional Signals Are Chemically Encoded in Tears, Researchers Find

Emotional crying is a universal, uniquely human behavior. When we cry, we clearly send all sorts of emotional signals. In a paper published online January 6 in Science Express, scientists at the Weizmann Institute have demonstrated that some of these signals are chemically encoded in the tears themselves. Specifically, they found that merely sniffing a woman's tears -- even when the crying woman is not present -- reduces sexual arousal in men.
Humans, like most animals, expel various compounds in body fluids that give off subtle messages to other members of the species. A number of studies in recent years, for instance, have found that substances in human sweat can carry a surprising range of emotional and other signals to those who smell them.
But tears are odorless. In fact, in a first experiment led by Shani Gelstein, Yaara Yeshurun and their colleagues in the lab of Prof. Noam Sobel in the Weizmann Institute's Neurobiology Department, the researchers first obtained emotional tears from female volunteers watching sad movies in a secluded room and then tested whether men could discriminate the smell of these tears from that of saline. The men could not.
In a second experiment, male volunteers sniffed either tears or a control saline solution, and then had these applied under their nostrils on a pad while they made various judgments regarding images of women's faces on a computer screen. The next day, the test was repeated -- the men who were previously exposed to tears getting saline and vice versa. The tests were double blinded, meaning neither the men nor the researchers performing the trials knew what was on the pads. The researchers found that sniffing tears did not influence the men's estimates of sadness or empathy expressed in the faces. To their surprise, however, sniffing tears negatively affected the sex appeal attributed to the faces.
To further explore the finding, male volunteers watched emotional movies after similarly sniffing tears or saline. Throughout the movies, participants were asked to provide self-ratings of mood while being monitored for such physiological measures of arousal as skin temperature, heart rate, etc. Self-ratings showed that the subjects' emotional responses to sad movies were no more negative when exposed to women's tears, and the men "smelling" tears showed no more empathy. They did, however, rate their sexual arousal a bit lower. The physiological measures told a clearer story: these revealed a pronounced tear-induced drop in arousal, including a significant dip in testosterone -- a hormone related to sexual arousal.
Finally, in a fourth trial, Sobel and his team repeated the previous experiment within an fMRI machine that allowed them to measure brain activity. The scans revealed a significant reduction in activity levels in brain areas associated with sexual arousal after the subjects had sniffed tears.
Sobel said, "This study raises many interesting questions. What is the chemical involved? Do different kinds of emotional situations send different tear-encoded signals? Are women's tears different from, say, men's tears? Children's tears? This study reinforces the idea that human chemical signals -- even ones we're not conscious of -- affect the behavior of others."
Human emotional crying was especially puzzling to Charles Darwin, who identified functional antecedents to most emotional displays -- for example, the tightening of the mouth in disgust, which he thought originated as a response to tasting spoiled food. But the original purpose of emotional tears eluded him. The current study has offered an answer to this riddle: Tears may serve as a chemosignal. Sobel points out that some rodent tears are known to contain such chemical signals. "The uniquely human behavior of emotional tearing may not be so uniquely human after all," he says.

Major Advance in MRI Allows Much Faster Brain Scans

An international team of physicists and neuroscientists has reported a breakthrough in magnetic resonance imaging that allows brain scans more than seven times faster than currently possible.
 In a paper that appeared Dec. 20 in the journal PLoS ONE, a University of California, Berkeley, physicist and colleagues from the University of Minnesota and Oxford University in the United Kingdom describe two improvements that allow full three-dimensional brain scans in less than half a second, instead of the typical 2 to 3 seconds.
"When we made the first images, it was unbelievable how fast we were going," said first author David Feinberg, a physicist and adjunct professor in UC Berkeley's Helen Wills Neuroscience Institute and president of the company Advanced MRI Technologies in Sebastopol, Calif. "It was like stepping out of a prop plane into a jet plane. It was that magnitude of difference."
For neuroscience, in particular, fast scans are critical for capturing the dynamic activity in the brain.
"When a functional MRI study of the brain is performed, about 30 to 60 images covering the entire 3-D brain are repeated hundreds of times like the frames of a movie but, with fMRI, a 3-D movie," Feinberg said. "By multiplexing the image acquisition for higher speed, a higher frame rate is achieved for more information in a shorter period of time."
"The brain is a moving target, so the more refined you can sample this activity, the better understanding we will have of the real dynamics of what's going on here," added Dr. Marc Raichle, a professor of radiology, neurology, neurobiology, biomedical engineering and psychology at Washington University in St. Louis who has followed Feinberg's work.
Because the technique works on all modern MRI scanners, the impact of the ultrafast imaging technique will be immediate and widespread at research institutions worldwide, Feinberg said. In addition to broadly advancing the field of neural-imaging, the discovery will have an immediate impact on the Human Connectome Project, funded last year by the National Institutes of Health (NIH) to map the connections of the human brain through functional MRI (fMRI) and structural MRI scans of 1,200 healthy adults.
"At the time we submitted our grant proposal for the Human Connectome Project, we had aspirations of acquiring better quality data from our study participants, so this discovery is a tremendous step in helping us accomplish the goals of the project," said Dr. David Van Essen, a neurobiologist at Washington University and co-leader of the project. "It's vital that we get the highest quality imaging data possible, so we can infer accurately the brain's circuitry -- how connections are established, and how they perform."
The faster scans are made possible by combining two technical improvements invented in the past decade that separately boosted scanning speeds two to four times over what was already the fastest MRI technique, echo planar imaging (EPI). Physical limitations of each method prevented further speed improvements, "but together their image accelerations are multiplied," Feinberg said. The team can now obtain brain scans substantially faster than the time reductions reported in their paper and many times faster than the capabilities of today's machines.
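The speedup compounds multiplicatively: a SIR factor and a multiband factor applied together divide the whole-brain acquisition time by their product. A minimal sketch of the arithmetic, with illustrative factors rather than the paper's exact protocol parameters:

    # How multiplexed EPI acceleration compounds (illustrative factors only).
    def volume_time(baseline_seconds, sir_factor, multiband_factor):
        """Approximate whole-brain acquisition time after multiplexing."""
        return baseline_seconds / (sir_factor * multiband_factor)

    # A 2.4 s whole-brain EPI scan with SIR factor 2 and multiband factor 3:
    print(volume_time(2.4, 2, 3))   # -> 0.4 s, i.e. "less than half a second"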
Magnetic resonance imaging works by using a magnetic field and radio waves to probe the environment of hydrogen atoms in water molecules in the body. Because hydrogen atoms in blood, for example, respond differently than atoms in bone or tissue, computers can reconstruct the body's interior landscape without the use of penetrating X-rays.
Nearly 20 years ago, however, a new type of MRI called functional MRI (fMRI) was developed to highlight areas of the brain that are using oxygen, and thus presumably engaged in neuronal activity, such as thinking. Using echo planar imaging (EPI), fMRI vividly distinguishes oxygenated blood funneling into working areas of the brain from deoxygenated blood in less active areas.
As with standard MRI, fMRI machines create magnetic fields that vary slightly throughout the brain, providing a different magnetic environment for hydrogen atoms in different areas. The differing magnetic field strengths make the spin of each hydrogen atom precess at different rates, so that when a pulse of radio waves is focused on the head, the atoms respond differently depending on location and on their particular environment. Those that absorb radio energy and then release the energy are detected by magnetic coils surrounding the head, and these signals, or "echoes," are used to produce an image of the brain.
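The precession rate in question is the Larmor frequency, which is proportional to the local field strength: f = (gamma / 2*pi) * B, where gamma/2*pi is about 42.58 MHz per tesla for hydrogen. A quick sketch of how a field gradient maps position to frequency (the gradient strength below is an illustrative value, not from the paper):

    # Larmor frequency of hydrogen: how field differences encode position.
    GAMMA_BAR = 42.58e6            # Hz per tesla (gyromagnetic ratio / 2*pi for 1H)

    def larmor_hz(b_tesla):
        return GAMMA_BAR * b_tesla

    print(larmor_hz(3.0) / 1e6)    # -> ~127.7 MHz at a typical 3 T field
    # An illustrative 10 mT/m gradient changes the field by 1e-5 T per millimetre,
    # so protons 1 mm apart differ in frequency by about 426 Hz:
    print(larmor_hz(10e-3 * 1e-3)) # -> ~425.8 Hz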
With EPI, a single pulse of radio waves is used to excite the hydrogen atoms, but the magnetic fields are rapidly reversed several times to elicit about 50 to 100 echoes before the atoms settle down. The multiple echoes provide a high-resolution picture of the brain.
In 2002, Feinberg proposed using a sequence of two radio pulses to obtain twice the number of images in the same amount of time. Dubbed simultaneous image refocusing (SIR) EPI, it has proved useful in fMRI and for 3-D imaging of neuronal axonal fiber tracks, though the speed improvement is limited: with a train of more than four times as many echoes, the signal decays and image resolution drops.
Another acceleration improvement, multiband excitation of several slices using multiple coil detection, was proposed in the U.K. at about the same time by David Larkman for spinal imaging. The technique was recently used for fMRI by Steen Moeller and colleagues at the University of Minnesota. This technique, too, had limitations, primarily because the multiple coils are relatively widely spaced and cannot differentiate very closely spaced images.
In collaboration with Essa Yacoub, senior author on the paper, and Kamil Ugurbil, director of the University of Minnesota's Center for Magnetic Resonance Research and co-leader of the Human Connectome Project, Feinberg combined these techniques to get significantly greater acceleration than either technique alone while maintaining the same image resolution.
"With the two methods multiplexed, 10, 12 or 16 images the product of their two acceleration factors were read out in one echo train instead of one image," Feinberg said. "The new method is in the optimization phase and is now substantially faster than the scan times reported in this paper."
The ability to scan the brain in under 400 milliseconds moves fMRI closer to electroencephalography (EEG) for capturing very rapid sequences of events in the brain.
"Other techniques which capture signals derived from neuronal activity, EEG or MEG, have much higher temporal resolution; hundred microsecond neuronal changes. But MRI has always been very slow, with 2 second temporal resolution," Feinberg said. "Now MRI is getting down to a few hundred milliseconds to scan the entire brain, and we are beginning to see neuronal network dynamics with the high spatial resolution of MRI."
The development will impact general fMRI as well as diffusion imaging of axonal fibers in the brain, both of which are needed to achieve the main goal of the Human Connectome Project. Diffusion imaging reveals the axonal fiber networks that are the main nerve connections between areas of the brain, while fMRI shows which areas of the brain are functionally connected, that is, which areas are active together or sequentially during various activities.
"While it simply is not possible to show the billions of synaptic connections in the live human brain, the hope is that understanding patterns of how the normal brain is functionally interacting and structurally connected will lead to insights about diseases that involve miswiring in the brain," Feinberg said.
"We suspect several neurologic and psychiatric disorders, such as autism and schizophrenia, may be brain connectivity disorders, but we don't know what normal connectivity is," Feinberg added. "Although the fMRI and neuronal fiber images do not have the resolution of an electron microscope, the MRI derived Connectome reveals the live human brain and can be combined with genetic and environmental information to identify individual differences in brain circuitry."
Raichle, a collaborator in the NIH Human Connectome Project, is one of the pioneers of "resting state" MRI, in which brain scans are taken of subjects not engaged in any specific task. He believes that the ongoing spontaneous activity discovered during such scans will tell us how the brain remains flexible and maintains a degree of homeostasis so that "you know who you are."
"Being able to sample this ongoing activity at increasing temporal fidelity and precision becomes really important for understanding how the brain is doing this," Raichle said. "David is superclever at this kind of technical stuff, and I have been cheering him along, saying that the faster we can go, the better we can understand the brain's spontaneous activity."
The other authors of the PLoS ONE paper are Steen Moeller and Edward Auerbach of the Center for Magnetic Resonance Research at the University of Minnesota Medical School; Sudhir Ramanna of Advanced MRI Technologies; Matt F. Glasser of Washington University; and Karla L. Miller and Stephen M. Smith of the Oxford Centre for Functional MRI of the Brain at the University of Oxford. Feinberg is also affiliated with the UC San Francisco Department of Radiology.
The work was supported by the NIH's Human Connectome Project and by other grants from the NIH and from Advanced MRI Technologies.