SETTLING FOR ‘MR. RIGHT NOW’ BETTER THAN WAITING FOR ‘MR. RIGHT’

Evolutionary researchers have determined that settling for “Mr. Okay” is a better evolutionary strategy than waiting for “Mr. Perfect.” When studying the evolution of risk aversion researchers found that it is in our nature – traced back to the earliest humans – to take the safe bet when stakes are high, such as whether or not we will mate. Photo by D.L. Turner

Evolutionary researchers have determined that settling for “Mr. Okay” is a better evolutionary strategy than waiting for “Mr. Perfect.”

When studying the evolution of risk aversion, Michigan State University researchers found that it is in our nature – traced back to the earliest humans – to take the safe bet when stakes are high, such as whether or not we will mate.

“Primitive humans were likely forced to bet on whether or not they could find a better mate,” said Chris Adami, MSU professor of microbiology and molecular genetics and co-author of the paper.

“They could either choose to mate with the first, potentially inferior, companion and risk inferior offspring, or they could wait for Mr. or Ms. Perfect to come around,” he said. “If they chose to wait, they risk never mating.”

Adami and his co-author Arend Hintze, an MSU research associate, used a computational model to trace risk-taking behaviors through thousands of generations of evolution with digital organisms. These organisms were programmed to make bets in high-payoff gambles, which reflect the life-altering decisions that natural organisms must make, such as choosing a mate.
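The paper's code is not reproduced in this article, but the general setup it describes -- digital organisms that inherit a propensity to gamble on a single, high-stakes, once-in-a-lifetime choice, evolving inside isolated groups -- can be sketched as a toy simulation. All payoffs, gamble odds, mutation sizes, and group sizes below are illustrative assumptions, not the authors' parameters.

```python
# Toy sketch (not the authors' model): evolution of a "risk-taking" trait in
# digital organisms facing one high-stakes, once-per-lifetime gamble.
# All payoffs, rates, and group sizes are illustrative assumptions.
import random

SAFE_PAYOFF  = 1.0    # guaranteed, modest reproductive payoff ("settle early")
RISKY_PAYOFF = 2.5    # big payoff if the gamble succeeds ("wait for Mr. Perfect")
RISKY_CHANCE = 0.4    # probability the gamble pays off at all
MUTATION_SD  = 0.05   # mutation step for the heritable risk-taking probability

def evolve(group_size, generations=500, n_groups=10):
    """Return the mean risk-taking probability after evolving isolated groups."""
    groups = [[random.random() for _ in range(group_size)] for _ in range(n_groups)]
    for _ in range(generations):
        for g, group in enumerate(groups):
            # Each organism makes its one lifetime bet; payoff = expected offspring.
            payoffs = []
            for p in group:
                if random.random() < p:   # take the gamble
                    payoffs.append(RISKY_PAYOFF if random.random() < RISKY_CHANCE else 0.0)
                else:                     # settle for the sure thing
                    payoffs.append(SAFE_PAYOFF)
            if sum(payoffs) == 0:         # whole group failed to reproduce: reseed
                groups[g] = [random.random() for _ in range(group_size)]
                continue
            # Offspring are drawn in proportion to payoff, with small mutations.
            parents = random.choices(group, weights=payoffs, k=group_size)
            groups[g] = [min(1.0, max(0.0, p + random.gauss(0, MUTATION_SD)))
                         for p in parents]
    return sum(sum(g) for g in groups) / (n_groups * group_size)

if __name__ == "__main__":
    for size in (10, 50, 150, 1000):
        print(f"group size {size:>4}: mean risk-taking = {evolve(size):.2f}")
```

Running the sketch across several group sizes is one way to explore whether small groups drift toward lower risk-taking, in the spirit of the group-size effect described later in the article.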

“An individual might hold out to find the perfect mate but run the risk of coming up empty and leaving no progeny,” Adami said. “Settling early for the sure bet gives you an evolutionary advantage, if living in a small group.”

Adami and his team tested many variables that influence risk-taking behavior and concluded that certain conditions influence our decision-making process. The decision must be a rare, once-in-a-lifetime event and also have a high payoff for the individual’s future – such as the odds of producing offspring.

How risk averse we are correlates to the size of the group in which we were raised. If reared in a small group – fewer than 150 people – we tend to be much more risk averse than those who were part of a larger community.

It turns out that primitive humans lived in smaller groups, about 150 individuals. Because resources tend to be more scarce in smaller communities, this environment helps promote risk aversion.

“We found that it is really the group size, not the total population size, which matters in the evolution of risk aversion,” Hintze said.

However, not everyone develops the same level of aversion to risk. The study also found that evolution doesn't prefer one single, optimal way of dealing with risk, but instead allows a range of less risky, and sometimes more risky, behaviors to evolve.

“We do not all evolve to be the same,” Adami said. “Evolution creates a diversity in our acceptance of risk, so you see some people who are more likely to take bigger risks than others. We see the same phenomenon in our simulations.”

The research was part of an interdisciplinary collaboration with Ralph Hertwig of the Max Planck Institute for Human Development in Berlin.

Also contributing to the study was Randal Olson, graduate student, MSU Department of Computer Science and Engineering and BEACON Center for the Study of Evolution in Action.

Source: Michigan State University

Adults Sought for Study on Aging and Mobility

Walking stickman
The Biomechanics Laboratory in the kinesiology department is recruiting volunteers for a study about the effects of age and exercise on walking and muscle function. The researchers hope to learn about how changes in muscle function with age are related to the onset of disability.

The lab is looking for individuals who meet the following criteria: ages 55-70 with healthy body weight who participate in fewer than five 30-minute bouts of exercise per week (or less than a total of 150 minutes of planned exercise per week), no history of reconstructive surgery of the legs, no major health issues (heart disease, diabetes, neurological disease), able to walk for 30 minutes and no contraindications to MRI (metal implant, claustrophobia).

The study consists of two visits: a 1-hour visit to an MRI facility in Amherst and one 3-hour visit to the Biomechanics Lab on campus. During the lab visit researchers will collect data on how participants’ joints move as they walk over the ground and on a treadmill. The study will also collect data on the strength of the muscles in participants’ thighs. All procedures are non-invasive.

Source: UMass

EGFR-Mediated Beclin 1 Phosphorylation in Autophagy Suppression, Tumor Progression, and Tumor Chemoresistance

EGFR negatively regulates autophagy by binding to Beclin 1.
Active EGFR phosphorylates Beclin 1 and alters its interactome.
EGFR suppression of Beclin 1 may contribute to tumor progression in lung cancer.

Lung cancer responses to EGFR inhibitors may involve activation of Beclin 1. Image Credit: Cell Press

Summary
Cell surface growth factor receptors couple environmental cues to the regulation of cytoplasmic homeostatic processes, including autophagy, and aberrant activation of such receptors is a common feature of human malignancies. Here, we defined the molecular basis by which the epidermal growth factor receptor (EGFR) tyrosine kinase regulates autophagy. Active EGFR binds the autophagy protein Beclin 1, leading to its multisite tyrosine phosphorylation, enhanced binding to inhibitors, and decreased Beclin 1-associated VPS34 kinase activity. EGFR tyrosine kinase inhibitor (TKI) therapy disrupts Beclin 1 tyrosine phosphorylation and binding to its inhibitors and restores autophagy in non-small-cell lung carcinoma (NSCLC) cells with a TKI-sensitive EGFR mutation. In NSCLC tumor xenografts, the expression of a tyrosine phosphomimetic Beclin 1 mutant leads to reduced autophagy, enhanced tumor growth, tumor dedifferentiation, and resistance to TKI therapy. Thus, oncogenic receptor tyrosine kinases directly regulate the core autophagy machinery, which may contribute to tumor progression and chemoresistance.

Introduction
Epidermal growth factor receptor (EGFR), an oncogenic receptor tyrosine kinase, links extracellular signals to cellular homeostasis. In normal cells, EGFR signaling is triggered by the binding of growth factors, such as epidermal growth factor (EGF), leading to homodimerization or heterodimerization with other EGFR family members (such as HER2/neu) and autophosphorylation of the intracellular domain (Lemmon and Schlessinger, 2010). The phosphotyrosines formed serve as a docking site for adaptor molecules, which results in the activation of signaling pathways including the Ras/MAPK pathway, the PI3K/Akt pathway, and STAT signaling pathways. In tumor cells, the tyrosine kinase activity of EGFR may be dysregulated by EGFR gene mutation, increased EGFR gene copy number, or EGFR protein overexpression, leading to aberrant EGFR signaling and increased tumor cell survival, proliferation, invasion, and metastasis ( Ciardiello and Tortora, 2008). EGFR signaling is deregulated in many human cancers, including those of the lung, head and neck, colon, pancreas, and brain.

The deregulation of EGFR in human cancers has led to the development of anticancer agents that target EGFR, including: (1) anti-EGFR antibodies that inhibit ligand binding and (2) small-molecule receptor tyrosine kinase inhibitors (TKIs), erlotinib and gefitinib, that block EGFR intracellular tyrosine kinase activity. Although the EGFR TKIs have shown limited clinical benefit in the majority of solid tumors, they are effective in non-small-cell lung carcinomas (NSCLCs) that harbor specific mutations in the tyrosine kinase domain of EGFR (most commonly, in-frame deletion in exon 19 around codons 746–750 or single-base substitution, L858R, in exon 21) (Ciardiello and Tortora, 2008, Lynch et al., 2004 and Pao and Chmielecki, 2010). Most patients with NSCLCs with EGFR mutations initially respond favorably to erlotinib or gefitinib, suggesting these mutations drive tumorigenesis. However, among tumors that initially respond to EGFR TKIs, most eventually acquire resistance, often due to the emergence of a secondary mutation, T790M, in the kinase domain of EGFR (Pao and Chmielecki, 2010).

Several studies have shown that EGFR signaling regulates autophagy, a lysosomal degradation pathway that functions in cellular homeostasis and protection against a variety of diseases, including cancer (Levine and Kroemer, 2008). The downstream targets of EGFR—PI3K, Akt, and mammalian target of rapamycin (mTOR)—are well-established negative regulators of autophagy (Botti et al., 2006). Moreover, EGFR inhibitors induce autophagy in NSCLCs (Gorzalczany et al., 2011 and Han et al., 2011) and other cancer cells (Fung et al., 2012). However, the links between EGFR signaling and autophagy remain poorly understood, particularly (1) the molecular mechanisms by which EGFR signaling suppresses autophagy, (2) the role of EGFR suppression of autophagy in lung cancer pathogenesis, and (3) the role of autophagy induction in the response to TKI therapy. EGFR inhibitor-induced autophagy in lung cancer cells has been postulated to exert either cytoprotective (Han et al., 2011) or cytotoxic (Gorzalczany et al., 2011) effects.

Conflicting results regarding the role of autophagy in the response or resistance to EGFR TKI treatment reflect broader uncertainties in the role of autophagy in cancer therapy (Rubinsztein et al., 2012). It is not understood in what contexts autophagy induction contributes to tumor progression or suppression and to tumor chemoresistance or chemosensitivity. There is a general consensus that autophagy prevents tumor initiation, as loss-of-function mutations of several different autophagy genes result in spontaneous tumorigenesis (beclin 1, Atg5, and Atg7) and/or increased chemical-induced tumorigenesis (Atg4C) in mice (Rubinsztein et al., 2012). Despite this inhibitory role in tumor initiation, it has been proposed that autophagy may promote the growth of established tumors and contribute to chemoresistance, principally through its actions to prolong the survival of metabolically stressed neoplastic cells (Rubinsztein et al., 2012).

To understand the relationship between oncogenic signaling, autophagy, and distinct stages of tumorigenesis, it is important to define the molecular mechanisms by which oncogenic signaling regulates autophagy. We recently showed that the oncogene Akt inhibits autophagy independently of mTOR signaling via serine phosphorylation of the essential autophagy protein, Beclin 1 (Wang et al., 2012), a haploinsufficient tumor suppressor protein frequently monoallelically deleted in human breast and ovarian cancer (Levine and Kroemer, 2008). Moreover, Akt-mediated phosphorylation of Beclin 1 contributes to Akt-dependent fibroblast transformation, supporting the concept that inactivation of Beclin 1-dependent autophagy plays a role in tumor initiation. However, it is not known whether oncogenic inactivation of Beclin 1 (or other autophagy proteins) influences progression of established tumors and/or their response to therapy.


Here, we identify the molecular basis by which EGFR tyrosine kinase activity regulates autophagy. We show that active EGFR binds to Beclin 1, leading to its tyrosine phosphorylation, alteration of its interactome, and inhibition of its autophagy function. A mutant of Beclin 1 containing phosphomimetic mutations in the EGFR-dependent tyrosine phosphorylation sites enhances autophagy suppression in EGFR-mutated NSCLC cells, resulting in enhanced tumor progression, altered tumor cell differentiation, and partial tumor resistance to EGFR TKI therapy. These findings demonstrate a heretofore unknown link between oncogenic receptor tyrosine kinases and the autophagy machinery, which may contribute to tumor progression and resistance to targeted therapy.

Source: Full article at Cell Press

Reducing Myc gene activity extends healthy lifespan in mice

No bones about it: Young mice have good bone density whether they have two copies (top row; +/+) or one copy (bottom row; +/-) of the Myc gene. As they age, researchers found, mice with just one copy maintain better bone density and stay healthy longer. Sedivy lab/Brown University
Mice with one rather than the normal two copies of the gene Myc (also found in humans) lived 15 percent longer and had considerably healthier lives than normal mice, according to a new Brown University-led study in Cell.

PROVIDENCE, R.I. [Brown University] — A team of scientists based at Brown University has found that reducing expression of a fundamentally important gene called Myc significantly increased the healthy lifespan of laboratory mice, the first such finding regarding this gene in a mammalian species.

Myc is found in the genomes of all animals, ranging from ancestral single-celled organisms to humans. It is a major topic of biomedical research and has been shown to be a central regulator of cell proliferation, growth, and death. It is of such widespread and fundamental importance that animals cannot live without it. But in humans and mice, too much expression of the protein that Myc encodes has been closely linked to cancer, making it a well-known but elusive target of drug developers.

In a new study in the journal Cell, the scientists report that when they bred laboratory mice to have only one copy of the gene, instead of the normal two, thus reducing the expression of the encoded protein, those mice lived 15 percent longer on average — 20 percent longer for females and 10 percent longer among males — than normal mice. Moreover, the experimental mice showed many signs of better health into old age.

The experimental — “heterozygous” — mice grew to be about 15 percent smaller than the normal mice (a probable disadvantage in the wild) but that was the only discernable downside found to date for lacking a second copy of the gene, said senior author John Sedivy, the Hermon C. Bumpus Professor of Biology and professor of medical science at Brown.

“The animals are definitely aging slower,” he said. “They are maintaining the function of their organs and tissues for longer periods of time.”

Physiological differences

That assessment is based on detailed studies of the physiology — down to the molecular level — of the heterozygous and normal mice. The researchers conducted these experiments to try to understand the longevity difference between the two groups.
Co-lead author Jeffrey Hoffman, a medical and doctoral student, led the studies of the health of the mice, including various bodily systems. In many cases they were just like their normal counterparts. They reproduced just as well, for example.

“These mice are incredibly normal, yet they are really long-lived,” Sedivy said. “The reason why we were struck by that is because in many other longevity models like caloric restriction or treatment with rapamycin, the animals live longer but they also have some health issues.”

Instead the Myc heterozygous mice simply experienced fewer problems of aging. They did not develop osteoporosis, they maintained a healthier balance of immune system T cells, had less cardiac fibrosis, were more active, experienced less age-related slowing of their metabolic rate, produced less cholesterol, and exhibited better coordination.

Graduate student and co-lead author Xiaoai Zhao, meanwhile, led the molecular analysis of several pathways known to be involved in regulating longevity to find out how they might be different. Sure enough, heterozygous mice exhibited changes in IGF-1 signaling and nutrient and energy-sensing pathways, but how Myc engages those mechanisms is still not clear. Of particular interest, heterozygous mice showed less protein synthesis in several tissues. Regulation of this process is known to be under direct Myc control, and its reduction by a variety of means is known to extend lifespan in diverse species from yeast to mammals.

Genome-wide patterns of gene expression showed that Myc heterozygotes had significant differences in pathways related to metabolism and the immune system. Those patterns, however, only overlapped somewhat with patterns seen in other lifespan extending interventions.

Zhao and Hoffman’s studies also argue against a role for Myc in an oft-cited paradigm of greater longevity: upregulation of a variety of stress defense mechanisms. Their experimental mice seemed to suffer from as much stress and consequences of stress as normal mice.

The different benefits of Myc reduction compared with other laboratory longevity extenders show that, just as there are many ways the body can break down with aging, Sedivy said, there may be many ways to forestall that.

“There is more than one way to become long-lived,” Sedivy said.

Help for humans?

In the long term, Sedivy said he is optimistic that the findings about Myc could prove to matter to human health.

Finding the right target for a drug in one of Myc’s key metabolic or immune system pathways may or may not extend human lifespan, he said, but it might help people stay healthier as they age — for example, if it can reduce osteoporosis in people the way it does in mice. In particular, Sedivy said, it emphasizes the importance of the process of protein synthesis as a target of interventions that are likely to have widespread benefits on many organ systems.

And the study also offers encouragement to companies seeking to develop cancer drugs that block Myc overexpression. As important as normal Myc expression is to physiology, it appears that at least in mice there were many substantial benefits in reducing it by, say, half. Thus, Sedivy said, any drug that can target Myc directly is likely to find many applications beyond cancer.

In addition to Sedivy, Hoffman, and Zhao, the paper's other authors are Marco De Cecco, Abigail Peterson, Luca Paglilaroli, Jayameenakshi Manivannan, Bin Feng, Thomas Serre, Kevin Bath, Haiyan Xu, and Nicola Neretti of Brown; Gene Hubbard, Wenbo Qi, and Holly Van Remmen of the University of Texas; Yongqing Zhang and Rafael de Cabo of the National Institute on Aging; and Richard Miller of the University of Michigan.

The National Institutes of Health (grants R37AG016694, F30AG035592), the Ellison Medical Foundation, and the Glenn Award for Research on the Biological Mechanism of Aging supported the research. Some experiments were conducted in the Brown University molecular pathology and genomics cores.
Not just a longer life, but a healthier, stronger body. "Do you think she might be a Myc hypomorph?" Drawing: Emma Sedivy
Source: Brown University

An Advanced Method of DNA Nanostructure Formation Developed

Figure 1: Uni-molecular magnetic tweezers orchestrating the DNA nanostructure formation
Professor Tae-Young Yoon’s research team from the Department of Physics at KAIST has developed a new method to form DNA nanostructures by using magnetic tweezers to observe and to induce the formation of the structure in real time.

Unlike traditional "DNA origami" designs, which rely on thermal or chemical annealing methods, the new technology exploits a completely different dynamic of DNA folding. This allows the folding to be completed within only ten minutes.

Developed in 2006, "DNA origami" allows a long skeleton of DNA to be folded into an arbitrary structure by using small stapler DNA pieces. It has become a prominent method in DNA nanotechnology.
 
Figure 2: The evolution of DNA nanostructure formation using magnetic tweezers. The DNA nanostructure with a 21-nanometer size was formed in about eight minutes.

However, the traditional technology, which relies on thermal processes, cannot control DNA assembly during folding because all of the interactions among the DNA strands occur simultaneously. As a result, the thermal processes, which take dozens of hours to complete, had to be repeated many times to find the optimal conditions.

The research team instead designed a DNA-folding scheme using uni-molecular magnetic tweezers, which apply force to a single DNA molecule while measuring its state. With this technology, they were able to induce the formation of a DNA nanostructure and observe it at the same time.

During high-temperature heat treatment, the first stage of conventional thermal processes, the internal structure of the long skeleton DNA untangles. To induce this state, the team attached one side of the skeleton DNA to a glass surface and the other side to a magnetic material, then unfolded the internal structure of the DNA by pulling the two ends apart with magnetic force.

Unlike the conventional thermal processes, this method lets the stapler DNA swiftly adhere to the skeleton DNA within a minute because the sites are revealed at room temperature.

After the stapler pieces connected to the skeleton, the team removed the magnetic force. Next, the structure folded through self-assembly as the stapler DNAs stuck to different sites on the skeleton DNA.

Professor Yoon said, "With the existing thermal methods, we could not differentiate the reactions of the DNA because the responses of the individual DNA pieces mutually interacted with each other." He added, "Using the magnetic tweezers, we were able to sort the process of DNA nanostructure formation into a series of well-known reactions of DNA molecules, and shorten the formation time to only ten minutes."

He commented, “This nanostructure formation method will enable us to create more intricate and desirable DNA nanostructures by programming the folding of DNA origami structures.”

Conducted by Dr. Woori Bae under the guidance of Professor Yoon, the research findings were published online in the December 4th issue of Nature Communications.


Source: KAIST

Skull sheds light on human-Neanderthal relationship

Retrieved from a cave in northern Israel, the partial skull provides the first evidence that Homo sapiens inhabited that region at the same time as Neanderthals. (Reuters: Nikola Solic)
A partial skull, found in a cave in Israel, is shedding light on the pivotal moment in early human history when our species left Africa and encountered our close cousins the Neanderthals.

Anthropologist Israel Hershkovitz, from Tel Aviv University, called the skull "an important piece of the puzzle of the big story of human evolution."

The findings of the research, led by Hershkovitz, are published today in the journal Nature.

The upper part of the skull - the domed portion without the face or jaws - was unearthed in Manot Cave in Israel's Western Galilee.

Scientific dating techniques determined the skull was about 55,000 years old, a time period when members of our species were thought to have been marching out of Africa.

The researchers say characteristics of the skull suggest the individual was closely related to the first Homo sapiens populations that later colonized Europe.

They also say the skull provides the first evidence that Homo sapiens inhabited that region at the same time as Neanderthals, our closest extinct human relative.

Previous genetic evidence suggests our species and Neanderthals interbred around the time the skull is dated to, with all people of Eurasian ancestry still retaining a small amount of Neanderthal DNA as a result.

"It is the first direct fossil evidence that modern humans and Neanderthals inhabited the same area at the same time," says palaeontologist Bruce Latimer of Case Western Reserve University in Cleveland, another of the researchers.

"The co-existence of these two populations in a confined geographic region at the same time that genetic models predict interbreeding promotes the notion that interbreeding may have occurred in the Levant region," Hershkovitz says.

The robust, large-browed Neanderthals prospered across Europe and Asia from about 350,000 to 40,000 years ago, going extinct sometime after Homo sapiens arrived.

Scientists say our species first appeared about 200,000 years ago in Africa and later migrated outwards. The cave is located along the sole land route for ancient humans to take from Africa into the Middle East, Asia and Europe.

Latimer says he suspects the skull belonged to a woman, although the researchers could not say definitively.

The cave, sealed off for 30,000 years, was discovered in 2008 during sewage line construction work. Hunting tools, perforated seashells perhaps used ornamentally and animal bones have been excavated from the cave, along with further human remains.

Source: ABC

Why Do We Feel Thirst? An Interview with Yuki Oka

Credit: Lance Hayashida/Caltech Marketing and Communications
To fight dehydration on a hot summer day, you instinctively crave the relief provided by a tall glass of water. But how does your brain sense the need for water, generate the sensation of thirst, and then ultimately turn that signal into a behavioral trigger that leads you to drink water? That's what Yuki Oka, a new assistant professor of biology at Caltech, wants to find out.

Oka's research focuses on the study of how the brain and body work together to maintain a healthy ratio of salt to water as part of a delicate form of biological balance called homeostasis.

Recently, Oka came to Caltech from Columbia University. We spoke with him about his work, his interests outside of the lab, and why he's excited to be joining the faculty at Caltech.

Can you tell us a bit more about your research?

The goal of my research is to understand the mechanisms by which the brain and body cooperate to maintain our internal environment's stability, which is called homeostasis. I'm especially focusing on fluid homeostasis, the fundamental mechanism that regulates the balance of water and salt. When water or salt are depleted in the body, the brain generates a signal that causes either a thirst or a salt craving. And that craving then drives animals to either drink water or eat something salty.

I'd like to know how our brain generates such a specific motivation simply by sensing internal state, and then how that motivation—which is really just neural activity in the brain—goes on to control the behavior.

Why did you choose to study thirst?

After finishing my Ph.D. in Japan, I came to Columbia University where I worked on salt sensing mechanisms in the mammalian taste system. We found that the peripheral taste system has a key function for salt homeostasis in the body by regulating our salt intake behavior. But of course, the peripheral sensor does not work by itself.  It requires a controller, the brain, which uses information from the sensor. So I decided to move on to explore the function of the brain; the real driver of our behaviors.

I was fascinated by thirst because the behavior it generates is very robust and stereotyped across various species. If an animal feels thirst, the behavioral output is simply to drink water. On the other hand, if the brain triggers salt appetite, then the animal specifically looks for salt—nothing else. These direct causal relations make it an ideal system to study the link between the neural circuit and the behavior.

You recently published a paper on this work in the journal Nature. Could you tell us about those findings?

In the paper, we linked specific neural populations in the brain to water drinking behavior. Previous work from other labs suggested that thirst may stem from a part of the brain called the hypothalamus, so we wanted to identify which groups of neurons in the hypothalamus control thirst. Using a technique called optogenetics that can manipulate neural activities with light, we found two distinct populations of neurons that control thirst in two opposite directions. When we activated one of those two populations, it evoked an intense drinking behavior even in fully water-satiated animals. In contrast, activation of a second population drastically suppressed drinking, even in highly water-deprived thirsty animals.  In other words, we could artificially create or erase the desire for drinking water.

Our findings suggest that there is an innate brain circuit that can turn an animal's water-drinking behavior on and off, and that this circuit likely functions as a center for thirst control in the mammalian brain. This work was performed with support from Howard Hughes Medical Institute and National Institutes of Health [for Charles S. Zuker at Columbia University, Oka's former advisor].

You use a mouse model to study thirst, but does this work have applications for humans?

There are many fluid homeostasis-associated conditions; one example is dehydration. We cannot specifically say a direct application for humans since our studies are focused on basic research. But if the same mechanisms and circuits exist in mice and humans, our studies will provide important insights into human physiologies and conditions.

Where did you grow up—and what started your initial interest in science?

I grew up in Japan, close to Tokyo, but not really in the center of the city. It was a nice combination between the big city and nature. There was a big park close to my house and when I was a child, I went there every day and observed plants and animals. That's pretty much how I spent my childhood. My parents are not scientists—neither of them, actually. It was just my innate interest in nature that made me want to be a scientist.

What drew you to Caltech?

I'm really excited about the environment here and the great climate. That's actually not trivial; I think the climate really does affect the people. For example, if you compare Southern California to New York, it's just a totally different character. I came here for a visit last January, and although it was my first time at Caltech I kind of felt a bond. I hadn't even received an offer yet, but I just intuitively thought, "This is probably the place for me."

I'm also looking forward to talking to my colleagues here who use fMRI for human behavioral research. One great advantage about using human subjects in behavioral studies is that they can report back to you about how they feel. There are certainly advantages of using an animal model, like mice. But they cannot report back. We just observe their behavior and say, "They are drinking water, so they must be thirsty." But that is totally different than someone telling you, "I feel thirsty." I believe that combining advantages of animal and human studies should allow us to address important questions about brain functions.

Do you have any hobbies?

I play basketball in my spare time, but my major hobby is collecting fossils. I have some trilobites and, actually, I have a complete set of bones from a type of herbivorous dinosaur. It is being shipped from New York right now and I may put it in my new office.

Written by Jessica Stoller-Conrad


Source: California Institute of Technology

Platelets modulate clotting behavior by 'feeling' their surroundings

Researchers devised a way to separate the physical stiffness of the material where platelets spread out from its biochemical properties. Credit: Wilbur Lam
Platelets, the tiny cell fragments whose job it is to stop bleeding, are very simple. They don't have a cell nucleus. But they can "feel" the physical environment around them, researchers at Emory and Georgia Tech have discovered.

Platelets respond to surfaces with greater stiffness by increasing their stickiness, the degree to which they "turn on" other platelets and other components of the clotting system, the researchers found.

"Platelets are smarter than we give them credit for, in that they are able to sense the physical characteristics of their environment and respond in a graduated way," says Wilbur Lam, MD, PhD, assistant professor in the Department of Pediatrics at Emory University School of Medicine and in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University.

The results are published in Proceedings of the National Academy of Sciences. The first author of the paper is research associate Yongzhi Qiu. Lam is also a physician in the Aflac Cancer and Blood Disorders Center, Children's Healthcare of Atlanta.

The researchers' findings could influence the design of medical devices, because when platelets grab onto the surfaces of catheters and medical implants, they tend to form clots, a major problem for patient care.

Modifying the stiffness of materials used in these devices could reduce clot formation, the authors suggest. The results could also guide the refinement of blood thinning drugs, which are prescribed to millions to reduce the risk of heart attack or stroke.

The team was able to separate physical and biochemical effects on platelet behavior by forming polymer gels with different degrees of stiffness, and then overlaying them each with the same coating of fibrinogen, a sticky protein critical for blood clotting. Fibrinogen is the precursor for fibrin, which forms a mesh of insoluble strands in a blood clot.

With stiffer gels, platelets spread out more and become more activated. This behavior is most pronounced when the concentration of fibrinogen is relatively low, the researchers found.

"This variability helps to explain platelet behavior in the 3D context of a clot in the body, which can be quite heterogenous in makeup," Lam says.

Qiu and colleagues were also able to dissect platelet biochemistry by allowing the platelets to adhere and then spread on the various gels under the influence of drugs that interfere with different biochemical steps.

Proteins called integrins, which engage the fibrinogen, and the protein Rac1 are involved in the initial mechanical sensing during adhesion, while myosin and actin, components of the cytoskeleton, are responsible for platelet spreading.

"We found that the initial adhesion and later spreading are separable, because different biochemical pathways are involved in each step," Lam says. "Our data show that mechanosensing can occur and plays important roles even when the cellular structural building blocks are fairly basic, even when the nucleus is absent."

Breathing in diesel exhaust leads to changes 'deep under the hood'

A student participates in the study while seated in a booth.
Credit: Image courtesy of University of British Columbia
Just two hours of exposure to diesel exhaust fumes can lead to fundamental health-related changes in biology by switching some genes on, while switching others off, according to researchers at the University of British Columbia and Vancouver Coastal Health.

The study involved putting volunteers in a polycarbonate-enclosed booth -- about the size of a standard bathroom -- while breathing in diluted and aged exhaust fumes that are about equal to the air quality along a Beijing highway, or a busy port in British Columbia.

The researchers examined how such exposure affected the chemical "coating" that attaches to many parts of a person's DNA. That carbon-hydrogen coating, called methylation, can silence or dampen a gene, preventing it from producing a protein -- sometimes to a person's benefit, sometimes not. Methylation is one of several mechanisms for controlling gene expression, which is the focus of a rapidly growing field of study called epigenetics.

The study, published this month in Particle and Fibre Toxicology, found that diesel exhaust caused changes in methylation at about 2,800 different points on people's DNA, affecting about 400 genes. In some places it led to more methylation; in more cases, it decreased methylation.
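In computational terms, a finding like "changes in methylation at about 2,800 different points" typically comes from a site-by-site paired comparison of methylation levels (filtered air versus diesel exhaust in the same person) followed by a multiple-testing correction. The sketch below illustrates that generic pattern on simulated beta values; it is not the study's actual pipeline, and the subject count, effect sizes, and thresholds are assumptions.

```python
# Generic sketch of a differential-methylation screen: paired comparison of
# methylation "beta" values (0..1) per CpG site, filtered air vs diesel exhaust,
# with Benjamini-Hochberg correction. Data here are simulated, not the study's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_sites = 16, 5000

baseline = rng.uniform(0.1, 0.9, size=(n_subjects, n_sites))
exposed  = baseline + rng.normal(0, 0.02, size=(n_subjects, n_sites))
exposed[:, :200] += 0.05            # pretend 200 sites truly gain methylation

# Paired t-test at every site (same subject measured under both conditions).
t, p = stats.ttest_rel(exposed, baseline, axis=0)

# Benjamini-Hochberg FDR at 5%.
order = np.argsort(p)
ranked = p[order] * n_sites / (np.arange(n_sites) + 1)
significant = np.zeros(n_sites, dtype=bool)
passing = np.where(ranked <= 0.05)[0]
if passing.size:
    significant[order[: passing.max() + 1]] = True

print(f"differentially methylated sites: {significant.sum()}")
print(f"of which gained methylation:     {(significant & (t > 0)).sum()}")
```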

How these changes in gene expression translate to health is the next step for researchers. But this study shows how vulnerable our genetic machinery can be to air pollution, and that changes are taking place even if there are no obvious symptoms.

"Usually when we look at the effects of air pollution, we measure things that are clinically obvious -- air flow, blood pressure, heart rhythm," said senior author Dr. Chris Carlsten, an associate professor in the Division of Respiratory Medicine. "But asthma, higher blood pressure or arrhythmia might just be the gradual accumulation of epigenetic changes. So we've revealed a window into how these long-term problems arise. We're looking at changes 'deep under the hood.'"

The fact that DNA methylation was affected after only two hours of exposure has positive implications, said Carlsten, the AstraZeneca Chair in Occupational and Environmental Lung Disease.

"Any time you can show something happens that quickly, it means you can probably reverse it -- either through a therapy, a change in environment, or even diet," he said.

Carlsten's team, having catalogued the changes along the entire human genome, is now sharing its data with scientists who are further exploring the function of specific genes.

Cold virus replicates better at cooler temperatures

Artist's rendering of a rhinovirus (stock illustration). Credit: © fotoliaxrender / Fotolia
The common cold virus can reproduce itself more efficiently in the cooler temperatures found inside the nose than at core body temperature, according to a new Yale-led study. This finding may confirm the popular yet contested notion that people are more likely to catch a cold in cool-weather conditions.

Researchers have long known that the most frequent cause of the common cold, the rhinovirus, replicates more readily in the slightly cooler environment of the nasal cavity than in the warmer lungs. However, the focus of prior studies has been on how body temperature influenced the virus as opposed to the immune system, said study senior author and Yale professor of immunobiology Akiko Iwasaki.

To investigate the relationship between temperature and immune response, Iwasaki and an interdisciplinary team of Yale researchers spearheaded by Ellen Foxman, a postdoctoral fellow in Iwasaki's lab, examined the cells taken from the airways of mice. They compared the immune response to rhinovirus when cells were incubated at 37 degrees Celsius, or core body temperature, and at the cooler 33 degrees Celsius. "We found that the innate immune response to the rhinovirus is impaired at the lower body temperature compared to the core body temperature," Iwasaki said.

The study also strongly suggested that the varying temperatures influenced the immune response rather than the virus itself. Researchers observed viral replication in airway cells from mice with genetic deficiencies in the immune system sensors that detect virus and in the antiviral response. They found that with these immune deficiencies, the virus was able to replicate at the higher temperature. "That proves it's not just virus intrinsic, but it's the host's response that's the major contributor," Iwasaki explained.

Although the research was conducted on mouse cells, it offers clues that may benefit people, including the roughly 20% of us who harbor rhinovirus in our noses at any given time. "In general, the lower the temperature, it seems the lower the innate immune response to viruses," noted Iwasaki. In other words, the research may give credence to the old wives' tale that people should keep warm, and even cover their noses, to avoid catching colds.

Yale researchers also hope to apply this insight into how temperature affects immune response to other conditions, such as childhood asthma. While the common cold is no more than a nuisance for many people, it can cause severe breathing problems for children with asthma, noted Foxman. Future research may probe the immune response to rhinovirus-induced asthma.

The study was published in the Proceedings of the National Academy of Sciences.

Source: Yale University

Keeping upright: How much gravity is enough?

The experimental setup. (A) Participants lay on a human centrifuge with their feet out so that centripetal force from the centrifuge produced a centripetal force simulating gravity along the long axis of the body. (B) They viewed a screen mounted above their heads which presented a scene tilted at 112° relative to their bodies. The direction signaled by each cue to upright is indicated by arrow: red, vision; green, simulated gravity and blue, the body. (C) Thus, the three vectors involved in determining the perceptual upright (body, gravity and vision) could be dissociated.
 Credit: Harris et al; doi:10.1371/journal.pone.0106207.g001
Keeping upright in a low-gravity environment is not easy, and NASA documents abound with examples of astronauts falling on the lunar surface. Now, a new study by an international team of researchers led by York University professors Laurence Harris and Michael Jenkin, published today in PLOS ONE, suggests that the reason for all these moon mishaps might be because its gravity isn't sufficient to provide astronauts with unambiguous information on which way is "up."

"The perception of the relative orientation of oneself and the world is important not only to balance, but also for many other aspects of perception including recognizing faces and objects and predicting how objects are going to behave when dropped or thrown," says Harris. "Misinterpreting which way is up can lead to perceptual errors and threaten balance if a person uses an incorrect reference point to stabilize themselves."

Using a short-arm centrifuge provided by the European Space Agency, the international team simulated gravitational fields of different strengths, and used a York-invented perceptual test to measure the effectiveness of gravity in determining the perception of up. The team found that the threshold level of gravity needed to just influence a person's orientation judgment was about 15 per cent of the level found on Earth -- very close to that on the moon.

The team also found that Martian gravity, at 38 per cent of that on Earth, should be sufficient for astronauts to orient themselves and maintain balance on any future manned missions to Mars.
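As a rough sanity check on these numbers, the reported threshold (about 0.15 g) sits just below lunar gravity (about 0.166 g) and well below Martian gravity (0.38 g). A short-arm centrifuge produces its simulated gravity as centripetal acceleration, a = ω²r, so the spin rate needed for any target g-level follows directly. The sketch below works through that arithmetic for an assumed 2 m arm radius, which is illustrative rather than the ESA device's actual geometry.

```python
# Back-of-the-envelope check: compare the reported ~0.15 g threshold with lunar
# and Martian gravity, and estimate the spin rate a short-arm centrifuge needs
# to simulate each level at the feet (a = omega^2 * r). The 2 m arm radius is
# an illustrative assumption, not the ESA centrifuge's actual geometry.
import math

G_EARTH = 9.81            # m/s^2
ARM_RADIUS = 2.0          # m, assumed distance from rotation axis to the feet

levels = {
    "perceptual threshold (study)": 0.15,
    "Moon":                         0.166,
    "Mars":                         0.38,
}

for name, fraction in levels.items():
    a = fraction * G_EARTH                     # target centripetal acceleration
    omega = math.sqrt(a / ARM_RADIUS)          # rad/s
    rpm = omega * 60 / (2 * math.pi)
    print(f"{name:30s} {fraction:5.2f} g  ->  {rpm:4.1f} rpm at r = {ARM_RADIUS} m")
```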
"If the brain does not sense enough gravity to determine which way is up, astronauts may get disoriented, which can lead to errors like flipping switches the wrong way or moving the wrong way in an emergency," says Jenkin. "Therefore, it's crucial to understand how the direction of up is established and to establish the relative contribution of gravity to this direction before journeying to environments with gravity levels different to that of Earth."
This work builds upon results obtained in long-duration microgravity by Harris and Jenkin and other members of York's Centre for Vision Research on board the International Space Station during the Bodies in the Space Environment project, funded by the Canadian Space Agency.

Source: York University

Ancient human genome from southern Africa throws light on our origins

Professor Vanessa Hayes in the field.
The skeleton of a man who lived 2,330 years ago in the southernmost tip of Africa tells us about ourselves as humans, and throws some light on our earliest common genetic ancestry.

What can DNA from the skeleton of a man who lived 2,330 years ago in the southernmost tip of Africa tell us about ourselves as humans? A great deal, when his DNA profile is one of the 'earliest diverged' -- oldest in genetic terms -- found to date in a region where modern humans are believed to have originated roughly 200,000 years ago.

The man's maternal DNA, or 'mitochondrial DNA', was sequenced to provide clues to early modern human prehistory and evolution. Mitochondrial DNA provided the first evidence that we all come from Africa, and helps us map a figurative genetic tree, all branches deriving from a common 'Mitochondrial Eve'.

When archaeologist Professor Andrew Smith from the University of Cape Town discovered the skeleton at St. Helena Bay in 2010, very close to the site where 117,000-year-old human footprints had been found -- dubbed "Eve's footprints" -- he contacted Professor Vanessa Hayes, an expert in African genomes.

At the time, Hayes was Professor of Genomic Medicine at the J. Craig Venter Institute in San Diego, California. She now heads the Laboratory for Human Comparative and Prostate Cancer Genomics at Sydney's Garvan Institute of Medical Research.

The complete 1.5 metre tall skeleton was examined by Professor Alan Morris, from the University of Cape Town. A biological anthropologist, Morris showed that the man was a 'marine forager'. A bony growth in his ear canal, known as 'surfer's ear', suggested that he spent some time diving for food in the cold coastal waters, while shells carbon-dated to the same period, and found near his grave, confirmed his seafood diet. Osteoarthritis and tooth wear placed him in his fifties.

Due to the acidity of the soil within the region, acquiring DNA from skeletons has proven problematic. The Hayes team therefore worked with the world's leading laboratory in ancient DNA research, namely that of paleogeneticist Professor Svante Pääbo at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, who successfully sequenced a Neanderthal.

The team generated a complete mitochondrial genome, using DNA extracted from a tooth and a rib. The findings provided genomic evidence that this man, from a lineage now presumed extinct, and other indigenous coastal dwellers like him, were the most closely related to 'Mitochondrial Eve'.

The study underlines the significance of southern African archaeological remains in defining human origins, and is published in the journal Genome Biology and Evolution, now online.

"We were thrilled that archaeologist Andrew Smith understood the importance of not touching the skeleton when he found it, and so did not contaminate its DNA with modern human DNA," said Professor Hayes.

"I approached Svante Pääbo because his lab is the best in the world at DNA extraction from ancient bones. This skeleton was very precious and we needed 
to make sure the sample was in safe hands."

"Alan Morris undertook some incredible detective work. He used his skills in forensics and murder cases to assemble a profile of the man behind the St Helena skeleton."

"Alan helped establish that this man was a marine hunter-gatherer -- in contrast to the contemporary inland hunter-gatherers from the Kalahari dessert. We were very curious to know how this man related to them."

"We also know that this man pre-dates migration into the region, which took place around 2,000 years ago when pastoralists made their way down the coast from Angola, bringing herds of sheep. We could demonstrate that our marine hunter-gatherer carried a different maternal lineage to these early migrants -- containing a DNA variant that we have never seen before."

"Because of this, the study gives a baseline against which historic herders at the Cape can now be compared."

While interested in African lineages, and how they interact with each other, Professor Hayes is especially keen for Africa to inform genomic research and medicine worldwide.

"One of the biggest issues at present is that no-one is assembling genomes from scratch -- in other words, when someone is sequenced, their genome is not pieced together as is," she said.

"Instead, sections of the sequenced genome are mapped to a reference genome. Largely biased by European contribution, the current reference is poorly representative of indigenous peoples globally."

"If we want a good reference, we have to go back to our early human origins."
"None of us that walk on this planet now are pure anything -- we are all mixtures. For example 1-4% of Eurasians even carry Neanderthal DNA"

"We need more genomes that don't have extensive admixture. In other words, we need to reduce the noise."

"In this study, I believe we may have found an individual from a lineage that broke off early in modern human evolution and remained geographically isolated. That would contribute significantly to refining the human reference genome."

Source: Garvan Institute of Medical Research

Human faces are so variable because we evolved to look unique

The amazing variety of human faces -- far greater than that of most other animals -- is the result of evolutionary pressure to make each of us unique and easily recognizable. Credit: UC Berkeley
The amazing variety of human faces -- far greater than that of most other animals -- is the result of evolutionary pressure to make each of us unique and easily recognizable, according to a new study by University of California, Berkeley, scientists.

Our highly visual social interactions are almost certainly the driver of this evolutionary trend, said behavioral ecologist Michael J. Sheehan, a postdoctoral fellow in UC Berkeley's Museum of Vertebrate Zoology. Many animals use smell or vocalization to identify individuals, making distinctive facial features unimportant, especially for animals that roam after dark, he said. But humans are different.

"Humans are phenomenally good at recognizing faces; there is a part of the brain specialized for that," Sheehan said. "Our study now shows that humans have been selected to be unique and easily recognizable. It is clearly beneficial for me to recognize others, but also beneficial for me to be recognizable. Otherwise, we would all look more similar."

"The idea that social interaction may have facilitated or led to selection for us to be individually recognizable implies that human social structure has driven the evolution of how we look," said coauthor Michael Nachman, a population geneticist, professor of integrative biology and director of the UC Berkeley Museum of Vertebrate Zoology.

The study will appear Sept. 16 in the online journal Nature Communications.

In the study, Sheehan said, "we asked, 'Are traits such as distance between the eyes or width of the nose variable just by chance, or has there been evolutionary selection to be more variable than they would be otherwise; more distinctive and more unique?'"

As predicted, the researchers found that facial traits are much more variable than other bodily traits, such as the length of the hand, and that facial traits are independent of other facial traits, unlike most body measures. People with longer arms, for example, typically have longer legs, while people with wider noses or widely spaced eyes don't necessarily have longer noses. Both findings suggest that facial variation has been enhanced through evolution.

Finally, they compared the genomes of people from around the world and found more genetic variation in the genomic regions that control facial characteristics than in other areas of the genome, a sign that variation is evolutionarily advantageous.

"All three predictions were met: facial traits are more variable and less correlated than other traits, and the genes that underlie them show higher levels of variation," Nachman said. "Lots of regions of the genome contribute to facial features, so you would expect the genetic variation to be subtle, and it is. But it is consistent and statistically significant."

Using Army data

Sheehan was able to assess human facial variability thanks to a U.S. Army database of body measurements compiled from male and female personnel in 1988. The Army Anthropometric Survey (ANSUR) data are used to design and size everything from uniforms and protective clothing to vehicles and workstations.

A statistical comparison of facial traits of European Americans and African Americans -- forehead-chin distance, ear height, nose width and distance between pupils, for example -- with other body traits -- forearm length, height at waist, etc. -- showed that facial traits are, on average, more varied than the others. The most variable traits are situated within the triangle of the eyes, mouth and nose.

Sheehan and Nachman also had access to data collected by the 1000 Genome project, which has sequenced more than 1,000 human genomes since 2008 and catalogued nearly 40 million genetic variations among humans worldwide. Looking at regions of the human genome that have been identified as determining the shape of the face, they found a much higher number of variants than for traits, such as height, not involving the face.
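The comparison described above boils down to two per-trait statistics: a measure of relative variability (such as the coefficient of variation) and the correlation between pairs of traits. The sketch below runs that comparison on simulated stand-in measurements; the trait names and numbers are placeholders, not the ANSUR or 1000 Genomes data.

```python
# Minimal sketch of the kind of comparison described above: coefficient of
# variation (CV) and inter-trait correlation for facial vs. non-facial body
# measurements. The data below are simulated stand-ins, not the ANSUR survey.
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Hypothetical body traits: strongly tied to overall body size.
size = rng.normal(1.0, 0.05, n)
body = {
    "forearm length (mm)": 265 * size + rng.normal(0, 4, n),
    "waist height (mm)":  1050 * size + rng.normal(0, 12, n),
}
# Hypothetical facial traits: more variable and more independent of one another.
face = {
    "nose width (mm)":           34 + rng.normal(0, 3.5, n),
    "interpupillary dist (mm)":  62 + rng.normal(0, 4.0, n),
}

def cv(x):
    """Coefficient of variation: standard deviation as a fraction of the mean."""
    return x.std(ddof=1) / x.mean()

for label, traits in (("body", body), ("face", face)):
    for name, values in traits.items():
        print(f"{label:4s} {name:26s} CV = {cv(values):.3f}")
    a, b = traits.values()
    print(f"{label:4s} inter-trait correlation   r  = {np.corrcoef(a, b)[0, 1]:+.2f}\n")
```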

Prehistoric origins

"Genetic variation tends to be weeded out by natural selection in the case of traits that are essential to survival," Nachman said. "Here it is the opposite; selection is maintaining variation. All of this is consistent with the idea that there has been selection for variation to facilitate recognition of individuals."

They also compared the human genomes with recently sequenced genomes of Neanderthals and Denisovans and found similar genetic variation, which indicates that the facial variation in modern humans must have originated prior to the split between these different lineages.

"Clearly, we recognize people by many traits -- for example their height or their gait -- but our findings argue that the face is the predominant way we recognize people," Sheehan said.

The shape of infectious prions

Structural changes were located in the prion protein N-terminus, where a novel reorganization of the beta sheet (in yellow) was observed. In the background, the X-ray diffraction pattern of the crystal formed by the prion protein-nanobody complex.
Prions are unique infective agents -- unlike viruses, bacteria, fungi and other parasites, prions do not contain either DNA or RNA. Despite their seemingly simple structure, they can propagate their pathological effects like wildfire, by "infecting" normal proteins. PrPSc (the pathological form of the prion protein) can induce normal prion proteins (PrPC) to acquire the wrong conformation and convert into further disease-causing agents.

"When they are healthy, they look like tiny spheres; when they are malignant, they appear as cubes" stated Giuseppe Legname, principal investigator of the Prion Biology Laboratory at the Scuola Internazionale Superiore di Studi Avanzati (SISSA) in Trieste, when describing prion proteins. Prions are "misfolded" proteins that cause a group of incurable neurodegenerative diseases, including spongiform encephalopathies (for example, mad cow diseases) and Creutzfeldt-Jakob disease. Legname and coworkers have recently published a detailed analysis of the early mechanisms of misfolding. Their research has just been published in the Journal of the American Chemical Society, the most authoritative scientific journal in the field.

"For the first time, our experimental study has investigated the structural elements leading to the disease-causing conversion" explains Legname. "With the help of X-rays, we observed some synthetic prion proteins engineered in our lab by applying a new approach -- we used nanobodies, i.e. small proteins that act as a scaffolding and induce prions to stabilize their structure." Legname and colleagues reported that misfolding originates in a specific part of the protein named "N-terminal." "The prion protein consists of two subunits. The C-terminal has a clearly defined and well-known structure, whereas the unstructured N-terminal is disordered, and still largely unknown. This is the very area where the early prion pathological misfolding occurs" adds Legname. "The looser conformation of the N-terminal likely determines a dynamic structure, which can thus change the protein shape."

"Works like ours are the first, important steps to understand the mechanisms underlying the pathogenic effect of prions" concludes Legname. "Elucidating the misfolding process is essential to the future development of drugs and therapeutic strategies against incurable neurodegenerative diseases."

Source: Sissa Medialab

Bacteria could be rich source for making terpenes

Odoriferous terpene metabolites: A phylogenetic tree of terpene synthases shows the synthases (bold face or underlined) found by researchers in Japan and at Brown University using bacterial sequences. Credit: Image courtesy of Brown University
If you've ever enjoyed the scent of a pine forest or sniffed a freshly cut basil leaf, then you're familiar with terpenes. The compounds are responsible for the essential oils of plants and the resins of trees. Since the discovery of terpenes more than 150 years ago, scientists have isolated some 50,000 different terpene compounds derived from plants and fungi. Bacteria and other microorganisms are known to make terpenes too, but they've received much less study.

New research at Brown University, published in the Proceedings of the National Academy of Sciences, shows that the genetic capacity of bacteria to make terpenes is widespread. 

Using a specialized technique to sift through genomic databases for a variety of bacteria, the researchers found 262 gene sequences that likely code for terpene synthases -- enzymes that catalyze the production of terpenes. The researchers then used several of those enzymes to isolate 13 previously unidentified bacterial terpenes.

The findings suggest that bacteria "represent a fertile source for discovery of new natural products," the researchers write.

David Cane, a professor of chemistry at Brown and one of the authors on the new paper, began working about 15 years ago to understand how bacteria make terpenes.

"At that time, the first genomic sequences of certain classes of bacteria were just beginning to come out," he said. "We had this idea that maybe you could find the enzymes responsible for making terpenes by looking at the sequences of the genes that were being discovered."

To do that, Cane searched through the genome data gathered for a group of bacteria called Streptomyces, looking for sequences similar to those known to encode terpene synthases in plants and fungi. Eventually, he found that Streptomyces did indeed have genes encoding terpene synthases and that those enzymes could be used to make terpenes.

The verified bacterial sequences found by Cane and others enabled researchers to refine subsequent searches for additional terpene synthase genes. "Instead of using plant sequences or fungal sequences as your search query, we can now use bacterial sequences, which should yield a greater degree of similarity," he said. "So now we're fishing in the right waters with the right kind of bait, and you can find more matches."
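The article describes the searches only in general terms, so the sketch below simply illustrates the idea of "fishing with bacterial bait": submitting a characterized bacterial terpene synthase protein as the query for a protein-similarity search and keeping the strong hits. It uses Biopython's remote BLAST interface as a stand-in; the input file path and E-value cutoff are assumptions, not the authors' pipeline.

```python
# Generic illustration of "fishing with bacterial bait": use a characterized
# bacterial terpene synthase protein as the query for a remote BLASTP search,
# then list hits above a similarity cutoff. This is NOT the authors' actual
# search pipeline; the file path and threshold below are assumptions.
from Bio import SeqIO
from Bio.Blast import NCBIWWW, NCBIXML

# Hypothetical input: a FASTA file holding one bacterial terpene synthase
# protein sequence (supplied by the user; the path is a placeholder).
query = SeqIO.read("bacterial_terpene_synthase.faa", "fasta")

# Remote BLASTP against the non-redundant protein database (needs internet).
result_handle = NCBIWWW.qblast("blastp", "nr", str(query.seq))
record = NCBIXML.read(result_handle)

E_VALUE_CUTOFF = 1e-20   # arbitrary stringency for "likely terpene synthase"
for alignment in record.alignments:
    hsp = alignment.hsps[0]
    if hsp.expect <= E_VALUE_CUTOFF:
        identity = 100.0 * hsp.identities / hsp.align_length
        print(f"{alignment.hit_id:20s} E={hsp.expect:.1e} "
              f"id={identity:4.1f}%  {alignment.hit_def[:60]}")
```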

This latest paper made use of the third generation of iterative searches and a powerful search technique developed by Haruo Ikeda of Kitasato University in Japan. Previous work had identified 140 probable sequences for terpene synthases. This latest work expanded that to 262.

The next step was to verify that these sequences did indeed code for enzymes capable of making terpenes. Testing all 262 wasn't practical, so the team chose a few they thought might give them the best chance of finding terpene compounds that hadn't previously been identified. They looked for sequences that didn't seem to fit clearly into previously known categories of terpenes.

After they had selected a few, the team made use of a genetically engineered Streptomyces bacterium as a bio-refinery to generate the terpene products.

"What Professor Ikeda did, in collaboration with us, is develop a variant of a very well-studied Streptomyces system," Cane said. "He eliminated the genes that were responsible for making most of its native products, but he left behind all of the capacity to provide the starting materials and handle the accumulation of products."

By taking some of the gene sequences they found and splicing them into their test organism, the researchers could let the organisms generate the product using the instructions from the newly introduced gene. Using this method, they were able to make 13 previously unknown terpenes, their structures verified by mass spectrometry and nuclear magnetic resonance spectroscopy.

"It's a big step forward in the area in that it provides a paradigm for how one could go about discovering many new substances," Cane said. "It's a good example of how one can use sequence analysis to identify genes of interest and then apply molecular genetic and microbiological techniques to produce the chemical substances of interest."

The work also suggests that there may be many new terpene products as yet undiscovered hiding in the genomes of bacteria.

Source: Brown University