Friday, August 15, 2008
ancient civilizations! XD
http://news.nationalgeographic.com/news/2008/08/080815-sahara-video-vin.html
Wednesday, August 13, 2008
Ah, scientific research on the phrase “learning from mistakes”
Minding Mistakes: How the Brain Monitors Errors and Learns from Goofs
Brain scientists have identified nerve cells that monitor performance, detect errors and govern the ability to learn from misfortunes
By Markus Ullsperger
April 26, 1986: During routine testing, reactor number 4 of the Chernobyl nuclear power plant explodes, triggering the worst catastrophe in the history of the civilian use of nuclear energy.
September 22, 2006: On a trial run, experimental maglev train Transrapid 08 plows into a maintenance vehicle at 125 mph near Lathen, Germany, spewing wreckage over hundreds of yards, killing 23 passengers and severely injuring 10 others.
Human error was behind both accidents. Of course, people make mistakes, both large and small, every day, and monitoring and fixing slipups is a regular part of life. Although people understandably would like to avoid serious errors, most goofs have a good side: they give the brain information about how to improve or fine-tune behavior. In fact, learning from mistakes is likely essential to the survival of our species.
In recent years researchers have identified a region of the brain called the medial frontal cortex that plays a central role in detecting mistakes and responding to them. These frontal neurons become active whenever people or monkeys change their behavior after the kind of negative feedback or diminished reward that results from errors.
Much of our ability to learn from flubs, the latest studies show, stems from the actions of the neurotransmitter dopamine. In fact, genetic variations that affect dopamine signaling may help explain differences between people in the extent to which they learn from past goofs. Meanwhile certain patterns of cerebral activity often foreshadow miscues, opening up the possibility of preventing blunders with portable devices that can detect error-prone brain states.
Error Detector
Hints of the brain's error-detection apparatus emerged serendipitously in the early 1990s. Psychologist Michael Falkenstein of the University of Dortmund in Germany and his colleagues were monitoring subjects' brains using electroencephalography (EEG) during a psychology experiment and noticed that whenever a subject pressed the wrong button, the electrical potential in the frontal lobe suddenly dropped by about 10 microvolts. Psychologist William J. Gehring of the University of Illinois and his colleagues confirmed this effect, which researchers refer to as error-related negativity, or ERN.
An ERN may appear after various types of errors, unfavorable outcomes or conflict situations. Action errors occur when a person's behavior produces an unintended result. Time pressure, for example, often leads to misspellings while typing or incorrect addresses on e-mails. An ERN quickly follows such action errors, peaking within 100 milliseconds after the incorrect muscle activity ends.
A slightly more delayed ERN, one that crests 250 to 300 milliseconds after an outcome, occurs in response to unfavorable feedback or monetary losses. This so-called feedback ERN also may appear in situations in which a person faces a difficult choice—known as decision uncertainty—and remains conflicted even after making a choice. For instance, a feedback ERN may occur after a person has picked a checkout line in a supermarket and then realizes that the line is moving slower than the adjacent queue.
Where in the brain does the ERN originate? Using functional magnetic resonance imaging (fMRI), among other imaging methods, researchers have repeatedly found that error recognition takes place in the medial frontal cortex, a region on the surface of the brain in the middle of the frontal lobe, including the anterior cingulate. Such studies implicate this brain region as a monitor of negative feedback, action errors and decision uncertainty—and thus as an overall supervisor of human performance.
In a 2005 paper, along with psychologist Stefan Debener of the Institute of Hearing Research in Southampton, England, and our colleagues, I showed that the medial frontal cortex is the probable source of the ERN. In this study, subjects performed a so-called flanker task, in which they specified the direction of a central target arrow in the midst of surrounding decoy arrows while we monitored their brain activity using EEG and fMRI simultaneously. We found that as soon as an ERN occurs, activity in the medial frontal cortex increases and that the bigger the ERN the stronger the fMRI signal, suggesting that this brain region does indeed generate the classic error signal.
Learning from Lapses
In addition to recognizing errors, the brain must have a way of adaptively responding to them. In the 1970s psychologist Patrick Rabbitt of the University of Manchester in England, one of the first to systematically study such reactions, observed that typing misstrikes are made with slightly less keyboard pressure than are correct strokes, as if the typist were attempting to hold back at the last moment.
More generally, people often react to errors by slowing down after a mistake, presumably to more carefully analyze a problem and to switch to a different strategy for tackling a task. Such behavioral changes represent ways in which we learn from our mistakes in hopes of avoiding similar slipups in the future.
The medial frontal cortex seems to govern this process as well. Imaging studies show that neural activity in this region increases, for example, before a person slows down after an action error. Moreover, researchers have found responses from individual neurons in the medial frontal cortex in monkeys that implicate these cells in an animal's behavioral response to negative feedback, akin to that which results from an error.
In 1998 neuroscientists Keisetsu Shima and Jun Tanji of the Tohoku University School of Medicine in Sendai, Japan, trained three monkeys to either push or turn a handle in response to a visual signal. A monkey chose its response based on the reward it expected: it would, say, push the handle if that action had been consistently followed by a reward. But when the researchers successively reduced the reward for pushing—a type of negative feedback or error signal—the animals would within a few trials switch to turning the handle instead. Meanwhile researchers were recording the electrical activity of single neurons in part of the monkeys' cingulate.
Shima and Tanji found that four types of neurons altered their activity after a reduced reward but only if the monkey used that reduction as a cue to push instead of turn, or vice versa. These neurons did not flinch if the monkey did not decide to switch actions or if it did so in response to a tone rather than to a lesser reward. And when the researchers temporarily deactivated neurons in this region, the monkey no longer switched movements after a dip in its incentive. Thus, these neurons relay information about the degree of reward for the purpose of altering behavior and can use negative feedback as a guide to improvement.
In 2004 neurosurgeon Ziv M. Williams and his colleagues at Massachusetts General Hospital reported finding a set of neurons in the human anterior cingulate with similar properties. The researchers recorded from these neurons in five patients who were scheduled for surgical removal of that brain region. While these neurons were tapped, the patients did a task in which they had to choose one of two directions to move a joystick based on a visual cue that also specified a monetary reward: either nine or 15 cents. On the nine-cent trials, participants were supposed to change the direction in which they moved the joystick.
Similar to the responses of monkey neurons, activity among the anterior cingulate neurons rose to the highest levels when the cue indicated a reduced reward along with a change in the direction of movement. In addition, the level of neuronal activity predicted whether a person would act as instructed or make an error. After surgical removal of those cells, the patients made more errors when they were cued to change their behavior in the face of a reduced payment. These neurons, therefore, seem to link information about rewards to behavior. After detecting discrepancies between actual and desired outcomes, the cells determine the corrective action needed to optimize reward.
But unless instructed to do so, animals do not generally alter their behavior after just one mishap. Rather they change strategies only after a pattern of failed attempts. The anterior cingulate also seems to work in this more practical fashion in arbitrating the response to errors. In a 2006 study experimental psychologists Stephen Kennerley and Matthew Rushworth and their colleagues at the University of Oxford taught rhesus monkeys to pull a lever to get food. After 25 trials, the researchers changed the rules, dispensing treats when the monkeys turned the lever instead of pulling it. The monkeys adapted and switched to turning the lever. After a while, the researchers changed the rules once more, and the monkeys again altered their behavior.
Each time the monkeys did not immediately switch actions, but did so only after a few false starts, using the previous four or five trials as a guide. After damage to the anterior cingulate, however, the animals lost this longer-term view and instead used only their most recent success or failure as a guide. Thus, the anterior cingulate seems to control an animal's ability to evaluate a short history of hits and misses as a guide to future decisions.
Chemical Incentive
Such evaluations may depend on dopamine, which conveys success signals in the brain. Neurophysiologist Wolfram Schultz, now at the University of Cambridge, and his colleagues have shown over the past 15 years that dopamine-producing nerve cells alter their activity when a reward is either greater or less than anticipated. When a monkey is rewarded unexpectedly, say, for a correct response, the cells become excited, releasing dopamine, whereas their activity drops when the monkey fails to get a treat after an error. And because the amount of dopamine released can stably alter the connections between nerve cells, this differential release could promote learning from both successes and failures.
Indeed, changes in dopamine levels may help to explain how we learn from positive as well as negative reinforcement. Dopamine excites the brain's so-called Go pathway, which promotes a response while also inhibiting the action-suppressing "NoGo" pathway. Thus, bursts of dopamine resulting from positive reinforcement promote learning by both activating the Go channel and blocking NoGo. In contrast, dips in dopamine after negative outcomes should promote avoidance behavior by inactivating the Go pathway while releasing inhibition of NoGo.
In 2004 psychologist Michael J. Frank, then at the University of Colorado at Boulder, and his colleagues reported evidence for dopamine's influence on learning in a study of patients with Parkinson's disease, who produce too little of the neurotransmitter. Frank theorized that Parkinson's patients may have trouble generating the dopamine needed to learn from positive feedback but that their low dopamine levels may facilitate training based on negative feedback.
In the study the researchers displayed pairs of symbols on a computer screen and asked 19 healthy people and 30 Parkinson's patients to choose one symbol from each pair. The word "correct" appeared whenever a subject had chosen an arbitrarily correct symbol, whereas the word "incorrect" flashed after every "wrong" selection. (No symbol was invariably correct or incorrect.) In one pair, one symbol was deemed right 80 percent of the time and the other only 20 percent. For other pairs, the probabilities were 70:30 and 60:40. The subjects were expected to learn from this feedback and thereby increase the number of correct choices in later test runs.
As expected, the healthy people learned to prefer the correct symbols and avoid the incorrect ones with about equal proficiency. Parkinson's patients, on the other hand, showed a stronger tendency to reject negative symbols than to select the positive ones—that is, they learned more from their errors than from their hits, showing that the lack of dopamine did bias their learning in the expected way. In addition, the patients' ability to learn from positive feedback outpaced that from negative feedback after they took medication that boosted brain levels of dopamine, underscoring the importance of dopamine in positive reinforcement.
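To make the Go/NoGo idea concrete, here is a toy simulation of a probabilistic selection task like the one Frank used. It is only a rough sketch, not the researchers' actual model: the 80/20 feedback odds come from the study described above, but the starting weights, learning rate and the simple burst/dip update rule are illustrative assumptions.

```python
import random

# Toy two-pathway learner (an illustrative sketch, not Frank's actual model).
# A dopamine "burst" after positive feedback strengthens the Go weight for the
# chosen symbol and weakens its NoGo weight; a dopamine "dip" after negative
# feedback does the reverse. Learning rate and starting weights are arbitrary
# assumptions; the 80/20 odds mirror the task described above.

LEARN_RATE = 0.1
reward_prob = {"A": 0.8, "B": 0.2}        # how often each symbol is called "correct"
go = {s: 0.5 for s in reward_prob}        # Go pathway: promotes choosing a symbol
nogo = {s: 0.5 for s in reward_prob}      # NoGo pathway: suppresses choosing it

def choose():
    # Usually pick the symbol with the strongest net (Go - NoGo) tendency,
    # but explore occasionally so both symbols keep getting sampled.
    if random.random() < 0.1:
        return random.choice(list(reward_prob))
    return max(reward_prob, key=lambda s: go[s] - nogo[s])

for _ in range(500):
    s = choose()
    if random.random() < reward_prob[s]:   # positive feedback: dopamine burst
        go[s] += LEARN_RATE * (1 - go[s])
        nogo[s] -= LEARN_RATE * nogo[s]
    else:                                  # negative feedback: dopamine dip
        go[s] -= LEARN_RATE * go[s]
        nogo[s] += LEARN_RATE * (1 - nogo[s])

print({s: round(go[s] - nogo[s], 2) for s in reward_prob})
# The net weight for "A" (correct 80 percent of the time) ends up well above
# that for "B", i.e. the learner has come to pick A and avoid B.
```

Making the burst step larger than the dip step, or vice versa, reproduces the asymmetry described above: a learner biased toward positive feedback or toward negative feedback, as in the Parkinson's patients on and off medication.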
Dopamine-based discrepancies in learning ability also appear within the healthy population. Last December, along with psychology graduate student Tilmann A. Klein and our colleagues, I showed that such variations are partly based on individual differences in a gene for the D2 dopamine receptor. A variant of this gene, called A1, results in up to a 30 percent reduction in the density of those receptors on nerve cell membranes.
We asked 12 males with the A1 variant and 14 males who had the more common form of this gene to perform a symbol-based learning test like the one Frank used. We found that A1 carriers were less able to remember, and avoid, the negative symbols than were the participants who did not have this form of the gene. The A1 carriers also avoided the negative symbols less often than they picked the positive ones. Noncarriers learned about equally well from the good and bad symbols.
Thus, fewer D2 receptors may impair a person's ability to learn from mistakes or negative outcomes. (This molecular quirk is just one of many factors that influence such learning.) Accordingly, our fMRI results show that the medial frontal cortex of A1 carriers generates a weaker response to errors than it does in other people, suggesting that this brain area is one site at which dopamine exerts its effect on learning from negative feedback.
But if having fewer D2 receptors leads to impaired avoidance learning, why do drugs that boost dopamine signaling also lead to such impairments in Parkinson's patients? In both scenarios, dopamine signaling may, in fact, be increased through other dopamine receptors; research indicates that A1 carriers produce an unusually large amount of dopamine, perhaps as a way to compensate for their lack of D2 receptors. Whatever the reason, insensitivity to unpleasant consequences may contribute to the slightly higher rates of obesity, compulsive gambling and addiction among A1 carriers than in the general population.
Foreshadowing Faults
Although learning from mistakes may help us avoid future missteps, inexperience or inattention can still lead to errors. Many such goofs turn out to be predictable, however, foreshadowed by telltale changes in brain metabolism, according to research my team published in April in the Proceedings of the National Academy of Sciences USA.
Along with cognitive neuroscientist Tom Eichele of the University of Bergen in Norway and several colleagues, I asked 13 young adults to perform a flanker task while we monitored their brain activity using fMRI. Starting about 30 seconds before our subjects made an error, we found distinct but gradual changes in the activation of two brain networks.
One of the networks, called the default mode region, is usually more active when a person is at rest and quiets down when a person is engaged in a task. But before an error, the posterior part of this network—which includes the retrosplenial cortex, located near the center of the brain at the surface—became more active, indicating that the mind was relaxing. Meanwhile activity declined in areas of the frontal lobe that spring to life whenever a person is working hard at something, suggesting that the person was also becoming less engaged in the task at hand.
Our results show that errors are the product of gradual changes in the brain rather than unpredictable blips in brain activity. Such adjustments could be used to foretell errors, particularly those that occur during monotonous tasks. In the future, people might wear portable devices that monitor these brain states as a first step toward preventing mistakes where they are most likely to occur—and when they matter most.
Editor's Note: This story was originally published with the title "Minding Mistakes"
http://www.sciam.com/article.cfm?id=minding-mistakes
Monday, August 11, 2008
Are Viruses Alive?
So, are they?
Although viruses challenge our concept of what "living" means, they are vital members of the web of life
By Luis P. Villarreal
In an episode of the classic 1950s television comedy The Honeymooners, Brooklyn bus driver Ralph Kramden loudly explains to his wife, Alice, "You know that I know how easy you get the virus." Half a century ago even regular folks like the Kramdens had some knowledge of viruses—as microscopic bringers of disease. Yet it is almost certain that they did not know exactly what a virus was. They were, and are, not alone.
For about 100 years, the scientific community has repeatedly changed its collective mind over what viruses are. First seen as poisons, then as life-forms, then biological chemicals, viruses today are thought of as being in a gray area between living and nonliving: they cannot replicate on their own but can do so in truly living cells and can also affect the behavior of their hosts profoundly. The categorization of viruses as nonliving during much of the modern era of biological science has had an unintended consequence: it has led most researchers to ignore viruses in the study of evolution. Finally, however, scientists are beginning to appreciate viruses as fundamental players in the history of life.
Coming to Terms
It is easy to see why viruses have been difficult to pigeonhole. They seem to vary with each lens applied to examine them. The initial interest in viruses stemmed from their association with diseases—the word "virus" has its roots in the Latin term for "poison." In the late 19th century researchers realized that certain diseases, including rabies and foot-and-mouth, were caused by particles that seemed to behave like bacteria but were much smaller. Because they were clearly biological themselves and could be spread from one victim to another with obvious biological effects, viruses were then thought to be the simplest of all living, gene-bearing life-forms.
Their demotion to inert chemicals came after 1935, when Wendell M. Stanley and his colleagues, at what is now the Rockefeller University in New York City, crystallized a virus—tobacco mosaic virus—for the first time. They saw that it consisted of a package of complex biochemicals. But it lacked essential systems necessary for metabolic functions, the biochemical activity of life. Stanley shared the 1946 Nobel Prize—in chemistry, not in physiology or medicine—for this work.
Further research by Stanley and others established that a virus consists of nucleic acids (DNA or RNA) enclosed in a protein coat that may also shelter viral proteins involved in infection. By that description, a virus seems more like a chemistry set than an organism. But when a virus enters a cell (called a host after infection), it is far from inactive. It sheds its coat, bares its genes and induces the cell's own replication machinery to reproduce the intruder's DNA or RNA and manufacture more viral protein based on the instructions in the viral nucleic acid. The newly created viral bits assemble and, voilà, more virus arises, which also may infect other cells.
These behaviors are what led many to think of viruses as existing at the border between chemistry and life. More poetically, virologists Marc H. V. van Regenmortel of the University of Strasbourg in France and Brian W. J. Mahy of the Centers for Disease Control and Prevention have recently said that with their dependence on host cells, viruses lead "a kind of borrowed life." Interestingly, even though biologists long favored the view that viruses were mere boxes of chemicals, they took advantage of viral activity in host cells to determine how nucleic acids code for proteins: indeed, modern molecular biology rests on a foundation of information gained through viruses.
Molecular biologists went on to crystallize most of the essential components of cells and are today accustomed to thinking about cellular constituents—for example, ribosomes, mitochondria, membranes, DNA and proteins—as either chemical machinery or the stuff that the machinery uses or produces. This exposure to multiple complex chemical structures that carry out the processes of life is probably a reason that most molecular biologists do not spend a lot of time puzzling over whether viruses are alive. For them, that exercise might seem equivalent to pondering whether those individual subcellular constituents are alive on their own. This myopic view allows them to see only how viruses co-opt cells or cause disease. The more sweeping question of viral contributions to the history of life on earth, which I will address shortly, remains for the most part unanswered and even unasked.
To Be or Not to Be
The seemingly simple question of whether or not viruses are alive, which my students often ask, has probably defied a simple answer all these years because it raises a fundamental issue: What exactly defines "life"? A precise scientific definition of life is an elusive thing, but most observers would agree that life includes certain qualities in addition to an ability to replicate. For example, a living entity is in a state bounded by birth and death. Living organisms also are thought to require a degree of biochemical autonomy, carrying on the metabolic activities that produce the molecules and energy needed to sustain the organism. This level of autonomy is essential to most definitions.
Viruses, however, parasitize essentially all biomolecular aspects of life. That is, they depend on the host cell for the raw materials and energy necessary for nucleic acid synthesis, protein synthesis, processing and transport, and all other biochemical activities that allow the virus to multiply and spread. One might then conclude that even though these processes come under viral direction, viruses are simply nonliving parasites of living metabolic systems. But a spectrum may exist between what is certainly alive and what is not.
A rock is not alive. A metabolically active sack, devoid of genetic material and the potential for propagation, is also not alive. A bacterium, though, is alive. Although it is a single cell, it can generate energy and the molecules needed to sustain itself, and it can reproduce. But what about a seed? A seed might not be considered alive. Yet it has a potential for life, and it may be destroyed. In this regard, viruses resemble seeds more than they do live cells. They have a certain potential, which can be snuffed out, but they do not attain the more autonomous state of life.
Another way to think about life is as an emergent property of a collection of certain nonliving things. Both life and consciousness are examples of emergent complex systems. They each require a critical level of complexity or interaction to achieve their respective states. A neuron by itself, or even in a network of nerves, is not conscious—whole brain complexity is needed. Yet even an intact human brain can be biologically alive but incapable of consciousness, or "brain-dead." Similarly, neither cellular nor viral individual genes or proteins are by themselves alive. The enucleated cell is akin to the state of being brain-dead, in that it lacks a full critical complexity. A virus, too, fails to reach a critical complexity. So life itself is an emergent, complex state, but it is made from the same fundamental, physical building blocks that constitute a virus. Approached from this perspective, viruses, though not fully alive, may be thought of as being more than inert matter: they verge on life.
In fact, in October, French researchers announced findings that illustrate afresh just how close some viruses might come. Didier Raoult and his colleagues at the University of the Mediterranean in Marseille announced that they had sequenced the genome of the largest known virus, Mimivirus, which was discovered in 1992. The virus, about the same size as a small bacterium, infects amoebae. Sequence analysis of the virus revealed numerous genes previously thought to exist only in cellular organisms. Some of these genes are involved in making the proteins encoded by the viral DNA and may make it easier for Mimivirus to co-opt host cell replication systems. As the research team noted in its report in the journal Science, the enormous complexity of the Mimivirus's genetic complement "challenges the established frontier between viruses and parasitic cellular organisms."
Impact on Evolution
Debates over whether to label viruses as living lead naturally to another question: Is pondering the status of viruses as living or nonliving more than a philosophical exercise, the basis of a lively and heated rhetorical debate but with little real consequence? I think the issue is important, because how scientists regard this question influences their thinking about the mechanisms of evolution.
Viruses have their own, ancient evolutionary history, dating to the very origin of cellular life. For example, some viral-repair enzymes—which excise and resynthesize damaged DNA, mend oxygen radical damage, and so on—are unique to certain viruses and have existed almost unchanged probably for billions of years.
Nevertheless, most evolutionary biologists hold that because viruses are not alive, they are unworthy of serious consideration when trying to understand evolution. They also look on viruses as coming from host genes that somehow escaped the host and acquired a protein coat. In this view, viruses are fugitive host genes that have degenerated into parasites. And with viruses thus dismissed from the web of life, important contributions they may have made to the origin of species and the maintenance of life may go unrecognized. (Indeed, only four of the 1,205 pages of the 2002 volume The Encyclopedia of Evolution are devoted to viruses.)
Of course, evolutionary biologists do not deny that viruses have had some role in evolution. But by viewing viruses as inanimate, these investigators place them in the same category of influences as, say, climate change. Such external influences select among individuals having varied, genetically controlled traits; those individuals most able to survive and thrive when faced with these challenges go on to reproduce most successfully and hence spread their genes to future generations.
But viruses directly exchange genetic information with living organisms—that is, within the web of life itself. A possible surprise to most physicians, and perhaps to most evolutionary biologists as well, is that most known viruses are persistent and innocuous, not pathogenic. They take up residence in cells, where they may remain dormant for long periods or take advantage of the cells' replication apparatus to reproduce at a slow and steady rate. These viruses have developed many clever ways to avoid detection by the host immune system—essentially every step in the immune process can be altered or controlled by various genes found in one virus or another.
Furthermore, a virus genome (the entire complement of DNA or RNA) can permanently colonize its host, adding viral genes to host lineages and ultimately becoming a critical part of the host species' genome. Viruses therefore surely have effects that are faster and more direct than those of external forces that simply select among more slowly generated, internal genetic variations. The huge population of viruses, combined with their rapid rates of replication and mutation, makes them the world's leading source of genetic innovation: they constantly "invent" new genes. And unique genes of viral origin may travel, finding their way into other organisms and contributing to evolutionary change.
Data published by the International Human Genome Sequencing Consortium indicate that somewhere between 113 and 223 genes present in bacteria and in the human genome are absent in well-studied organisms—such as the yeast Saccharomyces cerevisiae, the fruit fly Drosophila melanogaster and the nematode Caenorhabditis elegans—that lie in between those two evolutionary extremes. Some researchers thought that these organisms, which arose after bacteria but before vertebrates, simply lost the genes in question at some point in their evolutionary history. Others suggested that these genes had been transferred directly to the human lineage by invading bacteria.
My colleague Victor DeFilippis of the Vaccine and Gene Therapy Institute of the Oregon Health and Science University and I suggested a third alternative: viruses may originate genes, then colonize two different lineages—for example, bacteria and vertebrates. A gene apparently bestowed on humanity by bacteria may have been given to both by a virus.
In fact, along with other researchers, Philip Bell of Macquarie University in Sydney, Australia, and I contend that the cell nucleus itself is of viral origin. The advent of the nucleus—which differentiates eukaryotes (organisms whose cells contain a true nucleus), including humans, from prokaryotes, such as bacteria—cannot be satisfactorily explained solely by the gradual adaptation of prokaryotic cells until they became eukaryotic. Rather the nucleus may have evolved from a persisting large DNA virus that made a permanent home within prokaryotes. Some support for this idea comes from sequence data showing that the gene for a DNA polymerase (a DNA-copying enzyme) in the virus called T4, which infects bacteria, is closely related to other DNA polymerase genes in both eukaryotes and the viruses that infect them. Patrick Forterre of the University of Paris-Sud has also analyzed enzymes responsible for DNA replication and has concluded that the genes for such enzymes in eukaryotes probably have a viral origin.
From single-celled organisms to human populations, viruses affect all life on earth, often determining what will survive. But viruses themselves also evolve. New viruses, such as the AIDS-causing HIV-1, may be the only biological entities that researchers can actually witness come into being, providing a real-time example of evolution in action.
Viruses matter to life. They are the constantly changing boundary between the worlds of biology and biochemistry. As we continue to unravel the genomes of more and more organisms, the contributions from this dynamic and ancient gene pool should become apparent. Nobel laureate Salvador Luria mused about the viral influence on evolution in 1959. "May we not feel," he wrote, "that in the virus, in their merging with the cellular genome and reemerging from them, we observe the units and process which, in the course of evolution, have created the successful genetic patterns that underlie all living cells?" Regardless of whether or not we consider viruses to be alive, it is time to acknowledge and study them in their natural context—within the web of life.
http://www.sciam.com/article.cfm?id=are-viruses-alive-2004&print=true
Thursday, August 7, 2008
Must read! All who lack sleep!
Interesting.. why do we need sleep? I think the second argument is more likely yea?
Sleep on It: How Snoozing Makes You Smarter
During slumber, our brain engages in data analysis, from strengthening memories to solving problems
By Robert Stickgold and Jeffrey M. Ellenbogen
In 1865 Friedrich August Kekulé woke up from a strange dream: he imagined a snake forming a circle and biting its own tail. Like many organic chemists of the time, Kekulé had been working feverishly to describe the true chemical structure of benzene, a problem that continually eluded understanding. But Kekulé's dream of a snake swallowing its tail, so the story goes, helped him to accurately realize that benzene's structure formed a ring. This insight paved the way for a new understanding of organic chemistry and earned Kekulé a title of nobility in Germany.
Although most of us have not been ennobled, there is something undeniably familiar about Kekulé's problem-solving method. Whether deciding to go to a particular college, accept a challenging job offer or propose to a future spouse, "sleeping on it" seems to provide the clarity we need to piece together life's puzzles. But how does slumber present us with answers?
The latest research suggests that while we are peacefully asleep our brain is busily processing the day's information. It combs through recently formed memories, stabilizing, copying and filing them, so that they will be more useful the next day. A night of sleep can make memories resistant to interference from other information and allow us to recall them for use more effectively the next morning. And sleep not only strengthens memories, it also lets the brain sift through newly formed memories, possibly even identifying what is worth keeping and selectively maintaining or enhancing these aspects of a memory. When a picture contains both emotional and unemotional elements, sleep can save the important emotional parts and let the less relevant background drift away. It can analyze collections of memories to discover relations among them or identify the gist of a memory while the unnecessary details fade—perhaps even helping us find the meaning in what we have learned.
Not Merely Resting
If you find this news surprising, you are not alone. Until the mid-1950s, scientists generally assumed that the brain was shut down while we snoozed. Although German psychologist Hermann Ebbinghaus had evidence in 1885 that sleep protects simple memories from decay, for decades researchers attributed the effect to a passive protection against interference. We forget things, they argued, because all the new information coming in pushes out the existing memories. But because there is nothing coming in while we get shut-eye, we simply do not forget as much.
Then, in 1953, the late physiologists Eugene Aserinsky and Nathaniel Kleitman of the University of Chicago discovered the rich variations in brain activity during sleep, and scientists realized they had been missing something important. Aserinsky and Kleitman found that our sleep follows a 90-minute cycle, in and out of rapid-eye-movement (REM) sleep. During REM sleep, our brain waves—the oscillating electromagnetic signals that result from large-scale brain activity—look similar to those produced while we are awake. And in subsequent decades, the late Mircea Steriade of Laval University in Quebec and other neuroscientists discovered that individual collections of neurons were independently firing in between these REM phases, during periods known as slow-wave sleep, when large populations of brain cells fire synchronously in a steady rhythm of one to four beats each second. So it became clear that the sleeping brain was not merely "resting," either in REM sleep or in slow-wave sleep. Sleep was doing something different. Something active.
Sleep to Remember
The turning point in our understanding of sleep and memory came in 1994 in a groundbreaking study. Neurobiologists Avi Karni, Dov Sagi and their colleagues at the Weizmann Institute of Science in Israel showed that when volunteers got a night of sleep, they improved at a task that involved rapidly discriminating between objects they saw—but only when they had had normal amounts of REM sleep. When the subjects were deprived of REM sleep, the improvement disappeared. The fact that performance actually rose overnight negated the idea of passive protection. Something had to be happening within the sleeping brain that altered the memories formed the day before. But Karni and Sagi described REM sleep as a permissive state—one that could allow changes to happen—rather than a necessary one. They proposed that such unconscious improvements could happen across the day or the night. What was important, they argued, was that improvements could only occur during part of the night, during REM.
It was not until one of us (Stickgold) revisited this question in 2000 that it became clear that sleep could, in fact, be necessary for this improvement to occur. Using the same rapid visual discrimination task, we found that only with more than six hours of sleep did people's performance improve over the 24 hours following the learning session. And REM sleep was not the only important component: slow-wave sleep was equally crucial. In other words, sleep—in all its phases—does something to improve memory that being awake does not do.
To understand how that could be so, it helps to review a few memory basics. When we "encode" information in our brain, the newly minted memory is actually just beginning a long journey during which it will be stabilized, enhanced and qualitatively altered, until it bears only faint resemblance to its original form. Over the first few hours, a memory can become more stable, resistant to interference from competing memories. But over longer periods, the brain seems to decide what is important to remember and what is not—and a detailed memory evolves into something more like a story.
In 2006 we demonstrated the powerful ability of sleep to stabilize memories and provided further evidence against the myth that sleep only passively (and, therefore, transiently) protects memories from interference. We reasoned that if sleep merely provides a transient benefit for memory, then memories after sleep should be, once again, susceptible to interference. We first trained people to memorize pairs of words in an A-B pattern (for example, "blanket-window") and then allowed some of the volunteers to sleep. Later they all learned pairs in an A-C pattern ("blanket-sneaker"), which were meant to interfere with their memories of the A-B pairs. As expected, the people who slept could remember more of the A-B pairs than people who had stayed awake could. And when we introduced interfering A-C pairs, it was even more apparent that those who slept had a stronger, more stable memory for the A-B sets. Sleep changed the memory, making it robust and more resistant to interference in the coming day.
But sleep's effects on memory are not limited to stabilization. Over just the past few years, a number of studies have demonstrated the sophistication of the memory processing that happens during slumber. In fact, it appears that as we sleep, the brain might even be dissecting our memories and retaining only the most salient details. In one study we created a series of pictures that included either unpleasant or neutral objects on a neutral background and then had people view the pictures one after another. Twelve hours later we tested their memories for the objects and the backgrounds. The results were quite surprising. Whether the subjects had stayed awake or slept, the accuracy of their memories dropped by 10 percent for everything. Everything, that is, except for the memory of the emotionally evocative objects after a night of sleep. Instead of deteriorating, memories for the emotional objects actually seemed to improve by a few percent overnight, showing about a 15 percent improvement relative to the deteriorating backgrounds. After a few more nights, one could imagine that little but the emotional objects would be left. We know this culling happens over time with real-life events, but now it appears that sleep may play a crucial role in this evolution of emotional memories.
Precisely how the brain strengthens and enhances memories remains largely a mystery, although we can make some educated guesses at the basic mechanism. We know that memories are created by altering the strengths of connections among hundreds, thousands or perhaps even millions of neurons, making certain patterns of activity more likely to recur. These patterns of activity, when reactivated, lead to the recall of a memory—whether that memory is where we left the car keys or a pair of words such as "blanket-window." These changes in synaptic strength are thought to arise from a molecular process known as long-term potentiation, which strengthens the connections between pairs of neurons that fire at the same time. Thus, cells that fire together wire together, locking the pattern in place for future recall.
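As a rough illustration of "cells that fire together wire together," the snippet below nudges up the connection between two units whenever they are active in the same time step and lets unused connections decay. It is only a cartoon of Hebbian strengthening, not a model of long-term potentiation; the firing pattern and rates are made-up assumptions.

```python
# Cartoon Hebbian rule (illustrative assumptions throughout): co-active units
# get a stronger mutual connection, idle connections slowly decay.

activity = [        # firing of three units over six time steps (1 = firing)
    [1, 1, 0],
    [1, 1, 0],
    [0, 0, 1],
    [1, 1, 0],
    [0, 0, 1],
    [1, 1, 0],
]

n = 3
weights = [[0.0] * n for _ in range(n)]
RATE, DECAY = 0.2, 0.01

for step in activity:
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if step[i] and step[j]:               # fired together: strengthen
                weights[i][j] += RATE * (1 - weights[i][j])
            else:                                 # otherwise: slight decay
                weights[i][j] -= DECAY * weights[i][j]

for row in weights:
    print([round(w, 2) for w in row])
# Units 0 and 1, which repeatedly fired together, end up strongly linked, so
# activating one later tends to reactivate the other; unit 2 stays weakly linked.
```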
During sleep, the brain reactivates patterns of neural activity that it performed during the day, thus strengthening the memories by long-term potentiation. In 1994 neuroscientists Matthew Wilson and Bruce McNaughton, both then at the University of Arizona, showed this effect for the first time using rats fitted with implants that monitored their brain activity. They taught these rats to circle a track to find food, recording neuronal firing patterns from the rodents' brains all the while. Cells in the hippocampus—a brain structure critical for spatial memory—created a map of the track, with different "place cells" firing as the rats traversed each region of the track [see "The Matrix in Your Head," by James J. Knierim; Scientific American Mind, June/July 2007]. Place cells correspond so closely to exact physical locations that the researchers could monitor the rats' progress around the track simply by watching which place cells were firing at any given time. And here is where it gets even more interesting: when Wilson and McNaughton continued to record from these place cells as the rats slept, they saw the cells continuing to fire in the same order—as if the rats were "practicing" running around the track in their sleep.
As this unconscious rehearsing strengthens memory, something more complex is happening as well—the brain may be selectively rehearsing the more difficult aspects of a task. For instance, Matthew P. Walker's work at Harvard Medical School in 2005 demonstrated that when subjects learned to type complicated sequences such as 4-1-3-2-4 on a keyboard (much like learning a new piano score), sleeping between practice sessions led to faster and more coordinated finger movements. But on more careful examination, he found that people were not simply getting faster overall on this typing task. Instead each subject was getting faster on those particular keystroke sequences at which he or she was worst.
The brain accomplishes this improvement, at least in part, by moving the memory for these sequences overnight. Using functional magnetic resonance imaging, Walker showed that his subjects used different brain regions to control their typing after they had slept. The next day typing elicited more activity in the right primary motor cortex, medial prefrontal lobe, hippocampus and left cerebellum—places that would support faster and more precise key-press movements—and less activity in the parietal cortices, left insula, temporal pole and frontopolar region, areas whose suppression indicates reduced conscious and emotional effort. The entire memory got strengthened, but especially the parts that needed it most, and sleep was doing this work by using different parts of the brain than were used while learning the task.
Solutions in the Dark
These effects of sleep on memory are impressive. Adding to the excitement, recent discoveries show that sleep also facilitates the active analysis of new memories, enabling the brain to solve problems and infer new information. In 2007 one of us (Ellenbogen) showed that the brain learns while we are asleep. The study used a transitive inference task; for example, if Bill is older than Carol and Carol is older than Pierre, the laws of transitivity make it clear that Bill is older than Pierre. Making this inference requires stitching those two fragments of information together. People and animals tend to make these transitive inferences without much conscious thought, and the ability to do so serves as an enormously helpful cognitive skill: we discover new information (Bill is older than Pierre) without ever learning it directly.
The inference seems obvious in Bill and Pierre's case, but in the experiment, we used abstract colored shapes that have no intuitive relation to one another, making the task more challenging. We taught people so-called premise pairs—they learned to choose, for example, the orange oval over the turquoise one, turquoise over green, green over paisley, and so on. The premise pairs imply a hierarchy—if orange is a better choice than turquoise and turquoise is preferred to green, then orange should win over green. But when we tested the subjects on these novel pairings 20 minutes after they learned the premise pairs, they had not yet discovered these hidden relations. They chose green just as often as they chose orange, performing no better than chance.
When we tested subjects 12 hours later on the same day, however, they made the correct choice 70 percent of the time. Simply allowing time to pass enabled the brain to calculate and learn these transitive inferences. And people who slept during the 12 hours performed significantly better, linking the most distant pairs (such as orange versus paisley) with 90 percent accuracy. So it seems the brain needs time after we learn information to process it, connecting the dots, so to speak—and sleep provides the maximum benefit.
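The transitive-inference step itself is easy to spell out. The sketch below is just an illustration of the logic (not anything from the experiment): given the premise pairs, it recovers the implied hierarchy and can then answer novel pairings, such as orange versus paisley, that were never taught directly.

```python
# Recover the hierarchy implied by "X beats Y" premise pairs (a simple chain,
# as in the task described above), then answer a pairing never shown directly.
# This code is an illustrative sketch, not the study's materials.

premises = [
    ("orange", "turquoise"),
    ("turquoise", "green"),
    ("green", "paisley"),
]

def infer_order(pairs):
    # Repeatedly pick the item that no remaining premise says is beaten.
    items = {x for pair in pairs for x in pair}
    beats = dict(pairs)                 # winner -> loser (works for a chain)
    order = []
    while items:
        losers = set(beats.values())
        winner = next(x for x in items if x not in losers)
        order.append(winner)
        items.remove(winner)
        beats = {w: l for w, l in beats.items() if w != winner}
    return order

order = infer_order(premises)
print(order)                                            # ['orange', 'turquoise', 'green', 'paisley']
print(order.index("orange") < order.index("paisley"))   # True: orange beats paisley
```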
In a 2004 study Ullrich Wagner and others in Jan Born's laboratory at the University of Lübeck in Germany elegantly demonstrated just how powerful sleep's processing of memories can be. They taught subjects how to solve a particular type of mathematical problem by using a long and tedious procedure and had them practice it about 100 times. The subjects were then sent away and told to come back 12 hours later, when they were instructed to try it another 200 times.
What the researchers had not told their subjects was that there is a much simpler way to solve these problems. The researchers could tell if and when subjects gained insight into this shortcut, because their speed would suddenly increase. Many of the subjects did, in fact, discover the trick during the second session. But when they got a night's worth of sleep between the two sessions, they were more than two and a half times more likely to figure it out—59 percent of the subjects who slept found the trick, compared with only 23 percent of those who stayed awake between the sessions. Somehow the sleeping brain was solving this problem, without even knowing that there was a problem to solve.
The Need to Sleep
As exciting findings such as these come in more and more rapidly, we are becoming sure of one thing: while we sleep, our brain is anything but inactive. It is now clear that sleep can consolidate memories by enhancing and stabilizing them and by finding patterns within studied material even when we do not know that patterns might be there. It is also obvious that skimping on sleep stymies these crucial cognitive processes: some aspects of memory consolidation only happen with more than six hours of sleep. Miss a night, and the day's memories might be compromised—an unsettling thought in our fast-paced, sleep-deprived society.
But the question remains: Why did we evolve in such a way that certain cognitive functions happen only while we are asleep? Would it not seem to make more sense to have these operations going on in the daytime? Part of the answer might be that the evolutionary pressures for sleep existed long before higher cognition—functions such as immune system regulation and efficient energy usage (for instance, hunt in the day and rest at night) are only two of the many reasons it makes sense to sleep on a planet that alternates between light and darkness. And because we already had evolutionary pressure to sleep, the theory goes, the brain evolved to use that time wisely by processing information from the previous day: acquire by day; process by night.
Or it might have been the other way around. Memory processing seems to be the only function of sleep that actually requires an organism to truly sleep—that is, to become unaware of its surroundings and stop processing incoming sensory signals. This unconscious cognition appears to demand the same brain resources used for processing incoming signals when awake. The brain, therefore, might have to shut off external inputs to get this job done. In contrast, although other functions such as immune system regulation might be more readily performed when an organism is inactive, there does not seem to be any reason why the organism would need to lose awareness. Thus, it may be these other functions that have been added to take advantage of the sleep that had already evolved for memory.
Many other questions remain about our nighttime cognition, however it might have evolved. Exactly how does the brain accomplish this memory processing? What are the chemical or molecular activities that account for these effects? These questions raise a larger issue about memory in general: What makes the brain remember certain pieces of information and forget others? We think the lesson here is that understanding sleep will ultimately help us to better understand memory.
The task might seem daunting, but these puzzles are the kind on which scientists thrive—and they can be answered. First, we will have to design and carry out more and more experiments, slowly teasing out answers. But equally important, we are going to have to sleep on it.
Note: This article was originally published with the title, "Quiet! Sleeping Brain at Work."
HA i knew viruses were alive! not that this makes it any clearer -.-
'Virophage' suggests viruses are alive
Evidence of illness enhances case for life.
The discovery of a giant virus that falls ill through infection by another virus [1] is fuelling the debate about whether viruses are alive.
“There’s no doubt this is a living organism,” says Jean-Michel Claverie, a virologist at the CNRS UPR laboratories in Marseilles, part of France’s basic-research agency. “The fact that it can get sick makes it more alive.”
Giant viruses have been captivating virologists since 2003, when a team led by Claverie and Didier Raoult at CNRS UMR, also in Marseilles, reported the discovery of the first monster [2]. The virus had been isolated more than a decade earlier in amoebae from a cooling tower in Bradford, UK, but was initially mistaken for a bacterium because of its size, and was relegated to the freezer.
Closer inspection showed the microbe to be a huge virus with, as later work revealed, a genome harbouring more than 900 protein-coding genes [3] — at least three times more than that of the biggest previously known viruses and bigger than that of some bacteria. It was named Acanthamoeba polyphaga mimivirus (for mimicking microbe), and is thought to be part of a much larger family. “It was the cause of great excitement in virology,” says Eugene Koonin at the National Center for Biotechnology Information in Bethesda, Maryland. “It crossed the imaginary boundary between viruses and cellular organisms.”
Now Raoult, Koonin and their colleagues report the isolation of a new strain of giant virus from a cooling tower in Paris, which they have named mamavirus because it seemed slightly larger than mimivirus. Their electron microscopy studies also revealed a second, small virus closely associated with mamavirus that has earned the name Sputnik, after the first man-made satellite.
With just 21 genes, Sputnik is tiny compared with its mama — but insidious. When the giant mamavirus infects an amoeba, it uses its large array of genes to build a ‘viral factory’, a hub where new viral particles are made. Sputnik infects this viral factory and seems to hijack its machinery in order to replicate. The team found that cells co-infected with Sputnik produce fewer and often deformed mamavirus particles, making the virus less infective. This suggests that Sputnik is effectively a viral parasite that sickens its host — seemingly the first such example.
The team suggests that Sputnik is a ‘virophage’, much like the bacteriophage viruses that infect and sicken bacteria. “It infects this factory like a phage infects a bacterium,” Koonin says. “It’s doing what every parasite can — exploiting its host for its own replication.”
Sputnik’s genome reveals further insight into its biology. Although 13 of its genes show little similarity to any other known genes, three are closely related to mimivirus and mamavirus genes, perhaps cannibalized by the tiny virus as it packaged up particles sometime in its history. This suggests that the satellite virus could perform horizontal gene transfer between viruses — paralleling the way that bacteriophages ferry genes between bacteria.
The findings may have global implications, according to some virologists. A metagenomic study of ocean water [4] has revealed an abundance of genetic sequences closely related to giant viruses, leading to a suspicion that they are a common parasite of plankton. These viruses had been missed for many years, Claverie says, because the filters used to remove bacteria screened out giant viruses as well. Raoult’s team also found genes related to Sputnik’s in an ocean-sampling data set, so this could be the first of a new, common family of viruses. “It suggests there are other representatives of this viral family out there in the environment,” Koonin says.
By regulating the growth and death of plankton, giant viruses — and satellite viruses such as Sputnik — could be having major effects on ocean nutrient cycles and climate. “These viruses could be major players in global systems,” says Curtis Suttle, an expert in marine viruses at the University of British Columbia in Vancouver.
“I think ultimately we will find a huge number of novel viruses in the ocean and other places,” Suttle says — 70% of viral genes identified in ocean surveys have never been seen before. “It emphasizes how little is known about these organisms — and I use that term deliberately.”
References
1. La Scola, B. et al. Nature doi:10.1038/nature07218 (2008).
2. La Scola, B. et al. Science 299, 2033 (2003).
3. Raoult, D. et al. Science 306, 1344–1350 (2004).
4. Monier, A., Claverie, J.-M. & Ogata, H. Genome Biol. 9, R106 (2008).
Wednesday, August 6, 2008
Synaesthesia: a world of wonders
People with synaesthesia can't help but get two sensory perceptions for the price of one. Some perceive colours when they hear words or musical notes, or read numbers; rarer individuals can even get tastes from shapes.
Neuroscientists have now reported [1] another variant, in which flashes and moving images trigger the perception of sounds. The finding could help to identify the precise neural causes of the phenomenon, reportedly experienced by at least one in every hundred people, and suggests that at least some types of synaesthesia are closely related to ordinary perception.
"This [study] will make a big impact," says synaesthesia expert Edward Hubbard of France's biomedical research organization INSERM. "It will affect not just the synaesthesia community, but also researchers interested in how the brain handles information from multiple senses."
Neuroscientist Melissa Saenz of the California Institute of Technology (Caltech) in Pasadena stumbled across the variant last year while giving a group of undergraduate students a tour of her perception research lab. In front of a silent display designed to evoke activity in the motion processing centre of the visual cortex, one of the students asked: "Does anybody else hear something?"
The student, Johannes Pulst-Korenberg, reported hearing a distinct whooshing sound when he watched the display. "Everybody was looking at me, like, 'Are you crazy?'" he remembers.
Saenz could find no description of this variant of synaesthesia in the scientific literature. But to her surprise, after screening several hundred people in the Caltech community, she found three more who reported a similar experience.
"They're generally soft sounds, but they can't be ignored, even when they're distracting," says Saenz. "One of the synaesthetes told me that the moving images on computer screen savers are terribly annoying to her. She can't do anything about it but look away."
Pulst-Korenberg, who is pursuing a doctorate in neuroscience and economics at Caltech, says that he has experienced the same effect watching a butterfly fly. "For some reason, the jerky motion generates little clicks," he says.
The sound of change
Saenz and her lab director, Christof Koch, confirmed the four cases with a test that gave true 'hearing–motion' synaesthetes a distinct advantage. They asked 14 people, including the 4 with synaesthesia, to watch two quick sequences of Morse-code-like flashes, and then determine whether the sequences were the same or subtly different. Because they were able to 'hear' the sequences too, the synaesthetes could distinguish them much more accurately than the 10 people without synaesthesia. When the sequences instead comprised auditory beeps, the synaesthetes again performed well, but having lost their advantage they scored no better than the non-synaesthete controls.
Some neuroscientists attribute synaesthesia to remnant cross-links between closely spaced cortical areas — links that develop in the early stages of life but are usually pruned away during childhood. In letter-to-colour synaesthesia, for example, the relevant cortical area for recognizing letters turns out to be immediately adjacent to the one for perceiving colour. Another leading theory explains the condition as an excess of feedback signals from multi-sensory regions, where perceptions are usually integrated, down to single-sensory areas.
Brain-imaging studies have provided evidence for both theories, says Hubbard. "But I don't think the type of synaesthesia that Melissa has discovered really fits neatly into either one."
Instead, it might be an enhanced form of the fast, cross-cortical correspondences the brain makes all the time, he suggests. "For example, we find it easier to understand what someone is saying if we can also see their mouth move. So the brain is constantly integrating audition and vision." Delineating how the brain performs this ordinary integration, says Hubbard, "is probably a part of what will be required if we're to explain the type of synaesthesia that Melissa has discovered here."
If hearing–motion synaesthesia is closely related to ordinary cross-sensory correspondence, it might also explain why it has taken so long to discover. "In real life," says Saenz, "things that move or flash usually do make a sound, so that association is more logical than, say, numbers-to-colours."
References
1. Saenz, M. & Koch, C. Current Biol. 18, R650–R651 (2008).
http://www.nature.com/news/2008/080805/full/news.2008.1014.html?s=news_rss
Tuesday, August 5, 2008
OOO monster…
Nothing like a bizarre-looking sea "monster" to draw crowds to a tony resort town. The blogosphere has been abuzz since Gawker.com early this week featured a story and photo of a bulky hairless corpse with sharp teeth and a snout that reportedly washed up in Montauk on the eastern tip of Long Island, N.Y. Another Bigfoot or Loch Ness Monster, perhaps?
The report of the cryptid was picked up by Fox News, CNN and other TV networks, as well as by magazines and newspapers as far away as London, all hungry for a hot story to spice up the summer news doldrums.
"We were looking for a place to sit when we saw some people looking at something," Jenna Hewitt, 26, told Newsday. "We were kind of amazed, shocked and amazed." Hewitt was among a bunch of locals who insist they saw the odd-looking corpse. Most of those weighing in on the creature's identity subscribed to the theory that it was a dog. (A pit bull was the prevailing favorite.)
Others, some who only saw the snapshot, speculated it may have been a raccoon or, perhaps, a sea turtle that lost its shell.
But we may never know for sure. It seems, you see, that the body has been moved. And nobody (at least nobody talking) knows by whom—or where it was taken.
"They say an old guy came and carted it away," Hewitt said. "He said, "I'm going to mount it on my wall."
Charming.