Thursday, August 21, 2008

free will (the actual post)

ok, i've been posting a lot of random articles with very little actual reflection, mostly because there really isn't much to think about.

this article has a couple of flaws, but there is one thing it did cause me to think about. first and most importantly, do we have free will? most people on the street would prefer to think that we do. otherwise, whose responsibility is it to preserve order in society? the scientists would prefer to think that we don't. cos, who's making these decisions then? then the question of a soul comes in, and what it's powered by.
our brain, as far as scientists know, is basically tens of billions of neurons firing at unpredictable times. can such a thing create free will? how do a couple trillion electrical impulses translate to, say, anger? or happiness? or even an M16? if we see our brain as a series of on and off switches, then we can never say that we're any different from a computer: one that receives information and gives an output. if that's the case, then why should we be blamed for anything we say or do? we couldn't have made another choice, could we? if time were to go back, we would still have to make the same decision, cos the input is the same. however, if we do have free will, all is fine and well, except that it just doesn't tie up fundamentally with how our brain works. does the brain become more than the sum of its parts beyond a certain level of complexity? what is that level? an ant runs on fixed action patterns, even though some of its behavior is learnt. we have free will. but really, how different are we from an ant, apart from the complexity? can we say the ant follows a pheromone trail by instinct, or is it their mode of communication? can we react to a song by instinct too?
to me, a brain is a collection of neurons that receives, stores, and sorts information. this stored information is accessed again and again as a reference to assign a priority to each new piece of information we receive; the ones with higher priority get stored, of course, and the rest discarded. over time, this collection of memories makes up our habits and, ultimately, our personalities. now, how does this produce what we see and the decisions we make? very simply, our minds use the stored information, also known as memories, to determine our next step. so, for example, me thinking about this topic is a decision made by referencing previous memories, which collectively rank philosophy as a higher priority than, say, marking my assessments. makes sense, don't you think? that's why people very close to you can actually second-guess your actions and even your words. if something in the lab were to break, my supervisor would automatically decide it's probably me. that's also why experts can easily read your body language and guess what you plan to do. everything fits, right?
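just to make this concrete, here's a toy sketch in python (my own illustration, nothing more; the topics, weights and threshold are all made up): a 'mind' that ranks each new piece of information against what's already stored, keeps the high-priority stuff, and picks its next step by the same ranking.

class ToyMind:
    def __init__(self):
        # stored memories: topic -> accumulated priority weight
        self.memories = {"philosophy": 3.0, "marking assessments": 1.0}

    def priority(self, topics):
        # new information is ranked by how strongly it matches what is already stored
        return sum(self.memories.get(t, 0.0) for t in topics)

    def perceive(self, item, topics, threshold=2.0):
        # high-priority information gets stored (reinforcing the habit), the rest is discarded
        score = self.priority(topics)
        if score >= threshold:
            for t in topics:
                self.memories[t] = self.memories.get(t, 0.0) + 1.0
            return "kept '%s' (priority %.1f)" % (item, score)
        return "discarded '%s' (priority %.1f)" % (item, score)

    def decide(self, options):
        # the "next step" is simply whichever option the stored memories rank highest
        return max(options, key=lambda name: self.priority(options[name]))

mind = ToyMind()
print(mind.perceive("an article on free will", ["philosophy"]))
print(mind.perceive("a pile of scripts to mark", ["marking assessments"]))
print(mind.decide({"write a blog post": ["philosophy"],
                   "mark assessments": ["marking assessments"]}))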
i have made a previous speech on why rules of morality can be kept and followed even though they might not be absolute. now, however, a new problem comes up. if our responses are truly just responses to stimuli, then why should i be blamed for what happens as a result of my behavior? it's not like i have a choice, right? my response is just a fixed pattern from a stimulus, right? so if time rewinds, i would still make the same choice. this is quite difficult to put together. really, how can i blame someone who couldn't have done otherwise? this is bad for society. who can we blame for a murder? should we blame the parents who brought up the murderer? did the parents have a choice to bring the murderer up in any other way? can i blame a computer for translating my documents into greek? it was simply programmed wrongly. blame the programmer? if the programmer was another computer, can i blame it? what to do?
i pondered over this for a while, considering that perhaps we can choose what the priorities are when we rank each piece of information. that doesn't work. where do our priorities come from? the answer is based on a previous priority, and so forth. in the end, it's "turtles all the way down", and right at the bottom, it's still the floor. so, how? very tough question.

i finally found the answer. it's not an assuring one, but it works. up till now, i've said that everything we aim for is simply for the advancement of society. this is just the same. if i were to blame you for your actions, other people would receive a stimulus telling them that this specific action results in pain or unpleasantness in general. to avoid unpleasantness, this stimulus would be registered as high priority, and so people will avoid the action. as such, this contributes to society. thus, the idea of blame is not about accountability, but a warning that such an action will result in undesirable effects. the avoidance is based on a very shortsighted goal, but, on the whole, it serves to reduce these actions. so why do you want to avoid doing something wrong? because doing so would cause you to be blamed. simple as that.
not the most assuring of concepts. in fact, many would hate it. they lose not only their one sense of control over their lives, free will, but also, much more, their perceived importance in society. it's not that you are blamed because you could have chosen to do otherwise, but that you're blamed to prevent it from happening again. once again, man ends up as a tiny gear in the grand scheme of chance.

P.S. grand scheme of chance IS an oxymoron.

free will?

Free Will vs. the Programmed Brain

If our actions are determined by prior events, then do we have a choice about anything—or any responsibility for what we do?

By Shaun Nichols

Many scientists and philosophers are convinced that free will doesn’t exist at all. According to these skeptics, everything that happens is determined by what happened before—our actions are inevitable consequences of the events leading up to the action—and this fact makes it impossible for anyone to do anything that is truly free. This kind of anti-free will stance stretches back to 18th century philosophy, but the idea has recently been getting much more exposure through popular science books and magazine articles. Should we worry? If people come to believe that they don’t have free will, what will the consequences be for moral responsibility?

In a clever new study, psychologists Kathleen Vohs at the University of Minnesota and Jonathan Schooler at the University of California at Santa Barbara tested this question by giving participants passages from The Astonishing Hypothesis, a popular science book by Francis Crick, a biochemist and Nobel laureate (as co-discoverer, with James Watson, of the DNA double helix). Half of the participants got a passage saying that there is no such thing as free will. The passage begins as follows: “‘You,’ your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules. Who you are is nothing but a pack of neurons.”
The passage then goes on to talk about the neural basis of decisions and claims that “…although we appear to have free will, in fact, our choices have already been predetermined for us and we cannot change that.” The other participants got a passage that was similarly scientific-sounding, but it was about the importance of studying consciousness, with no mention of free will.

After reading the passages, all participants completed a survey on their belief in free will. Then comes the inspired part of the experiment. Participants were told to complete 20 arithmetic problems that would appear on the computer screen. But they were also told that when the question appeared, they needed to press the space bar, otherwise a computer glitch would make the answer appear on the screen, too. The participants were told that no one would know whether they pushed the space bar, but they were asked not to cheat.

The results were clear: those who read the anti-free will text cheated more often! (That is, they pressed the space bar less often than the other participants.) Moreover, the researchers found that the amount a participant cheated correlated with the extent to which they rejected free will in their survey responses.

Varieties of Immorality

Philosophers have raised questions about some elements of the study. For one thing, the anti-free will text presents a bleak worldview, and that alone might lead one to cheat more in such a context (“OMG, if I’m just a pack of neurons, I have much bigger things to worry about than behaving on this experiment!”). One might also find increased cheating if people were given a passage arguing that all sentient life will ultimately be destroyed in the heat death of the universe.

On the other hand, the results fit with what some philosophers had predicted. The Western conception of free will seems bound up with our sense of moral responsibility, guilt for misdeeds and pride in accomplishment. We hold ourselves responsible precisely when we think that our actions come from free will. In this light, it’s not surprising that people behave less morally as they become skeptical of free will. Further, the Vohs and Schooler result fits with the idea that people will behave less responsibly if they regard their actions as beyond their control. If I think that there’s no point in trying to be good, then I’m less likely to try.

Even if giving up on free will does have these deleterious effects, one might wonder how far they go. One question is whether the effects extend across the moral domain. Cheating in a psychology experiment doesn’t seem too terrible. Presumably the experiment didn’t also lead to a rash of criminal activity among those who read the anti-free will passage. Our moral revulsion at killing and hurting others is likely too strong to be dismantled by reflections about determinism. It might well turn out that other kinds of immoral behavior, like cheating in school, would be affected by the rejection of free will, however.

Is the Effect Permanent?

Another question is how long-lived the effect is. The Vohs and Schooler study suggests that immediately after people are made skeptical of free will, they cheat more. But what would happen if those people were brought back to the lab two weeks later? We might find that they would continue to be skeptical of free will but they would no longer cheat more.

There is no direct evidence on this question, but there is recent evidence on a related issue. Philosopher Hagop Sarkissian of the City University of New York and colleagues had people from Hong Kong, India, Colombia and the U.S. complete a survey on determinism and moral responsibility. Determinism was described in nontechnical terms, and participants were asked, in effect, whether our universe is deterministic and whether people in a deterministic universe are morally responsible for their actions.

Across cultures, they found that most people said that our universe is not deterministic and also that people in a deterministic universe are not responsible for their actions. Although that isn’t particularly surprising—people want to believe they have free will—something pretty interesting emerges when you look at the smaller group of people who say that our universe is deterministic. Across all of the cultures, this substantial minority of free will skeptics was also much more likely to say that people are responsible even if determinism is true. One way to interpret this finding is that if you come to believe in determinism, you won’t drop your moral attitudes. Rather, you’ll simply reverse your view that determinism rules out moral responsibility.

Many philosophers and scientists reject free will and, while there has been no systematic study of the matter, there’s currently little reason to think that the philosophers and scientists who reject free will are generally less morally upright than those who believe in it. But this raises yet another puzzling question about the belief in free will. People who explicitly deny free will often continue to hold themselves responsible for their actions and feel guilty for doing wrong. Have such people managed to accommodate the rest of their attitudes to their rejection of free will? Have they adjusted their notion of guilt and responsibility so that it really doesn’t depend on the existence of free will? Or is it that when they are in the thick of things, trying to decide what to do, trying to do the right thing, they just fall back into the belief that they do have free will after all?

it appears that this idea also exists in bacteria.

Kamikaze bacteria illustrate evolution of co-operation

Suicidal Salmonella sacrifice themselves to allow their clones to get a foothold in the gut.

Bacteria can commit suicide to help their brethren establish more damaging infections — and scientists think that they can explain how this behaviour evolved.

The phenomenon, called self-destructive cooperation, can help bacteria such as Salmonella typhimurium and Clostridium difficile to establish a stronghold in the gut.

By studying mice infected with S. typhimurium, researchers from Switzerland and Canada have now demonstrated how this 'kamikaze' behaviour arose.

The team, led by Martin Ackermann of ETH Zurich in Switzerland, studied how S. typhimurium expresses the Type III secretion system 1 (TTSS-1) virulence factors that inflame the gut. This eradicates intestinal microflora that would otherwise compete for resources — but also kills most of the S. typhimurium cells in the vicinity. After this assault, the way is clear for remaining S. typhimurium to take advantage and further colonise the gut.

But in the middle of the gut cavity, or lumen, only about 15% of the S. typhimurium population actually expresses TTSS-1. In contrast, in the tissue of the gut wall, almost all bacteria express TTSS-1. As more bacteria invade the tissue, gut inflammation increases and kills off the invaders (especially those within the tissue), along with the other competing gut flora.

"We thought it was a very strange phenomenon," says team member Wolf-Dietrich Hardt, also at ETH Zurich. "The bacteria in the gut lumen are genetically identical, but some of them are prepared to sacrifice themselves for the greater good. You could compare this act to Kamikaze fighter pilots of the Japanese army."

Kamikaze genes

This self-destructive cooperation relies on the genes controlling the suicidal behaviour not always being expressed. This 'phenotypic noise' means that only a fraction of the cells express TTSS-1, allowing the kamikaze genes to persist in the population. If every cell expressed the genes, they would all commit suicide, benefiting none of the population.

The team concluded that acts of self-destructive cooperation can arise, providing that the level of "public good" — in this case, the inflammation of the gut — is high enough. Crucially, cooperative individuals must also benefit from other cooperative acts more often than individuals who are not cooperating, a situation the scientists call 'assortment'.

In the case of gut bacteria, assortment can arise if the minimum number of pathogens required to infect a host is relatively small — as few as 100 cells, in cases such as Escherichia coli.
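A rough way to see why partial expression matters is a short simulation (a toy sketch, not the authors' model; the expression rates, the 5% threshold and the growth bonus below are invented): a clone that always expresses the kamikaze genes wipes itself out, while one that expresses them in only a fraction of its cells persists and still collects the benefit.

import random

# Toy simulation of self-destructive cooperation (illustration only, not the
# model from the Nature study; all parameters are invented).
def simulate(expression_prob, generations=30, pop_size=1000, bonus=1.5):
    """Return how many carriers of the kamikaze genes remain after some generations.

    expression_prob -- chance that a carrier actually expresses TTSS-1 and dies
    bonus           -- growth advantage for the survivors when enough cells
                       sacrifice themselves (the 'public good' of inflammation)
    """
    carriers = pop_size  # a clonal population: every cell carries the genes
    for _ in range(generations):
        expressers = sum(random.random() < expression_prob for _ in range(carriers))
        survivors = carriers - expressers            # expressers die in the gut tissue
        if expressers > 0.05 * carriers:             # enough sacrifice clears the competition
            survivors = int(survivors * bonus)
        carriers = min(survivors, pop_size)
        if carriers == 0:
            break
    return carriers

random.seed(1)
print("always expressed:", simulate(1.0))    # every carrier dies; the genes vanish
print("phenotypic noise:", simulate(0.15))   # most carriers survive and still reap the benefit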

Change of strategy

The findings, published in Nature [1], chime well with long-established theories on the evolution of altruism and co-operation.

If a gene for sibling altruism is always expressed, it will tend to disappear, because those members of a clutch or litter who possess it may sacrifice themselves for those who do not. However, if the gene is present but not always expressed, it can persist, because some of its carriers may survive to pass it on to subsequent generations.

The research could also aid the design of more potent strategies against pathogenic bacteria. The Salmonella bacterium causes one of the most common bacterial infections in western countries, and is highly dangerous among the elderly and frail. "There is no doubt that a vaccine for Salmonella in humans is needed," says Hardt. "And many strains infecting livestock are becoming resistant to antibiotics.

"But based on our results, I would suggest that the usual strategy of targeting the vaccine against a virulence factor might not be the best strategy, if only a small fraction of the bacteria express it."

  • References

    1. Ackermann, M. et al. Nature 454, 987–990 (2008).

hmm, so there are fat cells that burn off fat.. interesting.

Boosting 'good' fat to burn off the bad

Origins of calorie-sizzling fat cells uncovered in mice.

To most dieters, no fat is good fat. But in work published this week in Nature, an insight into the origin of a special class of calorie-burning fat cells could lead to new ways of boosting metabolism and combating obesity, researchers say.

The sworn enemy of the dieter is the 'white' fat cell. Such cells are little more than sacks of fat, storing energy and providing padding. Less known — and less reviled — is brown fat, made up of heat-producing cells chock full of fat and energy-generating structures called mitochondria. The iron attached to proteins in these mitochondria gives brown fat its characteristic colour.

White fat is by far the more abundant of the two; adults carry many pounds of white fat, but only a few grams of brown fat, concentrated mainly in the front part of the neck and the upper chest. Brown-fat pads between the shoulder blades are thought to help newborns stay warm, but precisely what purpose the cells serve in adults is still unclear.

What is clear is that brown fat burns a tremendous amount of energy: about 50 grams of brown fat could burn up 20% of a person's daily caloric intake, says Ronald Kahn of the Joslin Diabetes Center at Harvard Medical School in Boston, Massachusetts, and one of the researchers involved in the latest study.

"It's a very efficient tissue at wasting energy," agrees Bruce Spiegelman of the Dana-Farber Cancer Institute and Harvard Medical School, another member of the team. "It's basically a fire that's just burning."

That would seem to prompt a simple solution to the growing obesity problem: find a way to generate extra brown fat and let the body burn away the energy stored in its excess white fat.

Fuel for the fire

That notion surfaced last year when a team of researchers led by Spiegelman found that a protein called PRDM16 could trigger cells that usually produce white fat to make brown fat cells instead (see Metabolic switch delivers healthy fat).

Now, two papers in Nature extend that work. Spiegelman and his colleagues have traced the natural origin of brown fat cells [1]. They used a fluorescent protein to label a population of cells (called myoblasts) that usually generates muscle, and found that PRDM16 could trigger these cells to form brown, but not white, fat cells. Blocking production of the PRDM16 protein caused these brown fat cells to revert to muscle. The result runs counter to the previous notion that brown and white fat cells shared similar origins.

Meanwhile, Kahn together with Yu-Hua Tseng and their colleagues have identified another protein, called 'bone morphogenetic protein 7' or BMP7, that is crucial to the generation of brown fat cells. When researchers overexpressed the protein, mice developed slightly more brown fat, slightly higher body temperatures and exhibited slightly lower weight gain than untreated mice after just five days [2].

Kahn feels that the change in weight gain could be more dramatic over a longer time period. His team is testing — thus far only in mice — a commercially available form of BMP7 that is used to encourage bone healing after some surgeries. Because BMP7 can also stimulate bone formation, it must be used with care, Kahn says. His lab is working out conditions that could encourage accumulation of brown fat without forming bone tissue in undesired locations. "Otherwise you could have rock hard abs but not in the way you'd expected," he says.

The work could open new therapeutic avenues, says Dominique Langin, a clinical biochemist at the National Institute of Health and Medical Research (INSERM) in Toulouse, France. But it will be important, he adds, to characterize the process further in humans. In large mammals such as humans, the brown fat present at birth disappears and then reforms in other locations, and the contribution that brown fat makes to overall metabolism is unclear. In mice, brown fat does not undergo the same shift, and it plays a clear role in regulating body temperature.

  • References

    1. Seale, P. et al. Nature 454, 961–967 (2008).
    2. Tseng, Y.-H. et al. Nature 454, 1000–1004 (2008).

Wednesday, August 20, 2008

free labour!!!

If You Use the Web, You May Have Already Been Enlisted as a Human Scanner

Those anti-bot security forms that slow you down when you're entering information just might serve a larger purpose

By Adam Hadhazy

You're just about ready to buy a pair of tickets on Ticketmaster, but before you can take the next step, an annoying box with wavy letters and numbers shows up on your screen. You dutifully enter what you see—and what a bot presumably can't—in the name of security.

But what you may not know is that you also have helped archivists decipher distorted characters in old books and newspapers so that they can be posted on the Web.

You might think that computer scientists would have figured out a way to get computers to decipher those characters. But they haven't, so instead they've figured out a way to harness all that effort you're making to protect your security. "When you're reading those squiggly characters, you are doing something that computers cannot," says Luis von Ahn, a computer scientist at Carnegie Mellon University (C.M.U.) in Pittsburgh.

Von Ahn and colleagues reported last week in the journal Science that Web users have transcribed the equivalent of 160 books a day—that's more than 440 million words—in the year since researchers kicked off the program. The initiative is similar to "distributed computing" schemes like SETI@home, which take advantage of unused personal computer processing power to sift through signals received from space for those that might be generated by extraterrestrial intelligence or to figure out how proteins fold. But the difference with this system is that people, not processors, do the calculations.

"We are getting people to help us digitize books at the same time they are authenticating themselves as humans," von Ahn says. "Every time people are typing these [answers] out, they are actually taking old books or newspapers and helping to transcribe them."

Other large digitization projects, such as the Google Books Project and the Internet Archive, rely on optical character recognition (OCR) software. Basically, computers take a digital image of a book or newspaper page, then try to discern the individual letters, von Ahn says. But he and other C.M.U. researchers estimate that these programs misinterpret or fail to read up to one out of every five words on weathered, yellowed paper or on pages with faded or smeared ink. Such electronically illegible words and texts must then be manually transcribed by human workers at a relatively high cost, he says.

Von Ahn's team's method is a twist on the Web site tests known as CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart), which have been in use since 2000. The new twist on CAPTCHAs is to use a set of letters from old, weathered books and newspapers that computerized transcribing programs cannot recognize. Much of the raw "fuel" comes courtesy of the Internet Archive project, which transmits words that its OCR software cannot recognize or that do not appear in the dictionary.

About 40,000 Websites now use the service, called reCAPTCHA, which the project's site offers for free. Facebook was one of its first major patrons.

Von Ahn estimates that at reCAPTCHA's current rate of transcription (about four million words a day missed by OCR systems), the program does a week's worth of transcription from 1,500 professional transcribers in a single day. This data is stored on hard drives at C.M.U. and then sent back to the organization that requested the transcription. (The New York Times, for example, has enlisted reCAPTCHA to digitize the newspaper's archives dating back to 1851.)

Von Ahn acknowledges that the overall cost for reCAPTCHA is still a bit higher than just using OCR for more recently written, more easily scanned texts. He would not say exactly how much, citing nondisclosure agreements with clients using the software.

When the researchers compared how reCAPTCHA and OCR transcribed five Times articles, reCAPTCHA did a significantly better job—99.1 percent accuracy—than OCR of the sort that Google uses for its book project, which came in at 83.5 percent. (Google declined to comment for this story.)

But as is the way with most technology, today's innovation is tomorrow's VHS tape. Eventually computers will be able to decipher reCAPTCHAs, too. "We'll get a few good years out of reCAPTCHAs," says co-author Manuel Blum, a professor of computer science at Carnegie Mellon and key developer of some of the first CAPTCHAs.

OCR will continue to improve as well, Blum says, along with so-called machine learning in general.

Either way, with some 100 million books published prior to the dawn of the digital era, says von Ahn, that "makes for a lot of words."

http://www.sciam.com/article.cfm?id=human-book-scanners-on-the-web&sc=rss

Friday, August 15, 2008

ancient civilizations! XD

maybe...

http://news.nationalgeographic.com/news/2008/08/080815-sahara-video-vin.html

Wednesday, August 13, 2008

Ah, scientific research into the phrase “learning from mistakes”

Minding Mistakes: How the Brain Monitors Errors and Learns from Goofs

Brain scientists have identified nerve cells that monitor performance, detect errors and govern the ability to learn from misfortunes

By Markus Ullsperger

April 26, 1986: During routine testing, reactor number 4 of the Chernobyl nuclear power plant explodes, triggering the worst catastrophe in the history of the civilian use of nuclear energy.

September 22, 2006: On a trial run, experimental maglev train Transrapid 08 plows into a maintenance vehicle at 125 mph near Lathen, Germany, spewing wreckage over hundreds of yards, killing 23 passengers and severely injuring 10 others.

Human error was behind both accidents. Of course, people make mistakes, both large and small, every day, and monitoring and fixing slipups is a regular part of life. Although people understandably would like to avoid serious errors, most goofs have a good side: they give the brain information about how to improve or fine-tune behavior. In fact, learning from mistakes is likely essential to the survival of our species.

In recent years researchers have identified a region of the brain called the medial frontal cortex that plays a central role in detecting mistakes and responding to them. These frontal neurons become active whenever people or monkeys change their behavior after the kind of negative feedback or diminished reward that results from errors.

Much of our ability to learn from flubs, the latest studies show, stems from the actions of the neurotransmitter dopamine. In fact, genetic variations that affect dopamine signaling may help explain differences between people in the extent to which they learn from past goofs. Meanwhile certain patterns of cerebral activity often foreshadow miscues, opening up the possibility of preventing blunders with portable devices that can detect error-prone brain states.

Error Detector
Hints of the brain's error-detection apparatus emerged serendipitously in the early 1990s. Psychologist Michael Falkenstein of the University of Dortmund in Germany and his colleagues were monitoring subjects' brains using electroencephalography (EEG) during a psychology experiment and noticed that whenever a subject pressed the wrong button, the electrical potential in the frontal lobe suddenly dropped by about 10 microvolts. Psychologist William J. Gehring of the University of Illinois and his colleagues confirmed this effect, which researchers refer to as error-related negativity, or ERN.

An ERN may appear after various types of errors, unfavorable outcomes or conflict situations. Action errors occur when a person's behavior produces an unintended result. Time pressure, for example, often leads to misspellings while typing or incorrect addresses on e-mails. An ERN quickly follows such action errors, peaking within 100 milliseconds after the incorrect muscle activity ends.

A slightly more delayed ERN, one that crests 250 to 300 milliseconds after an outcome, occurs in response to unfavorable feedback or monetary losses. This so-called feedback ERN also may appear in situations in which a person faces a difficult choice—known as decision uncertainty—and remains conflicted even after making a choice. For instance, a feedback ERN may occur after a person has picked a checkout line in a supermarket and then realizes that the line is moving slower than the adjacent queue.

Where in the brain does the ERN originate? Using functional magnetic resonance imaging, among other imaging methods, researchers have repeatedly found that error recognition takes place in the medial frontal cortex, a region on the surface of the brain in the middle of the frontal lobe, including the anterior cingulate. Such studies implicate this brain region as a monitor of negative feedback, action errors and decision uncertainty—and thus as an overall supervisor of human performance.

In a 2005 paper, along with psychologist Stefan Debener of the Institute of Hearing Research in Southampton, England, and our colleagues, I showed that the medial frontal cortex is the probable source of the ERN. In this study, subjects performed a so-called flanker task, in which they specified the direction of a central target arrow in the midst of surrounding decoy arrows while we monitored their brain activity using EEG and fMRI simultaneously. We found that as soon as an ERN occurs, activity in the medial frontal cortex increases and that the bigger the ERN the stronger the fMRI signal, suggesting that this brain region does indeed generate the classic error signal.

Learning from Lapses
In addition to recognizing errors, the brain must have a way of adaptively responding to them. In the 1970s psychologist Patrick Rabbitt of the University of Manchester in England, one of the first to systematically study such reactions, observed that typing misstrikes are made with slightly less keyboard pressure than are correct strokes, as if the typist were attempting to hold back at the last moment.

More generally, people often react to errors by slowing down after a mistake, presumably to more carefully analyze a problem and to switch to a different strategy for tackling a task. Such behavioral changes represent ways in which we learn from our mistakes in hopes of avoiding similar slipups in the future.

The medial frontal cortex seems to govern this process as well. Imaging studies show that neural activity in this region increases, for example, before a person slows down after an action error. Moreover, researchers have found responses from individual neurons in the medial frontal cortex in monkeys that implicate these cells in an animal's behavioral response to negative feedback, akin to that which results from an error.

In 1998 neuroscientists Keisetsu Shima and Jun Tanji of the Tohoku University School of Medicine in Sendai, Japan, trained three monkeys to either push or turn a handle in response to a visual signal. A monkey chose its response based on the reward it expected: it would, say, push the handle if that action had been consistently followed by a reward. But when the researchers successively reduced the reward for pushing—a type of negative feedback or error signal—the animals would within a few trials switch to turning the handle instead. Meanwhile researchers were recording the electrical activity of single neurons in part of the monkeys' cingulate.

Shima and Tanji found that four types of neurons altered their activity after a reduced reward but only if the monkey used that reduction as a cue to push instead of turn, or vice versa. These neurons did not flinch if the monkey did not decide to switch actions or if it did so in response to a tone rather than to a lesser reward. And when the researchers temporarily deactivated neurons in this region, the monkey no longer switched movements after a dip in its incentive. Thus, these neurons relay information about the degree of reward for the purpose of altering behavior and can use negative feedback as a guide to improvement.

In 2004 neurosurgeon Ziv M. Williams and his colleagues at Massachusetts General Hospital reported finding a set of neurons in the human anterior cingulate with similar properties. The researchers recorded from these neurons in five patients who were scheduled for surgical removal of that brain region. While these neurons were tapped, the patients did a task in which they had to choose one of two directions to move a joystick based on a visual cue that also specified a monetary reward: either nine or 15 cents. On the nine-cent trials, participants were supposed to change the direction in which they moved the joystick.

Similar to the responses of monkey neurons, activity among the anterior cingulate neurons rose to the highest levels when the cue indicated a reduced reward along with a change in the direction of movement. In addition, the level of neuronal activity predicted whether a person would act as instructed or make an error. After surgical removal of those cells, the patients made more errors when they were cued to change their behavior in the face of a reduced payment. These neurons, therefore, seem to link information about rewards to behavior. After detecting discrepancies between actual and desired outcomes, the cells determine the corrective action needed to optimize reward.

But unless instructed to do so, animals do not generally alter their behavior after just one mishap. Rather they change strategies only after a pattern of failed attempts. The anterior cingulate also seems to work in this more practical fashion in arbitrating the response to errors. In a 2006 study experimental psychologists Stephen Kennerley and Matthew Rushworth and their colleagues at the University of Oxford taught rhesus monkeys to pull a lever to get food. After 25 trials, the researchers changed the rules, dispensing treats when the monkeys turned the lever instead of pulling it. The monkeys adapted and switched to turning the lever. After a while, the researchers changed the rules once more, and the monkeys again altered their behavior.

Each time the monkeys did not immediately switch actions, but did so only after a few false starts, using the previous four or five trials as a guide. After damage to the anterior cingulate, however, the animals lost this longer-term view and instead used only their most recent success or failure as a guide. Thus, the anterior cingulate seems to control an animal's ability to evaluate a short history of hits and misses as a guide to future decisions.

Chemical Incentive
Such evaluations may depend on dopamine, which conveys success signals in the brain. Neurophysiologist Wolfram Schultz, now at the University of Cambridge, and his colleagues have shown over the past 15 years that dopamine-producing nerve cells alter their activity when a reward is either greater or less than anticipated. When a monkey is rewarded unexpectedly, say, for a correct response, the cells become excited, releasing dopamine, whereas their activity drops when the monkey fails to get a treat after an error. And if dopamine quantity stably altered the connections between nerve cells, its differential release could thereby promote learning from successes and failures.

Indeed, changes in dopamine levels may help to explain how we learn from positive as well as negative reinforcement. Dopamine excites the brain's so-called Go pathway, which promotes a response while also inhibiting the action-suppressing "NoGo" pathway. Thus, bursts of dopamine resulting from positive reinforcement promote learning by both activating the Go channel and blocking NoGo. In contrast, dips in dopamine after negative outcomes should promote avoidance behavior by inactivating the Go pathway while releasing inhibition of NoGo.
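As a minimal sketch of how these opposing pathways could support learning (an illustration only, not Frank's actual computational model; the learning rate and the 'dopamine' values are invented), consider a single action whose Go and NoGo weights are nudged in opposite directions by dopamine bursts and dips:

# Toy Go/NoGo learner (illustration only; parameters are invented).
def update(weights, action, dopamine, lr=0.1):
    # A dopamine burst (+1) strengthens the Go pathway for the chosen action and
    # suppresses NoGo; a dopamine dip (-1) does the opposite.
    go, nogo = weights[action]
    weights[action] = (go + lr * dopamine, nogo - lr * dopamine)

def preference(weights, action):
    go, nogo = weights[action]
    return go - nogo  # net tendency to emit the action

weights = {"press lever": (0.5, 0.5)}
update(weights, "press lever", dopamine=+1)   # unexpected reward
update(weights, "press lever", dopamine=-1)   # reward withheld after an error
update(weights, "press lever", dopamine=-1)
print(round(preference(weights, "press lever"), 2))  # negative: the action is now avoided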

In 2004 psychologist Michael J. Frank, then at the University of Colorado at Boulder, and his colleagues reported evidence for dopamine's influence on learning in a study of patients with Parkinson's disease, who produce too little of the neurotransmitter. Frank theorized that Parkinson's patients may have trouble generating the dopamine needed to learn from positive feedback but that their low dopamine levels may facilitate training based on negative feedback.

In the study the researchers displayed pairs of symbols on a computer screen and asked 19 healthy people and 30 Parkinson's patients to choose one symbol from each pair. The word "correct" appeared whenever a subject had chosen an arbitrarily correct symbol, whereas the word "incorrect" flashed after every "wrong" selection. (No symbol was invariably correct or incorrect.) In one pair, one symbol was deemed correct 80 percent of the time and the other only 20 percent; for the other pairs, the probabilities were 70:30 and 60:40. The subjects were expected to learn from this feedback and thereby increase the number of correct choices in later test runs.

As expected, the healthy people learned to prefer the correct symbols and avoid the incorrect ones with about equal proficiency. Parkinson's patients, on the other hand, showed a stronger tendency to reject negative symbols than to select the positive ones—that is, they learned more from their errors than from their hits, showing that the lack of dopamine did bias their learning in the expected way. In addition, the patients' ability to learn from positive feedback outpaced that from negative feedback after they took medication that boosted brain levels of dopamine, underscoring the importance of dopamine in positive reinforcement.

Dopamine-based discrepancies in learning ability also appear within the healthy population. Last December, along with psychology graduate student Tilmann A. Klein and our colleagues, I showed that such variations are partly based on individual differences in a gene for the D2 dopamine receptor. A variant of this gene, called A1, results in up to a 30 percent reduction in the density of those receptors on nerve cell membranes.

We asked 12 males with the A1 variant and 14 males who had the more common form of this gene to perform a symbol-based learning test like the one Frank used. We found that A1 carriers were less able to remember, and avoid, the negative symbols than were the participants who did not have this form of the gene. The A1 carriers also avoided the negative symbols less often than they picked the positive ones. Noncarriers learned about equally well from the good and bad symbols.

Thus, fewer D2 receptors may impair a person's ability to learn from mistakes or negative outcomes. (This molecular quirk is just one of many factors that influence such learning.) Accordingly, our fMRI results show that the medial frontal cortex of A1 carriers generates a weaker response to errors than it does in other people, suggesting that this brain area is one site at which dopamine exerts its effect on learning from negative feedback.

But if fewer D2 receptors lead to impaired avoidance learning, why do drugs that boost dopamine signaling also lead to such impairments in Parkinson's patients? In both scenarios, dopamine signaling may, in fact, be increased through other dopamine receptors; research indicates that A1 carriers produce an unusually large amount of dopamine, perhaps as a way to compensate for their lack of D2 receptors. Whatever the reason, insensitivity to unpleasant consequences may contribute to the slightly higher rates of obesity, compulsive gambling and addiction among A1 carriers than in the general population.

Foreshadowing Faults
Although learning from mistakes may help us avoid future missteps, inexperience or inattention can still lead to errors. Many such goofs turn out to be predictable, however, foreshadowed by telltale changes in brain metabolism, according to research my team published in April in the Proceedings of the National Academy of Sciences USA.

Along with cognitive neuroscientist Tom Eichele of the University of Bergen in Norway and several colleagues, I asked 13 young adults to perform a flanker task while we monitored their brain activity using fMRI. Starting about 30 seconds before our subjects made an error, we found distinct but gradual changes in the activation of two brain networks.

One of the networks, called the default mode region, is usually more active when a person is at rest and quiets down when a person is engaged in a task. But before an error, the posterior part of this network—which includes the retrosplenial cortex, located near the center of the brain at the surface—became more active, indicating that the mind was relaxing. Meanwhile activity declined in areas of the frontal lobe that spring to life whenever a person is working hard at something, suggesting that the person was also becoming less engaged in the task at hand.

Our results show that errors are the product of gradual changes in the brain rather than unpredictable blips in brain activity. Such adjustments could be used to foretell errors, particularly those that occur during monotonous tasks. In the future, people might wear portable devices that monitor these brain states as a first step toward preventing mistakes where they are most likely to occur—and when they matter most.

Editor's Note: This story was originally published with the title "Minding Mistakes"


http://www.sciam.com/article.cfm?id=minding-mistakes

Monday, August 11, 2008

Are Viruses Alive?

So, are they?

Although viruses challenge our concept of what "living" means, they are vital members of the web of life

By Luis P. Villarreal

In an episode of the classic 1950s television comedy The Honeymooners, Brooklyn bus driver Ralph Kramden loudly explains to his wife, Alice, "You know that I know how easy you get the virus." Half a century ago even regular folks like the Kramdens had some knowledge of viruses—as microscopic bringers of disease. Yet it is almost certain that they did not know exactly what a virus was. They were, and are, not alone.

For about 100 years, the scientific community has repeatedly changed its collective mind over what viruses are. First seen as poisons, then as life-forms, then biological chemicals, viruses today are thought of as being in a gray area between living and nonliving: they cannot replicate on their own but can do so in truly living cells and can also affect the behavior of their hosts profoundly. The categorization of viruses as nonliving during much of the modern era of biological science has had an unintended consequence: it has led most researchers to ignore viruses in the study of evolution. Finally, however, scientists are beginning to appreciate viruses as fundamental players in the history of life.

Coming to Terms

It is easy to see why viruses have been difficult to pigeonhole. They seem to vary with each lens applied to examine them. The initial interest in viruses stemmed from their association with diseases—the word "virus" has its roots in the Latin term for "poison." In the late 19th century researchers realized that certain diseases, including rabies and foot-and-mouth, were caused by particles that seemed to behave like bacteria but were much smaller. Because they were clearly biological themselves and could be spread from one victim to another with obvious biological effects, viruses were then thought to be the simplest of all living, gene-bearing life-forms.

Their demotion to inert chemicals came after 1935, when Wendell M. Stanley and his colleagues, at what is now the Rockefeller University in New York City, crystallized a virus—tobacco mosaic virus—for the first time. They saw that it consisted of a package of complex biochemicals. But it lacked essential systems necessary for metabolic functions, the biochemical activity of life. Stanley shared the 1946 Nobel Prize—in chemistry, not in physiology or medicine—for this work.

Further research by Stanley and others established that a virus consists of nucleic acids (DNA or RNA) enclosed in a protein coat that may also shelter viral proteins involved in infection. By that description, a virus seems more like a chemistry set than an organism. But when a virus enters a cell (called a host after infection), it is far from inactive. It sheds its coat, bares its genes and induces the cell's own replication machinery to reproduce the intruder's DNA or RNA and manufacture more viral protein based on the instructions in the viral nucleic acid. The newly created viral bits assemble and, voilà, more virus arises, which also may infect other cells.

These behaviors are what led many to think of viruses as existing at the border between chemistry and life. More poetically, virologists Marc H. V. van Regenmortel of the University of Strasbourg in France and Brian W. J. Mahy of the Centers for Disease Control and Prevention have recently said that with their dependence on host cells, viruses lead "a kind of borrowed life." Interestingly, even though biologists long favored the view that viruses were mere boxes of chemicals, they took advantage of viral activity in host cells to determine how nucleic acids code for proteins: indeed, modern molecular biology rests on a foundation of information gained through viruses.

Molecular biologists went on to crystallize most of the essential components of cells and are today accustomed to thinking about cellular constituents—for example, ribosomes, mitochondria, membranes, DNA and proteins—as either chemical machinery or the stuff that the machinery uses or produces. This exposure to multiple complex chemical structures that carry out the processes of life is probably a reason that most molecular biologists do not spend a lot of time puzzling over whether viruses are alive. For them, that exercise might seem equivalent to pondering whether those individual subcellular constituents are alive on their own. This myopic view allows them to see only how viruses co-opt cells or cause disease. The more sweeping question of viral contributions to the history of life on earth, which I will address shortly, remains for the most part unanswered and even unasked.

To Be or Not to Be
The seemingly simple question of whether or not viruses are alive, which my students often ask, has probably defied a simple answer all these years because it raises a fundamental issue: What exactly defines "life?" A precise scientific definition of life is an elusive thing, but most observers would agree that life includes certain qualities in addition to an ability to replicate. For example, a living entity is in a state bounded by birth and death. Living organisms also are thought to require a degree of biochemical autonomy, carrying on the metabolic activities that produce the molecules and energy needed to sustain the organism. This level of autonomy is essential to most definitions.

Viruses, however, parasitize essentially all biomolecular aspects of life. That is, they depend on the host cell for the raw materials and energy necessary for nucleic acid synthesis, protein synthesis, processing and transport, and all other biochemical activities that allow the virus to multiply and spread. One might then conclude that even though these processes come under viral direction, viruses are simply nonliving parasites of living metabolic systems. But a spectrum may exist between what is certainly alive and what is not.

A rock is not alive. A metabolically active sack, devoid of genetic material and the potential for propagation, is also not alive. A bacterium, though, is alive. Although it is a single cell, it can generate energy and the molecules needed to sustain itself, and it can reproduce. But what about a seed? A seed might not be considered alive. Yet it has a potential for life, and it may be destroyed. In this regard, viruses resemble seeds more than they do live cells. They have a certain potential, which can be snuffed out, but they do not attain the more autonomous state of life.

Another way to think about life is as an emergent property of a collection of certain nonliving things. Both life and consciousness are examples of emergent complex systems. They each require a critical level of complexity or interaction to achieve their respective states. A neuron by itself, or even in a network of nerves, is not conscious—whole brain complexity is needed. Yet even an intact human brain can be biologically alive but incapable of consciousness, or "brain-dead." Similarly, neither cellular nor viral individual genes or proteins are by themselves alive. The enucleated cell is akin to the state of being brain-dead, in that it lacks a full critical complexity. A virus, too, fails to reach a critical complexity. So life itself is an emergent, complex state, but it is made from the same fundamental, physical building blocks that constitute a virus. Approached from this perspective, viruses, though not fully alive, may be thought of as being more than inert matter: they verge on life.

In fact, in October, French researchers announced findings that illustrate afresh just how close some viruses might come. Didier Raoult and his colleagues at the University of the Mediterranean in Marseille announced that they had sequenced the genome of the largest known virus, Mimivirus, which was discovered in 1992. The virus, about the same size as a small bacterium, infects amoebae. Sequence analysis of the virus revealed numerous genes previously thought to exist only in cellular organisms. Some of these genes are involved in making the proteins encoded by the viral DNA and may make it easier for Mimivirus to co-opt host cell replication systems. As the research team noted in its report in the journal Science, the enormous complexity of the Mimivirus's genetic complement "challenges the established frontier between viruses and parasitic cellular organisms."

Impact on Evolution
Debates over whether to label viruses as living lead naturally to another question: Is pondering the status of viruses as living or nonliving more than a philosophical exercise, the basis of a lively and heated rhetorical debate but with little real consequence? I think the issue is important, because how scientists regard this question influences their thinking about the mechanisms of evolution.

Viruses have their own, ancient evolutionary history, dating to the very origin of cellular life. For example, some viral repair enzymes—which excise and resynthesize damaged DNA, mend oxygen radical damage, and so on—are unique to certain viruses and have existed almost unchanged probably for billions of years.

Nevertheless, most evolutionary biologists hold that because viruses are not alive, they are unworthy of serious consideration when trying to understand evolution. They also look on viruses as coming from host genes that somehow escaped the host and acquired a protein coat. In this view, viruses are fugitive host genes that have degenerated into parasites. And with viruses thus dismissed from the web of life, important contributions they may have made to the origin of species and the maintenance of life may go unrecognized. (Indeed, only four of the 1,205 pages of the 2002 volume The Encyclopedia of Evolution are devoted to viruses.)

Of course, evolutionary biologists do not deny that viruses have had some role in evolution. But by viewing viruses as inanimate, these investigators place them in the same category of influences as, say, climate change. Such external influences select among individuals having varied, genetically controlled traits; those individuals most able to survive and thrive when faced with these challenges go on to reproduce most successfully and hence spread their genes to future generations.

But viruses directly exchange genetic information with living organisms—that is, within the web of life itself. A possible surprise to most physicians, and perhaps to most evolutionary biologists as well, is that most known viruses are persistent and innocuous, not pathogenic. They take up residence in cells, where they may remain dormant for long periods or take advantage of the cells' replication apparatus to reproduce at a slow and steady rate. These viruses have developed many clever ways to avoid detection by the host immune system— essentially every step in the immune process can be altered or controlled by various genes found in one virus or another.

Furthermore, a virus genome (the entire complement of DNA or RNA) can permanently colonize its host, adding viral genes to host lineages and ultimately becoming a critical part of the host species' genome. Viruses therefore surely have effects that are faster and more direct than those of external forces that simply select among more slowly generated, internal genetic variations. The huge population of viruses, combined with their rapid rates of replication and mutation, makes them the world's leading source of genetic innovation: they constantly "invent" new genes. And unique genes of viral origin may travel, finding their way into other organisms and contributing to evolutionary change.

Data published by the International Human Genome Sequencing Consortium indicate that somewhere between 113 and 223 genes present in bacteria and in the human genome are absent in well-studied organisms—such as the yeast Saccharomyces cerevisiae, the fruit fly Drosophila melanogaster and the nematode Caenorhabditis elegans—that lie in between those two evolutionary extremes. Some researchers thought that these organisms, which arose after bacteria but before vertebrates, simply lost the genes in question at some point in their evolutionary history. Others suggested that these genes had been transferred directly to the human lineage by invading bacteria.

My colleague Victor DeFilippis of the Vaccine and Gene Therapy Institute of the Oregon Health and Science University and I suggested a third alternative: viruses may originate genes, then colonize two different lineages—for example, bacteria and vertebrates. A gene apparently bestowed on humanity by bacteria may have been given to both by a virus.

In fact, along with other researchers, Philip Bell of Macquarie University in Sydney, Australia, and I contend that the cell nucleus itself is of viral origin. The advent of the nucleus—which differentiates eukaryotes (organisms whose cells contain a true nucleus), including humans, from prokaryotes, such as bacteria—cannot be satisfactorily explained solely by the gradual adaptation of prokaryotic cells until they became eukaryotic. Rather the nucleus may have evolved from a persisting large DNA virus that made a permanent home within prokaryotes. Some support for this idea comes from sequence data showing that the gene for a DNA polymerase (a DNA-copying enzyme) in the virus called T4, which infects bacteria, is closely related to other DNA polymerase genes in both eukaryotes and the viruses that infect them. Patrick Forterre of the University of Paris-Sud has also analyzed enzymes responsible for DNA replication and has concluded that the genes for such enzymes in eukaryotes probably have a viral origin.

From single-celled organisms to human populations, viruses affect all life on earth, often determining what will survive. But viruses themselves also evolve. New viruses, such as the AIDS-causing HIV-1, may be the only biological entities that researchers can actually witness come into being, providing a real-time example of evolution in action.

Viruses matter to life. They are the constantly changing boundary between the worlds of biology and biochemistry. As we continue to unravel the genomes of more and more organisms, the contributions from this dynamic and ancient gene pool should become apparent. Nobel laureate Salvador Luria mused about the viral influence on evolution in 1959. "May we not feel," he wrote, "that in the virus, in their merging with the cellular genome and reemerging from them, we observe the units and process which, in the course of evolution, have created the successful genetic patterns that underlie all living cells?" Regardless of whether or not we consider viruses to be alive, it is time to acknowledge and study them in their natural context—within the web of life.


 

http://www.sciam.com/article.cfm?id=are-viruses-alive-2004&print=true

Thursday, August 7, 2008

Must read! All who lack sleep!

Interesting.. why do we need sleep? I think the second argument is more likely yea?

Sleep on It: How Snoozing Makes You Smarter

During slumber, our brain engages in data analysis, from strengthening memories to solving problems

By Robert Stickgold and Jeffrey M. Ellenbogen

In 1865 Friedrich August Kekulé woke up from a strange dream: he imagined a snake forming a circle and biting its own tail. Like many organic chemists of the time, Kekulé had been working feverishly to describe the true chemical structure of benzene, a problem that continually eluded understanding. But Kekulé's dream of a snake swallowing its tail, so the story goes, helped him to accurately realize that benzene's structure formed a ring. This insight paved the way for a new understanding of organic chemistry and earned Kekulé a title of nobility in Germany.

Although most of us have not been ennobled, there is something undeniably familiar about Kekulé's problem-solving method. Whether deciding to go to a particular college, accept a challenging job offer or propose to a future spouse, "sleeping on it" seems to provide the clarity we need to piece together life's puzzles. But how does slumber present us with answers?

The latest research suggests that while we are peacefully asleep our brain is busily processing the day's information. It combs through recently formed memories, stabilizing, copying and filing them, so that they will be more useful the next day. A night of sleep can make memories resistant to interference from other information and allow us to recall them for use more effectively the next morning. And sleep not only strengthens memories, it also lets the brain sift through newly formed memories, possibly even identifying what is worth keeping and selectively maintaining or enhancing these aspects of a memory. When a picture contains both emotional and unemotional elements, sleep can save the important emotional parts and let the less relevant background drift away. It can analyze collections of memories to discover relations among them or identify the gist of a memory while the unnecessary details fade—perhaps even helping us find the meaning in what we have learned.

Not Merely Resting
If you find this news surprising, you are not alone. Until the mid-1950s, scientists generally assumed that the brain was shut down while we snoozed. Although German psychologist Hermann Ebbinghaus had evidence in 1885 that sleep protects simple memories from decay, for decades researchers attributed the effect to a passive protection against interference. We forget things, they argued, because all the new information coming in pushes out the existing memories. But because there is nothing coming in while we get shut-eye, we simply do not forget as much.

Then, in 1953, the late physiologists Eugene Aserinsky and Nathaniel Kleitman of the University of Chicago discovered the rich variations in brain activity during sleep, and scientists realized they had been missing something important. Aserinsky and Kleitman found that our sleep follows a 90-minute cycle, in and out of rapid-eye-movement (REM) sleep. During REM sleep, our brain waves—the oscillating electromagnetic signals that result from large-scale brain activity—look similar to those produced while we are awake. And in subsequent decades, the late Mircea Steriade of Laval University in Quebec and other neuroscientists discovered that individual collections of neurons were independently firing in between these REM phases, during periods known as slow-wave sleep, when large populations of brain cells fire synchronously in a steady rhythm of one to four beats each second. So it became clear that the sleeping brain was not merely "resting," either in REM sleep or in slow-wave sleep. Sleep was doing something different. Something active.

Sleep to Remember
The turning point in our understanding of sleep and memory came in 1994 in a groundbreaking study. Neurobiologists Avi Karni, Dov Sagi and their colleagues at the Weizmann Institute of Science in Israel showed that when volunteers got a night of sleep, they improved at a task that involved rapidly discriminating between objects they saw—but only when they had had normal amounts of REM sleep. When the subjects were deprived of REM sleep, the improvement disappeared. The fact that performance actually rose overnight negated the idea of passive protection. Something had to be happening within the sleeping brain that altered the memories formed the day before. But Karni and Sagi described REM sleep as a permissive state—one that could allow changes to happen—rather than a necessary one. They proposed that such unconscious improvements could happen across the day or the night. What was important, they argued, was that improvements could only occur during part of the night, during REM.

It was not until one of us (Stickgold) revisited this question in 2000 that it became clear that sleep could, in fact, be necessary for this improvement to occur. Using the same rapid visual discrimination task, we found that only with more than six hours of sleep did people's performance improve over the 24 hours following the learning session. And REM sleep was not the only important component: slow-wave sleep was equally crucial. In other words, sleep—in all its phases—does something to improve memory that being awake does not do.

To understand how that could be so, it helps to review a few memory basics. When we "encode" information in our brain, the newly minted memory is actually just beginning a long journey during which it will be stabilized, enhanced and qualitatively altered, until it bears only faint resemblance to its original form. Over the first few hours, a memory can become more stable, resistant to interference from competing memories. But over longer periods, the brain seems to decide what is important to remember and what is not—and a detailed memory evolves into something more like a story.

In 2006 we demonstrated the powerful ability of sleep to stabilize memories and provided further evidence against the myth that sleep only passively (and, therefore, transiently) protects memories from interference. We reasoned that if sleep merely provides a transient benefit for memory, then memories after sleep should be, once again, susceptible to interference. We first trained people to memorize pairs of words in an A-B pattern (for example, "blanket-window") and then allowed some of the volunteers to sleep. Later they all learned pairs in an A-C pattern ("blanket-sneaker"), which were meant to interfere with their memories of the A-B pairs. As expected, the people who slept could remember more of the A-B pairs than people who had stayed awake could. And when we introduced interfering A-C pairs, it was even more apparent that those who slept had a stronger, more stable memory for the A-B sets. Sleep changed the memory, making it robust and more resistant to interference in the coming day.

But sleep's effects on memory are not limited to stabilization. Over just the past few years, a number of studies have demonstrated the sophistication of the memory processing that happens during slumber. In fact, it appears that as we sleep, the brain might even be dissecting our memories and retaining only the most salient details. In one study we created a series of pictures that included either unpleasant or neutral objects on a neutral background and then had people view the pictures one after another. Twelve hours later we tested their memories for the objects and the backgrounds. The results were quite surprising. Whether the subjects had stayed awake or slept, the accuracy of their memories dropped by 10 percent for everything. Everything, that is, except for the memory of the emotionally evocative objects after a night of sleep. Instead of deteriorating, memories for the emotional objects actually seemed to improve by a few percent overnight, showing about a 15 percent improvement relative to the deteriorating backgrounds. After a few more nights, one could imagine that little but the emotional objects would be left. We know this culling happens over time with real-life events, but now it appears that sleep may play a crucial role in this evolution of emotional memories.

Precisely how the brain strengthens and enhances memories remains largely a mystery, although we can make some educated guesses at the basic mechanism. We know that memories are created by altering the strengths of connections among hundreds, thousands or perhaps even millions of neurons, making certain patterns of activity more likely to recur. These patterns of activity, when reactivated, lead to the recall of a memory—whether that memory is where we left the car keys or a pair of words such as "blanket-window." These changes in synaptic strength are thought to arise from a molecular process known as long-term potentiation, which strengthens the connections between pairs of neurons that fire at the same time. Thus, cells that fire together wire together, locking the pattern in place for future recall.
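The "fire together, wire together" rule can be made concrete with a toy sketch. This is only a schematic illustration of the weight-strengthening idea; the cell labels, the 0.1 learning rate and the replay counts below are invented, and nothing here models the molecular biology of long-term potentiation.

```python
import itertools

def hebbian_update(weights, active_cells, lr=0.1):
    """Strengthen the connection between every pair of cells that fire together."""
    for a, b in itertools.combinations(sorted(active_cells), 2):
        weights[(a, b)] = weights.get((a, b), 0.0) + lr
    return weights

weights = {}
for _ in range(3):                                 # the A-B pair is reactivated repeatedly
    hebbian_update(weights, {"blanket", "window"})
hebbian_update(weights, {"blanket", "sneaker"})    # the interfering A-C pair, seen once

print(weights)   # the repeatedly co-activated pair ends up with the strongest connection
```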

During sleep, the brain reactivates patterns of neural activity that it performed during the day, thus strengthening the memories by long-term potentiation. In 1994 neuroscientists Matthew Wilson and Bruce McNaughton, both then at the University of Arizona, showed this effect for the first time using rats fitted with implants that monitored their brain activity. They taught these rats to circle a track to find food, recording neuronal firing patterns from the rodents' brains all the while. Cells in the hippocampus—a brain structure critical for spatial memory—created a map of the track, with different "place cells" firing as the rats traversed each region of the track [see "The Matrix in Your Head," by James J. Knierim; Scientific American Mind, June/July 2007]. Place cells correspond so closely to exact physical locations that the researchers could monitor the rats' progress around the track simply by watching which place cells were firing at any given time. And here is where it gets even more interesting: when Wilson and McNaughton continued to record from these place cells as the rats slept, they saw the cells continuing to fire in the same order—as if the rats were "practicing" running around the track in their sleep.
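One toy way to picture what "firing in the same order" means: score how well a sequence recorded during sleep preserves the waking order of the place cells. This is just a sketch with invented cell labels and sequences, not Wilson and McNaughton's actual analysis.

```python
from itertools import combinations

def order_concordance(wake_order, sleep_order):
    """Fraction of cell pairs that keep the same relative order in both sequences
    (1.0 = perfect replay, about 0.5 = chance)."""
    wake_rank  = {cell: i for i, cell in enumerate(wake_order)}
    sleep_rank = {cell: i for i, cell in enumerate(sleep_order)}
    pairs = list(combinations(wake_order, 2))
    same = sum((wake_rank[a] < wake_rank[b]) == (sleep_rank[a] < sleep_rank[b])
               for a, b in pairs)
    return same / len(pairs)

wake  = ["A", "B", "C", "D", "E"]   # order of place-cell firing while running the track
sleep = ["A", "B", "D", "C", "E"]   # order during a candidate "replay" event in sleep
print(order_concordance(wake, sleep))   # 0.9, close to the waking order
```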

As this unconscious rehearsing strengthens memory, something more complex is happening as well—the brain may be selectively rehearsing the more difficult aspects of a task. For instance, Matthew P. Walker's work at Harvard Medical School in 2005 demonstrated that when subjects learned to type complicated sequences such as 4-1-3-2-4 on a keyboard (much like learning a new piano score), sleeping between practice sessions led to faster and more coordinated finger movements. But on more careful examination, he found that people were not simply getting faster overall on this typing task. Instead each subject was getting faster on those particular keystroke sequences at which he or she was worst.

The brain accomplishes this improvement, at least in part, by moving the memory for these sequences overnight. Using functional magnetic resonance imaging, Walker showed that his subjects used different brain regions to control their typing after they had slept. The next day typing elicited more activity in the right primary motor cortex, medial prefrontal lobe, hippocampus and left cerebellum—places that would support faster and more precise key-press movements—and less activity in the parietal cortices, left insula, temporal pole and frontopolar region, areas whose suppression indicates reduced conscious and emotional effort. The entire memory got strengthened, but especially the parts that needed it most, and sleep was doing this work by using different parts of the brain than were used while learning the task.

Solutions in the Dark
These effects of sleep on memory are impressive. Adding to the excitement, recent discoveries show that sleep also facilitates the active analysis of new memories, enabling the brain to solve problems and infer new information. In 2007 one of us (Ellenbogen) showed that the brain learns while we are asleep. The study used a transitive inference task; for example, if Bill is older than Carol and Carol is older than Pierre, the laws of transitivity make it clear that Bill is older than Pierre. Making this inference requires stitching those two fragments of information together. People and animals tend to make these transitive inferences without much conscious thought, and the ability to do so serves as an enormously helpful cognitive skill: we discover new information (Bill is older than Pierre) without ever learning it directly.

The inference seems obvious in Bill and Pierre's case, but in the experiment, we used abstract colored shapes that have no intuitive relation to one another, making the task more challenging. We taught people so-called premise pairs—they learned to choose, for example, the orange oval over the turquoise one, turquoise over green, green over paisley, and so on. The premise pairs imply a hierarchy—if orange is a better choice than turquoise and turquoise is preferred to green, then orange should win over green. But when we tested the subjects on these novel pairings 20 minutes after they learned the premise pairs, they had not yet discovered these hidden relations. They chose green just as often as they chose orange, performing no better than chance.

When we tested subjects 12 hours later on the same day, however, they made the correct choice 70 percent of the time. Simply allowing time to pass enabled the brain to calculate and learn these transitive inferences. And people who slept during the 12 hours performed significantly better, linking the most distant pairs (such as orange versus paisley) with 90 percent accuracy. So it seems the brain needs time after we learn information to process it, connecting the dots, so to speak—and sleep provides the maximum benefit.
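What "connecting the dots" amounts to computationally is a transitive closure over the premise pairs. A minimal sketch, reusing the article's shape names but otherwise invented, chains the trained pairs to answer the untrained ones.

```python
premises = [("orange", "turquoise"),   # orange beats turquoise
            ("turquoise", "green"),
            ("green", "paisley")]

def beats(winner, loser, pairs):
    """True if a chain of premise pairs links winner down to loser."""
    frontier = {winner}
    while frontier:
        frontier = {lose for win, lose in pairs if win in frontier}
        if loser in frontier:
            return True
    return False

print(beats("orange", "green", premises))    # True: one inferential step
print(beats("orange", "paisley", premises))  # True: the most distant pairing
print(beats("green", "orange", premises))    # False
```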

In a 2004 study Ullrich Wagner and others in Jan Born's laboratory at the University of Lübeck in Germany elegantly demonstrated just how powerful sleep's processing of memories can be. They taught subjects how to solve a particular type of mathematical problem by using a long and tedious procedure and had them practice it about 100 times. The subjects were then sent away and told to come back 12 hours later, when they were instructed to try it another 200 times.

What the researchers had not told their subjects was that there is a much simpler way to solve these problems. The researchers could tell if and when subjects gained insight into this shortcut, because their speed would suddenly increase. Many of the subjects did, in fact, discover the trick during the second session. But when they got a night's worth of sleep between the two sessions, they were more than two and a half times more likely to figure it out—59 percent of the subjects who slept found the trick, compared with only 23 percent of those who stayed awake between the sessions. Somehow the sleeping brain was solving this problem, without even knowing that there was a problem to solve.

The Need to Sleep
As exciting findings such as these come in more and more rapidly, we are becoming sure of one thing: while we sleep, our brain is anything but inactive. It is now clear that sleep can consolidate memories by enhancing and stabilizing them and by finding patterns within studied material even when we do not know that patterns might be there. It is also obvious that skimping on sleep stymies these crucial cognitive processes: some aspects of memory consolidation only happen with more than six hours of sleep. Miss a night, and the day's memories might be compromised—an unsettling thought in our fast-paced, sleep-deprived society.

But the question remains: Why did we evolve in such a way that certain cognitive functions happen only while we are asleep? Would it not seem to make more sense to have these operations going on in the daytime? Part of the answer might be that the evolutionary pressures for sleep existed long before higher cognition—functions such as immune system regulation and efficient energy usage (for instance, hunt in the day and rest at night) are only two of the many reasons it makes sense to sleep on a planet that alternates between light and darkness. And because we already had evolutionary pressure to sleep, the theory goes, the brain evolved to use that time wisely by processing information from the previous day: acquire by day; process by night.

Or it might have been the other way around. Memory processing seems to be the only function of sleep that actually requires an organism to truly sleep—that is, to become unaware of its surroundings and stop processing incoming sensory signals. This unconscious cognition appears to demand the same brain resources used for processing incoming signals when awake. The brain, therefore, might have to shut off external inputs to get this job done. In contrast, although other functions such as immune system regulation might be more readily performed when an organism is inactive, there does not seem to be any reason why the organism would need to lose awareness. Thus, it may be these other functions that have been added to take advantage of the sleep that had already evolved for memory.

Many other questions remain about our nighttime cognition, however it might have evolved. Exactly how does the brain accomplish this memory processing? What are the chemical or molecular activities that account for these effects? These questions raise a larger issue about memory in general: What makes the brain remember certain pieces of information and forget others? We think the lesson here is that understanding sleep will ultimately help us to better understand memory.

The task might seem daunting, but these puzzles are the kind on which scientists thrive—and they can be answered. First, we will have to design and carry out more and more experiments, slowly teasing out answers. But equally important, we are going to have to sleep on it.

Note: This article was originally published with the title, "Quiet! Sleeping Brain at Work."

HA i knew viruses were alive! not that this makes it any clearer -.-

'Virophage' suggests viruses are alive

Evidence of illness enhances case for life.

The discovery of a giant virus that falls ill through infection by another virus [1] is fuelling the debate about whether viruses are alive.

“There’s no doubt this is a living organism,” says Jean-Michel Claverie, a virologist at the CNRS UPR laboratories in Marseilles, part of France’s basic-research agency. “The fact that it can get sick makes it more alive.”

Giant viruses have been captivating virologists since 2003, when a team led by Claverie and Didier Raoult at CNRS UMR, also in Marseilles, reported the discovery of the first monster [2]. The virus had been isolated more than a decade earlier in amoebae from a cooling tower in Bradford, UK, but was initially mistaken for a bacterium because of its size, and was relegated to the freezer.

Closer inspection showed the microbe to be a huge virus with, as later work revealed, a genome harbouring more than 900 protein-coding genes [3] — at least three times more than that of the biggest previously known viruses and bigger than that of some bacteria. It was named Acanthamoeba polyphaga mimivirus (for mimicking microbe), and is thought to be part of a much larger family. “It was the cause of great excitement in virology,” says Eugene Koonin at the National Center for Biotechnology Information in Bethesda, Maryland. “It crossed the imaginary boundary between viruses and cellular organisms.”

Now Raoult, Koonin and their colleagues report the isolation of a new strain of giant virus from a cooling tower in Paris, which they have named mamavirus because it seemed slightly larger than mimivirus. Their electron microscopy studies also revealed a second, small virus closely associated with mamavirus that has earned the name Sputnik, after the first man-made satellite.

With just 21 genes, Sputnik is tiny compared with its mama — but insidious. When the giant mamavirus infects an amoeba, it uses its large array of genes to build a ‘viral factory’, a hub where new viral particles are made. Sputnik infects this viral factory and seems to hijack its machinery in order to replicate. The team found that cells co-infected with Sputnik produce fewer and often deformed mamavirus particles, making the virus less infective. This suggests that Sputnik is effectively a viral parasite that sickens its host — seemingly the first such example.

The team suggests that Sputnik is a ‘virophage’, much like the bacteriophage viruses that infect and sicken bacteria. “It infects this factory like a phage infects a bacterium,” Koonin says. “It’s doing what every parasite can — exploiting its host for its own replication.”

Sputnik’s genome reveals further insight into its biology. Although 13 of its genes show little similarity to any other known genes, three are closely related to mimivirus and mamavirus genes, perhaps cannibalized by the tiny virus as it packaged up particles sometime in its history. This suggests that the satellite virus could perform horizontal gene transfer between viruses — paralleling the way that bacteriophages ferry genes between bacteria.

The findings may have global implications, according to some virologists. A metagenomic study of ocean water [4] has revealed an abundance of genetic sequences closely related to giant viruses, leading to a suspicion that they are a common parasite of plankton. These viruses had been missed for many years, Claverie says, because the filters used to remove bacteria screened out giant viruses as well. Raoult’s team also found genes related to Sputnik’s in an ocean-sampling data set, so this could be the first of a new, common family of viruses. “It suggests there are other representatives of this viral family out there in the environment,” Koonin says.

By regulating the growth and death of plankton, giant viruses — and satellite viruses such as Sputnik — could be having major effects on ocean nutrient cycles and climate. “These viruses could be major players in global systems,” says Curtis Suttle, an expert in marine viruses at the University of British Columbia in Vancouver.

“I think ultimately we will find a huge number of novel viruses in the ocean and other places,” Suttle says — 70% of viral genes identified in ocean surveys have never been seen before. “It emphasizes how little is known about these organisms — and I use that term deliberately.”

References

1. La Scola, B. et al. Nature doi:10.1038/nature07218 (2008).
2. La Scola, B. et al. Science 299, 2033 (2003).
3. Raoult, D. et al. Science 306, 1344–1350 (2004).
4. Monier, A., Claverie, J.-M. & Ogata, H. Genome Biol. 9, R106 (2008).
http://www.nature.com/news/2008/080806/full/454677a.html?s=news_rss

Wednesday, August 6, 2008

Synaesthesia: a world of wonders

People with synaesthesia can't help but get two sensory perceptions for the price of one. Some perceive colours when they hear words or musical notes, or read numbers; rarer individuals can even get tastes from shapes.


Neuroscientists have now reported another variant [1], in which flashes and moving images trigger the perception of sounds. The finding could help to identify the precise neural causes of the phenomenon, reportedly experienced by at least one in every hundred people, and suggests that at least some types of synaesthesia are closely related to ordinary perception.

"This [study] will make a big impact," says synaesthesia expert Edward Hubbard of France's biomedical research organization INSERM. "It will affect not just the synaesthesia community, but also researchers interested in how the brain handles information from multiple senses."

Neuroscientist Melissa Saenz of the California Institute of Technology (Caltech) in Pasadena stumbled across the variant last year while giving a group of undergraduate students a tour of her perception research lab. In front of a silent display designed to evoke activity in the motion processing centre of the visual cortex, one of the students asked: "Does anybody else hear something?"

The student, Johannes Pulst-Korenberg, reported hearing a distinct whooshing sound when he watched the display. "Everybody was looking at me, like, 'Are you crazy?'" he remembers.

Saenz could find no description of this variant of synaesthesia in the scientific literature. But to her surprise, after screening several hundred people in the Caltech community, she found three more who reported a similar experience.

"They're generally soft sounds, but they can't be ignored, even when they're distracting," says Saenz. "One of the synaesthetes told me that the moving images on computer screen savers are terribly annoying to her. She can't do anything about it but look away."

Pulst-Korenberg, who is pursuing a doctorate in neuroscience and economics at Caltech, says that he has experienced the same effect watching a butterfly fly. "For some reason, the jerky motion generates little clicks," he says.

The sound of change

Saenz and her lab director, Christof Koch, confirmed the four cases with a test that gave true 'hearing–motion' synaesthetes a distinct advantage. They asked 14 people, including the 4 with synaesthesia, to watch two quick sequences of Morse-code-like flashes, and then determine whether the sequences were the same or subtly different. As they were able to 'hear' the sequences too, the synaesthetes could distinguish them much more accurately than the 10 people without synaesthesia. When the sequences instead comprised auditory beeps, the synaesthetes again performed well, but having lost their advantage they scored no better than the non-synaesthete controls.
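The structure of the test is easy to sketch as a same/different trial: show one Morse-code-like flash sequence, then either repeat it or subtly change one element. The parameters below are invented for illustration, not Saenz and Koch's actual stimulus code.

```python
import random

def make_sequence(length=8):
    """A random sequence of short and long flashes."""
    return [random.choice(["short", "long"]) for _ in range(length)]

def make_trial(p_same=0.5):
    first = make_sequence()
    if random.random() < p_same:
        return first, list(first), "same"
    second = list(first)
    i = random.randrange(len(second))                        # flip one flash, subtly
    second[i] = "long" if second[i] == "short" else "short"
    return first, second, "different"

first, second, answer = make_trial()
print(first)
print(second)
print(answer)   # a synaesthete effectively gets an auditory copy of each sequence to compare
```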

Some neuroscientists attribute synaesthesia to remnant cross-links between closely-spaced cortical areas — links that develop in the early stages of life but are usually pruned away during childhood. In letter-to-colour synaesthesia, for example, the relevant cortical area for recognizing letters turns out to be immediately adjacent to the one for perceiving colour. Another leading theory explains the condition as an excess of feedback signals from multi-sensory regions, where perceptions are usually integrated, down to single-sensory areas.

Brain-imaging studies have provided evidence for both theories, says Hubbard. "But I don't think the type of synaesthesia that Melissa has discovered really fits neatly into either one."

Instead, it might be an enhanced form of the fast, cross-cortical correspondences the brain makes all the time, he suggests. "For example, we find it easier to understand what someone is saying if we can also see their mouth move. So the brain is constantly integrating audition and vision." Delineating how the brain performs this ordinary integration, says Hubbard, "is probably a part of what will be required if we're to explain the type of synaesthesia that Melissa has discovered here."

If hearing–motion synaesthesia is closely related to ordinary cross-sensory correspondence, it might also explain why it has taken so long to discover. "In real life," says Saenz, "things that move or flash usually do make a sound, so that association is more logical than, say, numbers-to-colours."


 

http://www.nature.com/news/2008/080805/full/news.2008.1014.html?s=news_rss

Tuesday, August 5, 2008

OOO monster…


 

Nothing like a bizarre-looking sea "monster" to draw crowds to a tony resort town. The blogosphere has been abuzz since Gawker.com early this week featured a story and photo of a bulky hairless corpse with sharp teeth and a snout that reportedly washed up in Montauk on the eastern tip of Long Island, N.Y. Another Big Foot or Loch Ness Monster, perhaps?

The report of the cryptid was picked up by Fox News, CNN and other TV nets, magazines and newspapers as far away as London, all hungry for a hot story to spice up the summer news doldrums.

"We were looking for a place to sit when we saw some people looking at something," Jenna Hewitt, 26, told Newsday. "We were kind of amazed, shocked and amazed." Hewitt was among a bunch of locals who insist they saw the odd-looking corpse. Most of those weighing in on the creature's identity subscribed to the theory that it was a dog. (A pit bull was the prevailing favorite.)

Others, some who only saw the snapshot, speculated it may have been a raccoon or, perhaps, a sea turtle that lost its shell.

But we may never know for sure. It seems, you see, that the body has been moved. And nobody (at least nobody talking) knows by whom—or where it was taken.

"They say an old guy came and carted it away," Hewitt said. "He said, 'I'm going to mount it on my wall.'"

Charming.


 

http://www.sciam.com/blog/60-second-science/post.cfm?id=mystery-of-the-montauk-monster-2008-08-01&sc=rss