Wednesday, September 24, 2008

Cool stuff XD

Looking Vs. Seeing

Hafed designed a series of experiments where the subjects had to infer the invisible center of a visual target consisting of two peripheral features and track it for several seconds. (Credit: Image courtesy of Salk Institute)

ScienceDaily (Sep. 23, 2008) — The superior colliculus has long been thought of as a rapid orienting center of the brain that allows the eyes and head to turn swiftly either toward or away from the sights and sounds in our environment. Now a team of scientists at the Salk Institute for Biological Studies has shown that the superior colliculus does more than send out motor control commands to eye and neck muscles.

Two complementary studies, both led by Richard Krauzlis, Ph.D., an associate professor in the Systems Neurobiology Laboratory at the Salk Institute, have revealed that the superior colliculus performs supervisory functions in addition to the motor control it has long been known for. The results are published in the Aug. 6 and Sept. 17 issues of the Journal of Neuroscience.

"Beyond its classic role in motor control, the primate superior colliculus signals to other brain areas the location of behaviorally relevant visual objects by providing a 'neural pointer' to these objects," says Krauzlis.

The superior colliculus is currently under renewed scrutiny because recent findings have suggested that it does more than help orient the head and eyes toward something seen or heard. Results hinted that the superior colliculus might play a role in analyzing the current environment and deciding whether one specific aspect is worth paying closer attention to than another. Definitive proof, however, has been lacking.

The Salk scientists adopted a more "naturalistic" approach in their experiments to understand this role of the superior colliculus. Historically, physiological studies of eye movement control have relied on individual spots of light representing visual targets, but the real world is much more complex than a single dot on a computer screen. "For example, we can smoothly track a large airplane, with all its intricate visual details, by directing our gaze at its center," explains Ziad Hafed, Ph.D., Sloan-Swartz Fellow in the Systems Neurobiology Laboratory and lead author on both studies. "At night, we might only be able to see the strobe lights on the wing tips, but we are still able to track the object's invisible center."

Hafed designed a series of experiments where the subjects had to infer the invisible center of a visual target consisting of two peripheral features — much like the above airplane's strobe lights in the night sky — and track it for several seconds (http://www.cnl.salk.edu/~zhafed/tracking.mov) or fixate on a stationary dot while the peripheral features were moving back and forth (http://www.cnl.salk.edu/~zhafed/fixation.mov). (The green crosshair indicates the subject's eye position.)

For one study, the Salk researchers recorded the activity of single neurons in the superior colliculus while the subjects either fixated on the stationary dot or tracked the invisible center of the moving object. "The SC contains a topographic map of the visual space around us just as conventional maps mirror geographical areas," explains Hafed. "This allowed us to record either from peripheral neurons, representing one of the 'wing tips,' or central neurons, representing the foveal location of the invisible center that was tracked," he adds. (The fovea, which is responsible for sharp, central vision, is located in the center of the macular region of the retina, while peripheral vision occurs outside the center of our gaze.)

Surprisingly, the central neurons were the most active during this tracking behavior, despite the lack of a visual stimulus in the center of gaze. "These neurons highlighted the behavioral importance of the location of the invisible center, because it is this location that was the most important for the subjects to successfully track the object," says Krauzlis (http://www.cnl.salk.edu/~zhafed/rostral_neuron_track.mov). When the subjects ignored the invisible center, the same neurons were significantly less active (http://www.cnl.salk.edu/~zhafed/rostral_neuron_fix.mov).

As part of the second study, the Salk researchers, in collaboration with Laurent Goffart, Ph.D., a professor at the Institut de Neurosciences Cognitives de la Méditerranée in Marseille, France, temporarily inactivated a subset of superior colliculus neurons and analyzed the resulting changes in tracking performance. While the subjects still tracked well, their gaze consistently and predictably shifted away from the center, demonstrating clearly that the superior colliculus is essential for defining the object location (http://www.cnl.salk.edu/~zhafed/sample_inactivation.mov).

"By showing that the SC is not just a motor map, but also a map of behaviorally relevant object locations, our results provide a conceptual framework for understanding the role of the SC in non-motor functions such as visual attention and the functional links between motor control and sensory processing," says Hafed.

Zen Meditation reduces stress. Sort of..

Turns out that Zen meditation, a practice of detachment from one's emotional state, is beneficial not just to your spiritual well-being but also to your emotional well-being.

Step Back To Move Forward Emotionally, Study Suggests

ScienceDaily (Sep. 24, 2008) — When you're upset or depressed, should you analyze your feelings to figure out what's wrong? Or should you just forget about it and move on?

New research suggests a solution to these questions and to a related psychological paradox: Processing emotions is supposed to facilitate coping, but attempts to understand painful feelings often backfire and perpetuate or strengthen negative moods and emotions.*

The solution is not denial or distraction. According to University of Michigan psychologist Ethan Kross, the best way to move ahead emotionally is to analyze one's feelings from a psychologically distanced perspective.

With University of California, Berkeley, colleague Ozlem Ayduk, Kross has conducted a series of studies that provide the first experimental evidence of the benefits of analyzing depressive feelings from a psychologically distanced perspective.

"We aren't very good at trying to analyze our feelings to make ourselves feel better," said Kross, a faculty associate at the U-M Institute for Social Research (ISR) and an assistant professor of psychology. "It's an invaluable human ability to think about what we do, but reviewing our mistakes over and over, re-experiencing the same negative emotions we felt the first time around, tends to keep us stuck in negativity. It can be very helpful to take a sort of mental time-out, to sit back and try to review the situation from a distance."

This approach is widely associated with eastern philosophies such as Buddhism and Taoism, and with practices like Transcendental Meditation. But according to Kross, anyone can do it with a little practice.

"Using a thermostat metaphor is helpful to many people. When negative emotions become overwhelming, simply dial the emotional temperature down a bit in order to think about the problem rationally and clearly," he said.

Kross, who is teaching a class on self-control this fall at U-M, has published two papers on the topic this year. One provides experimental evidence that self-distancing techniques improve cardiovascular recovery from negative emotions. Another shows that the technique helps protect against depression.

In the July 2008 issue of Personality and Social Psychology Bulletin, Kross and Ayduk randomly assigned 141 participants to one of three groups that required them to focus (or not focus) on their feelings using different strategies in a guided imagery exercise that led them to recall an experience that made them feel overwhelmed by sadness and depression.

In the immersed-analysis condition, participants were told, "Go back to the time and place of the experience, and relive the situation as if it were happening to you all over again…try to understand the emotions that you felt as the experience unfolded…why did you have those feelings? What were the underlying causes and reasons?"

In the distanced-analysis condition, they were told, "Go back to the time and place of the experience…take a few steps back and move away from your experience…watch the experience unfold as if it were happening all over again to the distant you… try to understand the emotions that the distant you felt as the experience unfolded…why did he (she) have those feelings? What were the underlying causes and reasons?"

In the distraction condition, participants were asked to think about a series of non-emotional facts that were unrelated to their recalled depression experience. Among the statements: "Pencils are made with graphite" and "Scotland is north of England."

After the experience, participants completed a questionnaire asking how they felt at the moment, and wrote a stream-of-thought essay about their thoughts during the memory recall phase of the experiment.

Immediately after the session, those who used the distanced-analysis approach reported lower levels of depression than those who used immersed-analysis, but not lower than those who used distraction; in the short term, distraction and distanced-analysis were equally effective. Participants then returned to the lab either one day or one week later. At that time, they were asked to think about the same sad or depressing experience, and their mood was reassessed.

Those who had used the distanced-analysis approach continued to show lower levels of depression than those who had used self-immersed analysis and distraction, providing evidence to support the hypothesis that distanced-analysis not only helps people cope with intense feelings adaptively in the short term, but critically also helps people work through negative experiences over time.

In a related study, published earlier this year in Psychological Science, Ayduk and Kross showed that participants who adopted a self-distanced perspective while analyzing feelings surrounding a time when they were angry showed smaller increases in blood pressure than those who used a self-immersed approach.

In future research, Kross plans to investigate whether self-distancing is helpful in coping with other types of emotions, including anxiety, and the best ways of teaching people how to engage in self-distanced analysis as they proceed with their lives, not just when they are asked to recall negative experiences in a laboratory setting.

*The studies were supported by funding from the National Institutes of Health.

Tuesday, September 23, 2008

Tree produces electricity

Interesting. I was thinking along the lines of linking up 200 million trees as a power source though.. I wonder what the practicality of that is. Obviously the cost would be a killer, but what if the price drops?

Preventing Forest Fires With Tree Power: Sensor System Runs On Electricity Generated By Trees

ScienceDaily (Sep. 23, 2008) — MIT researchers and colleagues are working to find out whether energy from trees can power a network of sensors to prevent spreading forest fires.

What they learn also could raise the possibility of using trees as silent sentinels along the nation's borders to detect potential threats such as smuggled radioactive materials.

The U.S. Forest Service currently predicts and tracks fires with a variety of tools, including remote automated weather stations. But these stations are expensive and sparsely distributed. Additional sensors could save trees by providing better local climate data to be used in fire prediction models and earlier alerts. However, manually recharging or replacing batteries at often very hard-to-reach locations makes this impractical and costly.

The new sensor system seeks to avoid this problem by tapping into trees as a self-sustaining power supply. Each sensor is equipped with an off-the-shelf battery that can be slowly recharged using electricity generated by the tree. A single tree doesn't generate a lot of power, but over time the "trickle charge" adds up, "just like a dripping faucet can fill a bucket over time," said Shuguang Zhang, one of the researchers on the project and the associate director of MIT's Center for Biomedical Engineering (CBE).
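The "dripping faucet" arithmetic behind trickle charging is easy to sketch. The numbers below are made-up assumptions for illustration only (the article gives no figures for the tree's actual output), but the point stands: a tiny steady current still fills a battery, given enough time.

```python
# Back-of-the-envelope "dripping faucet" arithmetic for a trickle
# charger. All figures here are illustrative assumptions, not
# measurements from the MIT/Voltree system.

def hours_to_charge(capacity_mah: float, trickle_current_ma: float,
                    efficiency: float = 0.5) -> float:
    """Hours needed to fill a battery from a steady trickle current."""
    return capacity_mah / (trickle_current_ma * efficiency)

# Assume (hypothetically) a tree yields ~0.1 mA of usable current and
# we charge a small 100 mAh sensor battery at 50% conversion efficiency.
hours = hours_to_charge(capacity_mah=100, trickle_current_ma=0.1)
print(round(hours / 24, 1))  # → 83.3 (days to a full charge)
```

Slow, but for a sensor that only transmits four times a day, a charger that keeps pace with the drain is all that is required.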

The system produces enough electricity to allow the temperature and humidity sensors to wirelessly transmit signals four times a day, or immediately if there's a fire. Each signal hops from one sensor to another, until it reaches an existing weather station that beams the data by satellite to a forestry command center in Boise, Idaho.

Scientists have long known that trees can produce extremely small amounts of electricity. But no one knew exactly how the energy was produced or how to take advantage of the power.

In a recent issue of the Public Library of Science ONE, Zhang and MIT colleagues report the answer. "It's really a fairly simple phenomenon: An imbalance in pH between a tree and the soil it grows in," said Andreas Mershin, a postdoctoral associate at the CBE. The first author of the paper is Christopher J. Love, an MIT senior in chemistry who has been working on the project since his freshman year.

To solve the puzzle of where the voltage comes from, the team had to test a number of theories - many of them exotic. That meant a slew of experiments that showed, among other things, that the electricity was not due to a simple electrochemical redox reaction (the type that powers the 'potato batteries' common in high school science labs, http://en.wikipedia.org/wiki/Lemon_battery). The team also ruled out coupling to underground power lines, radio waves and other electromagnetic interference as the source.

Testing of the wireless sensor network, which is being developed by Voltree Power, is slated to begin in the spring on a 10-acre plot of land provided by the Forest Service.

According to Love, who with Mershin has a financial interest in Voltree, the bioenergy harvester battery charger module and sensors are ready. "We expect that we'll need to instrument four trees per acre," he said, noting that the system is designed for easy installation by unskilled workers.

"Right now we're finalizing exactly how the wireless sensor network will be configured to use the minimum amount of power," he concluded.

The original experiments were funded by MagCap Engineering, LLC, through MIT's Undergraduate Research Opportunities Program.

Friday, September 19, 2008

Muscle stem cells exist! -.-

And i remembered certain teachers telling me that muscle cells do not regenerate when lost. -.-

Muscle Stem Cell Identity Confirmed By Researchers

ScienceDaily (Sep. 19, 2008) — A single cell can repopulate damaged skeletal muscle in mice, say scientists at the Stanford University School of Medicine, who devised a way to track the cell's fate in living animals. The research is the first to confirm that so-called satellite cells encircling muscle fibers harbor an elusive muscle stem cell.

Identifying and isolating such a cell in humans would have profound therapeutic implications for disorders such as muscular dystrophy, injury and muscle wasting due to aging, disuse or disease.

"We were able to show at the single-cell level that these cells are true, multipotent stem cells," said Helen Blau, PhD, the Donald E. and Delia B. Baxter Professor of Pharmacology. "They fit the classic definition: they can both self-renew and give rise to specialized progeny." Blau is the senior author of the research, which will be published Sept. 17 in the online issue of Nature.

"We are thrilled with the results," said Alessandra Sacco, PhD, senior research scientist in Blau's laboratory and first author of the research. "It's been known that these satellite cells are crucial for the regeneration of muscle tissue, but this is the first demonstration of self-renewal of a single cell."

One-tenth of the body's mass is skeletal muscle. Satellite cells hang out between a muscle fiber and its thin, membrane-like sheath, waiting to spring into action when the fiber is damaged by exercise or trauma. When necessary, they begin to divide to make more specialized muscle cells. This property alone, however, doesn't qualify them as stem cells. That designation requires them to be able to also make copies of themselves for future use.

Although many researchers suspected that the satellite cell population included muscle stem cells, it was difficult to prove because not all satellite cells are identical. It was possible that one subpopulation was responsible for making lots of specialized muscle cells, while another replenished the supply of satellite cells.

This divide-and-conquer approach might be efficient, but doesn't have the same exciting clinical applications as identifying a true stem cell. However, analyzing the specific properties of a single cell is technically difficult, and usually requires hundreds of hours of painstaking microscopic analysis of tissue slices from many laboratory animals.

Sacco used a trick to overcome these hurdles. She isolated satellite cells from a mouse genetically engineered to express a glowing protein, luciferase, first identified in fireflies. She then used a novel imaging technique developed at Stanford to follow their fate after transplantation into living animals that did not express the protein. Because this non-invasive method allows repeated imaging of the same animal, fewer mice are needed for the research.

"To be able to detect the presence of the cells by bioluminescence was really a breakthrough," said Blau, the director of the Baxter Laboratory of Genetic Pharmacology. "It taught us so much more. We could see how the cells were responding, and really monitor their dynamics."

Sacco transplanted a single satellite cell expressing the glowing protein into the hind leg muscles of each of 144 mice; in six of the mice, these cells went on to proliferate and self-renew in the recipient's existing muscle. The relatively low success rate is most likely due in part to the fact that not all of the satellite cells are stem cells and also to the difficulty of keeping a lone cell alive and happy during isolation and transplantation.

The leg muscles of these six mice were repopulated with between 20,000 and 80,000 glowing progeny of the original satellite cell. Many cells made new muscle fibers or contributed to the recipient's muscle fibers. Most exciting, several of the glowing cells expressed cell markers specific only to satellite cells, indicating the original cell was also making more copies of itself and confirming that it was a stem cell.

In another set of experiments, Sacco and her colleagues transplanted between 10 and 500 satellite cells expressing the glowing protein into each mouse leg muscle. These cells also engrafted and proliferated extensively, increasing approximately a hundredfold in number after transplantation and a hundredfold more in response to muscle damage. They contributed extensively to the recipient's muscle, both by forming new fibers and by fusing with injured fibers. Furthermore, once the need for reinforcements had been met, the satellite stem cells stopped proliferating; that is, unlike tumor cells, the transplanted cells were responsive to local cues.

Finally, the researchers were able to induce a second and third wave of proliferation of the glowing satellite cells with repeated incidences of damage, showing that the stem cell function persisted over time.

"Now we can monitor the same mouse over time, and see how various treatments affect muscle regeneration," said Sacco. She and her collaborators are now turning their attention to isolating similar muscle stem cells from humans.

In addition to visually following the fate of the glowing cells, researchers can also use the intensity of the signal to assess the speed and strength of the stem cells' rescue response under a variety of conditions - an important feature that will allow researchers to directly compare the function of putative stem cells in a variety of injury and disease models.

"This technique provides the first quantitative way to compare stem cells in solid tissues," said Blau. "By providing a means of assessing the efficacy of a range of stem cell therapies in a variety of tissues, I think it will greatly impact not only the study of muscle stem cells in regenerative medicine, but also the stem cell field in general."

Sacco and Blau's Stanford collaborators included Regis Doyonnas, PhD, senior scientist; Peggy Kraft, research assistant; and Stefan Vitorovic, a research assistant and Stanford undergraduate student. The research was funded by the National Institutes of Health and by the Baxter Foundation.

Thursday, September 18, 2008

where command economy fails and market economy succeeds

Although i always believed that cooperation will always beat competition, if you think about it, without competition, this would never have been possible..

From Xbox To T-cells: Borrowing Video Game Technology To Model Human Biology

Within a few minutes, the GPU-driven software developed by the Michigan Tech team provides a 3-D model of the human immune response to a TB infection. (Credit: Image courtesy of Michigan Technological University)

ScienceDaily (Sep. 18, 2008) — A team of researchers at Michigan Technological University is harnessing the computing muscle behind the leading video games to understand the most intricate of real-life systems.

Led by Roshan D'Souza, the group has supercharged agent-based modeling, a powerful but computationally massive forecasting technique, by using graphics processing units (GPUs), which drive the spectacular imagery beloved of video gamers. In particular, the team aims to model complex biological systems, such as the human immune response to a tuberculosis bacterium.

Computer science student Mikola Lysenko, who wrote the software, demonstrates. On his computer monitor, a swarm of bright green immune cells surrounds and contains a yellow TB germ. These busy specks look like 3D animations from a PBS documentary, but they are actually virtual T-cells and macrophages—the visual reflection of millions of real-time calculations.

"I've been asked if we ran this on a supercomputer or if it's a movie," says D'Souza, an assistant professor of mechanical engineering–engineering mechanics. He notes that their model is several orders of magnitude faster than state-of-the-art agent modeling toolkits. According to the researchers, however, this current effort is small potatoes.

"We can do it much bigger," says D'Souza. "This is nowhere near as complex as real life." Next, he hopes to model how a TB infection could spread from the lung to the patient's lymphatic system, blood and vital organs.

Dr. Denise Kirschner, of the University of Michigan in Ann Arbor, developed the TB model and gave it to D'Souza's team, which programmed it into a graphics processing unit. Agent-based modeling hasn't replaced test tubes, she says, but it is providing a powerful new tool for medical research.

Computer models offer significant advantages. "You can create a mouse that's missing a gene and see how important that gene is," says Kirschner. "But with agent-based modeling, we can knock out two or three genes at once." In particular, agent-based modeling allows researchers to do something other methodologies can't: virtually test the human response to serious insults, such as injury and infection.

While agent-based modeling may never replace the laboratory entirely, it could reduce the number of dead-end experiments. "It really helps scientists focus their thinking," Kirschner said. "The limiting factor has been that these models take a long time to run, and [D'Souza's] method works very quickly and efficiently," she said.

Dr. Gary An, a surgeon specializing in trauma and critical care in Northwestern University's Feinberg School of Medicine, is a pioneer in the use of agent-based modeling to understand another matter of life and death: sepsis. With billions of agents, including a variety of cells and bacteria, these massive, often fatal infections have been too complex to model economically on a large scale, at least until now.

"The GPU technology may make this possible," said An. "This is very interesting stuff, and I'm excited about it."

About agent-based modeling

Agent-based modeling simulates the behaviors of complex systems. It can be used to predict the outcomes of anything from pandemics to the price of pork bellies. It is, as the name suggests, based on individual agents: e.g., sick people and well people, predators and prey, etc. It applies rules that govern how those agents behave under various conditions, sets them loose, and tracks how the system changes over time. The outcomes are unpredictable and can be as surprising as real life.
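The loop just described (define agents, give them rules, let them interact, watch the system evolve) is compact enough to sketch. Here is a minimal, hypothetical epidemic toy in Python; the infection and recovery rules and every parameter value are invented for illustration and are not the Michigan Tech TB model.

```python
import random

# A toy agent-based epidemic on a line of agents: "sick" agents infect
# "well" neighbors with some probability each tick, and recover after a
# fixed number of ticks. All parameters are arbitrary illustrations.

def step(states, p_infect=0.3, recover_after=5):
    """Advance the model one tick. states[i] = ticks sick (0 = well)."""
    new = list(states)
    for i, s in enumerate(states):
        if s > 0:  # sick agent: age the infection, or recover
            new[i] = 0 if s >= recover_after else s + 1
            for j in (i - 1, i + 1):  # try to infect line neighbors
                if 0 <= j < len(states) and states[j] == 0:
                    if random.random() < p_infect:
                        new[j] = 1
    return new

random.seed(1)
states = [0] * 20
states[10] = 1  # one initially sick agent
for _ in range(15):
    states = step(states)
print(sum(1 for s in states if s > 0), "agents currently sick")
```

Even this tiny rule set produces waves of infection and recovery that are hard to predict from the rules alone, which is exactly the point of the technique; GPUs enter the picture because each agent's update is independent and can run in parallel.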
Agent-based modeling has been around since the 1950s, but the process has always been handicapped by a shortage of computing power. Until recently, the only way to run large models quickly was on multi-million-dollar supercomputers, a costly proposition.

D'Souza's team sidestepped the problem by using GPUs, which can run models with tens of millions of agents with blazing speed.

"With a $1,400 desktop, we can beat a computing cluster," says D'Souza. "We are effectively democratizing supercomputing and putting these powerful tools into the hands of any researcher. Every time I present this research, I make it a point to thank the millions of video gamers who have inadvertently made this possible."

The Tech team also looks forward to applying their model in other ways. "We can do very complex ecosystems right now," said Ryan Richards, a computer science senior. "If you're looking at epidemiology, we could easily simulate an epidemic in the US, Canada and Mexico."

"GPUs are very difficult to program. It is completely different from regular programming," said D'Souza, who deflects credit to the students. "All of this work was done by CS undergrads, and they are all from Michigan Tech. I've had phenomenal success with these guys—you can't put a price tag on it."

D'Souza's work was supported by a grant from the National Science Foundation. In addition to Lysenko and Richards, computer science undergraduate Nick Smolinske also contributed to the research.

Tuesday, September 16, 2008

I can't believe this

ok, i don't wanna do this, but i just feel like the school just plain wanted to mock me. -.-

"some of the year 6 students have been requested to attend a tea session by "

ok, so i didn't get selected, fine. even though 51 of the students in the level were, fine. i'm in the lower 37 of the batch, fine. but you really didn't have to drive in the final nail by inviting 2 students who are not even in the school anymore. am i that unwanted? "oh i'd rather place the name of someone who can't even make it anymore than to let you go in"? to be placed in the same category as people who probably wouldn't be interested in the talk? so can someone tell me why more than 7 students from the B class are invited to a tea session on clinical sciences? why in the world would any of them be interested in an MBBS-PhD scholarship?

i guess it's time for me to be disappointed in the school as well? or maybe i should take back whatever i have ever given to the school?

Monday, September 15, 2008

So we google what we hear…

Scientists Watch As Listener's Brain Predicts Speaker's Words

ScienceDaily (Sep. 15, 2008) — Scientists at the University of Rochester have shown for the first time that our brains automatically consider many possible words and their meanings before we've even heard the final sound of the word.

Previous theories have proposed that listeners can only keep pace with the rapid rate of spoken language—up to 5 syllables per second—by anticipating a small subset of all words known by the listener, much like Google search anticipates words and phrases as you type. This subset consists of all words that begin with the same sounds, such as "candle," "candy," and "cantaloupe," and makes the task of understanding the specific word more efficient than waiting until all the sounds of the word have been presented.
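The "shrinking candidate set" idea is the same trick behind autocomplete, and it can be sketched in a few lines. The sketch below uses letters as a stand-in for speech sounds (real listening operates on phonemes arriving over time, not spelling), and the vocabulary is just the article's example words.

```python
# A toy "cohort" lookup: as each sound (here, a letter) arrives, keep
# only the vocabulary words consistent with the prefix heard so far.
# Letters stand in for phonemes purely for illustration.

VOCAB = ["candle", "candy", "cantaloupe", "kitchen", "kick"]

def cohort(prefix, vocab=VOCAB):
    """Words still compatible with the partial input heard so far."""
    return [w for w in vocab if w.startswith(prefix)]

for heard in ("c", "ca", "can", "cand", "candl"):
    print(heard, "->", cohort(heard))
```

The candidate set shrinks with every new sound, so the word (and, per the study, its meaning) can be settled before the final sound ever arrives.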

But until now, researchers had no way to know if the brain also considers the meanings of these possible words. The new findings are the first time that scientists, using an MRI scanner, have been able to actually see this split-second brain activity. The study was a team effort among former Rochester graduate student Kathleen Pirog Revill, now a postdoctoral researcher at Georgia Tech, and three faculty members in the Department of Brain and Cognitive Sciences at the University of Rochester.

"We had to figure out a way to catch the brain doing something so fast that it happens literally between spoken syllables," says Michael Tanenhaus, the Beverly Petterson Bishop and Charles W. Bishop Professor. "The best tool we have for brain imaging of this sort is functional MRI, but an fMRI takes a few seconds to capture an image, so people thought it just couldn't be done."

But it could be done. It just took inventing a new language to do it.

With William R. Kenan Professor Richard Aslin and Professor Daphne Bavelier, Pirog Revill focused on a tiny part of the brain called "V5," which is known to be activated when a person sees motion. The idea was to teach undergraduates a set of invented words, some of which meant "movement," and then to watch and see if the V5 area became activated when the subject heard words that sounded similar to the ones that meant "movement."

For instance, as a person hears the word "kitchen," the Rochester team would expect areas of the brain that would normally become active when a person thought of words like "kick" to momentarily show increased blood flow in an fMRI scan. But the team couldn't use English words because a word as simple as "kick" has so many nuances of meaning. To one person it might mean to kick someone in anger, to another it might mean to be kicked, or to kick a winning goal. The team had to create a set of words that had similar beginning syllables, but with different ending syllables and distinct meanings—one of which meant motion of the sort that would activate the V5 area.

The team created a computer program that showed irregular shapes and gave the shapes specific names, like "goki." They also created new verb words. Some, like "biduko" meant "the shape will move across the screen," whereas some, like "biduka," meant the shape would just change color.

After a number of students learned the new words well enough, the team tested them as they lay in an fMRI scanner. The students would see one of the shapes on a monitor and hear "biduko," or "biduka." Though only one of the words actually meant "motion," the V5 area of the brain still activated for both, although less so for the color word than for the motion word. The presence of some activation to the color word shows that the brain, for a split-second, considered the motion meaning of both possible words before it heard the final, discriminating syllable—ka rather than ko.

"Frankly, we're amazed we could detect something so subtle," says Aslin. "But it just makes sense that your brain would do it this way. Why wait until the end of the word to try to figure out what its meaning is? Choosing from a little subset is much faster than trying to match a finished word against every word in your vocabulary."

The Rochester team is already planning more sophisticated versions of the test that focus on other areas of the brain besides V5—such as areas that activate for specific sounds or touch sensations. Bavelier says they're also planning to watch the brain sort out meaning when it is forced to take syntax into account. For instance, "blind venetian" and "venetian blind" are the same words but mean completely different things. How does the brain narrow down the meaning in such a case? How does the brain take the conversation's context into consideration when zeroing in on meaning?

"This opens a doorway into how we derive meaning from language," says Tanenhaus. "This is a new paradigm that can be used in countless ways to study how the brain responds to very brief events. We're very excited to see where it will lead us."

http://www.sciencedaily.com/releases/2008/09/080911140815.htm

Saturday, September 13, 2008

stupid things done by stupid people

they hack into a potential black hole/antimatter-making machine, play around with it and say that the security is lacking? that's like a random person sneaking into a nuclear power plant, playing with all the controls, and blaming the owner for not having good security. -.-

article goes like this:

As the first particles began circulating in the Large Hadron Collider (LHC) this week, a group of hackers calling themselves the "Greek Security Team" penetrated computer systems inside CERN's Geneva, Switzerland, facility, where the world's biggest particle accelerator is housed, the Telegraph.co.uk reported today.

The hackers were reportedly targeting the Compact Muon Solenoid Experiment (CMS), a device in Cessy, France, built to monitor a wide range of particles and phenomena produced in high-energy collisions in the LHC. The 12,500-ton detector's different layers (weighing, according to CERN, as much as 30 jumbo jets or 2,500 African elephants) stop and measure the different particles, and use this data to form a picture of events at the heart of the collision. Scientists plan to use the info to help answer questions about what the universe is really made of and what forces act within it.

On Wednesday, as the LHC was revving up, CMS engineers searched computers for half a dozen files uploaded by the hackers. The interlopers accessed the computer that monitors the CMS software system as the CMS collects data during particle collisions.

CERN scientists say no harm was done but that the break-in raises security concerns, given that intruders were able to penetrate so close to the CMS's computer control system, according to the Telegraph.co.uk. In other words, the hackers came this close to being able to switch off some CMS controls.

"We are 2600 - dont mess with us. (sic)," the group warned in a message to CERN engineers. The "2600" refers to a U.S. magazine published quarterly that appeals to hackers worldwide by publishing technical information about telephone switching systems, the Internet and other technology, as well as computer-related news. The mindset behind sharing this information is to expose vulnerabilities in the computer systems used by government and industry, forcing them to improve their security. In fact, 2600 has become a brand in the hacker world: in addition to 2600: The Hacker Quarterly, an organization known as 2600 hosts hacker conferences, and there's even a film company of that name that has made a documentary on legendary hacker Kevin Mitnick.

Given the huge interest, not to mention the enormity of the LHC's task, it's "highly disturbing" that hackers were able to compromise and change data on its Web site, Graham Cluley, a security researcher with Sophos Plc (a security services firm based in both the UK and Burlington, Mass.), wrote in his blog today. "Theoretically," he noted, "hackers could have planted malicious code which could have stolen identities or installed malware onto the computers of millions of web visitors."

http://www.sciam.com/blog/60-second-science/post.cfm?id=hackers-attack-large-hadron-collide-2008-09-12

Wednesday, September 3, 2008

Smart people? (not me)

So.. are smart people smart because they have faster impulses, or because they have more efficient ones?


 

High-Aptitude Minds: The Neurological Roots of Genius

Researchers are finding clues to the basis of brilliance in the brain

By Christian Hoppe and Jelena Stojanovic

Within hours of his demise in 1955, Albert Einstein's brain was salvaged, sliced into 240 pieces and stored in jars for safekeeping. Since then, researchers have weighed, measured and otherwise inspected these biological specimens of genius in hopes of uncovering clues to Einstein's spectacular intellect.

Their cerebral explorations are part of a century-long effort to uncover the neural basis of high intelligence or, in children, giftedness. Traditionally, 2 to 5 percent of kids qualify as gifted, with the top 2 percent scoring above 130 on an intelligence quotient (IQ) test. (The statistical average is 100.) A high IQ increases the probability of success in various academic areas. Children who are good at reading, writing or math also tend to be facile at the other two areas and to grow into adults who are skilled at diverse intellectual tasks [see "Solving the IQ Puzzle," by James R. Flynn; Scientific American Mind, October/November 2007].

Most studies show that smarter brains are typically bigger—at least in certain locations. Part of Einstein's parietal lobe (at the top of the head, behind the ears) was 15 percent wider than the same region was in 35 men of normal cognitive ability, according to a 1999 study by researchers at McMaster University in Ontario. This area is thought to be critical for visual and mathematical thinking. It is also within the constellation of brain regions fingered as important for superior cognition. These neural territories include parts of the parietal and frontal lobes as well as a structure called the anterior cingulate.

But the functional consequences of such enlargement are controversial. In 1883 English anthropologist and polymath Sir Francis Galton dubbed intelligence an inherited feature of an efficiently functioning central nervous system. Since then, neuroscientists have garnered support for this efficiency hypothesis using modern neuroimaging techniques. They found that the brains of brighter people use less energy to solve certain problems than those of people with lower aptitudes do.

In other cases, scientists have observed higher neuronal power consumption in individuals with superior mental capacities. Musical prodigies may also sport an unusually energetic brain. That flurry of activity may occur when a task is unusually challenging, some researchers speculate, whereas a gifted mind might be more efficient only when it is pondering a relatively painless puzzle.

Despite the quest to unravel the roots of high IQ, researchers say that people often overestimate the significance of intellectual ability [see "Coaching the Gifted Child," by Christian Fischer]. Studies show that practice and perseverance contribute more to accomplishment than being smart does.

Size Matters
In humans, brain size correlates, albeit somewhat weakly, with intelligence, at least when researchers control for a person's sex (male brains are bigger) and age (older brains are smaller). Many modern studies have linked a larger brain, as measured by magnetic resonance imaging, to higher intellect, with total brain volume accounting for about 16 percent of the variance in IQ. But, as Einstein's brain illustrates, the size of some brain areas may matter for intelligence much more than that of others does.

In 2004 psychologist Richard J. Haier of the University of California, Irvine, and his colleagues reported evidence to support the notion that discrete brain regions mediate scholarly aptitude. Studying the brains of 47 adults, Haier's team found an association between the amount of gray matter (tissue containing the cell bodies of neurons) and higher IQ in 10 discrete regions, including three in the frontal lobe and two in the parietal lobe just behind it. Other scientists have also seen more white matter, which is made up of nerve axons (or fibers), in these same regions among people with higher IQs. The results point to a widely distributed—but discrete—neural basis of intelligence.

The neural hubs of general intelligence may change with age. Among the younger adults in Haier's study—his subjects ranged in age from 18 to 84—IQ correlated with the size of brain regions near a central structure called the cingulate, which participates in various cognitive and emotional tasks. That result jibed with the findings, published a year earlier, of pediatric neurologist Marko Wilke, then at Cincinnati Children's Hospital Medical Center, and his colleagues. In its survey of 146 children ages five to 18 with a range of IQs, the Cincinnati group discovered a strong connection between IQ and gray matter volume in the cingulate but not in any other brain structure the researchers examined.

Scientists have identified other shifting neural patterns that could signal high IQ. In a 2006 study child psychiatrist Philip Shaw of the National Institute of Mental Health and his colleagues scanned the brains of 307 children of varying intelligence multiple times to determine the thickness of their cerebral cortex, the brain's exterior part. They discovered that academic prodigies younger than eight had an unusually thin cerebral cortex, which then thickened rapidly so that by late childhood it was chunkier than that of less clever kids. Consistent with other studies, that pattern was particularly pronounced in the frontal brain regions that govern rational thought processes.

The brain structures responsible for high IQ may vary by sex as well as by age. A recent study by Haier, for example, suggests that men and women achieve similar results on IQ tests with the aid of different brain regions. Thus, more than one type of brain architecture may underlie high aptitude.

Low Effort Required
Meanwhile researchers are debating the functional consequences of these structural findings. Over the years brain scientists have garnered evidence supporting the idea that high intelligence stems from faster information processing in the brain. Underlying such speed, some psychologists argue, is unusually efficient neural circuitry in the brains of gifted individuals.

Experimental psychologist Werner Krause, formerly at the University of Jena in Germany, for example, has proposed that the highly gifted solve puzzles more elegantly than other people do: they rapidly identify the key information in them and the best way to solve them. Such people thereby make optimal use of the brain's limited working memory, the short-term buffer that holds items just long enough for the mind to process them.

Starting in the late 1980s, Haier and his colleagues have gathered data that buttress this so-called efficiency hypothesis. The researchers used positron-emission tomography, which measures glucose metabolism of cells, to scan the brains of eight young men while they performed a nonverbal abstract reasoning task for half an hour. They found that the better an individual's performance on the task, the lower the metabolic rate in widespread areas of the brain, supporting the notion that efficient neural processing may underlie brilliance. And in the 1990s the same group observed the flip side of this phenomenon: higher glucose metabolism in the brains of a small group of subjects who had below-average IQs, suggesting that slower minds operate less economically.

More recently, in 2004 psychologist Aljoscha Neubauer of the University of Graz in Austria and his colleagues linked aptitude to diminished cortical activity after learning. The researchers used electroencephalography (EEG), a technique that detects electrical brain activity at precise time points using an array of electrodes affixed to the scalp, to monitor the brains of 27 individuals while they took two reasoning tests, one of them given before test-related training and the other after it. During the second test, frontal brain regions—many of which are involved in higher-order cognitive skills—were less active in the more intelligent individuals than in the less astute subjects. In fact, the higher a subject's mental ability, the bigger the dip in cortical activation between the pretraining and posttraining tests, suggesting that the brains of brighter individuals streamline the processing of new information faster than those of their less intelligent counterparts do.

The cerebrums of smart kids may also be more efficient at rest, according to a 2006 study by psychologist Joel Alexander of Western Oregon University and his colleagues. Using EEG, Alexander's team found that resting eight- to 12-hertz alpha brain waves were significantly more powerful in 30 adolescents of average ability than they were in 30 gifted adolescents, whose alpha-wave signal resembled those of older, college-age students. The results suggest that gifted kids' brains use relatively little energy while idle and in this respect resemble more developmentally advanced human brains.

Some researchers speculate that greater energy efficiency in the brains of gifted individuals could arise from increased gray matter, which might provide more resources for data processing, lessening the strain on the brain. But others, such as economist Edward Miller, formerly of the University of New Orleans, have proposed that the efficiency boost could also result from thicker myelin, the substance that insulates nerves and ensures rapid conduction of nerve signals. No one knows if the brains of the quick-witted generally contain more myelin, although Einstein's might have. Scientists probing Einstein's brain in the 1980s discovered an unusual number of glia, the cells that make up myelin, relative to neurons in one area of his parietal cortex.

Hardworking Minds
And yet gifted brains are not always in a state of relative calm. In some situations, they appear to be more energetic, not less, than those of people of more ordinary intellect. What is more, the energy-gobbling brain areas roughly correspond to those boasting more gray matter, suggesting that the gifted may simply be endowed with more brainpower in this intelligence network.

In a 2003 trial psychologist Jeremy Gray, then at Washington University in St. Louis, and his colleagues scanned the brains of 48 individuals using functional MRI, which detects neural activity by tracking the flow of oxygenated blood in brain tissue, while the subjects completed hard tasks that taxed working memory. The researchers saw higher levels of activity in prefrontal and parietal brain regions in the participants who had received high scores on an intelligence test, as compared with low scorers.

In a 2005 study a team led by neuroscientist Michael O'Boyle of Texas Tech University found a similar brain activity pattern in young male math geniuses. The researchers used fMRI to map the brains of mathematically gifted adolescents while they mentally rotated objects to try to match them to a target item. Compared with adolescent boys of average math ability, the brains of the mathematically talented boys were more metabolically active—and that activity was concentrated in the parietal lobes, the frontal cortex and the anterior cingulate.

A year later biologist Kun Ho Lee of Seoul National University in Korea similarly linked elevated activity in a frontoparietal neural network to superior intellect. Lee and his co-workers measured brain activity in 18 gifted adolescents and 18 less intelligent young people while they performed difficult reasoning tasks. These tasks, once again, excited activity in areas of the frontal and parietal lobes, including the anterior cingulate, and this neural commotion was significantly more intense in the gifted individuals' brains.

No one is sure why some experiments indicate that a bright brain is a hardworking one, whereas others suggest it is one that can afford to relax. Some, such as Haier—who has found higher brain metabolic rates in more astute individuals in some of his studies but not in others—speculate one reason could relate to the difficulty of the tasks. When a problem is very complex, even a gifted person's brain has to work to solve it. The brain's relatively high metabolic rate in this instance might reflect greater engagement with the task. If that task was out of reach for someone of average intellect, that person's brain might be relatively inactive because of an inability to tackle the problem. And yet a bright individual's brain might nonetheless solve a less difficult problem efficiently and with little effort as compared with someone who has a lower IQ.

Perfection from Practice
Whatever the neurological roots of genius, being brilliant only increases the probability of success; it does not ensure accomplishment in any endeavor. Even for academic achievement, IQ is not as important as self-discipline and a willingness to work hard.

University of Pennsylvania psychologists Angela Duckworth and Martin Seligman examined final grades of 164 eighth-grade students, along with their admission to (or rejection from) a prestigious high school. By such measures, the researchers determined that scholarly success was more than twice as dependent on assessments of self-discipline as on IQ. What is more, they reported in 2005, students with more self-discipline—a willingness to sacrifice short-term pleasure for long-term gain—were more likely than those lacking this skill to improve their grades during the school year. A high IQ, on the other hand, did not predict a climb in grades.

A 2007 study by Neubauer's team of 90 adult tournament chess players similarly shows that practice and experience are more important to expertise than general intelligence is, although the latter is related to chess-playing ability. Even Einstein's spectacular success as a mathematician and a physicist cannot be attributed to intellectual prowess alone. His education, dedication to the problem of relativity, willingness to take risks, and support from family and friends probably helped to push him ahead of any contemporaries with comparable cognitive gifts.

Note: This article was originally published with the title, "High-Aptitude Minds".

Quiz for everyone!

If you ever swim or paddle upstream, you will notice two things. First, a river's speed varies a lot. Second, those variations should cause you to pull harder when you hit rapidly flowing water. If you don't, you will simply make no progress. This puzzle replaces your muscles with a motor, but still asks you to figure out how to trade off energy for time.

Here are the facts:

•  You want to go 72 kilometers (km) upriver.
•  The first 24 km has a downstream speed of 7 kilometers per hour (kmh).
•  The next 18 km has a downstream speed of 2 kmh.
•  The last 30 km has a downstream speed of 0 kmh (the river becomes a lake).

You have an electric motor with three settings that can push the boat forward at a water speed of:

•  5 kmh using 1 kilowatt (kW) of power
•  10 kmh using 3 kW
•  15 kmh using 5 kW

Recall that land speed = water speed - downstream speed.
So, for example, if your water speed upstream is 15 kmh but the river has a downstream speed of 2 kmh, then your land speed is 13 kmh.

Warm-up:
Suppose you went full speed on all legs of the voyage. How long would the journey take and how much energy would you expend?
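The warm-up is a one-liner once you apply the land-speed formula above. A quick sketch (in Python, purely for illustration):

```python
# Warm-up: run the motor at full speed on every leg (15 kmh water speed, 5 kW).
legs = [(24, 7), (18, 2), (30, 0)]  # (distance in km, downstream speed in kmh)
water_speed, power = 15, 5          # kmh, kW

# land speed = water speed - downstream speed, so time = distance / land speed
total_time = sum(dist / (water_speed - current) for dist, current in legs)
total_energy = power * total_time   # kW * hours = kWh

print(f"time: {total_time:.2f} h, energy: {total_energy:.2f} kWh")
# time: 6.38 h, energy: 31.92 kWh
```

Notice that the full-speed run already burns more than 30 kWh, which is exactly why challenge 2 below has teeth.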

Here now are the challenges for you.

1. What is the least energy you could use to make the entire trip, assuming you were in absolutely no rush? How would you do it?

Hint: On a lake, you would use the slowest speed, but this may not hold on all parts of the trip.

2. Suppose you have a battery that holds 30 kWh. How could you arrange to arrive as quickly as possible without consuming more than 30 kWh?
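One way to get a feel for both challenges is to brute-force every combination of one fixed motor setting per leg. This is an exploratory sketch, not the official solution: the true optimum for challenge 2 might mix speeds within a single leg, which this enumeration cannot see.

```python
from itertools import product

legs = [(24, 7), (18, 2), (30, 0)]     # (distance in km, downstream speed in kmh)
settings = [(5, 1), (10, 3), (15, 5)]  # (water speed in kmh, motor power in kW)

def trip(choice):
    """Total (hours, kWh) for one fixed setting per leg, or None if a leg is
    impossible because land speed <= 0 (e.g. 5 kmh against a 7 kmh current)."""
    time = energy = 0.0
    for (dist, current), (speed, power) in zip(legs, choice):
        land_speed = speed - current
        if land_speed <= 0:
            return None
        hours = dist / land_speed
        time += hours
        energy += power * hours
    return time, energy

feasible = [r for r in map(trip, product(settings, repeat=3)) if r is not None]
cheapest = min(feasible, key=lambda r: r[1])                          # challenge 1
fastest = min((r for r in feasible if r[1] <= 30), key=lambda r: r[0])  # challenge 2
print(f"cheapest: {cheapest[0]:.1f} h, {cheapest[1]:.1f} kWh")
print(f"fastest within 30 kWh: {fastest[0]:.2f} h, {fastest[1]:.2f} kWh")
```

The enumeration confirms the hint: on the fast first stretch the slowest setting cannot make headway at all, and it turns out the middle setting is not the cheapest there either. Whether mixing speeds within a leg can beat the single-setting answer to challenge 2 is left for you to ponder.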



 

ABOUT THE AUTHOR(S)
Dennis Shasha is at the Courant Institute of Mathematical Sciences, New York University. His most recent puzzle book, Puzzles for Programmers and Pros, was published in 2007 by John Wiley and Sons/Wrox.


 

http://www.sciam.com/article.cfm?id=puzzling-adventures-river-run-sept-08&sc=rss

Monday, September 1, 2008

Putting known concepts together..

I know that changing levels of UV exposure would result in changes in melanin levels. I also know that heavy exposure to UV can cause massive extinction. But I couldn't put the 2 together… It's not even rocket science. -.-


 

Spores may fill gap in atmospheric records.

A gaping hole in atmospheric scientists' records could soon be filled thanks to the spores of a primitive moss.

Led by Barry Lomax, at the University of Nottingham, UK, a team of researchers has devised a way to reconstruct past levels of ozone by measuring levels of the chemicals that act as protective 'sunscreens' in the spores.

Lomax had previously used this technique to try to find out whether massive exposure to UV radiation caused the Permian extinction around 250 million years ago, after volcanic eruptions triggered a massive loss of ozone (see 'Plant pollen records ozone holes'). Although that attempt failed because of a lack of suitable fossils, the technique may end up answering wider questions about the evolution of our atmosphere.

Until now, atmospheric scientists have been limited to ozone measurements made by satellites that date back only to the late 1970s and data from ground-based spectrophotometers going back to the 1920s. Lomax's team say that the levels of ultraviolet-absorbing compounds in plant spores can show how much of this radiation they were exposed to, and hence show what the ozone levels were in the atmosphere millennia ago.

"At the moment it's very much an unknown how ozone has changed over recent and geological time," says Lomax. "[This method] could help address issues of climate change and whether we're seeing recovery [in the ozone layer] now, or whether it's natural variation."

The story's in the spores

In a paper published online by Nature Geoscience, Lomax details how spores can be used as a 'biological proxy' for ozone levels. Plants subjected to increased levels of UV-B radiation make more of the natural phenolic 'sunscreen' compounds that absorb the potentially harmful rays [1].

By analysing the concentration of these UV-absorbing compounds in the walls of spores from herbarium collections, it is possible to work out how much radiation they have been subjected to. From this it is possible to work out how much ozone was in the atmosphere between the plant and the Sun, says Lomax.

Traces of these compounds are even preserved in fossils. "We certainly should be able to go back into the Tertiary, about 55 million years ago, without a problem," says Lomax. "It will work on fossil spores provided they haven't been heated to over 200 ºC."

The paper details analysis of spores of clubmosses from high- and low-latitude locations, which were found to contain concentrations of UV-absorbing compounds that strongly correlate with known historical changes in UV-B levels. Spores from Ecuador, where there have been no historic changes in UV-B, showed no change in 'sunscreen' concentrations over the same time period.

Using spores from Greenland, Lomax reconstructed historical ozone levels between 1907 and 1993 and found strong correlations between their reconstructions and atmospheric ozone measurements made within this period.

"It would be extremely interesting if we could reconstruct the UV climates in the past," says Geir Braathen, an atmospheric chemist and senior scientific officer at the World Meteorological Organization in Geneva, Switzerland. "It would be very interesting to see how the ozone layer has developed."

Lomax now hopes to do just that. "We're planning to look through to the Holocene and the Quaternary to see just how far back we can take it," he says.

  • References
    • Lomax, B. et al. Nature Geosci. advance online publication doi:10.1038/ngeo278 (2008)

http://www.nature.com/news/2008/080831/full/news.2008.1071.html?s=news_rss