01-05-2012 NEWS

Events reported by science news agencies for the period December 2011 - January 2012. Some of them were discussed on the programme Pasaport pentru Stiinta (Passport to Science).

'Nanowiggles:' Scientists discover graphene nanomaterials with tunable functionality in electronics

January 4, 2012

[Image: a nanowiggle. Credit: Rensselaer Polytechnic Institute]

Electronics are getting smaller and smaller, flirting with new devices at the atomic scale. However, many scientists predict that the shrinking of our technology is reaching an end. Without an alternative to silicon-based technologies, the miniaturization of our electronics will stop. One promising alternative is graphene — the thinnest material known to man. Pure graphene is not a semiconductor, but it can be altered to display exceptional electrical behavior. Finding the best graphene-based nanomaterials could usher in a new era of nanoelectronics, optics, and spintronics (an emerging technology that uses the spin of electrons to store and process information in exceptionally small electronics).

Scientists at Rensselaer Polytechnic Institute have used the capabilities of one of the world's most powerful university-based supercomputers, at the Computational Center for Nanotechnology Innovations (CCNI), to uncover the properties of a promising form of graphene known as graphene nanowiggles. What they found was that graphitic nanoribbons can be segmented into several different surface structures called nanowiggles. Each of these structures produces highly different magnetic and conductive properties. The findings provide a blueprint that scientists can use to literally pick and choose a graphene nanostructure that is tuned and customized for a particular task or device. The work provides an important base of knowledge on these highly useful nanomaterials. The findings were published in the journal Physical Review Letters in a paper titled "Emergence of Atypical Properties in Assembled Graphene Nanoribbons."

"Graphene nanomaterials have plenty of nice properties, but to date it has been very difficult to build defect-free graphene nanostructures. So these hard-to-reproduce nanostructures created a near insurmountable barrier between innovation and the market," said Vincent Meunier, the Gail and Jeffrey L. Kodosky '70 Constellation Professor of Physics, Information Technology, and Entrepreneurship at Rensselaer. "The advantage of graphene nanowiggles is that they can easily and quickly be produced very long and clean."

Nanowiggles were only recently discovered by a group led by scientists at EMPA, Switzerland. These particular nanoribbons are formed using a bottom-up approach: they are chemically assembled atom by atom. This is very different from the standard graphene design process, which takes an existing material and attempts to cut it into a new structure. The process often creates a material that is not perfectly straight but has small zigzags on its edges. Meunier and his research team saw the potential of this new material: the nanowiggles could be easily manufactured and modified to display exceptional electrical conductive properties. Meunier and his team immediately set to work dissecting the nanowiggles to better understand possible future applications. "What we found in our analysis of the nanowiggles' properties was even more surprising than previously thought," Meunier said.

The scientists used computational analysis to study several different nanowiggle structures. The structures are named for the shapes of their edges: armchair, armchair/zigzag, zigzag, and zigzag/armchair. All of the nanoribbon-edge structures have a wiggly appearance, like a caterpillar inching across a leaf. Meunier named the four structures nanowiggles, and each wiggle produced exceptionally different properties. The team found that the different nanowiggles produced highly varied band gaps. A band gap determines the level of electrical conductivity of a solid material. They also found that different nanowiggles exhibited up to five highly varied magnetic properties. With this knowledge, scientists will be able to tune the band gap and magnetic properties of a nanostructure to their application, according to Meunier.
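The band-gap point in the last paragraph follows textbook semiconductor physics: in an intrinsic semiconductor the carrier density, and hence the conductivity, falls off as exp(-Eg/2kT). A minimal numerical sketch (the gap values below are illustrative placeholders, not results from the nanowiggle study):

```python
# Back-of-envelope: how a band gap controls intrinsic conductivity.
# Carrier density in an intrinsic semiconductor scales as exp(-Eg / 2kT),
# so modest changes in gap mean enormous changes in conductivity.
# The gap values below are ILLUSTRATIVE, not from the nanowiggle study.
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K
T = 300.0       # room temperature, K

for eg in (0.1, 0.5, 1.0):  # hypothetical band gaps in eV
    relative = math.exp(-eg / (2 * K_B * T))
    print(f"Eg = {eg:.1f} eV -> carrier factor ~ {relative:.2e}")
```

Even a few tenths of an electronvolt shift the carrier factor by orders of magnitude, which is why edge structures with different band gaps amount to different electronic devices.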

Meunier would like the research to inform the design of new and better devices. "We have created a roadmap that can allow for nanomaterials to be easily built and customized for applications from photovoltaics to semiconductors and, importantly, spintronics," he said. By using CCNI, Meunier was able to complete these sophisticated calculations in a few months. "Without CCNI, these calculations would still be continuing a year later and we would not yet have made this exciting discovery. Clearly this research is an excellent example illustrating the key role of CCNI in predictive fundamental science," he said.

Provided by Rensselaer Polytechnic Institute

Pentagon-backed 'time cloak' stops the clock

January 4, 2012 By Seth Borenstein, AP Science Writer

[Image: In this 2011 illustration provided by Cornell University, scientists demonstrate a new invisibility technique that doesn't just cloak an object, as in the Harry Potter books and movies, but masks an entire event. It is a time masker that works by briefly bending the speed of light around an event. If the technique were ever scaled up, an art thief could walk into a museum and steal a painting without setting off laser-beam alarms or even showing up on surveillance cameras. (AP Photo/Heather Deal, Cornell University)]

Pentagon-supported physicists on Wednesday said they had devised a "time cloak" that briefly makes an event undetectable.

It's one thing to make an object invisible, like Harry Potter's mythical cloak. But scientists have made an entire event impossible to see. They have invented a time masker. Think of it as an art heist that takes place before your eyes and surveillance cameras. You don't see the thief strolling into the museum, taking the painting down or walking away, but he did. It's not just that the thief is invisible - his whole activity is.

What scientists at Cornell University did was on a much smaller scale, both in terms of events and time. It happened so quickly that it's not even a blink of an eye. Their time cloak lasts an incredibly tiny fraction of a second: they hid an event for 40 picoseconds (trillionths of a second), according to a study appearing in Thursday's edition of the journal Nature.

We see events happening as light from them reaches our eyes. Usually it's a continuous flow of light. In the new research, however, scientists were able to interrupt that flow for just an instant. Other newly created invisibility cloaks fashioned by scientists move the light beams away in the traditional three dimensions. The Cornell team alters not where the light flows but how fast it moves, changing in the dimension of time, not space. They tinkered with the speed of beams of light in a way that would make it appear to surveillance cameras or laser security beams that an event, such as an art heist, isn't happening.

Another way to think of it is as if scientists edited or erased a split second of history. It's as if you are watching a movie with a scene inserted that you don't see or notice. It's there in the movie, but it's not something you saw, said study co-author Moti Fridman, a physics researcher at Cornell.

The scientists created a lens of not just light, but time. Their method splits light, speeding up one part and slowing down another. That creates a gap, and the gap is where an event is masked. "You kind of create a hole in time where an event takes place," said study co-author Alexander Gaeta, director of Cornell's School of Applied and Engineering Physics. "You just don't know that anything ever happened."

This is all happening in beams of light that move too fast for the human eye to see. Using fiber optics, the hole in time is created as light moves along inside a fiber much thinner than a human hair. The scientists shoot the beam of light out, and then with other beams they create a time lens that splits the light into two beams of different speeds, creating the effect of invisibility by being too fast or too slow. The whole setup is a mess of fibers on a long table that almost looks like a pile of spaghetti, Fridman said.

It is the first time that scientists have been able to mask an event in time, a concept first theorized by Martin McCall, a professor of theoretical optics at Imperial College London. Gaeta, Fridman and others at Cornell, who had already been working on time lenses, decided to see if they could do what McCall envisioned. It only took a few months, a blink of an eye in scientific research time. "It is significant because it opens up a whole new realm to ideas involving invisibility," McCall said.

Researchers at Duke University and at Germany's Karlsruhe Institute of Technology have made progress on making an object appear invisible spatially. The earlier invisibility cloak work bent light around an object in three dimensions. Between those two approaches, the idea of invisibility will work its way into useful technology, predicts McCall, who wasn't part of either team.

The science is legitimate, but it's still only a fraction of a second, added City College of New York physicist Michio Kaku, who specializes in the physics of science fiction. "That's not enough time to wander around Hogwarts," Kaku wrote in an email. "The next step therefore will be to increase this time interval, perhaps to a millionth of a second. So we see that there's a long way to go before we have true invisibility as seen in science fiction."

Gaeta said he thinks he can make the cloak last a millionth of a second, or maybe even a thousandth of a second. But McCall said the mathematics dictate that it would take too big a machine - about 18,600 miles long - to make the cloak last a full second. "You have to start somewhere and this is a proof of concept," Gaeta said.

Still, there are practical applications, Gaeta and Fridman said. This is a way of adding a packet of information to high-speed data unseen, without interrupting the flow of information. But that may not be a good thing if used for computer viruses, Fridman conceded. There may be good uses of this technology, Gaeta said, but "for some reason people are more interested in the more illicit applications."

Fridman's work was supported in part by the Defense Advanced Research Projects Agency, or DARPA, a Pentagon unit which develops futuristic technology that can have a military use. Its achievements include ARPANET, the predecessor of the Internet.

More information: Nature journal announcement: Now you detect it, now you don't (pp 62-65; N&V). A 'time cloak' that makes an event temporarily undetectable, albeit on the picosecond scale, is described in this week's Nature. The work could represent a step towards the development of spatio-temporal cloaking. Recent developments in spatial cloaking show that it is possible to hide an object by manipulating electromagnetic waves around it, creating a 'hole in space'. Such devices currently have limited functionality. Here Moti Fridman and colleagues demonstrate that a related effect, temporal cloaking, can be achieved. They manage to create a 'hole in time' for around 40 trillionths of a second (40 picoseconds). The fibre-based system steers light 'around' an event, by speeding up and slowing down different parts of a light beam, so that no evidence of the event (a change in the temporal or spectral properties of the beam) is detectable. This effect is achieved using a split time-lens that breaks light up into its slower (red) and faster (blue) components, thereby creating a tiny temporal gap.

©2012 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

Isaacsname
They took a signal, sent it down a fiber-optic cable, split the signal, slowed or sped up half, re-joined them... that's "cloaking an event in time"? I know I'm pretty dense sometimes, but I don't get it?

Raygunner
The "time cloak" label is misleading. This is a pure optical illusion - nothing else. Sure, photon speeds have been adjusted but, other than using time as a measuring stick for the delay, no true "time cloak" exists. A more accurate title for this would be "Light Cloak".

Xbw
Time to start perfecting my sleight of hand routines.
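McCall's 18,600-mile estimate above can be sanity-checked with a one-line model: if a split time-lens opens a temporal gap T = (L/c)(n_slow - n_fast) over a fiber of length L, then L = cT/Δn. The group-index split of 10 used below is an assumed, illustrative value chosen to test whether it reproduces the quoted scale; it is not a number from the paper.

```python
# Back-of-envelope check of the "18,600 miles for a one-second cloak" claim.
# Toy model: gap T = (L / c) * (n_slow - n_fast), solved for L.
# The index split DN = 10 is an ASSUMED illustrative value.
C = 2.998e8      # speed of light in vacuum, m/s
T_GAP = 1.0      # desired cloak duration, s
DN = 10.0        # assumed group-index split between slow and fast light

length_m = C * T_GAP / DN
print(f"required length: {length_m:.2e} m = {length_m / 1609.34:,.0f} miles")
# -> roughly the 18,600-mile scale McCall quotes
```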

Leonardo da Vinci's tree rule may be explained by wind

January 4, 2012 by Lisa Zyga

[Image: (Left) A model of tree branching. (Middle) A tree skeleton with all branches having the same thickness. (Right) The same tree with branch diameters calculated from a model accounting for wind-induced stress, which closely follows Leonardo's rule. Image credit: Christophe Eloy. ©2011 American Physical Society]

(PhysOrg.com) -- More than 500 years ago, Leonardo da Vinci observed a particular relationship between the size of a tree's trunk and the size of its branches. Specifically, the combined cross-sectional areas of a tree's daughter branches are equal to the cross-sectional area of the mother branch. However, da Vinci didn't know why tree branching followed this rule, and few explanations have been proposed since then. Now, in a new study, physicist Christophe Eloy of Aix-Marseille University in Aix-en-Provence, France, has shown that this tree structure may be optimal for enabling trees to resist wind-induced stresses.

In his study, published in a recent issue of Physical Review Letters, Eloy explains that Leonardo's rule is so natural to the eye that it is often used in computer-generated trees. Although researchers have previously proposed explanations for the rule based on hydraulics or structure, none of these explanations has been fully convincing. For instance, the hydraulic explanation called the "pipe model" proposes that the branching proportions have to do with the way that vascular vessels connect the tree's roots to its leaves to provide water and nutrients. But since vascular vessels can account for as little as 5% of the branch cross section (for large trunks in some tree species), it seems unlikely that they would govern the tree's entire architecture.

"The usual textbook explanation for Leonardo's rule (and, more generally, for the relation between branch diameters) involves hydraulic considerations," Eloy said. "My study shows that an alternative explanation can be given by considering external loads, such as wind-induced forces."

Eloy proposes that Leonardo's rule is a consequence of trees adapting their growth to optimally resist wind-induced stresses. It is well known that plants can alter their growth patterns in response to mechanical stimulation, such as wind. This phenomenon, called "thigmomorphogenesis," means that wind can influence the trunk and branch diameters of a tree as it grows. The underlying cellular mechanisms of the phenomenon are largely unknown.

Building on this line of thinking, Eloy used two models to predict the probability of a fracture at a certain point in a tree due to strong winds. He found that when the probability of fracture is the same everywhere on the tree, so that each part bears the stress equally, Leonardo's rule is recovered. He also showed that the diameter of each branch on a tree can be calculated from the parameters of a simple tree skeleton.

Although some of the most common tree species, such as maples and oaks, seem to follow Leonardo's rule, there are many species that don't follow the rule, and many more that scientists have yet to analyze. "Actually, Leonardo's rule has not been assessed for that many species," Eloy said. "So far, it seems to hold for about 10 species. The problem is that it takes a lot of time to measure a single tree, which has thousands of branches, and the data are usually very scattered. Besides, some species clearly do not satisfy Leonardo's rule, such as baobabs, koas, and most bushes."

The finding that trees seem to follow Leonardo's rule when adapting their growth to tolerate wind-induced stresses could have applications both in nature and technology.
"It has obvious applications to the forestry industry to calculate the yields of tree stands and to evaluate the risks of breakage during storms," Eloy said. "It could also be applied to manmade branching structures such as antennas." He added that there is still much more to understand about tree design, including the self-similarity shared by large trunks and smaller branches. "I am still working on this subject, in particular to try to relate growth to external loads," he said. "In other words, I would like to understand the dynamical growth mechanisms that lead to the intricate fractal structures of trees."

More information: Christophe Eloy. "Leonardo's Rule, Self-Similarity, and Wind-Induced Stresses in Trees." Physical Review Letters 107, 258101 (2011). DOI: 10.1103/PhysRevLett.107.258101

Returners
"there is still much more to understand about tree design, including the self-similarity shared by large trunks and smaller branches"

Well, it's an architecture which requires almost no "blueprint". You simply repeat the same basic shapes and relationships over and over again, which is from a certain point of view more efficient than the architecture of animals, which in most cases is less fractal in design. I think this is also related to the breaking mass of a branch when considering gravity. The thicker a branch, the heavier it is, but also the stronger it is; increasing the length of a branch only adds weight, not strength. Branching increases surface area and mass, which makes a tree weaker to wind and gravity, BUT increases the amount of moisture and sunlight available both to the bark and the leaves. It's a trade-off between energy and moisture absorption versus the structural integrity of the individual parent branch structure.

Cin5456
Nice explanation. Thanks.

Noumenon
Brings back memories of writing code to generate Lindenmayer-system-type fractals from simple axioms, for plant models; very elegant and allowed much control.

ronwagn
Doesn't seem to apply to Bradford pears, southern oaks and some others I have seen. They also seem to do well in winds! Way too many assumptions, without scientific method. An overly broad thesis. A true study of a large and wide variety of species would be of value, however. He does state that it does not seem to apply to all species, but that greatly undermines the initial thesis.

ronwagn
Recent wind storms in Southern California have destroyed a lot of trees in the Los Angeles Arboretum. They could supply a lot of information. They may have said that a lot of Bradford pears fell, also a lot of semi-tropical trees. The winds were severe.

LariAnn
IMHO, if the "rule" does not apply to all true trees, then the use of the word is a misnomer. I'd prefer "Leonardo's pattern" if he was the first one to observe and describe it. Clearly it is not a "rule".

Isaacsname
A lot of trees that are grafted to rootstock these days are grafted to different species, i.e., pear is grafted on quince roots. Pears typically come from windier environments than quinces, so I could see drawbacks.
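For anyone who wants to test Leonardo's rule on real measurements, it has a compact form: a mother branch of diameter D splitting into daughters d1, ..., dn satisfies D^2 = d1^2 + ... + dn^2, since the factor of pi/4 in the cross-sectional areas cancels. A minimal check, with hypothetical diameters:

```python
# Leonardo's rule: the mother branch's cross-sectional area equals the sum of
# the daughters' areas, i.e. D**2 == sum(d_i**2) (pi/4 cancels on both sides).
import math

def satisfies_leonardo(mother_d, daughter_ds, tol=0.05):
    """True if daughter diameters conserve cross-sectional area within tol."""
    return math.isclose(mother_d**2, sum(d**2 for d in daughter_ds),
                        rel_tol=tol)

# Hypothetical example: a 10 cm trunk splitting into two equal daughters.
# Area conservation requires each daughter to be 10 / sqrt(2) ~ 7.07 cm.
print(satisfies_leonardo(10.0, [7.07, 7.07]))   # True
print(satisfies_leonardo(10.0, [9.0, 9.0]))     # False: daughters too thick
```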

Scientists create first 3-D map of human genome

January 4, 2012 By Robert Perkins

[Image: Understanding the structure of the human genome is critical to understanding its function as a whole, according to USC Dornsife's Lin Chen.]

(PhysOrg.com) -- For the first time, scientists have developed a method for generating accurate three-dimensional models of the entire DNA strand of a cell, known as a genome. The genome plays a central role in the functions of almost all human cells, and flaws in its structure are thought to cause various disorders, including cancer.

Understanding the structure of the genome is crucial to understanding its function as a whole, said Lin Chen, professor of molecular biology in USC Dornsife. "Everything biological works in three dimensions," Chen said. "Therefore, to understand it completely, you have to understand it three-dimensionally."

The genome inside a cell can be thought of as a bowl of angel hair pasta. Different cells are like different bowls of pasta in which the noodles are organized differently overall, but they share certain features. The technique adds a crucial piece of the puzzle for scientists trying to understand the genome — the cornerstone of life — in normal and diseased cells. One of the most likely applications of this research will be to identify potentially cancerous cells based on structural defects in the cell's genome, Chen said. "Hopefully in the future, these studies will allow scientists to better understand how the genome is involved in disease and how its function can be regulated in those circumstances," Chen said.

Because of its tiny width and monstrously long length, creating a three-dimensional image of a genome is not as simple as taking a photograph. The genomic DNA strand is so long that if a nucleus were the size of a soccer ball, the strand of DNA inside it could be unraveled to stretch more than 30 miles. Nothing biologists normally use for studying the structure of biomolecules works well for the human genome. Scrunched up inside the nucleus, the DNA forms hundreds of millions of contacts with itself. Using a new technique, USC researchers plotted the location of each of those DNA-on-DNA contacts and used sophisticated computer algorithms to model the results in 3-D. "It provides you with a completely new perspective on the genome," Chen said. The study appeared on the Nature Biotechnology website on Dec. 25, ahead of its publication in the print edition.

By analyzing the differences and similarities in genome structure between various cells, scientists can discern the basic principles of 3-D organization. In addition, the structure allows scientists to see where each gene is located relative to any other gene and how this arrangement is important to cellular functions. The method used by the USC team takes into account the fact that each cell is slightly different — the DNA does not always scrunch in the exact same way. "There is not a single structure of a genome," said Frank Alber, assistant professor of computational biology in USC Dornsife. Chen and Alber led a team of USC researchers, including Reza Kalhor, Harianto Tjong and Nimanthi Jayathilaka, that solved the problem. By doing a statistical analysis of many genomes, the team was able to determine "preferred positions" for the DNA strand, providing an idea of how it is most likely to appear.
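The reconstruction task described above (turning a table of pairwise DNA contact counts into 3-D coordinates) can be sketched with classical multidimensional scaling. This is a generic stand-in rather than the USC team's published algorithm, and the inverse-frequency rule for converting contacts to distances is an assumed heuristic:

```python
# Generic sketch of contact-map -> 3-D embedding via classical MDS.
# NOT the USC algorithm: the contact->distance conversion (1/frequency)
# is a common heuristic assumed here for illustration.
import numpy as np

def embed_3d(contacts):
    """contacts: symmetric (n, n) array of contact frequencies (>0 off-diagonal)."""
    n = contacts.shape[0]
    dist = 1.0 / np.maximum(contacts, 1e-9)   # heuristic: frequent contact = close
    np.fill_diagonal(dist, 0.0)
    # Classical MDS: double-center the squared distance matrix,
    # then keep the three dominant eigenvectors as coordinates.
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ (dist ** 2) @ j
    vals, vecs = np.linalg.eigh(b)
    top = np.argsort(vals)[::-1][:3]          # three largest eigenvalues
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

rng = np.random.default_rng(0)
m = rng.uniform(0.1, 1.0, size=(8, 8))
coords = embed_3d((m + m.T) / 2)              # symmetrize a toy contact map
print(coords.shape)                            # (8, 3)
```

Averaging such embeddings over many cells mirrors the article's point about "preferred positions": no single structure exists, only a statistical consensus.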

Provided by USC College

Close encounters: When Daniel123 met Jane234 (w/ video)

January 4, 2012 by Nancy Owano

(PhysOrg.com) -- Qbo robots created a stir recently when their developers succeeded in demonstrating that a Qbo can be trained to recognize itself in the mirror. Now the developers have taken their explorations into simulated consciousness a step further. A pair of Qbo robots, colored differently but still two Qbo entities, can recognize each other. Just as human earthling Harry met Sally, Qbo Daniel can meet Jane, and they can exchange similarly empty-headed conversation.

Always posing the question what-if, Francisco Paz and his Madrid-based team, The Corpora, developers of the Qbo, work on Qbo as a robot project. The accent is not on robots with human consciousness but on robots with simulated consciousness. Nonetheless, always asking what-if, they posed a teaser for themselves: now that they had got the robot to recognize itself in the mirror, what about when one Qbo is faced with another Qbo, stacking them both with sensors and recognition software?

The Qbo is generally described as open source; it runs on Linux, has two cameras with stereoscopic vision and uses recognition software. The developers built bots that talk to each other through Festival, a speech synthesis system, and Julius, a speech recognition engine. In their latest Qbo scenario, a green Daniel123, unaware that a Jane might be on life's table, is told by its master to turn around, and that is when it encounters blue Jane234. Daniel appears to be aware that Jane is a Qbo. Daniel and Jane sniff each other out, so to speak, by being programmed to generate nose flashes, to establish that there is another individual robot.

The sniffing explanations make it tempting to imagine that the robots are independently flirting. The danger is to attribute human consciousness to robots that are not designed that way. Daniel may be able to understand that it's Jane, not himself, in front of him, but only because it has been programmed that way by a clever human. The Corpora team is the first to dispel any notion of magical human consciousness. They detail what makes Daniel and Jane see each other as separate, but approachable, on the team blog: "Inspired by this process of self-recognition in humans, we developed a new ROS [robot operating system] node that is executed when the node 'Object Recognizer,' previously trained, has identified a Qbo in the image. Using nose signals to see if the image seen by the robot matches its action, a Qbo can tell in real time whether he sees his image reflected in a mirror or he is watching another Qbo robot in front of him. The sequence of flashes of the nose is randomly generated in each process of recognition, so the probability that two robots generate the same sequence is very low, and even lower that they start to transmit it at the same time."

[Video: Qbo, November 2011]

More information: http://thecorpora. … /blog/?p=854

© 2011 PhysOrg.com
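Stripped of the robotics, the nose-flash scheme in the quote above is a challenge-response test: emit a random sequence and check whether the sequence you observe matches the one you sent. A minimal sketch of that logic, with invented function names:

```python
# Toy version of the mirror test described above: each robot flashes a random
# nose sequence; if the sequence it SEES matches the one it SENT, it is
# looking at a mirror, otherwise at another robot.  Names are hypothetical.
import random

def random_flash_sequence(length=8):
    return [random.choice("RGB") for _ in range(length)]

def classify_observation(sent, seen):
    return "mirror (that's me)" if seen == sent else "another robot"

daniel_sends = random_flash_sequence()
# Mirror case: Daniel sees his own flashes reflected back.
print(classify_observation(daniel_sends, daniel_sends))
# Two-robot case: Jane generates her own sequence independently.
jane_sends = random_flash_sequence()
print(classify_observation(daniel_sends, jane_sends))
```

With three colors and eight flashes there are 3^8 = 6561 possible sequences, which is why the team can say the probability of two robots generating the same sequence is very low.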

A quantum leap in computing

January 4, 2012

[Image: The D-Wave One 128-qubit "Rainier" processor will be used by researchers in USC Dornsife and USC Viterbi to help advance the understanding of quantum computing. Photo by Ziva Santop/Steve Cohn Photography.]

When American physicist Richard Feynman in 1982 proposed creating a quantum computer that could solve complex problems, the idea was merely a theory scientists believed was far off in the future.

A few decades later, USC Dornsife researchers are closing in on harnessing quantum computing, a system that takes advantage of computational quirks such as quantum coherence. In the past, quantum decoherence had hindered researchers' attempts to construct a durable quantum computer because the process interferes with quantum properties and renders the system no better than a classical computer. Once a deterrent, decoherence has become an obstacle that can be overcome using quantum tricks developed by USC researchers.

USC scientists can now vet their theories on the world's first commercially available operational quantum computer. In October, USC founded the USC-Lockheed Martin Quantum Computing Center, which houses the D-Wave One, worth about $10 million and owned by Lockheed Martin. USC and Lockheed Martin will work together to explore the potential of the groundbreaking technology. The center and the adiabatic quantum computer, which uses quantum annealing to solve optimization problems on a 128-qubit chip set, are located on the USC Information Sciences Institute campus in Marina del Rey, Calif.

"We have been strong in quantum computing for years but this development really is a 'quantum leap' for us," said Daniel Lidar, a professor of chemistry in USC Dornsife with a joint appointment in the USC Viterbi School of Engineering, who serves as the center's scientific and technical director and who initiated the efforts culminating in the arrival of the D-Wave One. "We believe the 'Rainier' processor can pave the way toward solving some interesting algorithmic issues such as optimization problems — problems such as machine learning, automatic image recognition, and software validation."

Lidar is separately leading a team conducting research with the support of a $6.25 million Department of Defense Multidisciplinary University Research Initiative (MURI) grant issued to five academic institutions under USC leadership. The USC award is part of a $151 million MURI program involving 27 institutions. "The D-Wave chip is not uncontroversial: many researchers in the community are skeptical regarding its quantum powers," Lidar said. "An important aspect of the USC research effort will be to settle this controversy."

Stephan Haas and Paolo Zanardi of USC Dornsife, USC Viterbi faculty members and researchers on the Marina del Rey campus, along with scholars from several universities, are working with Lidar. The center's findings could lead to designs for superfast computers. "This center is very big for the quantum information community," said Zanardi, professor of physics in USC Dornsife and a newly elected fellow of the American Physical Society. "Rather than just writing our theories on the board we can finally check them on a real concrete system."

Fifteen USC Dornsife and USC Viterbi researchers, along with USC graduate students and postdoctoral scholars, are collaborating through the center, trying to better understand the perplexing questions of quantum systems. The group is part of the USC Center for Quantum Information Science and Technology (CQIST), which serves as the umbrella organization for quantum computing at USC.

Unlike a classical computer, which encodes either a one or a zero using traditional bits, quantum computers rely on qubits, units of quantum information associated with the quantum properties of a physical system. A qubit can encode the one and zero digits simultaneously — greatly speeding up the system. This property, known as superposition — coupled with the quantum state's ability to "tunnel" through energy barriers — allows a quantum computer to perform optimization calculations far faster than classical computers. By taking advantage of these properties, a quantum computer in theory could process every possible answer at the same time rather than one at a time.

Researchers will use the D-Wave processor to develop methods to construct new quantum optimization algorithms, study the fundamental physics of entanglement and lead experiments in adiabatic quantum computing. They will also focus on managing decoherence. The same ingredient that drives quantum computers to operate at fast speeds can also be a troublesome stumbling block that kicks quantum particles out of superposition, knocking the quantum system back down to that of a classical computer. Envision the quantum system as a point in space that you want to follow a precise trajectory: simple enough, except that the quantum system's continuous interaction with the environment randomly kicks the point around and off the trajectory. The key is to protect quantum information and control decoherence. "In order to allow us to outperform classical information processing devices, quantum components have to be very stable," Zanardi said. "It turns out this quantum weirdness is the extra ingredient that gives us computational speedups, compared to classical algorithms that are very fragile."

The development of optimization algorithms can help detect bugs in computer programs. In addition, optimization has the power to find a needle in a haystack, said Haas, professor of physics and astronomy and vice dean for research in USC Dornsife.
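Haas's needle-in-a-haystack image can be made concrete. The problems a quantum annealer targets can be written as finding the lowest-energy configuration of an Ising model, and classical brute force must examine up to 2^n spin assignments. A toy sketch with random couplings (illustrative only, not a D-Wave benchmark):

```python
# Toy illustration of the optimization task a quantum annealer targets:
# find the spin assignment minimizing an Ising energy sum(J_ij * s_i * s_j).
# Brute force must scan 2**n configurations -- the classical "haystack".
import itertools, random

n = 12
random.seed(1)
J = {(i, j): random.uniform(-1, 1) for i in range(n) for j in range(i + 1, n)}

def energy(spins):
    return sum(J[i, j] * spins[i] * spins[j] for (i, j) in J)

best = min(itertools.product((-1, +1), repeat=n), key=energy)
print(f"checked {2**n} configurations; ground-state energy = {energy(best):.3f}")
```

Twelve spins already mean 4,096 candidates; each added variable doubles the count, which is the exponential wall the article's "one in a billion" remark alludes to.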

"A model has countless solutions but only one of these is optimal," Haas said. "The optimal solution can be one in a billion. If you have a classical computer it would take forever to find the optimal solution. With a quantum computer the search is very much accelerated."

Haas and Tameem Albash, a postdoctoral research associate in the department of physics in USC Dornsife, are addressing how to control a quantum computing system — or how to manipulate the inputs to a system to obtain the desired effect on the system's output. By manipulating the magnetic field surrounding the device, the researchers are attempting to find the lowest-energy state of a specific quantum mechanical system, or the ground-state property of a specific problem of interest. "This chip gives us a new opportunity to flex our theoretical muscles," Albash said. "The most interesting aspect of the chip is using it to solve problems we have never been able to answer."

USC researchers are studying the various challenges associated with constructing a quantum computer, so that one can be more easily built in the future. The new center paves the way for scholars to advance knowledge in a potentially revolutionary field. "This technology is going to be a great testing ground for our theories and will enable us to develop our theories in new directions," Zanardi said. "I expect great things out of the processor. We are all excited."

Provided by USC College

fmfbrestel
Excellent. University buy-in is one of the keys to D-Wave's legitimacy. I look forward to the research papers that will be created with the use of this system. These systems aren't good at most problems, but they are very good at a small (but important) set of problems.

New Scientist

Stephen Hawking at 70: Exclusive interview

04 January 2012

When he was diagnosed with motor neurone disease aged just 21, Stephen Hawking was only expected to live a few years. He will be 70 this month, and in an exclusive interview with New Scientist he looks back on his life and work.

STEPHEN HAWKING is one of the world's greatest physicists, famous for his work on black holes. His condition means that he can now only communicate by twitching his cheek (see "The man who saves Stephen Hawking's voice"). His responses to the questions are followed by our own elaboration of the concepts he describes.

What has been the most exciting development in physics during the course of your career?
COBE's discovery of tiny variations in the temperature of the cosmic microwave background and the subsequent confirmation by WMAP that these are in excellent agreement with the predictions of inflation. The Planck satellite may detect the imprint of the gravitational waves predicted by inflation. This would be quantum gravity written across the sky.

New Scientist writes: The COBE and WMAP satellites measured the cosmic microwave background (CMB), the afterglow of the big bang that pervades all of space. Its temperature is almost completely uniform – a big boost to the theory of inflation, which predicts that the universe underwent a period of breakneck expansion shortly after the big bang that would have evened out its wrinkles. If inflation did happen, it should have sent ripples through space-time – gravitational waves – that would cause variations in the CMB too subtle to have been spotted so far. The Planck satellite, the European Space Agency's mission to study the CMB even more precisely, could well see them.

Einstein referred to the cosmological constant as his "biggest blunder". What was yours?
I used to think that information was destroyed in black holes. But the AdS/CFT correspondence led me to change my mind. This was my biggest blunder, or at least my biggest blunder in science.

NS: Black holes consume everything, including information, that strays too close. But in 1975, building on work by the Israeli physicist Jacob Bekenstein, Hawking showed that black holes slowly emit radiation, causing them to evaporate and eventually disappear. So what happens to the information they swallow? Hawking argued for decades that it was destroyed – a major challenge to ideas of continuity, and cause and effect. In 1997, however, theorist Juan Maldacena developed a mathematical shortcut, the "Anti-de Sitter/conformal field theory correspondence", or AdS/CFT. This links events within a contorted space-time geometry, such as in a black hole, with simpler physics at that space's boundary. In 2004, Hawking used this to show how a black hole's information leaks back into our universe through quantum-mechanical perturbations at its boundary, or event horizon. The recantation cost Hawking a bet made with fellow theorist John Preskill a decade earlier.

What discovery would do most to revolutionise our understanding of the universe?
The discovery of supersymmetric partners for the known fundamental particles, perhaps at the Large Hadron Collider. This would be strong evidence in favour of M-theory.

NS: The search for supersymmetric particles is a major goal of the LHC at CERN. The standard model of particle physics would be completed by finding the Higgs boson, but it has a number of problems that would be solved if all known elementary particles had a heavier "superpartner". Evidence of supersymmetry would support M-theory, the 11-dimensional version of string theory that is the best stab so far at a "theory of everything", uniting gravity with the other forces of nature.

If you were a young physicist just starting out today, what would you study?
I would have a new idea that would open up a new field.

What do you think most about during the day?
Women. They are a complete mystery.

To mark Hawking's birthday, the Centre for Theoretical Cosmology, University of Cambridge, is hosting a symposium entitled "The State of the Universe" on 8 January (watch live at ctc.cam.ac.uk/hawking70/multimedia.html). An exhibition of his life and work opens at the Science Museum, London, on 20 January.

Faster-than-light neutrinos dealt another blow

15:06 04 January 2012 by Lisa Grossman

Faster-than-light neutrinos can't catch a break. If they exist they would not only flout special relativity but also the fundamental tenet that energy is conserved in the universe. This suggests that either the speedy neutrino claim is wrong or that new physics is needed to account for it.

In September, physicists with the OPERA experiment in Gran Sasso, Italy, reported that neutrinos had apparently travelled there from CERN near Geneva, Switzerland, faster than light. The claim threatened to blow a hole in modern physics – chiefly Einstein's special theory of relativity, which set the speed of light as the absolute limit for all particles in the universe. Now a team including Shmuel Nussinov of Tel Aviv University in Israel says it could also put a dent in the principle of the conservation of energy. "This is such a holy principle that has been verified in so many ways," he says.

The speeding neutrinos were born in a particle accelerator at CERN, when protons slammed into a stationary target and produced a shower of unstable particles called pions. Each pion quickly decayed into both a neutrino and a heavier particle called a muon. The muons stopped at the end of a tunnel, but the neutrinos, which slip through most matter like ghosts, continued 730 kilometres through the Earth to the OPERA experiment.

Unequal inheritance

The neutrinos apparently outpaced light by 60 nanoseconds. But if energy is conserved – that is, if the daughter muon and neutrino together have the same amount of energy as the pion they decayed from – the neutrinos could not have gone so fast, the team say. That's because the rules for inheriting energy treat slow-moving particles differently from those moving close to the speed of light. If the neutrinos did begin their lives moving faster than light, then their slower muon siblings should have gotten the lion's share of their parents' energy. "It's like a very rich father with a son who wants to go into business and continue by the rules, and a son who's a black sheep," Nussinov says. "He bequeaths everything to the good son, and gives the other one literally nothing."

The energy available to faster-than-light neutrinos from the CERN pions is too small by a factor of 10 to explain the speeds reported, the team say. Could the neutrinos have started off slow, thereby getting a larger share of the inheritance, and then been accelerated somehow? Possibly, but Nussinov says this is unlikely because neutrinos usually do not interact with anything, making it hard to understand what could be doing the accelerating.

'Absolute contradiction'

This unequal energy inheritance is even more pronounced if the parent pion has more energy – the more energetic the pion is, the more energy it gives to the muon. This can be seen in atmospheric pions, which are produced when cosmic rays slam into the atmosphere. Atmospheric pions are 100 to 1000 times more energetic than the pions produced at CERN, and their muon and neutrino progeny have correspondingly high energies. If neutrinos really can outstrip light as much as the OPERA measurement suggests, the atmospheric pions should decay completely into muons. Curtailing the number of possible decay routes means the pions should decay less often, and the team calculates that they should slam into the Earth before they get a chance to decay. In that case, we should not see any high-energy muons or neutrinos at all. But we do.
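The unequal-inheritance rule the team invokes follows from standard two-body decay kinematics: in the pion's rest frame, the (essentially massless) neutrino from pi -> mu + nu carries E_nu = (m_pi^2 - m_mu^2) / (2 m_pi), so the muon always gets the lion's share. A quick check with the measured masses:

```python
# Standard two-body decay kinematics for pi+ -> mu+ + nu_mu (pion rest frame):
# the massless neutrino carries E_nu = (m_pi**2 - m_mu**2) / (2 * m_pi),
# so the muon inherits most of the energy, as the article describes.
M_PI = 139.57   # charged pion mass, MeV
M_MU = 105.66   # muon mass, MeV

e_nu = (M_PI**2 - M_MU**2) / (2 * M_PI)
e_mu = M_PI - e_nu
print(f"E_nu = {e_nu:.2f} MeV ({e_nu / M_PI:.0%} of the pion's rest energy)")
print(f"E_mu = {e_mu:.2f} MeV ({e_mu / M_PI:.0%})")
```

In this standard picture the muon takes roughly 79 per cent of the pion's rest energy; the paper's argument is that superluminal-neutrino kinematics would skew the split even further.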
Experiments like the IceCube neutrino telescope at the South Pole have seen high-energy neutrinos – and billions of muons to boot. "We have an absolute contradiction right there," Nussinov says. "We know there are many, many neutrinos produced with high energy."

Exotic explanations

The argument adds to a growing list of theoretical strikes against faster-than-light neutrinos. The most popular so far claims that if the particles ever broke the speed of light, they would quickly radiate away their energy and slow down. "We say energetic neutrinos of this deviant type will never be born; they say if they are born, they will die quickly," Nussinov says.

Both arguments are worrisome, agrees OPERA team member Luca Stanco of the National Institute of Nuclear Physics in Italy, who did not sign the original version of the paper because he thought it was too preliminary. "As a result, you may understand that the physics community (and myself, in particular) is still quite embarrassed by the OPERA measurement: it does not fit well in any physics frame now known," he says. "We urgently require an independent confirmation of the OPERA measurement."

If the measurement holds up, the only way out may be to break all the rules, Nussinov says. "I would have loved to have [the result be true], but it's just inconsistent with basic, basic things," he says. "The only way to avoid this thing is to assume that, well, maybe on the way they went to other dimensions or something."

Journal reference: Physical Review Letters, DOI: 10.1103/PhysRevLett.107.251801

Phys. Rev. Lett. 107, 251801 (2011): "Superluminal Neutrinos at OPERA Confront Pion Decay Kinematics"
Ramanath Cowsik (1), Shmuel Nussinov (2,3), and Utpal Sarkar (1,4)
(1) Department of Physics and McDonnell Center for the Space Sciences, Washington University, St. Louis, Missouri 63130, USA
(2) School of Physics and Astronomy, Tel Aviv University, Ramat Aviv, Tel Aviv 69978, Israel
(3) Schmidt College of Science, Chapman University, Orange, California 92866, USA
(4) Physical Research Laboratory, Ahmedabad 380009, India
Received 2 October 2011; revised 5 December 2011; published 16 December 2011

Abstract: Violation of Lorentz invariance (VLI) has been suggested as an explanation of the superluminal velocities of muon neutrinos reported by OPERA. In this Letter, we show that the amount of VLI required to explain this result poses severe difficulties with the kinematics of the pion decay, extending its lifetime and reducing the momentum carried away by the neutrinos. We show that the OPERA experiment limits α = (v_ν − c)/c < 4×10^-6. We then take recourse to cosmic-ray data on the spectrum of muons and neutrinos generated in Earth's atmosphere to provide a stronger bound on VLI: (v − c)/c < 10^-12.

© 2011 American Physical Society

(Einstein's theory of relativity holds several things sacred. One is the idea that if you rotate a particle or object, or boost it up to a high velocity, the laws of physics affecting the object should stay the same. This is called Lorentz invariance. But in some "extensions" of the standard model of particle physics, interactions of particles with certain hypothetical universal fields (very roughly analogous to the way in which Higgs bosons are supposed to make some particles massive) might lead to subtle violations of Lorentz invariance.)

Stealth tactics of bacteria revealed

04 January 2012

WE ARE now privy to the ways bacteria outsmart antibiotics, thanks to a technique which measures the evolution of antibiotic resistance. A team led by Erdal Toprak and Adrian Veres at Harvard University developed the "morbidostat", a device that constantly monitors the growth of bacteria in the presence of an antibiotic, increasing the concentration of the drug as the bacteria evolve resistance.

Using the morbidostat, the team investigated how Escherichia coli responded to three different antibiotics over 25 days. Levels of resistance increased for all three drugs. However, resistance to chloramphenicol and doxycycline developed smoothly over time, whereas resistance to trimethoprim happened in discrete steps.

The team sequenced the genome of E. coli from the final stage of the experiment. Bacteria resistant to chloramphenicol and doxycycline had a large number of changes all over their genome, suggesting that lots of small mutations outsmart the drugs. For trimethoprim resistance, most changes took place in just one gene. The bacteria had to wait for mutations to occur in this small area, which explains why resistance evolved in stages. Further sequencing revealed the same mutations occurring in the same order in every trimethoprim-resistant population (Nature Genetics, DOI: 10.1038/ng.1034). Knowing about these pathways of resistance could help in finding drug doses that minimise resistance.
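The morbidostat's core is a simple feedback loop: measure growth each cycle and add drug only when the culture outgrows a target rate, so the drug pressure tracks the evolving resistance. A toy simulation of that control logic (all constants invented for illustration, not from the study):

```python
# Toy morbidostat loop: raise drug pressure whenever the culture outgrows a
# target rate, mimicking the device described above.  All numbers are
# ILLUSTRATIVE placeholders.
TARGET_GROWTH = 0.30      # desired net growth per cycle
DRUG_STEP = 0.10          # how much one drugged dilution raises inhibition

resistance, inhibition = 0.0, 0.0
for cycle in range(10):
    growth = 0.5 * (1 - max(inhibition - resistance, 0.0))  # crude drug effect
    if growth > TARGET_GROWTH:
        inhibition += DRUG_STEP          # culture thriving: add drug
    resistance += 0.02                   # resistance slowly evolving
    print(f"cycle {cycle}: growth={growth:.2f}, drug={inhibition:.2f}, "
          f"resistance={resistance:.2f}")
```

The printout shows drug concentration ratcheting upward in step with resistance, which is exactly the tug-of-war the device exploits to watch evolution in real time.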

A scorpion's body serves as a basic eye

12:57 04 January 2012 by Zoë Corbyn

[Image: Extreme clubbing accessories (Image: Douglas D. Gaffin)]

Scorpions don't need to use their eyes to get a full picture of their surroundings: their body seems to function as a basic eye under ultraviolet light. To test the idea that the waxy cuticle covering a scorpion's body can detect light, Doug Gaffin of the University of Oklahoma in Norman exposed 40 of the arachnids to visible or UV light. He studied their behaviour both with and without "eye-blocks" – pieces of foil placed over their eyes to act like opaque glasses.

Wearing their shades, the scorpions did not move around much when illuminated by green light. But under UV light they scuttled around freely with or without the glasses, suggesting they did not rely on their eyes to see. The larva of the fruit fly is thought to be the only other creature whose body can detect light. Carl Kloock at California State University in Bakersfield says the idea complements his own work. He found that the ability of scorpions' cuticles to fluoresce in UV light affects their behaviour at night, since moonlight contains a modest amount of UV.

Journal reference: Animal Behaviour, DOI: 10.1016/j.anbehav.2011.11.014

Level-up life: how gaming can enhance your reality

04 January 2012 by Sally Adee

Playing Angry Birds or Mario Kart can warp your perception of what's real and what's virtual – and that might be just what you need.

IT WAS mid-January and the roads in New York were slick with ice. I was driving aimlessly in search of a parking space when, while turning an especially tight bend, I went into a sickening sideways skid and headed straight for a row of snow-covered cars. I wasn't expecting what happened next. Without thinking about what I was doing, I twisted the wheel in a way that I had never done before. It worked: I came out of the skid and drove away unscathed. It was only after I had parked, legs shaking and heart pounding, that I recognised the reflexes that had kicked in during my moment of panic. This wasn't the first time I had made that emergency steering movement, after all. I had done so countless times before, but on those occasions the wheel in my hands had been a white plastic controller. I had been saved by Mario Kart.

My experience was given a name earlier this year by psychologists at Nottingham Trent University in the UK and Stockholm University in Sweden. They call it "game transfer phenomenon", or GTP. In a controversial study, they described a brief mental hiccup during which a person reacts in the real world the way they would in a game. For some people, reality itself seems to temporarily warp. Could this effect be real?

Most of us are gamers now. The stereotype of a guy living in his parents' basement on a diet of Cheetos and soda is long gone. The average gamer is 34 years old and gainfully employed, and around 40 per cent are female. They play, on average, 8 hours a week, and not just on consoles; around half of the gaming activity today is on smartphones. Still, the idea of Angry Birds spilling into reality does sound far-fetched.

Indeed, if you read some of the descriptions of GTP, they can seem a little silly. After dropping his sandwich with the buttered side down, for example, one person interviewed said that he "instantly reached" for the "R2" controller button he had been using to retrieve items within PlayStation games. "My middle finger twitched, trying to reach it," he told the researchers (International Journal of Cyber Behavior, Psychology and Learning, vol 1, p 15). Mark Griffiths, one of the study's authors at Nottingham Trent, says the work provoked a torrent of letters. Half accused the researchers of disingenuously formalising idiosyncratic experiences reported by a small sample of 42 - a charge countered by their subsequent study replicating the findings in 2,000 gamers. The other half asked why Griffiths was rebranding a familiar finding. "They said, 'we've known about this for ages'," he recalls. "It's called the Tetris effect."

That term was coined in 1996 to refer to a peculiar effect caused by spending a long time moving the game's falling blocks into place. Play long enough and you could encounter all sorts of strange hallucinatory residuals: some reported witnessing bathroom tiles trembling, for example, or a floor-to-ceiling bookcase lurching down the wall. In less extreme but far more common cases, people saw moving images at the edge of their visual field when they closed their eyes. Most researchers agreed that the Tetris effect was the well-known consequence of engaging in a repetitive task. Chances are you have experienced this yourself, most commonly after a long day of driving, when you can see the road moving in front of you as you close your eyes. In fact, sleep researcher Robert Stickgold at Harvard University showed in 2000 that the Tetris effect was most pronounced during hypnagogia, the state between wakefulness and sleeping (Science, vol 290, p 350).

If GTP were an extension of the Tetris effect, then it would be nothing new - which is exactly what many researchers think. "I'm not sure I see this as unique to video games," says C. Shawn Green, a cognitive scientist at the University of Wisconsin-Madison. "It's more of a consequence of a thing you do a lot." But Angelica Ortiz de Gortari of Nottingham Trent University - one of the GTP study co-authors - says that GTP affects people in ways that the Tetris effect does not. For one thing, the habituation effect that Stickgold reported tends to be felt mainly when other sensory stimulation is removed. The brief reality excursions that characterise GTP, by contrast, take place in broad daylight in the middle of other activities. "Some instances of GTP are closer to synaesthesia," she says, in which two or more senses are involuntarily and automatically scrambled. "You're literally overlaying the rules of one reality onto a different one." What's more, these are more than hallucinations, she says. As I found on that icy day in New York, GTP can change your behaviour.

What causes that to happen? For one thing, gaming has changed profoundly since Tetris. Better graphics cards and bigger displays are deepening the illusion of reality in games. They are not just about moving joysticks and mice anymore - thanks to the Nintendo Wii and the Xbox Kinect, physical movements that mirror the real thing are involved.
Brain train Sophisticated video games have had demonstrable effects on their players. For example, people who frequently play action games often outperform non-gamers on measures of perception and cognition. Other studies have found that intense video game practice improved players' ability to carry out complex hand-eye coordination tasks and their contrast sensitivity. This shouldn't be a surprise, says Mark Burgess, a neuropsychologist at University College London. "The brain is constantly reconfiguring itself. Everything new that we learn means that some connections in the brain have been added or altered, even if temporarily," he says. As a result, Burgess says, the brain is continually creating "schemata" - behavioural ruts that have become entrenched through repetition. For example, when an experienced driver approaches a set of traffic lights and they turn red, the driver does not have to think about changing gear, braking and so on. New drivers, by contrast, must consider each step. Though he doesn't think the synaesthesia interpretation of GTP holds water, he does think it is inevitable that the brain's plasticity can "cause inefficiency when one switches to a different environment". In other words, we are only hardwired to deal with one reality at a time. A rapid switchover will have consequences. One thing is for sure: we are spending an increasing amount of time in a digital space, the rules of which don't entirely mirror those of reality, so Ortiz de Gortari expects more of the virtual world to leak into the real one.

As for me, I am just glad that my Mario Kart driving expertise kicked in when it did. Fingers crossed that's the only thing it gave me. Throwing bananas and exploding shells at opponents are vital tactics - so if we happen to meet one day, you may want to duck. Sally Adee is a features editor at New Scientist
BBC
GM silk worms make Spider-Man web closer to reality

By Pallab Ghosh Science correspondent, BBC News

Spider-Man's webbing is something that researchers have been trying to reproduce for decades

US researchers have created silkworms that are genetically modified to spin much stronger silk. Writing in the PNAS journal, scientists from the University of Wyoming say that their eventual aim is to produce silk from worms that has the toughness of spider silk. In weight-for-weight terms, spider silk is stronger than steel. Comic book hero Spider-Man generated spider silk to snare bad guys and swing among the city's skyscrapers. Researchers have been trying to reproduce such silk for decades. But it is unfeasible to "farm" spiders for the commercial production of their silk because the arachnids don't produce enough of it - coupled with their proclivity for eating each other. Silkworms, however, are easy to farm and produce vast amounts of silk - but the material is fragile. Researchers have tried for years to get the best of both worlds - super-strong silk in industrial quantities - by transplanting genes from spiders into worms. But the resulting genetically modified worms have not produced enough spider silk until now. GM worms produced by a team led by Professor Don Jarvis of Wyoming University seem to be producing a composite of worm and spider silk in large amounts - which the researchers say is just as tough as spider silk. Commenting on the work, Dr Christopher Holland, from the University of Oxford, said that the development represented a step toward being able to produce toughened silk commercially. "Essentially, what this paper has shown is that they are able to take a component of spider silk and make a silkworm spin it into a fibre alongside its own silk," he said.

The best of both worlds - scientists want to produce super strong silk in industrial quantities

"They have also managed to show that this composite, which contains bits of spider silk and mainly the silkworms' own silk, has improved mechanical properties." The main applications could be in the medical sector, creating stronger sutures, implants and ligaments. But the GM spider silk could also be used as a greener substitute for toughened plastics, which require a lot of energy to produce. There are concerns, though, about creating GM worms for industrial applications in case they escape into the wild. But according to Professor Guy Poppy of Southampton University, they would not pose an environmental threat and he believes the benefits would outweigh any risk. "It's hard to see how a silkworm producing spider silk would have any advantage in nature," he said.
INNOVATION!!!
Plan to 'Catapult' UK space tech By Jonathan Amos Science correspondent, BBC News
TECHDEMOSAT-1 (TDS-1)

The 150kg satellite should be ready to launch later this year

Payload participants aim to prove their technology is market ready

If successful, TechDemoSat is likely to become an ongoing Catapult programme

The UK Science Minister David Willetts says the next Catapult technology and innovation centre will be dedicated to developing new space applications. Three such centres have already been initiated by the government, in advanced manufacturing, cell therapy and offshore renewables. Their aim is to find the next big idea that, with the right support, can be turned into an economic success story. Ministers see space activity as an area where "UK plc" can excel. The space industry is already very healthy, and managed to sustain growth right through the recent recession. The new Catapult centre in satellite applications is intended to provide businesses with "access to orbit test facilities, to develop and demonstrate new technologies". "It will also provide access to advanced systems for data capture and analysis, supporting the development of new services delivered by satellites," Mr Willetts said in a speech on Wednesday to the thinktank Policy Exchange. "These could be in a wide range of areas such as distance learning and telemedicine, urban planning, precision agriculture, traffic management and meteorology." Jobs creator The government's Technology Strategy Board (TSB), which is driving the Catapult concept, is already developing what is expected to be a series of spacecraft that will act as the demonstration platforms. TechDemoSat-1 (TDS-1) is the first and should be ready for launch later this year. It incorporates novel hardware and software systems that their designers hope can prove their worth in orbit and go on to win export orders.

TDS-1 payloads include instruments to track ships and monitor the sea surface for freak waves, and even a self-destruct "sail" that will pull the spacecraft out of the sky at the end of its mission. A tender will be issued shortly for a consortium to set up and run the new Catapult centre. A workshop for those interested in becoming the leadership team will take place at the end of the month. The TSB envisages a third-third-third funding model in which the money to operate it comes partly from government funding, partly from competitively won institutional contracts, and partly from the private sector. The government's contribution will come out of the £200m that Prime Minister David Cameron announced last October when he initiated the technology and innovation centres programme in a speech to the Confederation of British Industry. If the satellite Catapult follows the model of the cell therapies centre, the public funding is likely to be on the order of about £10m a year. The space industry has agreed a path forward with government that aims to boost the sector. Many of the ideas were enshrined in the Space Innovation and Growth Strategy (Space-IGS) published in 2010. This document laid out a path it believed could take the UK from a position where it currently claims 6% of the global market in space products and services to 10% by 2030, creating perhaps 100,000 new hi-tech jobs in the process. 'Private university' As part of this push, the government last year inaugurated the International Space Innovation Centre (ISIC) based at Harwell in Oxfordshire. Although its name suggests there might be considerable overlap with the Catapult centre, the former's remit is much more geared to exploiting "today's technology" to the maximum, while the latter will try to pull "tomorrow's technology" into the near term. The Catapult is likely to focus on applications of R&D in four growth areas: communications, broadcasting, positioning and Earth observation. These are all areas where British technology is already globally very competitive. Mr Willetts announced the space Catapult in a speech that sought to lay out a vision for how the UK could maintain excellence in science and technology. He spoke about his desire to see a new type of university for cutting-edge research that would be set up with international partners and funded by business. And the minister also said he would set up leadership councils to help focus government policy in e-infrastructure and in synthetic biology. Mr Willetts believes the successes of his space leadership council, which brings together key players in the sector into an advisory forum, can be mirrored in other science-related industries.
Privately-funded science university plan By Sean Coughlan BBC News education correspondent

There will also be plans announced for a specialist centre for satellite technology

A new type of privately-funded science and technology university has been announced by the universities minister. The graduate institution, intended to promote cutting-edge science research, could be set up with international partners and funded by business, David Willetts has said. It follows a similar initiative in New York, where leading universities were invited to set up a research campus. Labour says Britain risks losing its world-beating position in science. In a speech on Wednesday morning, Mr Willetts set out plans for an advanced science research centre, to be created against a background of increasing globalisation and international competition in higher education. New York project "The next round of new institutions may well link existing British universities with international partners," the minister said. "The surge in international investment in science and technology would make this a key part of the mission of a new foundation." Mr Willetts invited applications to set up this new type of university - but without any additional government funding. "This time we will be looking to private finance and perhaps sponsorship from some of the businesses that are keen to recruit more British graduates," Mr Willetts said.

David Willetts wants to promote a science research base to match international competitors Labour's shadow minister for innovation and science, Chi Onwurah, says Britain is "in danger" of losing its world-beating position in science. Ms Onwurah says the announcement from Mr Willetts "underlines the government's failure so far to support science in the UK". "Britain is a leading scientific nation: we have a world-beating position in science but we are in danger of losing it," she said in a statement. "In their 20 months in power, ministers have made the wrong decisions for the long term future of science in Britain: they dismantled the RDAs without an effective replacement, scrapped Labour's long-term research investment framework, cut investment in research and have weakened the Office for Life Sciences." "Today David Willetts claimed that George Osborne's investment in science capital facilities showed his understanding of the importance of scientific research to our economy, but it was George Osborne who cut it by 40% in his first budget." Under changes to higher education proposed by the government it will become easier for overseas institutions to set up in this country - in the way that an increasing number of UK and US universities have set up campuses overseas. The minister says he is taking inspiration from a competition in New York to set up a science research institution which will help to develop hi-tech digital industries. In the wake of the financial crisis, New York city authorities were concerned about an over-dependence on banking and finance. There were worries that New York was falling behind the "knowledge hubs" that had developed around universities in Boston and California's Silicon Valley - and that New York might miss out on the creation of jobs in rising digital industries.

As such, New York invited universities around the world to bid in a competition to build a science campus in the city - with Cornell University recently announced as the winner, in a partnership with the Technion-Israel Institute of Technology. 'Knowledge hubs' This new campus will be built on an 11-acre site on the city's Roosevelt Island. It aims to generate £15bn in economic activity in the next three decades. This New York project is part of a global pattern of investment in research and innovation as a way of protecting future economic competitiveness. The French government has launched a £30bn grand project to set up a series of "innovation clusters" - in which universities, major companies and research institutions are brought together to develop knowledge-based industries. Mr Willetts wants a major city in England to offer a site for a technology campus. He also set out plans to increase non-government funding for universities by 10% and to increase the number of English institutions in the top 100 of university rankings. There will also be plans for another so-called "catapult centre" for science research, which will focus on satellite technology. "This will provide business with access to in-orbit test facilities to develop and demonstrate new satellite technologies," Mr Willetts said. Imran Khan, director of the Campaign for Science and Engineering, said: "The minister is right to underline the challenge facing the UK: we should aim to be the best place in the world for science, but we're currently way behind nations such as Germany, Japan, and the US in terms of business and industry investment in research. "Today David Willetts reiterated a whole series of positive measures the coalition is taking to incentivise more private sector investment - but no political party has yet outlined a clear alternative vision for the UK economy."
MLU
Autism may be linked to abnormal immune system Wednesday, 04 January 2012

Image credit: Rodolfo Clix/STOCK.XCHNG Immune system abnormalities that mimic those seen with autism spectrum disorders have been linked to the amyloid precursor protein (APP), reports a research team from the University of South Florida's Department of Psychiatry and the Silver Child Development Center. The study, conducted with mouse models of autism, suggests that elevated levels of an APP fragment circulating in the blood could explain the aberrations in immune cell populations and function – both observed in some autism patients. The findings were recently published online in the Journal of the Federation of American Societies for Experimental Biology. The USF researchers concluded that the protein fragment might be both a biomarker for autism and a new research target for understanding the physiology of the disorder. "Autism affects one in 110 children in the United States today," said research team leader Jun Tan, MD, PhD, professor of psychiatry and the Robert A. Silver Chair, Rashid Laboratory for Developmental Neurobiology at USF's Silver Child Development Center. "While there are reports of abnormal T-cell numbers and function in some persons affected with autism, no specific cause has been identified. The disorder is diagnosed by behavioral observation and to date no associated biomarkers have been identified."

"Not only are there no associated biomarkers, but the prognosis for autism is poor and the costs associated with care are climbing," said Francisco Fernandez, MD, department chair and head of the Silver Center. "The work of Dr. Tan and his team is a start that may lead to earlier diagnosis and more effective treatments." The amyloid precursor protein is typically the focus of research related to Alzheimer's disease. However, recent scientific reports have identified elevated levels of the particular protein fragment, called, sAPP-α, in the blood of autistic children. The fragment is a well-known growth factor for nerves, and studies imply that it plays a role in T-cell immune responses as well. To study the autism-related effects of this protein fragment on postnatal neurodevelopment and behavior, Dr. Tan and his team inserted the human DNA sequence coding for the sAPP-α fragment into the genome of a mouse model for autism. While the studies are ongoing, the researchers documented the protein fragment's effects on the immune system of the test mice. "We used molecular biology and immunohistochemistry techniques to characterize T-cell development in the thymus and also function in the spleen of the test animals," Dr. Tan said. "Then we compared transgenic mice to their wild-type litter mates." The researchers found that increased levels of sAPP-α in the transgenic mice led to increased cytotoxic T-cell numbers. The investigators also discovered subsequent impairment in the recall function of memory T-cells in the test mice, suggesting that the adaptive immune response is negatively affected in the presence of high levels of the protein fragment. "Our work suggests that the negative effects of elevated sAPP-α on the adaptive immune system is a novel mechanism underlying certain forms of autism," concluded Dr. Tan, who holds the Silver Chair in Developmental Neurobiology. "The findings also add support to the role of sAPP-α in the T-cell response." Facing complexity in the left brain/right brain paradigm Wednesday, 04 January 2012 Story Source

The left brain/right brain dichotomy has been prominent on the pop psychology scene since Nobel Laureate Roger Sperry broached the subject in the 1960s. The left is analytical while the right is creative, so goes the adage. And then there is the quasi-scientific obsession with "the face." Facial recognition technology and facial microexpressions are the stuff of television crime dramas, such as Person of Interest and Lie to Me. But Ming Meng, an assistant professor in the Department of Psychological and Brain Sciences at Dartmouth College, has brought these two together in a way that offers new insights into the organization of the brain with implications for autism. Meng and his colleagues have published their findings January 4 in the online edition of the Proceedings of the Royal Society B (Biological Sciences). Meng's novel approach is to combine functional magnetic resonance imaging (fMRI), computer vision, and psychophysics to take our understanding of brain function in a new direction. He was able to assign distinct complementary aspects of visual information processing to each side of the brain. Meng is interested in perception and considers vision its major domain. His research focuses on how the brain is organized to process visual information. The traditional approach to visual information processing has been to view it as an ordered sequence. In the early stages of processing, the right side of the brain was thought to process the left visual field and vice versa, whereas in later stages of processing the right and the left brain process the whole visual field in parallel. "I find such organization puzzling in terms of efficiency with both parts of the brain effectively processing the same thing—a waste of resources," says Meng. Instead, he proposes a division of labor, with the right side and the left doing different things.

Looking at how the brain processes faces is Meng's key to unlocking the mysteries of the left brain/right brain paradigm. The left and right fusiform gyri (spindle-shaped sections) of the temporal lobes were known to be the places where facial stimuli were processed, and Meng homed in here. "I wondered what the difference might be between the left brain and the right in processing the human face and this was the place to look," he says. But first he looked to computer-generated images for his experimental materials. Meng felt that fMRI measurement of his test subjects' reactions only to images of faces versus non-faces offered too coarse a distinction. "We needed to study the full spectrum, the stimuli that makes an image look like a face but not necessarily a face. These results would show the subtle differences between the left and right side of the brain as they dealt with this range of images," he explains. A computer algorithm generated the desired range of images that he then showed to his test subjects while taking fMRI measurements of their brain activity. Using psychophysics as behavioral testing tools, Meng analyzed the spectrum from random non-faces to genuine faces. "We were able to systematically quantify the face-semblance of each of our stimuli (images). This is important because otherwise we would only have an oversimplified 'black-white' distinction between faces and non-faces, which would not be particularly useful to differentiate the functional roles of the left and right hemispheres," Meng explains. "Only with the psychophysical face-semblance ratings, we've found that the left is involved in the graded analysis of the visual stimuli. Our results suggest the left side of the brain is processing the external physical input which resolves into a 'grey scale' while the right brain is underlying the final decision of whether or not it is a face." Meng's tripartite methodology, which has shown the differences in the left brain/right brain picture, could provide a template for studying patients with face-processing deficits, as well as a new frame of reference for autism. Faces constitute a particular challenge for autistic children. They typically avoid eye contact, diverting their gaze from another person's face. Meng suggests that, "the underlying reason for their problems with social interaction may be correlated to their problems with face perception." Knowing the organization of face processing mechanisms in normal individuals provides a good starting point for exploring how this organization might be different in people with autism.
The wonder of science Wednesday, 04 January 2012 by Mano Singham

The Wilkinson Microwave Anisotropy Probe (WMAP) team's first detailed full-sky map of the oldest light in the universe. One of the common criticisms that one hears against us science-based atheists is that our search for naturalistic explanations of hitherto mysterious phenomena, coupled with a relentless assault on irrational and unscientific thinking, results in all the wonder being drained from life. We are told, for example, that to explain that the rainbow is the product of multiple scattering of light by water droplets in the air is to somehow detract from its beauty or that when gazing at the billions of twinkling stars on a beautifully clear cloudless night, to be aware that they are the products of nuclear fusion reactions that took place billions of years ago is to reduce their grandeur. I must say that I don't understand the criticism. For me at least, understanding how these things come about actually enhances my sense of wonder about the universe. The more I learn about how the universe works and how the impersonal forces of nature created everything around us, the more I am impressed.

To illustrate my point, I am now going to show you something that I think is incredibly beautiful. It is the equation:

T = 2 tanh⁻¹(√ΩΛ) / (3 H0 √ΩΛ)

So what is so great about this equation? It is the equation that tells us the age of the universe. Note that the age T depends on just two quantities, H0 and ΩΛ, both of which are measured quantities. H0 is the value of the Hubble constant at the present time and is given by the slope of the straight line obtained when one plots the speed of distant galaxies (on the y-axis) versus the distance to those galaxies (on the x-axis). ΩΛ is the ratio of the density of dark energy in the universe to the total density of the universe. As with all scientific results, there are some basic theoretical assumptions that go into obtaining them. This particular one requires that the universe be governed by Einstein's equations of general relativity and that its current state is 'matter dominated' (i.e., the energy contribution of pure radiation is negligible) and 'flat' (i.e., the total density of the universe is at its critical value so that the curvature of space is neither convex nor concave). These 'assumptions' are supported by other measurements, so they are not arbitrary. The values of H0 and ΩΛ are obtained using satellite probes that collect a vast body of data from stars and galaxies and scientists then do a best fit to those data for multiple parameters, of which these are just two. The current values were obtained in 2009 by the WMAP (Wilkinson Microwave Anisotropy Probe) satellite launched in 2001, and are given by H0=70.5 km/s/Mpc and ΩΛ=0.726. Insert these values into the above equation (with the appropriate units) and you get that the age of the universe is 13.7 billion years. Why do I think this equation is a thing of extraordinary beauty? Just think about the implications of that equation for a moment. We humans have been around for just an infinitesimally small period of time in history and occupy an infinitesimally small part of the universe. And yet we have been able, using pure ingenuity and by steadily building upon the scientific achievements of our predecessors, to not only figure out the large-scale structure of the vast universe we happen to occupy but to determine, in a simple equation, its actual age! That is truly incredible. If that does not strike you with wonder, then I don't know what will. Furthermore, note how simple the equation is. The tanh⁻¹ function (which represents the inverse of the hyperbolic tangent) may be intimidating for some but it is such a standard mathematical function that it can be found on any scientific hand calculator. If a news report states that new satellite data have given revised best-fit values for H0 and ΩΛ, anyone can calculate the revised age of the universe themselves in a few minutes. But as this xkcd cartoon captures accurately, it is not that scientists lose their sense of wonder but that they find wonder in learning about the universe, and do not need to invoke mystery to sense it.
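To make the "few minutes" claim concrete, here is a minimal Python sketch of that calculation (the function name and the rounded unit constants are mine, not from the article):

```python
import math

def universe_age_gyr(H0_km_s_mpc, omega_lambda):
    """Age of a flat, matter-plus-dark-energy universe:
    T = 2*atanh(sqrt(OmegaLambda)) / (3 * H0 * sqrt(OmegaLambda))."""
    KM_PER_MPC = 3.0857e19         # kilometres in one megaparsec
    S_PER_GYR = 3.156e16           # seconds in one gigayear (10^9 years)
    H0 = H0_km_s_mpc / KM_PER_MPC  # Hubble constant converted to 1/s
    x = math.sqrt(omega_lambda)
    age_s = 2.0 * math.atanh(x) / (3.0 * H0 * x)
    return age_s / S_PER_GYR

# The WMAP values quoted above
print(round(universe_age_gyr(70.5, 0.726), 1))  # prints 13.7
```

Running it with the quoted WMAP numbers reproduces the 13.7 billion years; the only fiddly part is converting H0 from km/s/Mpc into inverse seconds.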

MLU/MAILONLINE SCIENCE&TECH Stem cells could hold key to 'stopping ageing' say scientists after trial triples mouse lifespan

Lifespan tripled after injection of cells from young mice

Injection made mice grow bigger and stronger

Effect on ageing-disorder cells even works in lab dish

Could hold key for injections that offer humans 'youthful vigor' By Rob Waugh Last updated at 8:24 AM on 4th January 2012


Stem cells can halt ageing - and even prolong lifespans up to three times, scientists say. An experiment proved that a single injection of stem cells could make mice live three times as long. The injection also made the mice grow bigger and stronger. Scientists think that studying the proteins within stem cells might hold the key to injections that offer a 'shot of youthful vigour' to human beings. The researchers, from the University of Pittsburgh, say further research could help us hold off the ageing process altogether. The effect was even visible on cells in a lab dish where young stem cells were placed next to prematurely ageing cells. The sick, ageing cells performed better after being placed next to the healthy ones. The U.S. researchers conducted experiments on mice modified to age prematurely, a condition known as progeria. Progeria also occurs in humans. Giving these mice shots of stem cells from young, healthy counterparts allowed them to live up to three times longer than untreated progeria mice. The researchers first studied the stem cells of the progeria sufferers. The scientists saw there was a difference in the cells of the mice with the ageing disorder - they had fewer stem cells, and the ones they did have regenerated less quickly than ordinary mouse stem cells. However, injecting stem cells into 17-day-old mice saw a huge increase in their lifespans - from an average of just 21-28 days to more than 66 days, three times longer than usual. The modified mice given stem cell shots grew almost as large as their healthy counterparts and grew new blood vessels in their brains and muscles.

Scientists think this is because the healthy stem cells helped correct abnormalities in the cells of the rapidly-ageing mice. Study author Dr Laura Niedernhofer, whose findings were published in the journal Nature Communications, said: 'Our experiments showed that mice that have progeria, a disorder of premature aging, were healthier and lived longer after an injection of stem cells from young, healthy animals. That tells us that stem cell dysfunction is a cause of the changes we see with aging.' She added: 'As the progeria mice age, they lose muscle mass in their hind limbs, hunch over, tremble, and move slowly and awkwardly. Affected mice that got a shot of stem cells just before showing the first signs of ageing were more like normal mice, and they grew almost as large. Closer examination showed new blood vessel growth in the brain and muscle, even though the stem/progenitor cells weren't detected in those tissues. In fact, the cells didn't migrate to any particular tissue after injection into the abdomen. This leads us to think that healthy cells secrete factors to create an environment that help correct the dysfunction present in the native stem cell population and aged tissue. In a culture dish experiment, we put young stem cells close to, but not touching, progeria stem cells and the unhealthy cells functionally improved.'

NS: Fountain of youth? Young stem cells make rapidly aging mice live longer and healthier
Storing children's teeth for later stem cell use Posted: 04 Jan 2012 06:12 AM PST Four-year-old Blair is about to lose her first tooth. Her parents have chosen to put it in a deep freeze that may help her one day.
Nao climbs a ladder Posted: 04 Jan 2012 05:55 AM PST A demonstration made for CLAWAR 2011: Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines, held in Paris on September 6-8, 2011 at UPMC University.
Wittgenstein vs Freud vs Schopenhauer Posted: 03 Jan 2012 07:42 PM PST I'm reading a fairly heavy (though fortunately not very long!) tome by French philosopher Jacques Bouveresse, Wittgenstein R
Alzheimer's damage occurs early Posted: 03 Jan 2012 07:33 PM PST Physician Oskar Hansson and his research group are studying biomarkers—substances present in spinal fluid and linked to Alzheimer's disease.
FDA expands use of pneumonia vaccine Posted: 03 Jan 2012 07:29 PM PST Prevnar 13, a pneumococcal 13-valent conjugate vaccine, was approved today by the U.S.
KAI
Optogenetics switch turns neurons on and off

January 4, 2012 by Editor

Molecular combination switch: two light-sensitive membrane proteins (red and purple) are linked via a connecting piece (green) and anchored into the cell wall. When the cell is illuminated with blue light, it allows positively charged ions into the cell. Orange light allows negatively charged ions in. The cell is activated or deactivated, respectively. (Credit: MPI of Biophysics) A research team at the Max Planck Institute of Biophysics in Frankfurt am Main has developed a molecular light switch that makes it possible to control cells more accurately than ever before. The combination switch consists of two different light-sensitive membrane proteins — one for on, the other for off. Optogenetics is a new field of research that aims to control cells with light, using light-sensitive proteins that occur naturally in the cell walls of certain algae and bacteria. Researchers introduce genes with the building instructions for these membrane proteins into the DNA of target cells. Depending on which proteins they use, they can create on and off switches that react to light of different wavelengths. For accurate control, it is important that the cell function can be switched off and on equally well. When the genes are introduced separately, the cell produces different numbers of copies of each protein and one type ends up dominating. Max Planck Institute of Biophysics scientists have now developed a solution that locates the genes for the on and off proteins on the same portion of DNA, along with an additional gene containing the assembly instructions for a connection piece. This interposed protein links the two switch proteins and anchors them firmly in the cell membrane. The combination light switch conceived by the researchers consists of the membrane proteins channelrhodopsin-2 and halorhodopsin. Channelrhodopsin-2 reacts to blue light by making the cell wall permeable to positively charged ions. The resulting influx of ions triggers a nerve impulse that activates the cell. Halorhodopsin has the opposite effect: when the cell is illuminated with orange light, it allows negatively charged ions in, suppressing nerve impulses. Since channelrhodopsin-2 and halorhodopsin react to light of different wavelengths, together they comprise a useful tool for switching cells on and off at will. The scientists have shown that the method they used to connect the two molecules is also suitable for use with other proteins. Ref.: Sonja Kleinlogel et al., A gene-fusion strategy for stoichiometric and co-localized expression of light-gated membrane proteins, Nature Methods, 2011 [doi: 10.1038/nmeth.1766]
Quantum computing applications in imaging January 4, 2012 by Editor
Quantum computing may have applications in imaging, according to University of Pittsburgh researchers.
Working at the interface of quantum measurement and nanotechnology, they are developing a nanoscale magnetic imaging device comprising single electrons encased in a diamond crystal (instead of a huge MRI machine), allowing researchers to study single molecules or groups of molecules inside cells, instead of the entire body, and without destroying the materials. “Think of this like a typical medical procedure — using MRI, but on single molecules or groups of molecules inside cells instead of the entire body,” says Gurudev Dutt, assistant professor in Pitt’s Department of Physics and Astronomy.
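The "quantum computing methods" mentioned in the next paragraph are, in essence, phase-estimation protocols, in which the spin is read out at a sequence of interrogation times. As a toy illustration of why that extends dynamic range — a deliberately idealized, noiseless sketch, not the paper's actual protocol, with constants and names of my own choosing:

```python
import math

GAMMA = 2 * math.pi * 28e9  # rad s^-1 T^-1, electron-spin gyromagnetic ratio (approx.)

def measured_phase(B, tau):
    # Idealized, noiseless Ramsey readout: the accumulated phase
    # GAMMA*B*tau is only observable modulo 2*pi
    return (GAMMA * B * tau) % (2 * math.pi)

def estimate_field(B_true, tau0=1e-9, doublings=10):
    # The shortest interrogation time gives a coarse but unambiguous
    # estimate, valid while GAMMA*B*tau0 < 2*pi (here: B below ~0.036 T)
    est = measured_phase(B_true, tau0) / (GAMMA * tau0)
    tau = tau0
    for _ in range(doublings):
        tau *= 2  # longer interrogation gives proportionally finer phase resolution
        phi = measured_phase(B_true, tau)
        # Resolve the 2*pi wrapping ambiguity by picking the branch
        # closest to the running estimate
        k = round((GAMMA * est * tau - phi) / (2 * math.pi))
        est = (phi + 2 * math.pi * k) / (GAMMA * tau)
    return est

print(estimate_field(3.1e-4))  # recovers the 3.1e-4 tesla test field
```

A single long interrogation time gives fine precision but wraps the phase, leaving the field ambiguous; starting coarse and doubling the time at each step keeps the unambiguous range of the shortest measurement while gaining the precision of the longest.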

Dutt and colleagues have used quantum computing methods to circumvent this hardware limitation and view the entire magnetic field. By extending the detectable field range, the Pitt researchers have improved the ratio between maximum detectable field strength and field precision by a factor of 10 compared to the standard technique used previously. Ref.: N. M. Nusran, et al., High-dynamic-range magnetometry with a single electronic spin in diamond, Nature Nanotechnology, 2011 [DOI: 10.1038/nnano.2011.225]
Videos of 28th Chaos Communication Congress talks January 4, 2012 by Editor

28C3 (credit: Chaos Computer Club)
Videos of the 28th Chaos Communication Congress (28C3) are now available. The four-day conference on technology, society, and utopia offered lectures and workshops on the effects of technological advances on society. The recordings are available for download and can be viewed on the 28C3 YouTube channel. See also: Cory Doctorow’s keynote talk “The coming war on general computation” at 28C3.
New year, new science January 4, 2012 Source: Nature News

Lake Vostok (credit: NASA) Nature looks ahead to the key findings and events that may emerge in 2012:

NASA’s car-sized rover, Curiosity, is set to arrive on Mars in August to study its watery past.

Six visionary research proposals will vie for huge grants from the European Commission’s Future and Emerging Technologies Flagship scheme for studies of graphene, robot companions for the lonely, planetary-scale modelling of human activities and their environmental impact, autonomous energy-scavenging sensors, ways to apply research data more efficiently in health care, and supercomputer simulation of the brain.

The Large Hadron Collider will gather enough data this year to either confirm or rule out the existence of the simplest form of the Higgs boson.

Junk DNA sequences will be decoded.

Two monoclonal antibodies to treat Alzheimer’s disease — solanezumab and bapineuzumab — would be a big hit if they reported positive results from phase III trials in 2012.

Russian researchers hope to finish drilling through Antarctica’s ice sheet to reach Lake Vostok, a huge freshwater lake roughly 3,750 meters beneath the surface.

South Africa and Australia will find out by March which of them might host the $2.1-billion Square Kilometer Array (SKA), which would be the world’s largest radio telescope if it is built.

SpaceX hopes to be the first commercial firm to fly an unmanned cargo craft to the International Space Station in February — a milestone in private spaceflight.

Synthetic biologists can build entire genomes from scratch, working from natural models, and they can also rewire the genetic circuitry of living things. Might 2012 see the first really useful artificial genome?

NATURE ARTICLE: Nature | News
New year, new science
Nature looks ahead to the key findings and events that may emerge in 2012.

Richard Van Noorden 03 January 2012

Let’s talk about Earth In June, scientists, politicians and campaigners of all stripes will flock to Rio de Janeiro, Brazil, for the United Nations’ fourth Earth summit, devoted to sustainable development and the green economy. The conference — undoubtedly the major environmental meeting of 2012 — comes 20 years after the UN Framework Convention on Climate Change was signed at the first UN Earth summit, also in Rio. The source of Martian methane NASA’s car-sized rover, Curiosity, is set to arrive on Mars in August. The US$2.5-billion craft will be lowered by an innovative landing system — the ‘sky crane’ — into Gale crater, where it will study rock strata in a bid to unpick the red planet’s watery past. It will also sniff for methane in Mars’s atmosphere, and could reveal whether the gas is being produced by geological processes — or by microbial martian life. Farther afield, NASA’s Kepler mission surely ought to find a true extrasolar twin for Earth, with just the right size and orbit around a Sun-like star to be habitable.

Will iCub and his robot friends win a €1-billion grant this year? Oli Scarff/Getty Images Robots, brains or graphene? Six visionary research proposals will vie for huge grants from the European Commission’s Future and Emerging Technologies Flagship scheme. The two winning projects, to be announced in the latter half of the year, will each receive €1 billion (US$1.3 billion) over the next decade. In the running are projects on graphene, the promising new form of carbon; robot companions for the lonely; planetary-scale modelling of human activities and their environmental impact; autonomous energy-scavenging sensors; ways to apply research data more efficiently in health care; and a supercomputer simulation of the brain. Majorana mystery The Large Hadron Collider, the giant particle accelerator at CERN, near Geneva in Switzerland, will gather enough data this year to either confirm or rule out the existence of the simplest form of the Higgs boson, a key part of the mechanism that is thought to confer mass on other matter. A riskier bet would be on physicists finding an example of a Majorana fermion, a hypothesized massless, chargeless entity able to serve as its own antiparticle, which could be useful for forming stable bits in quantum computing. Experiments have suggested that in materials known as topological insulators, the collective motions of electrons create a quasiparticle that behaves like a Majorana. DNA encyclopedia

Biologists know that much of what was once termed ‘junk’ DNA actually has a role. But which sequences are functional — and what do they do? The best answer so far will come with a major update from the US National Institutes of Health’s ENCODE (Encyclopedia of DNA Elements) project, which aims to identify all the functional elements in the human genome. Pharmaceutical promise Two monoclonal antibodies to treat Alzheimer’s disease — solanezumab and bapineuzumab — would be a big hit if they reported positive results from phase III trials in 2012. Both bind to the amyloid-β peptides that make up the protein plaques seen in the brains of people with the disease. Meanwhile, the US Food and Drug Administration will once again consider the thorny issue of approving obesity drugs: it rejected one last year because of worries over side effects. It will also decide whether to approve a pioneering drug for cystic fibrosis, ivacaftor, made by Vertex Pharmaceuticals of Cambridge, Massachusetts. The drug works only for people with a particular genetic mutation, but would be the first to treat the disease’s underlying cause, rather than its symptoms. And blockbuster drugs will continue to lose patent protection, including the anticlotting Plavix (clopidogrel) and the antipsychotic Seroquel (quetiapine). Raiders of the lost lake Within weeks, Russian researchers hope to finish drilling through Antarctica’s ice sheet to reach Lake Vostok, a huge freshwater lake roughly 3,750 metres beneath the surface. It’s a race against time: 10–50 metres of ice separate the team from its goal, which it must reach before the last aircraft of the season leaves in February. There’ll be more drilling research in April, when Japan’s Chikyu ship sets sail to bore into the underwater fault that caused the magnitude-9.0 Tohoku earthquake last year.

The biggest array South Africa and Australia will find out by March which of them might host the $2.1-billion Square Kilometre Array (SKA), which would be the world’s largest radio telescope if it is built. The decision will be made by the SKA’s programme development office in Manchester, UK. Meanwhile, the Atacama Large Millimeter/submillimeter Array in Chile’s Atacama Desert should be 60% complete by the end of the year. Spaceflight advances In February, SpaceX of Hawthorne, California, hopes to be the first commercial firm to fly an unmanned cargo craft to the International Space Station — a milestone in private spaceflight. As for government space efforts, China, brimming with confidence after last year’s docking of the unmanned spacecraft Shenzhou-8 with the experimental module Tiangong-1, hopes to send astronauts up for a manned docking manoeuvre this year. A useful synthetic genome Synthetic biologists can build entire genomes from scratch, working from natural models, and they can also rewire the genetic circuitry of living things. But so far, no one has united the two approaches: Craig Venter’s synthetic genome of 2010 was cribbed wholesale from a bacterium and contained no new genetic circuitry beyond a DNA watermark. Might 2012 see the first really useful artificial genome?

Nature 481, 12 (05 January 2012) doi:10.1038/481012a


January 4, 2012 by Editor

Predictors of face-selective regions are labeled from red to yellow, and scene-selective predictors are labeled from blue to light blue. The seed region is highlighted in purple. (Credit: Zeynep M Saygin et al./Nature Neuroscience) For more than a decade, neuroscientists have known that many of the cells in a brain region called the fusiform gyrus specialize in recognizing faces. However, those cells don’t act alone: They need to communicate with several other parts of the brain. By tracing those connections, MIT neuroscientists have now shown that they can accurately predict which parts of the fusiform gyrus are face-selective. This approach may allow scientists to learn more about the face-recognition impairments often seen in autism and prosopagnosia, a disorder often caused by stroke. It could also be used to determine relationships between structure and function in other parts of the brain. The study is the first to link a brain region’s connectivity with its function. No two people have the exact same fusiform gyrus structure, but using connectivity patterns, the researchers can now accurately predict which parts of an individual’s fusiform gyrus are involved in face recognition. To map the brain’s connectivity patterns, the researchers used a technique called diffusion-weighted imaging, which is based on MRI. A magnetic field applied to the brain of the person in the scanner causes water in the brain to flow in the same direction. However, wherever there are axons — the long cellular extensions that connect a neuron to other brain regions — water is forced to flow along the axon, rather than crossing it. This is because axons are coated in a fatty material called myelin, which is impervious to water. By applying the magnetic field in many different directions and observing which way the water flows, the researchers can identify the locations of axons and determine which brain regions they are connecting. Making connections The researchers found that certain patches of the fusiform gyrus were strongly connected to brain regions also known to be involved in face recognition, including the superior and inferior temporal cortices. Those fusiform gyrus patches were also most active when the subjects were performing face-recognition tasks. Based on the results in one group of subjects, the researchers created a model that predicts function in the fusiform gyrus based solely on the observed connectivity patterns. In a second group of subjects, they found that the model successfully predicted which patches of the fusiform gyrus would respond to faces. The other regions connected to the fusiform gyrus are believed to be involved in higher-level visual processing. One surprise was that some parts of the fusiform gyrus connect to a part of the brain called the cerebellar cortex, which is not thought to be part of the traditional vision-processing pathway. That area has not been studied very thoroughly, but a few studies have suggested that it might have a role in face recognition. Now that the researchers have an accurate model to predict function of fusiform gyrus cells based solely on their connectivity, they could use the model to study the brains of patients, such as severely autistic children, who can’t lie down in an MRI scanner long enough to participate in a series of face-recognition tasks. That is one of the most important aspects of the study, says Michael Beauchamp, an associate professor of neurobiology at the University of Texas Medical School.
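The train-on-one-group, test-on-another recipe described above can be summarized in a few lines. The sketch below is a generic reconstruction with synthetic stand-in data — not the authors' code, features, or model — showing how a linear map from per-voxel connectivity profiles to face selectivity, fitted in one group, can be evaluated on a held-out group:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-ins: for each fusiform voxel, a row of connection
# strengths to other brain regions (from diffusion imaging) and a
# face-selectivity score (from fMRI). Sizes are illustrative only.
rng = np.random.default_rng(0)
n_regions = 30
true_weights = rng.normal(size=n_regions)  # hidden structure-function link

def simulate_group(n_voxels):
    connectivity = rng.random((n_voxels, n_regions))
    selectivity = connectivity @ true_weights + rng.normal(0, 0.1, n_voxels)
    return connectivity, selectivity

conn_train, face_train = simulate_group(500)  # "first group of subjects"
conn_test, face_test = simulate_group(200)    # "second group"

# Fit connectivity -> function in one group, then predict face
# selectivity in the held-out group from connectivity alone
model = LinearRegression().fit(conn_train, face_train)
r = np.corrcoef(model.predict(conn_test), face_test)[0, 1]
print(f"predicted vs observed face selectivity: r = {r:.2f}")
```

The point of the held-out group is the same as in the study: if connectivity alone predicts function in subjects the model has never seen, the structure-function link is not an artifact of fitting.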

The MIT researchers are now expanding their connectivity studies into other brain regions and other visual functions, such as recognizing objects and scenes, as well as faces. They hope that such studies will also help to reveal some of the mechanisms of how information is processed at each point as it flows through the brain. Ref.: Zeynep M Saygin et al., Anatomical connectivity patterns predict face selectivity in the fusiform gyrus, Nature Neuroscience, 2011 [doi: 10.1038/nn.3001]
Wanted: supercomputer software engineers January 4, 2012 by Editor

BlueGeneL supercomputer cabinet (credit: Raul654/Wikimedia Commons)
Elite U.S. supercomputing labs are looking for software engineers with backgrounds in high-performance computing, HPCWire reports. In related news, on January 19–20, 2012, the Cornell Center for Advanced Computing (CAC) will present a National Science Foundation-sponsored training workshop on large-scale data computation and analysis. In other related news, Jeff Nichols, associate lab director in charge of scientific computing at Oak Ridge National Laboratory, who recently returned from a Beijing conference, says China is going to be “hard to compete with.” “They’ve got a lot of money. They can invest tons of money in computers…. I think we have a better handle on applications and scalability and issues surrounding scalability, especially these unique heterogeneous nodes . . . But they can afford to build and buy these machines a lot faster than we are.” In the next three to four years, China will graduate students with a lot of experience on these advanced machines and the U.S. could lose headway, he added.
TOPDOCS
The Amber Time Machine Posted: 04 Jan 2012 04:00 AM PST

Amber is one of David Attenborough’s great passions – he is captivated by its beauty and the animals frozen within it in perfect detail. In a personal journey, he traces the history of a piece he has had since he was a boy, traveling back millions of years. Watch now... C.WORLD News Atom chip on Android smartphones expected at CES Analysts looking for devices from LG and Samsung

By Matt Hamblen January 4, 2012 06:02 AM ET Computerworld - LG Electronics and Samsung are expected to unveil Android smartphones next week that use Intel's latest Atom chip, dubbed Medfield, analysts said. The move, if it pans out, could portend a shift away from ARM-based chips, which are in 95% of smartphones. The arrival of Atom-based smartphones at the International CES show in Las Vegas would join Google and Intel at the hip, said Jack Gold, an analyst at J. Gold Associates. With Google in the midst of acquiring Motorola Mobility for $12.5 billion, at least some Motorola smartphones could soon be running Atom chips as well. A joint Intel-Google effort on smartphone designs would also allow Intel to incorporate smartphone security software from McAfee, which is an Intel company. "It helps Google especially to have McAfee in the mix, because Android is not known for its security," Gold said. "Intel has spent a lot of time with Google to optimize Android for their chips."


LG, Samsung and Google could not be reached for comment on their smartphone announcements at CES, which runs Jan. 10-13. Several analysts said LG and Samsung are expected to show off Atom-based smartphones at the event. LG has set a news conference for Monday morning to discuss a range of announcements; Samsung has set a separate news conference on Monday afternoon. Presumably, new Android phones on Atom processors would run the latest Android 4.0 operating system, also known as Ice Cream Sandwich, though that isn't certain. Intel has released a photo of a reference design smartphone running Medfield, which MIT's Technology Review recently posted online. According to Technology Review, Intel Vice President of Architecture Stephen Smith said products based on Medfield would be announced in the first half of 2012. That dovetails with what Intel promised at its Intel Developer Forum in September. Technology Review said the Medfield prototype phone looked similar to an Apple iPhone 4, although it weighed less. Intel would not comment on any upcoming Atom announcements at CES. Although the Intel prototype ran Android 2.3, Gold said Intel and others would need to offer devices that use Android 4.0 to stay competitive. "Atom in smartphones could do well if they get this right, and LG's should be the real first competitive offering," Gold said. "But Intel has a lot of catching up with Qualcomm and Nvidia on smartphone chips. Medfield allows Intel to get into the game. Intel's been struggling to get a toe-hold in smartphones." Rob Enderle of Enderle Group said he also expects Samsung and LG to unveil Atom-based smartphones on Android at CES. But he agreed that ARM-based phones will be "very hard to displace." Enderle said the new Atom phones would have to be at least 20% better in performance or cheaper, or offer demonstrably better security features, to ensure significant adoption.

"Security enhancements might do it for Atom smartphones, since Atom's inherently more secure, but security isn't always the biggest selling point," Enderle said. "I really doubt Atom phones would have what it takes to succeed, especially since the ARM makers aren't standing still." Although Intel is expected to make major news at CES related to chips for ultrabooks, it is also hoping to tap into a smartphone market that will remain robust. Most analysts believe users who rely on a tablet, laptop or ultrabook for inputing long emails and producing detailed content will still want a smartphone. Intel exited the smartphone chip market when it sold Strongarm chip technology to Marvell and has taken years to build Atom into its Medfield release. "That [sale] was actually a smart decision at the time," Gold said. Mobile wars


Prior to Medfield, Atom chips were power hungry because they spread processing across several pieces of silicon. Medfield is truly on a single chip, making it more efficient and more competitive with ARM-based designs, analysts said. Although Gold has seen Medfield in operation and was impressed, he hasn't been able to compare it with ARM-based phones. Even if a smartphone design with Medfield doesn't catch on, analysts said the processor could do well in tablet computers once the Windows 8 operating system appears this year. A beta of Windows 8 is due out in February. Microsoft could also rely on Atom designs for future Windows Phone 7.5 or 8 devices. "Atom on tablets probably has a better shot than Atom on smartphones," Enderle said. Matt Hamblen covers mobile and wireless, smartphones and other handhelds, and wireless networking for Computerworld. Follow Matt on Twitter at @matthamblen or subscribe to Matt's RSS feed. His e-mail address is [email protected].

03 JAN

UT A Balanced Budget on Titan by Jason Major on January 3, 2012

Hazy Titan and the smaller, cloudless Dione seen on December 10, 2011 by the Cassini spacecraft. (NASA/JPL/SSI/J. Major)

It’s been said many times that the most Earthlike world in our solar system is not a planet at all, but rather Saturn’s moon Titan. At first it may not seem obvious why; being only a bit larger than the planet Mercury and coated in a thick opaque atmosphere containing methane and hydrocarbons, Titan sure doesn’t look like our home planet. But once it’s realized that this is the only moon known to even have a substantial atmosphere, and that atmosphere creates a hydrologic cycle on its surface that mimics Earth’s – complete with weather, rain, and gully-carving streams that feed liquid methane into enormous lakes – the similarities become more evident. Which, of course, is precisely why Titan continues to hold such fascination for scientists. Now, researchers have identified yet another similarity between Saturn’s hazy moon and our own planet: Titan’s energy budget is in equilibrium, making it much more like Earth than the gas giant it orbits. A team of researchers led by Liming Li of the Department of Earth and Atmospheric Sciences at the University of Houston in Texas has completed the first-ever investigation of the energy balance of Titan, using data acquired by telescopes and the Cassini spacecraft from 2004 to 2010. Energy balance (or “budget”) refers to the radiation a planet or moon receives from the Sun versus what it puts out. Saturn, Jupiter and Neptune emit more energy than they receive, which indicates an internal energy source. Earth radiates about the same amount as it receives, so it is said to be in equilibrium… similar to what is now shown to be the case for Titan.
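As a back-of-the-envelope check on what an energy budget "in equilibrium" means, the Python snippet below computes the sunlight Titan absorbs and the blackbody temperature that would radiate the same power back to space. The Bond albedo is a rough assumed value and the atmosphere is ignored, so this is only an order-of-magnitude illustration, not the team's analysis.

SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S_1AU = 1361.0       # solar constant at 1 AU, W/m^2
d_au = 9.58          # Saturn's (and Titan's) mean distance from the Sun, AU
albedo = 0.27        # rough Bond albedo assumed for Titan

flux = S_1AU / d_au**2                 # sunlight arriving at Titan, ~15 W/m^2
absorbed = flux * (1 - albedo) / 4.0   # spread over the whole rotating sphere
T_eq = (absorbed / SIGMA) ** 0.25      # temperature that re-emits the same power
print(f"absorbed {absorbed:.1f} W/m^2 -> equilibrium T ~ {T_eq:.0f} K")
# ~83 K, the same ballpark as Titan's observed ~94 K (about -290 F) surface:
# emitted power roughly matches absorbed power, i.e. a budget in equilibrium.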

Blue hazes hover high above thicker orange clouds over Titan's south pole (NASA/JPL/SSI) The energy absorption and reflection rates of a planet’s – or moon’s! – atmosphere are important clues to the state of its climate and weather. Different balances of energy or changes in those balances can indicate climate change – global cooling or global warming, for instance. Of course, this doesn’t mean Titan is a balmy world. At nearly 300 degrees below zero (F), it has an environment that even the most extreme Earth-based life would find inhospitable. Although Titan’s atmosphere is ten times thicker than Earth’s, its composition is very different, permitting easy passage of infrared radiation (a.k.a. “heat”) and thus producing an “anti-greenhouse” effect, unlike Earth or, on the opposite end of the scale, Venus. Still, some stable process is in place on Saturn’s moon that allows for distribution of solar energy across its surface, within its atmosphere and back out into space. With results due in from Cassini from a flyby on Jan. 2, perhaps there will soon be even more clues as to what that may be. Read more about Earth’s changing energy budget here. The team’s report was published in the AGU’s Geophysical Research Letters on December 15, 2011. Li, L., et al. (2011), The global energy balance of Titan, Geophys. Res. Lett., 38, L23201, doi:10.1029/2011GL050053. SCIAM Key Findings on Higgs Boson, Alzheimer's Drugs, Lake Vostok Set to Emerge in 2012 A look ahead also points to what might be the first commercial firm to fly an unmanned cargo craft to the International Space Station and the first useful artificial genome By Richard Van Noorden and Nature magazine | January 3, 2012

Image: Let's talk about Earth In June, scientists, politicians and campaigners of all stripes will flock to Rio de Janeiro, Brazil, for the United Nations’ fourth Earth summit, devoted to sustainable development and the green economy. The conference—undoubtedly the major environmental meeting of 2012—comes 20 years after the UN Framework Convention on Climate Change was signed at the first UN Earth summit, also in Rio. The source of Martian methane NASA’s car-sized rover, Curiosity, is set to arrive on Mars in August. The $2.5-billion craft will be lowered by an innovative landing system—the ‘sky crane’—into Gale crater, where it will study rock strata in a bid to unpick the red planet’s watery past. It will also sniff for methane in Mars’s atmosphere, and could reveal whether the gas is being produced by geological processes—or by microbial martian life. Farther afield, NASA’s Kepler mission surely ought to find a true extrasolar twin for Earth, with just the right size and orbit around a Sun-like star to be habitable. Robots, brains or graphene? Six visionary research proposals will vie for huge grants from the European Commission’s Future and Emerging Technologies Flagship scheme. The two winning projects, to be announced in the latter half of the year, will each receive €1 billion ($1.3 billion) over the next decade. In the running are projects on graphene, the promising new form of carbon; robot companions for the lonely; planetary-scale modelling of human activities and their environmental impact; autonomous energy-scavenging sensors; ways to apply research data more efficiently in health care; and a supercomputer simulation of the brain. Majorana mystery The Large Hadron Collider, the giant particle accelerator at CERN, near Geneva in Switzerland, will gather enough data this year to either confirm or rule out the existence of the simplest form of the Higgs boson, a key part of the mechanism that is thought to confer mass on other matter. A riskier bet would be on physicists finding an example of a Majorana fermion, hypothesized to be a massless, chargeless entity able to serve as its own antiparticle, which could be useful for forming stable bits in quantum computing. Experiments have suggested that in materials known as topological insulators, the collective motions of electrons create a quasiparticle that behaves like a Majorana. DNA encyclopedia Biologists know that much of what was once termed ‘junk’ DNA actually has a role. But which sequences are functional — and what do they do? The best answer so far will come with a major update from the US National Institutes of Health’s ENCODE (Encyclopedia of DNA Elements) project, which aims to identify all the functional elements in the human genome. Pharmaceutical promise Two monoclonal antibodies to treat Alzheimer’s disease — solanezumab and bapineuzumab—would be a big hit if they reported positive results from phase III trials in 2012. Both bind to the amyloid-β peptides that make up the protein plaques seen in the brains of people with the disease. Meanwhile, the US Food and Drug Administration will once again consider the thorny issue of approving obesity drugs: it rejected one last year because of worries over side effects. It will also decide whether to approve a pioneering drug for cystic fibrosis, ivacaftor, made by Vertex Pharmaceuticals of Cambridge, Massachusetts. The drug works only for people with a particular genetic mutation, but would be the first to treat the disease’s underlying cause, rather than its symptoms. 
And blockbuster

drugs will continue to lose patent protection, including the anticlotting Plavix (clopidogrel) and the antipsychotic Seroquel (quetiapine). Raiders of the lost lake Within weeks, Russian researchers hope to finish drilling through Antarctica’s ice sheet to reach Lake Vostok, a huge freshwater lake roughly 3,750 metres beneath the surface. It’s a race against time: 10–50 metres of ice separate the team from its goal, which it must reach before the last aircraft of the season leaves in February. There’ll be more drilling research in April, when Japan’s Chikyu ship sets sail to bore into the underwater fault that caused the magnitude-9.0 Tohoku earthquake last year. The biggest array South Africa and Australia will find out by March which of them might host the $2.1-billion Square Kilometre Array (SKA), which would be the world’s largest radio telescope if it is built. The decision will be made by the SKA’s programme development office in Manchester, UK. Meanwhile, the Atacama Large Millimeter/Submillimeter Array in Chile’s Atacama Desert should be 60% complete by the end of the year. Spaceflight advances In February, SpaceX of Hawthorne, California, hopes to be the first commercial firm to fly an unmanned cargo craft to the International Space Station—a milestone in private spaceflight. As for government space efforts, China, brimming with confidence after last year’s docking of the unmanned spacecraft Shenzhou-8 with the experimental module Tiangong-1, hopes to send astronauts up for a manned docking manoeuvre this year. A useful synthetic genome Synthetic biologists can build entire genomes from scratch, working from natural models, and they can also rewire the genetic circuitry of living things. But so far, no one has united the two approaches: Craig Venter’s synthetic genome of 2010 was cribbed wholesale from a bacterium and contained no new genetic circuitry beyond a DNA watermark. Might 2012 see the first really useful artificial genome? This article is reproduced with permission from the magazine Nature. The article was first published on January 3, 2012.

Higgs boson key findings again? How many locks to a Higgs? This debate poses a fascinating angle on the philosophical implications of

finding the 'god particle', really interesting new angle. http://iai.tv/video/the-ultimate-particle Deep-Brain Stimulation Found to Fix Depression Long-Term The first placebo-controlled trial of implanted electrodes is positive, but recovery is usually slow and procedures are being fine-tuned By Alison Abbott and Nature magazine | January 3, 2012

Image: nimh.nih.gov Deep depression that fails to respond to any other form of therapy can be moderated or reversed by stimulation of areas deep inside the brain. Now the first placebo-controlled study of this procedure shows that these responses can be maintained in the long term.

Neurologist Helen Mayberg at Emory University in Atlanta, Georgia, followed ten patients with major depressive disorder and seven with bipolar disorder, or manic depression, after an electrode device was implanted in the subcallosal cingulate white matter of their brains and the area continuously stimulated. All but one of twelve patients who reached the two-year point in the study had completely shed their depression or had only mild symptoms. For psychiatrists accustomed to seeing severely depressed patients fail to respond—or fail to maintain a response—to antidepressant or cognitive therapy, these results seem near miraculous. Design control “It’s almost spooky,” says Thomas Schlaepfer, a psychiatrist at the University of Bonn, Germany, who says he has seen similar long-term results in five treatment-resistant depressed patients following deep-brain stimulation (DBS) in the nucleus accumbens brain area. DBS is hardly a quick fix for depression though. Not only does it involve invasive brain surgery, but recovery is usually slow. “In our study we found that many patients didn’t get well at all in the first months—but then they started to respond after a year or more of stimulation,” says Mayberg. It’s also not a cure, she notes, as patients quickly reverted to full-blown depression if stimulation of their electrodes was discontinued. “We were particularly happy to see that bipolar patients responded as well as unipolar patients because bipolar disorder is notoriously hard to treat,” she says. Nearly all studies of DBS in depression so far have involved patients with major depressive disorder. It’s hard to design proper placebo-controlled clinical studies for DBS, because sham surgery is not ethically acceptable. But in Mayberg’s study, the patients were told that immediately after surgery they would be randomly assigned to two treatment groups, one receiving immediate stimulation, and the other receiving stimulation only after four weeks. In fact none of them was stimulated during this period. The patients showed no significant placebo effect. Placebo-controlled phase-3 clinical trials involving hundreds of patients are being carried out at multiple centres in North America and Europe by two electrode manufacturers, but those results won’t come out for several years. In the meantime, academic studies such as Mayberg’s will establish the necessary fine-tuning of procedures. “One of the huge breakthroughs of the past decade has been the understanding that depression is a disease of brain networks,” says Schlaepfer. DBS involves stimulating an underperforming network running between different brain areas, he says, and both the subcallosal cingulate and the nucleus accumbens are part of the same network. “But we are still searching for the optimal target in the network, one that may help make recovery faster.” This article is reproduced with permission from the magazine Nature. The article was first published on January 3, 2012, !!!!!!!! Workplace Rudeness Has a Ripple Effect An unpleasant employee can spread stress far beyond the office By Winnie Yu | January 3, 2012 | 6

Image: Kyle T. Webster

If you think that nasty co-worker is creating problems for you alone, think again. His rudeness may have a ripple effect that extends as far as your spouse’s workplace. A recent study at Baylor University found that working with horrible colleagues can generate far-reaching stress that follows you home, causing unhappiness for your spouse and family and ultimately affecting your partner’s job. The study was published in August in the Journal of Organizational Behavior. Study author Merideth J. Ferguson, a psychologist and an assistant professor of management at Baylor, used statistical software to analyze the relation between employee reports of co-worker rudeness and reports by the employee’s partner of home and work life. Not surprisingly, she found that exposure to rudeness created stress for both partner and family. She also found a direct correlation between the rudeness that the employee experienced and stress at the partner’s workplace. Keeping workplace stress outside the home can be difficult, especially when it is chronic, Ferguson says. Being treated unkindly by a colleague can cause loss of self-esteem, anxiety and depression, which underminesyour happiness outside of work. “Some people can successfully address that issue by being mindful of where they are and what they are doing,” Ferguson says. To do that, she suggests focusing strictly on family and friends when at home and devoting your full attention to work when you are at the office. Talking to a counselor or psychologist about the stress or learning stress-management techniques (such as taking strategic breaks) can help, too.
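The spillover-crossover pattern the study describes is easy to picture with a toy simulation: rudeness raises the employee's home stress, which in turn raises stress in the partner's job. The Python snippet below uses made-up effect sizes on synthetic data; it mimics the shape of the analysis, not Ferguson's actual data or statistical model.

import numpy as np

rng = np.random.default_rng(7)
n = 200  # hypothetical employee/partner pairs

rudeness = rng.normal(size=n)                                  # co-worker rudeness reports
home_stress = 0.6 * rudeness + 0.8 * rng.normal(size=n)        # spillover into the home
partner_work = 0.5 * home_stress + 0.9 * rng.normal(size=n)    # crossover to partner's job

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print(f"rudeness -> employee home stress: r = {corr(rudeness, home_stress):.2f}")
print(f"rudeness -> partner workplace:    r = {corr(rudeness, partner_work):.2f}")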

I like how the author, Ms. Winnie Yu, uses pronouns and place all the blame of stress in the workforce on the he's and him's. The she's must not create any stress in the workforce or at home. It would also be good if the editor of this article took some editing classes at their local college. "underminesyour" in the next to last paragraph, is not a word.

The proponents of "attitude for attitude's sake" (half the entertainment industry, at least) should read this article.

in reply to JamesDavis I noticed this gender bias as well. Very, well, rude. Ms. Yu, is it always a "he" that is a difficult, rude boss??

My own experience is that WOMEN supervisors can be even worse, and are more likely to be rude, especially to other women. Several companies where I have worked had an unwritten rule that the supervisor of the "steno pool" was a male, usually at the "steno pool's" request. Also, my experience is that a rude or difficult person at work very often has outside issues, marital or money problems that are the root cause. It very well could be the spillover effect you describe. I use this hypothesis to help me deal with the problem, to have some compassion for the person, and to not add to the rudeness but to break the chain of rudeness, to act as a buffer. It doesn't always work, there are some bad apples out there, but then bad apples don't last long usually. I sell construction products. One of the companies I represented had a notorious second generation owner. One day I was talking to a contractor and this owner's name came up. The contractor said that he had replaced the driveway at the home of this owner. The owner came home one day and was met in the yard by the wife who screamed and berated him in front of everyone about various issues while he was passive and humiliated. She did not work so I don't know where she picked up this "rudeness", but I learned where his came from. I have found that rude and difficult customers are profitable. I actually look for them. It can take a while but they can usually be "won over", and when they are, they are very loyal and considerate. And, my competitors stay away. I have also found that in general, males are less inclined to make issues a personal matter, whereas females most often make everything personal.

in reply to grovewest There's an old saying: "When men talk to their friends, they insult each other. They don't really mean it. When women talk to their friends, they compliment each other. They don't mean it either." This can be put down to "gender bias" but I see it as nutshelling behavioral patterns observable worldwide, throughout history. It underlies why women experienced the "glass ceiling" and why men rarely "invade" womens' groups. Men and women simply relate to others differently. That doesn't mean either gender is doing it wrong. We will never approach gender equality until this is recognized as a fact of life and taken into account in all interpersonal relationships including those in the workplace.

in reply to JamesDavis JamesDavis and the comments that followed fail in their understanding of proper pronoun usage, as indicated in most writers' "style books". I prefer the Strunk and White series. Most journalists abide by the grammatical rules and standards, using proper grammar, not political correctness of the day. When gender is unknown, it proper to use male pronoun, and not the cumbersome he/she and the awkward his or hers. Commentaries indicate overly sensitivity, even use the forum to bash female supervisors and co-workers; none complain about author's usage, which might be a worthwhile discussion; many people take exception to the established rule. The comments betray workplace grief by their ease to Ms. Yu's use of masculine

pronoun, by concluding that she was calling out out men as the only offenders. I did not see that. Absolutely agree with James Davis that "underminesyour" has no place in the English language. Unfortunate for Ms. Yu "undermines her" own credibility by her and editor's failure to utilize spell checker or find one the many phrases that more accurately describe the her thoughts. !!! World's Only Known Natural Quasicrystal Traced to Ancient Meteorite A theoretical physicist searched for years to find the only known natural occurrence of an exotic type of structure, the discovery of which netted the 2011 Nobel Prize in Chemistry By Richard Van Noorden and Nature magazine | January 3, 2012 | 1

Image: Paul J. Steinhardt Theoretical physicist Paul Steinhardt did not expect to spend last summer travelling across spongy tundra to a remote gold-mining region in north-eastern Russia. But that is where he spent three weeks tracing the origins of the world’s only known natural example of a quasicrystal—an exotic type of structure discovered in 1982 in a synthetic material by Dan Shechtman, a materials scientist at the Israel Institute of Technology in Haifa who netted the 2011 Nobel Prize in Chemistry for the finding. “I just grabbed the problem and held on wherever it dragged me—even across the tundra,” says Steinhardt, from Princeton University in New Jersey. The story includes secret diaries, smuggling and the discovery that nature’s quasicrystal seems to come from a meteorite some 4.5 billion years old: far from an artificial innovation, the quasicrystal may be one of the oldest minerals in existence, formed at the birth of the Solar System. The finding was published this week in the Proceedings of the National Academy of Sciences. When Shechtman first reported the atomic structure of a quasicrystal, it shocked researchers. Instead of a lattice of regularly repeating units like any normal crystal, the atoms were arrayed in a pattern that was ordered but never quite repeated itself, like an intricate three-dimensional mosaic. Around the time of the publication, Steinhardt, then at the University of Pennsylvania in Philadelphia, happened to be working with mathematician Dov Levine on the theory behind such non-repeating patterns. He later coined the term quasicrystal. Hundreds of synthetic quasicrystals have now been created in controlled conditions in laboratories, by melting and cooling metals. Steinhardt started a search to find a quasicrystal in nature, trawling through databases of x-ray diffraction patterns recorded from other materials in order to find possible candidates. But it was not until the autumn of 2008 that he was contacted by Luca Bindi, a mineralogist at the Museum of Natural History in Florence, Italy, who had found a quasicrystal grain, around 100 micrometres across, in a millimetre-sized rock fragment in the museum’s collection. In 2009, Steinhardt, Bindi and their colleagues reported in Science that the grain was a quasicrystalline alloy of aluminium, copper and iron. According to the label on the box in which it was stored, the rock came from the Koryak Mountains in Russia. Alien Origins In the latest study, Bindi joined with Steinhardt and other US scientists to analyse the rock. The ratios of isotopes of oxygen in silicate and oxide minerals around the quasicrystal grain are typical of minerals found in meteorites called carbonaceous chondrites, the team reports. This indicates that the rock is of extraterrestrial origin and very old: virtually all chondrites formed at the birth of the Solar System. It is likely, but not certain, that the quasicrystal grain within the meteorite is of roughly the

same age. It was found entwined with a silica mineral that forms only at high pressures and temperatures—such as might be created by a collision with the chondrite body. The huge array of minerals that we know on Earth today did not start to form until plate tectonics and the oxygenation of the atmosphere created new kinds of physical and chemical environments (see ‘Microbes drove Earth’s mineral evolution'). Only around a hundred minerals have the distinction of forming before that, when matter started colliding and coalescing to form the Solar System. So if the natural quasicrystal did form under astrophysical conditions—by mechanisms that the researchers still don’t understand—then it can be added to the select category of earliest minerals. The meteorite’s history on Earth is also exotic. As Steinhardt tells the story, the Florence museum had bought it in 1990 from a now-deceased private collector in Amsterdam, as part of a job lot of 10,000 samples. Bindi tracked down the collector’s widow, who agreed to let the scientists look at secret diaries that included details of an ‘exchange’—or smuggling operation—in Romania. After further detective work, including talking to a former Russian secret-service agent who had helped to smuggle the rock out of the country, the scientists found V. V. Kryachko, the man who in 1979 had first dug the rock from sticky clay in the remote Chukotka region of Russia, just across the Bering Strait from Alaska. Steinhardt and his colleagues trekked out to Chukotka last summer to examine the site for signs of quasicrystals, but have not yet published their findings. So far, says Steinhardt, hunting down natural quasicrystals has been incredibly rewarding. “I have learned so many things about materials science, the Earth, even the history of Russia. It’s an irresistible problem which has delivered fun and fascinating science and non-science.” He hopes geologists and mineralogists will keep hunting for unusual diffraction patterns—perhaps even in non-metals. “I’m not through searching for other natural quasicrystals,” he says. This article is reproduced with permission from the magazine Nature. The article was first published on January 3, 2012. KAI Abundance: The Future Is Better Than You Think January 3, 2012 Author:

Peter H. Diamandis, Steven Kotler Published:

Free Press, 2012

[+] Amazon | Providing abundance is humanity’s grandest challenge — this is a book about how we rise to meet it. We will soon be able to meet and exceed the basic needs of every man, woman and child on the planet. Abundance for all is within our grasp. This bold, contrarian view, backed up by exhaustive research, introduces our near-term future, where exponentially growing technologies and three other powerful forces are conspiring to better the lives of billions. An antidote to pessimism by tech entrepreneur turned philanthropist Peter H. Diamandis and award-winning science writer Steven Kotler.

Since the dawn of humanity, a privileged few have lived in stark contrast to the hardscrabble majority. Conventional wisdom says this gap cannot be closed. But it is closing — fast. The authors document how four forces — exponential technologies, the DIY innovator, the Technophilanthropist, and the Rising Billion — are conspiring to solve our biggest problems. Abundance establishes hard targets for change and lays out a strategic roadmap for governments, industry and entrepreneurs, giving us plenty of reason for optimism. Examining human need by category — water, food, energy, healthcare, education, freedom — Diamandis and Kotler introduce dozens of innovators making great strides in each area: Larry Page, Stephen Hawking, Dean Kamen, Daniel Kahneman, Elon Musk, Bill Joy, Stewart Brand, Jeff Skoll, Ray Kurzweil, Ratan Tata, Craig Venter, among many, many others. Topics: Singularity/Futures Buy on Amazon Related Site Content: Singularity university founder runs a school for startups | September 11, 2011 Ask Ray | Will human intelligence amplification widen the divide between ‘haves’ and ‘have nots’? | May 11, 2011 Beyond Humanity?: The Ethics of Biomedical Enhancement | January 5, 2011 2011 State of the Future | July 31, 2011 Do More Faster: TechStars Lessons to Accelerate Your Startup | March 17, 2011 BOOKS Distrust That Particular Flavor January 3, 2012 Author:

William Gibson Published:

Putnam Adult, 2012

[+] Amazon | William Gibson is known primarily as a novelist, with his work ranging from his groundbreaking first novel, Neuromancer, to his more recent contemporary bestsellers Pattern Recognition, Spook Country, and Zero History. During those nearly thirty years, though, Gibson has been sought out by widely varying publications for his insights into contemporary culture. Wired magazine sent him to Singapore to report on one of the world’s most buttoned-up states. The New York Times Magazine asked him to describe what was wrong with the Internet. Rolling Stone published his essay on the ways our lives are all “soundtracked” by the music and the culture around us. And in a speech at the 2010 Book Expo, he memorably described the interactive relationship between writer and reader. These essays and articles have never been collected until now. Some have never appeared in print at all. In addition, Distrust That Particular Flavor includes journalism from small publishers, online sources, and magazines no longer in existence. This volume will be essential reading for any lover of William Gibson’s novels. Distrust That Particular Flavor offers readers a privileged view into the mind of a writer whose thinking has shaped not only a generation of writers but our entire culture. Topics: Cognitive Science/Neuroscience | Entertainment/New Media Buy on Amazon

Related Site Content: A digital ‘magazine’ with one subscriber | March 10, 2011 Build Your Own Robot! | February 7, 2011 Amazon.com now selling more Kindle books than print books | May 22, 2011 Amazon avoids Apple toll with Kindle Cloud Reader | August 15, 2011


D.NEWS Stephen Hawking Needs Help

Analysis by Irene Klotz Tue Jan 3, 2012 11:08 AM ET

No, the famed physicist, who turns 70 on Sunday, isn't looking to pick your brains about wormholes, dark energy or alternative universes. He, like many of us these days, needs tech support. Confined to a wheelchair and dependent on a voice synthesizer to talk, Hawking relies more than most on technology to live and work. The British cosmologist and author is nearly paralyzed by the incurable motor neuron disease amyotrophic lateral sclerosis, or ALS. He lost his voice in 1985 after an emergency tracheotomy.


Hawking plans to hire an assistant to help develop and maintain the electronic speech system that allows him to communicate his vision of the universe, according to an Associated Press report. An informal job ad posted on Hawking's website said the assistant should be computer literate, ready to travel, and able to repair electronic devices "with no instruction manual or technical support," AP reports.

The ad has since been removed from Hawking's website and replaced with a note stating that the “post of Technical Assistant to Stephen Hawking will be advertised officially in due course” on the University of Cambridge’s Department of Applied Mathematics and Theoretical Physics website. The informal ad said the job pays about $38,500 a year, AP reports. Image: Stephen Hawking aboard a Zero Gravity Corp. parabolic flight in Florida in 2007. (Credit: Zero Gravity Corp). Jerome Goodwin $38K They must have meant pounds after all he is in England. Besides how much work would be involved it could be easy and could drag on for years. He has already lived longer than was predicted. What is he doing in the vomit comet? I don't think he believes we should go into space because we might meet Aliens. Charles Leo I believe 38k is above the average US salary. Plus, universities generally have job stability, very good benefits, opportunities to travel, as well as educational opportunities. Last but not least, the position opens the door in the future for interviews, talks, job connections, etc. It would be nice if it would pay higher, but all things considered, it would be a good opportunity. It's perfect for a younger graduate that's looking to travel. As for "working with a genius", that title may be debatable (and I don't mean any offense by stating this.) Shwa There are a lot of people out there willing to PAY $38.5k/year just for the opportunity to work with this man. The line of computer nerds will be long - wouldn't mind standing in it myself... Rick K "The informal ad said the job pays about $38,500 a year" That is less than an entry level salary. Who are they kidding? Aaron "The informal ad said the job pays about $38,500 a year" It also pays in osmotically absorbed awsomeness. Selladore Perhaps they should get onto it pronto, because it's sheer miracle and unimaginable determination of this man that he's still with us. Happy birthday mr. Hawking and many more to come! PO Physicists propose test for loop quantum gravity

January 3, 2012 by Lisa Zyga Enlarge Artist's illustration of loop quantum gravity. (PhysOrg.com) -- As a quantum theory of gravity, loop quantum gravity could potentially solve one of the biggest problems in physics: reconciling general relativity and quantum mechanics. But like all tentative theories of quantum gravity, loop quantum gravity has never been experimentally tested. Now in a new study, scientists have found that, when black holes evaporate, the radiation they emit could potentially reveal “footprints” of loop quantum gravity, distinct from the usual Hawking radiation that black holes are expected to emit. In this way, evaporating black holes could enable the first ever experimental test for any theory of quantum gravity. However, the proposed test would not be easy, since scientists have not yet been able to detect any kind of radiation from an evaporating black hole. The scientists, from institutions in France and the US, have published their study called “Probing Loop Quantum Gravity with

Evaporating Black Holes” in a recent issue of Physical Review Letters. “For decades, Planck-scale physics has been thought to be untestable,” coauthor Aurélien Barrau of the French National Institute of Nuclear and Particle Physics (IN2P3) told PhysOrg.com. “Nowadays, it seems that it might enter the realm of experimental physics! This is very exciting, especially in the appealing framework of loop quantum gravity.” In their study, the scientists have used algorithms to show that primordial black holes are expected to reveal two distinct loop quantum gravity signatures, while larger black holes are expected to reveal one distinct signature. These signatures refer to features in the black hole’s energy spectrum, such as broad peaks at certain energy levels.
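To see why both the number of detected quanta and the detector's energy resolution matter here, below is a toy Monte Carlo in the same spirit as (but much simpler than) the authors' simulations: a smooth Hawking-like continuum is compared with a spectrum carrying two discrete peaks, both are smeared by a Gaussian energy error, and a two-sample test asks when they can be told apart. The peak energies, peak fraction, and exponential continuum are invented for illustration.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def spectrum(n, with_lines, sigma_e):
    """Toy evaporation spectrum: exponential continuum, optionally with
    two discrete 'quantum-gravity' lines, plus Gaussian energy smearing."""
    e = rng.exponential(scale=1.0, size=n)            # smooth continuum (arb. units)
    if with_lines:
        k = rng.random(n) < 0.3                       # 30% of quanta land in the lines
        e[k] = rng.choice([0.7, 1.4], size=k.sum())   # two fixed line energies
    return e + rng.normal(0.0, sigma_e, size=n)       # finite detector resolution

for n in (100, 1000, 10000):
    for sigma_e in (0.05, 0.3):
        p = stats.ks_2samp(spectrum(n, True, sigma_e),
                           spectrum(n, False, sigma_e)).pvalue
        print(f"n={n:6d}  resolution={sigma_e:.2f}  KS p-value={p:.1e}")
# Small p-values (easy discrimination) need many quanta and/or good resolution.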

Using Monte Carlo simulations, the scientists estimated the circumstances under which they could discriminate the predicted signatures of loop quantum gravity and those of the Hawking radiation that black holes are expected to emit with or without loop quantum gravity. They found that a discrimination is possible as long as there are enough black holes or a relatively small error on the energy reconstruction. While the scientists have shown that an analysis of black hole evaporation could possibly serve as a probe for loop quantum gravity, they note that one of the biggest challenges will be simply detecting evaporating black holes. “We should be honest: this detection will be difficult,” Barrau said. “But it is far from being impossible.” He added that black holes are not the only possible probe of loop quantum gravity, and he’s currently investigating whether loop quantum gravity might have signatures in the universe’s background radiation. “I am now working on the cosmological side of loop quantum gravity,” Barrau said. “This is the other way to try to test the theory: some specific footprints in the cosmic microwave background might be detected in the future.” More information: A. Barrau, et al. “Probing Loop Quantum Gravity with Evaporating Black Holes.” Physical Review Letters 107, 251301 (2011). DOI: 10.1103/PhysRevLett.107.251301 rawa1 Abstract is here. They do want to study the quantum evaporation of microscopic primordial black holes, which were never observed. It's improbable such objects would be observable at cosmological distances. Their spectrum should exhibit distinct spectral lines by LQG in similar way, like spectrum of hydrogen atom. Other models lead to less or more smooth spectrum. http://arxiv.org/abs/1109.4239 In AWT the only primordial black holes are quite common atom nuclei, which are stable. They cannot evaporate, until they're not radioactive. If they decay, the indeed exhibit some lines in their energy spectrum. thuber Researchers are beginning to feel pressure because string theory and its offshoots like LQG remain untestable. thingumbobesquire "Evaporating black holes" sounds like an apt metaphor for the economy to me... Noumenon Researchers are beginning to feel pressure because string theory and its offshoots like LQG remain untestable. So far. What is certain is that if it is not testable, it is not science. conjecture -> hypothesis -> theory JIMBO This is interesting, but shows little to no chance of a viable phenomenology. Hawking radiation has been simulated using lasers, so perhaps this offers a chance at detection ? Almost unnoticed, is last year's measurement by the INTEGRAL sat of a New scale of quantum gravity, 14 decades below the Planck length. This re-levels the entire playing field of QG. Noumenon @TheGhostofOtto1923 You rated me a one for the above comment. Why? Or are you a drive by rater. Skultch @TheGhostofOtto1923 You rated me a one for the above comment. Why? Or are you a drive by rater. Dude, give it a rest and grow up already. Why do you care at all? Frankly, I'm kind of annoyed that now I have to ignore all your posts like I do with the Aether and neutron repulsion guys. Your comments weren't childish and worthless like this when I started on this site. Why could he have given you a 1? I don't know, but maybe it's because you are provoking an epistemology argument for the 10^34th time, and it doesn't add anything to THIS topic. ???? Will that ever get old for you? Ok. Ok. I'm in a bad mood again. 
You guys need to start being more entertaining, ASAP! ;P GreyLensman @TheGhostofOtto1923 You rated me a one for the above comment. Why? Or are you a drive by rater. The original comment is poor, your me-too is no improvement. typicalguy Researchers are beginning to feel pressure because string theory and its offshoots like LQG remain untestable. LQG is a competitor to and not a type of string theory. Noumenon @TheGhostofOtto1923 You rated me a one for the above comment. Why? Or are you a drive by rater. Dude, give it a rest and grow up already. Why do you care at all? Frankly, I'm kind of annoyed that now I have to ignore all your posts like I do with the Aether and neutron repulsion guys. Your comments weren't childish and worthless like this when I started on this site. I don't know, but maybe it's because you are provoking an epistemology argument for the 10^34th time, and it doesn't add anything to THIS topic. A) Your comment only adds to the very thing you accused me of. B) I never invoked an epistemology argument above; I invoked inductive method which is the foundation of science, based on OBSERVATION. C) My point is that he should engage in proper discussion rather than 1 rate drive-by. The comment by GreyLensman, like yours, is a mere qualitative judgement, without content relevant to my original post. This is the problem which prompted my 2nd post.

Skultch You're nitpicking at best and still acting like a child. Your off topic crusade is as transparent as the Aether/repulsion crusades. No one cares if this fits your definition of "science," or not. All I'm saying is that people might start caring about your opinions again if you just let it go. I did, and I'm much happier for it. So, you've got nothing intelligent to say or ask about irt LQG? Well, I might. What would the evidence be, exactly? Would it be the effect of some different kind of virtual particle (pair?) on the photons that reach our instruments? How? Thanks. Noumenon There is only ONE definition of science, which involves the inductive method, and the possible falsification of theories, as stated in my 1st post. This is matter of fact, not of opinion. I've stated my motivation for engaging in a "off topic crusade", in this thread.. what was yours? Eoprime There is only ONE definition of science, which involves the inductive method, and the possible falsification of theories. This is matter of fact, not of opinion. Read the article (again) and try to understand it, maybe you will find the points you are complaining about Noumenon There is only ONE definition of science, which involves the inductive method, and the possible falsification of theories. This is matter of fact, not of opinion. Read the article (again) and try to understand it, maybe you will find the points you are complaining about Reread my 1st post, I said, "so far" in response to thuber saying it is not testable, and further, "IF", "it is not testable", than it's not science. What specifically was I "complaining" about, Frank (or VD) Skultch It's a total waste of time to bring up the idea that this may or may not YET be inductive science. Bringing it up damages your credibility in that it's irrelevant and can only influence people to expect that your future posts will also be pointless. Will it add to the understanding of reality? If yes, then let's please...just....move....on already. Patterns of connections reveal brain functions January 3, 2012 by Anne Trafton

Enlarge Graphic: Christine Daniloff For more than a decade, neuroscientists have known that many of the cells in a brain region called the fusiform gyrus specialize in recognizing faces. However, those cells don’t act alone: They need to communicate with several other parts of the brain. By tracing those connections, MIT neuroscientists have now shown that they can accurately predict which parts of the fusiform gyrus are face-selective. The study, which appeared in the Dec. 25 issue of the journal Nature Neuroscience, is the first to link a brain region’s connectivity with its function. No two people have

the exact same fusiform gyrus structure, but using connectivity patterns, the researchers can now accurately predict which parts of an individual’s fusiform gyrus are involved in face recognition. This work goes a step beyond previous studies that have used magnetic resonance imaging (MRI) to locate the regions that are involved in particular functions. “Rather than just mapping the brain, what we’re doing now is adding on to that a description of function with respect to connectivity,” says David Osher, a lead author of the paper and a graduate student in the lab of John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology and Cognitive Neuroscience and a member of MIT’s McGovern Institute for Brain Research. Using this approach, scientists may be able to learn more about the face-recognition impairments often seen in autism and prosopagnosia, a disorder often caused by stroke. This method could also be used to determine relationships between structure and function in other parts of the brain. To map the brain’s connectivity patterns, the researchers used a technique called diffusion-weighted imaging, which is based on MRI. A magnetic field applied to the brain of the person in the scanner causes water in the brain to flow in the same direction. However, wherever there are axons — the long cellular extensions that connect a neuron to other brain regions — water is forced to flow along the axon, rather than crossing it. This is because axons are coated in a fatty material called myelin, which is impervious to water. By applying the magnetic field in many different directions and observing which way the water flows, the researchers can identify the locations of axons and determine which brain regions they are connecting. “For every measurable unit of the brain at this level, we have a description of how it connects with every other region, and with what strength it connects with every other region,” says Zeynep Saygin, a

lead author of the paper and a graduate student who is advised by Gabrieli and Rebecca Saxe, senior author of the paper and associate professor of brain and cognitive sciences. Gabrieli is also an author of the paper, along with Kami Koldewyn, a postdoc in MIT professor Nancy Kanwisher’s lab, and Gretchen Reynolds, a former technical assistant in Gabrieli’s lab. Making connections The researchers found that certain patches of the fusiform gyrus were strongly connected to brain regions also known to be involved in face recognition, including the superior and inferior temporal cortices. Those fusiform gyrus patches were also most active when the subjects were performing face-recognition tasks. Based on the results in one group of subjects, the researchers created a model that predicts function in the fusiform gyrus based solely on the observed connectivity patterns. In a second group of subjects, they found that the model successfully predicted which patches of the fusiform gyrus would respond to faces. “This is the first time we’ve had direct evidence of this relationship between function and connectivity, even though you certainly would have assumed that was going to be true,” says Saxe, who is also an associate member of the McGovern Institute. “One thing this paper does is demonstrate that the tools we have are sufficient to see something that we strongly believed had to be there, but that we didn’t know we’d be able to see.” The other regions connected to the fusiform gyrus are believed to be involved in higher-level visual processing. One surprise was that some parts of the fusiform gyrus connect to a part of the brain called the cerebellar cortex, which is not thought to be part of the traditional vision-processing pathway. That area has not been studied very thoroughly, but a few studies have suggested that it might have a role in face recognition, Osher says. Now that the researchers have an accurate model to predict function of fusiform gyrus cells based solely on their connectivity, they could use the model to study the brains of patients, such as severely autistic children, who can’t lie down in an MRI scanner long enough to participate in a series of face-recognition tasks. That is one of the most important aspects of the study, says Michael Beauchamp, an associate professor of neurobiology at the University of Texas Medical School. “Functional MRI is the best tool we have for looking at human brain function, but it’s not suitable for all patient groups, especially children or older people with cognitive disabilities,” says Beauchamp, who was not involved in this study. The MIT researchers are now expanding their connectivity studies into other brain regions and other visual functions, such as recognizing objects and scenes, as well as faces. They hope that such studies will also help to reveal some of the mechanisms of how information is processed at each point as it flows through the brain. Provided by Massachusetts Institute of Technology (news : web) This story is republished courtesy of MIT News (http://web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching. NS Nobel prizewinning quasicrystal fell from space

14:46 03 January 2012 by David Shiga A Nobel prizewinning crystal has just got alien status. It now seems that the only known sample of a naturally occurring quasicrystal fell from space, changing our understanding of the conditions needed for these curious structures to form. Quasicrystals are orderly, like conventional crystals, but have a more complex form of symmetry. Patterns echoing this symmetry have been used in art for centuries but materials with this kind of order on the atomic scale were not discovered until the 1980s. Their discovery, in a lab-made material composed of metallic elements including aluminium and manganese, garnered Daniel Shechtman of the Technion Israel Institute of Technology in Haifa last year's Nobel prize in chemistry. Now Paul Steinhardt of Princeton University and colleagues have evidence that the only known naturally occurring quasicrystal sample, found in a rock from the Koryak mountains in eastern Russia, is part of a meteorite. Nutty conditions Steinhardt suspected the rock might be a meteorite when a team that he led discovered the natural quasicrystal sample in 2009. But other researchers, including meteorite expert Glenn MacPherson of the Smithsonian Institution of Washington DC, were sceptical. Now Steinhardt and members of the 2009 team have joined forces with MacPherson to perform a new analysis of the rock, uncovering evidence that has finally convinced MacPherson.
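"Ordered but never repeating" has a standard one-dimensional toy: the Fibonacci chain. The Python sketch below builds one by substitution and computes a crude Fourier "diffraction" intensity, which still shows sharp peaks despite the lack of any repeating unit cell. This is a textbook illustration, not the geometry of the actual mineral.

import numpy as np

# Fibonacci chain: substitution L -> LS, S -> L yields an ordered,
# never-periodic sequence of long and short spacings.
seq = "L"
for _ in range(12):
    seq = "".join("LS" if c == "L" else "L" for c in seq)

phi = (1 + 5**0.5) / 2                                  # golden ratio
steps = np.array([phi if c == "L" else 1.0 for c in seq])
positions = np.cumsum(steps)                            # ~377 "atom" positions

# Structure factor |sum_j exp(i q x_j)|^2: sharp peaks, no periodicity needed.
q = np.linspace(0.1, 12.0, 4000)
intensity = np.abs(np.exp(1j * np.outer(q, positions)).sum(axis=1))**2
top = np.sort(q[np.argsort(intensity)[-5:]])
print("five strongest 'diffraction' bins at q =", np.round(top, 2))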

In a paper that the pair and their teams wrote together, the researchers say the rock has experienced the extreme pressures and temperatures typical of the high-speed collisions that produce meteoroids in the asteroid belt. In addition, the relative abundances of different oxygen isotopes in the rock matched those of other meteorites rather than the isotope levels of rocks from Earth. It is still not clear exactly how quasicrystals form in nature. Laboratory specimens are made by depositing metallic vapour of a carefully controlled composition in a vacuum chamber. The new discovery that they can form in space too, where the environment is more variable, suggests the crystals can be produced in a wider variety of conditions. "Nature managed to do it under conditions we would have thought completely nuts," says Steinhardt. Journal reference: Proceedings of the National Academy of Sciences, DOI: 10.1073/pnas.1111115109 SCIAM Gingrich Tops Scientific American's Geek Guide to the 2012 GOP Candidates "Newt Skywalker" nudges out Romney and Paul based on the former congressman's engagement in issues related to energy, the Internet and military weapons By Christopher Mims | January 3, 2012

CONTINUED BELOW!

Image: Flickr/AmericanSolutions The contenders for the Republican nomination in the 2012 U.S. presidential election may appear to be a fairly uniform group of middle-aged white conservatives, but when it comes to issues of science, technology and overall geek cred, none of these candidates is cut from the same cloth. In fact, Newt Gingrich nudges out Mitt Romney and Ron Paul in Scientific American's overall ranking, based on the former Congressman's engagement in issues related to energy, the Internet and military weapons, combined with his mastery of top online tools such as Twitter and a healthy appetite for science nonfiction. Paul is a geek contender based on his appeal to libertarian-leaning Silicon Valley, combined with his support of online freedoms, although he fails science when it comes to accepting evidence for anthropogenic climate change and evolution. Romney accepts evolution, accepts at least the phenomenon of climate change, if not the science showing that it is human-caused, and has deeper ties to Silicon Valley. He also has thought extensively about energy, technology and engineering issues to the point that he explicitly favors a federal program for advanced energy research. All candidates were ranked with up to five stars in three broad categories: "Geekiness" is an evaluation of whether or not the candidate qualifies as a geek. "Associations" encapsulates the degree to which he or she has been attached to causes and people in science and technology. And "Policies" sums up the degree to which the candidate engages those subjects in his or her platforms. Read on for a deep dive into the GOP candidates' personal histories, public statements and policy proposals, which gives a unique window into their understanding of the issues closest to geeks' hearts and of how the universe works. # 1 - Newt Gingrich

The two things you need to know about Gingrich's geek cred are that one of his nicknames is Newt Skywalker and that he once made the cover of Wired—back in its early, weird days—in a feature written by none other than technology investor and commentator Esther Dyson. Bob Walker, a Gingrich booster and former chairman of the U.S. House Committee on Science (now the Committee on Science, Space, and Technology), said that Gingrich "would probably be the most knowledgeable president on technology issues ever elected."

Calling Gingrich a science-fiction nerd is like saying that vampires have seen a modest resurgence in young adult literature. He has repeatedly said that Isaac Asimov's seminal Foundation trilogy (about "psychohistorians" who use mathematical models to predict the future) made a deep impression on him in his youth. Gingrich has written so much and spoken so often that it is possible to confuse the volume of his pronouncements with their frequency, but some of his ideas appear to come straight from the science fiction he has read. He has proposed using lasers against North Korea, putting mirrors in space to increase agricultural productivity, colonizing the moon, reviving a Star Wars–style orbiting missile defense system and solving climate change through geoengineering. Whereas other candidates wring their hands over the threat of Chinese currency controls, he has warned of the threat to the U.S. of that most science-fictionesque of all weapons, the electromagnetic pulse.

9 Comments

1. JamesDavis Good Lord in Heaven, or wherever you think he/she/it is located! What planet did these sub-humans, or Neanderthals, come from? You can sum up all the GOP candidates in one short sentence: "Mentally Retarded and Totally Out Of Touch With Reality". It is frightening when a presidential candidate quotes and believes that science fiction, fiction, and cartoons are real and wants to implement their fictional technology and psychology into the federal government. Wow! Peter Rabbit and George W. Bush are very proud of every one of these GOP candidates.

2. RogerPink How does being a fan of science fiction make someone qualified with regards to Science? Does reading romance novels make one a Casanova? Does reading adventure stories make one daring? Wouldn't a better way to promote his credentials be his opposition to the cancelling of the SSC in 1992? Here are his words: "And the truth is we are not really sure what we will find out, because that is part of the genius of this particular experiment. This is at the absolute frontier of our knowledge of the universe. It is our absolute frontier of our knowledge of physics.
But what we do know is that if we walk off from this project leaving it to the Europeans to dominate the outer ridge of science, if we walk away sending a signal to the Japanese that their future is with Germany, Italy, or Switzerland but not with the United States, if we decide that cheap ignorance is better than an investment in the future of science, then we will have shaped for our children and our grandchildren a real weakness." - 1992 House debate on SSC termination

I find myself less concerned about the vicissitudes of the Republican candidates (whom I won't be voting for, of course) than the absolute rubbish that science is becoming. We live in a weird time where all the outside criticisms of science are mostly rubbish (climategate, etc.), and yet there is so much wrong in the scientific community we now ignore, I suspect out of fear it might be used against us and also because we're lazy. The commoditization of papers and those nonsense metrics meant to determine "scientific worth". The increasing emphasis on technology as opposed to knowledge. And yes, absolutely illogical and sensationalist or partisan (which I believe this story to be) scientific "news" articles.

3. dbtinc Politics at its worst - being "antiscience" and "antilogic" appeals to the voters these people need to become elected. It says more about the state of the voters than the politicians. Either way it's very disheartening.

4. Dolmance He denies climate change, and is willing to see havoc done to the environment for personal gain. So whatever scientific knowledge he might possess, the value of same doesn't stack up well against the negatives at all. He's got a bad character.

5. BaldEgalitarian I found this article mildly comforting, but find our electoral process discomforting. Being represented by only the rich, the active, the loud, the talented, the organized and the popular does not properly represent the poor who cannot contribute money to a cause, the inactive who do not write to Congress, the polite who do not impose, the untalented who cannot articulate quickly, the unorganized who lack amplification and the unpopular who fear condemnation.

6. Dr. Cosmic I do not think that Ron Paul is for Net Neutrality, and Gingrich calls it "government theft".

7. Mr. Peabody II Has anyone analyzed Adolph Hitler by the same criteria? Seems like he would score pretty high on "Science-Savvy", too.

8. BcdErick Wow! I didn't know the comically named "Scientific American" was just another tentacle of the far-left DNC. Pretty sad really.

9. potomac79 Huntsman tweeted in August, "To be clear. I believe in evolution and trust scientists on global warming. Call me crazy." This set him apart from the field. It does seem, however, that he's getting reeled back in: "Because there are questions about the validity of the science, evidenced by [an unnamed] university over in Scotland recently, I think the onus is on the scientific community to provide more in the way of information, to help clarify the [climate change] situation... If there's some disruption or disconnect in terms of what other scientists have to say, let the debate play out within the scientific community." I'm just curious where exactly he's drawing the line--or, perhaps, where his campaign contributors are drawing the line--between honest differences of opinion and FUD.

MLU
MRI: Quantum computing meets medicine
Tuesday, 03 January 2012
Story Source

A new study advances toward nanoscale MRI instruments that could study the properties of specific molecules in a noninvasive way. The research, at the interface of quantum measurement and nanotechnology, suggests that quantum computing may have applications in areas outside of pure electronics.

“Think of this like a typical medical procedure—magnetic resonance imaging (MRI)—but on single molecules or groups of molecules inside cells instead of the entire body. Traditional MRI techniques don’t work well with such small volumes, so an instrument must be built to accommodate such high-precision work,” says Gurudev Dutt, assistant professor of physics and astronomy at the University of Pittsburgh.

However, a significant challenge arose for researchers working on the problem of building such an instrument: how does one measure a magnetic field accurately using the resonance of single electrons within the sensor's diamond crystal?

Resonance is defined as an object’s tendency to oscillate with higher energy at a particular frequency, and occurs naturally all around us: for example, with musical instruments, children on swings, and pendulum clocks. Resonances are particularly powerful because they allow physicists to make sensitive measurements of quantities like force, mass, and electric and magnetic fields, Dutt says. “But they also restrict the maximum field that one can measure accurately.”

In magnetic imaging, this means that physicists can only detect a narrow range of fields from molecules near the sensor’s resonant frequency, making the imaging process more difficult. “It can be done,” says Dutt, “but it requires very sophisticated image processing and other techniques to understand what one is imaging. Essentially, one must use software to fix the limitations of hardware, and the scans take longer and are harder to interpret.”

Dutt—working with postdoctoral researcher Ummal Momeen and PhD student Naufer Nusran—has used quantum computing methods to circumvent the hardware limitation and view the entire magnetic field. By extending the field, the researchers have improved the ratio between maximum detectable field strength and field precision by a factor of 10 compared to the standard technique used previously. The work, published in the journal Nature Nanotechnology, puts them one step closer to a future nanoscale MRI instrument that could study properties of molecules, materials, and cells in a noninvasive way, displaying where atoms are located without destroying them; current methods employed for this kind of study inevitably destroy the samples.

“This would have an immediate impact on our understanding of these molecules, materials, or living cells and potentially allow us to create better technologies,” says Dutt. These are only the initial results, and he expects further improvements to be made with additional research: “Our work shows that quantum computing methods reach beyond pure electronic technologies and can solve problems that, earlier, seemed to be fundamental roadblocks to making progress with high-precision measurements.”

The coming war on general computation
Tuesday, 03 January 2012

According to Cory Doctorow, the last 20 years of Internet policy have been dominated by the copyright war, but the war turns out only to have been a skirmish. The coming century will be dominated by war against the general purpose computer, and the stakes are the freedom, fortune and privacy of the entire human race.

The problem is twofold: first, there is no known general-purpose computer that can execute all the programs we can think of except the naughty ones; second, general-purpose computers have replaced every other device in our world. There are no airplanes, only computers that fly. There are no cars, only computers we sit in. There are no hearing aids, only computers we put in our ears. There are no 3D printers, only computers that drive peripherals. There are no radios, only computers with fast ADCs and DACs and phased-array antennas. Consequently anything you do to "secure" anything with a computer in it ends up undermining the capabilities and security of every other corner of modern human society.

Cory Doctorow photo credit: Paula Mariel Salischiker, pausal.co.uk, CC-BY
V.VIDEO: The coming war on general computation

KAI
Understanding the science for tomorrow: myth and reality
January 3, 2012 by Editor

Image from Understanding the Science for Tomorrow promo video (credit: The Great Courses)

In 24 video lectures on Understanding the Science for Tomorrow: Myth and Reality, Jeffrey C. Grossman, a research scientist and professor at the University of Illinois at Urbana-Champaign and MIT, presents a “scientifically accurate and enlightening survey of today’s most advanced research” in fields such as engineering, biology, chemistry, and theoretical physics, including nanotechnology, quantum computing, genetic engineering, and AI.

Topics: AI/Robotics | Biomed/Longevity | Biotech | Computers/Infotech/UI | Electronics | Energy | Entertainment/New Media | Nanotech/Materials Science | Physics/Cosmology | Singularity/Futures

Related Site Content:

1. Operational quantum computing center established at USC | November 1, 2011
2. Is quantum computing real? | September 28, 2011
3. ‘Convergence’ may lead to revolutionary advances in biomedicine, other sciences | January 5, 2011
4. The ghost of personalized medicine | June 15, 2011
5. In Pursuit of Qubits, Uniting Subatomic Particles by the Billions | January 21, 2011

Your connected vehicle is arriving
January 3, 2012
Source: Technology Review

In-car Internet (credit: BMW)

Over the next 10 years automobiles will rapidly become “connected vehicles” that access, consume, and create information and share it with drivers, passengers, public infrastructure, and machines, including other cars. Continued evolution in sensors, computing power, machine learning, and big-data analytics will bring us closer to the goals of zero accidents and real-time traffic management. Cars that are aware of their own location and the location of other vehicles will “self organize”: they will talk to one another and to the infrastructure to optimize traffic flow, minimize congestion, reduce pollution, and increase general mobility. Imagine a future in which even a 90-year-old person can remain mobile over long distances in a car that drives itself.

Read original article

Topics: AI/Robotics | Computers/Infotech/UI | Electronics | Internet/Telecom

Related Site Content:
How Google’s self-driving car works | October 20, 2011
Ford, Google team up to make smarter cars | May 12, 2011
Government aims to build a ‘Data Eye in the Sky’ | October 11, 2011
NASA announces initial designs for 2025 aircraft | January 17, 2011
NASA plans cloud marketplace for scientists | November 4, 2011

KAI/TLG REV
Your Connected Vehicle Is Arriving
As our cars become networked—to the Internet and to one another—new trends in technology and society will redefine transportation. What's certain: tomorrow's automobiles will provide experiences that go well beyond driving.

Tuesday, January 3, 2012

By Thilo Koslowski

See the rest of our Business Impact report on The Connected Vehicle.

I am passionate about cars and always have been. As a child, I imagined owning a car that would do whatever I wanted it to. Of course, it could fly as well as drive. But more important, it would do much more than simply getting me from point A to point B. My future car would look out for me, entertain me, and make sure that I would never be late for a playdate with my friends. These are no longer childish notions. The automotive and transportation industries are entering a phase of the most significant innovation since the popularization of personal automobiles a hundred years ago.

Similar to the way telephones have evolved into smart phones, over the next 10 years automobiles will rapidly become "connected vehicles" that access, consume, and create information and share it with drivers, passengers, public infrastructure, and machines, including other cars. We can already predict benefits such as reduced accident rates, improved productivity, lowered emissions, and on-demand entertainment for passengers. The rise of connected cars will lead to widespread changes affecting many kinds of businesses, not to mention governments and communities. As just one example, we are seeing collaborations between automakers and life-science companies to develop in-vehicle health-monitoring sensors that can transmit data about the driver's health in case of an emergency.

It's only recently that the importance of the connected car has become widely accepted. When I founded Gartner's global automotive advisory practice in 1999 in San Jose, California, automotive company executives scratched their heads and asked me why I didn't open the office in Detroit. Back then, it took a good 15 minutes to explain the critical role Silicon Valley would play in the automotive industry's future. Today, little convincing is required. Automotive executives, as well as managers in such industries as consumer electronics, media, the Internet, computer hardware, and financial services, are beginning to realize that new concepts of mobility will affect their business and that they need new strategies to address consumer and market opportunities.

Convergence of digital lifestyles and cars

The emergence of the connected vehicle is closely linked to that of smart phones and the mobile Internet, which, though still relatively new, are strongly shaping consumer expectations for on-the-go access to data. Consumers increasingly view the automobile in this connectivity context, and carmakers realize they must offer in-vehicle access to data from the Web in order to stay relevant. For example, vehicle navigation systems are already the location-based service that's most popular with consumers; next-generation navigation systems able to incorporate up-to-date maps and real-time traffic information are bound to be even more appealing. Expectations for accessing other digital content in the vehicle will continue to grow, and by 2016, the majority of consumers in mature automotive markets will view in-vehicle access to Web content as a key buying criterion.

The fact is, automakers now compete for customer dollars not only against each other but also against iPhones and iPads, especially among younger consumers. Data collected by us illustrates the trend. In one survey, participants were asked to choose between Internet access and owning an automobile. Among U.S. 18-to-24-year-old drivers, 46 percent said they would probably select the Internet and give up their car. Among 45-to-64-year-old drivers, only 15 percent said they would be likely to give up their car for Internet access.

This means that mechanical excellence won't be enough for automakers to impress future customers. The automotive industry must capture consumers' interest in digital lifestyle offerings and adapt the relevant technologies to the car. Although most car companies have innovative efforts under way, quite a few of these will misfire as the companies attempt to simply imitate what consumers already do on smart phones—for instance, having the car read you Facebook updates while you drive.
Instead, successful adaptations of mobile technology will enhance the experience of owning a vehicle. For example, future cars could monitor the driver's cognitive and emotional state and assess what information, and how much of it, the driver can consume at a given time. Noncritical phone calls could be routed straight to voice mail when the car is on the highway in heavy traffic, and text messages could be read out loud when it is idling at a stop light.

Changing demographics and the need for sustainability

The world's population is growing, and in developed countries it is aging. That means we have to find ways of giving more people (including the increasing number who can't drive anymore) the ability to move around. This needs to occur in the context of a global transportation scenario in which growing traffic congestion, rising energy prices, and concerns over the burning of fossil fuels are likely to be critical limiting factors.

One technology stands out in addressing these challenges: the self-driving vehicle. Over the next 10 years, continued evolution in sensors, computing power, machine learning, and big-data analytics will bring us closer to the goals of zero accidents and real-time traffic management. Cars that are aware of their own location and the location of other vehicles will "self organize": they will talk to one another and to the infrastructure in order to optimize traffic flow, minimize congestion, reduce pollution, and increase general mobility. Imagine a future in which even a 90-year-old person can remain mobile over long distances in a car that drives itself. That may mean more visits from the grandparents—and it might also mean that the car could drive them straight to a hospital in a medical emergency. The autonomous vehicle would also eliminate the dangers of distracted driving and make possible more fuel-efficient driving—for example, if the cars traveled in a platoon that minimizes wind resistance.

Self-driving cars are certain to bring up profound legal questions: Will a 10-year-old child be permitted to "drive"? What about someone who has had a few drinks? Who would be legally at fault in case of an accident involving two autonomous vehicles? While the idea of a self-driving vehicle may sound far-fetched, companies and governments have already made considerable investments to turn it into reality. For example, Google has driven autonomous vehicles hundreds of thousands of miles on U.S. roads, and the military has been developing driverless drone cars for a number of years. Fully autonomous vehicles will gradually evolve from features already in place in some cars, such as computer systems that automatically apply the brakes in stop-and-go traffic. Consumers like the idea of a self-driving car as well. In a survey I commissioned, 35 percent of U.S. vehicle owners said they would be likely to get autonomous driving features in their next new vehicle if these were an option.

New "mobility" business models

Of course, one could ask if we even need automobiles in the future. My answer: Absolutely. Cars and personal transportation are not going away. But we may increasingly replace automobile ownership with automobile access and see nontraditional companies disrupting the automotive industry's established order. Imagine always having access to the car you want when you need it. Startups such as Getaround and RelayRides already offer peer-to-peer car-sharing services, in which members can unlock a participating owner's car with the wave of a smart phone and rent it by the hour. While this idea may sound odd to some, at Gartner we have found that it is more widely accepted among younger drivers. I predict that within four years 10 percent of the urban population of the U.S. will use shared cars instead of personally owned vehicles.

But this is just the beginning of how connectivity will create new business models and challenge some of the established industry players to become "mobility providers" instead of just car companies. I also predict that by the end of 2016 at least one mega-technology company will have announced disruptive plans to launch its own automotive offering. Other industries may also be challenged by the evolution of the connected vehicle. Insurance companies, for example, will need to define new risk models based on drastically reduced accident rates. Governments might establish personal emission allowances to restrict the use of cars powered by internal combustion engines and monitor for aggressive or wasteful driving behavior. In the long term, the connected vehicle will have an impact on urban development as cities use technology to try to solve traffic, parking, and pollution problems.

The next two decades will be an incredibly creative period. The automotive industry and the vehicles we drive will change more than they have in the last century. Automobile companies must take advantage of these changes, define new values and products, and shape a new ecosystem of partnerships with technology companies. As cars move from being basic transportation to becoming intelligent systems, I am hopeful that they will continue to evoke the passions of enthusiasts like me.

Thilo Koslowski is vice president, distinguished analyst, and leader of the Automotive, Vehicle ICT & Mobility Practice at Gartner.

KAI/NYT
Defining words, without the arbiters
January 3, 2012
Source: New York Times

[+] (Credit: Wordnik)

When you search for the definition of a word in Wordnik, a vast online dictionary, it shows the information it has found on the Internet, with no editorial tinkering. When readers ask about a word, Wordnik provides definitions on the left-hand side of the screen, calling on the more than six million words it has found so far. Example sentences on the right-hand side provide further understanding of a new term.

Another innovative database is the Corpus of Contemporary American English, 1990-2011, containing 425 million words of text from articles, transcripts of conversations, and other sources. It shows how often a word is used, and the types of discourse in which it is found, be it conversational speech or academic prose. The collection also lets users see words found near a new word.

Read original article

Topics: Computers/Infotech/UI | Internet/Telecom | Social Networking/Web 2.0

Related Site Content:
Study matches brain scans with topics of thoughts | September 1, 2011
The emotional meanings of emoticons | October 16, 2011
Android, take a letter: Robotic hand helps people type | March 4, 2011
Psychopathic killers: computerized text analysis uncovers the word patterns of a predator | October 17, 2011
Unexplained communication between brain hemispheres without corpus callosum | October 21, 2011

EURATOM
Fusion project funding dispute threatens Horizon 2020
Published by EurActiv, 03 January 2012

An ongoing tussle between the EU institutions over the future funding of a controversial nuclear fusion project – which will come under the spotlight during the Danish EU presidency – threatens to hack into the European Commission’s €85-billion Horizon 2020 budget proposal. The funding dispute centres around the International Thermonuclear Experimental Reactor (ITER) project, based at the Cadarache research facility in southern France. Construction is to begin this year. The ITER reactor aims to replicate the kind of fusion that occurs in the sun, creating cheap and abundant energy that does not rely on fossil fuels.

Long-term funding unclear

At the end of 2011, funding to cover a €1.3-billion shortfall for the ITER project was secured under the Polish presidency, after the European Council and Parliament agreed to use unused EU funds to plug the gap. Funding for the next EU budget – which runs from 2014 to 2020 – has not yet been agreed, and the EU institutions have opposing views on the future of the fusion project. ITER is governed by the Euratom Treaty and is therefore outside the immediate responsibility of the EU. The Commission wants ITER to be funded under separate cover by the EU’s member states. The Commission is broadly backed by the Parliament. Although some member states want to keep ITER funding inside the EU's budget, others prefer to have it outside, with still others wanting the issue debated further. Meantime, the cost of the project has soared from an original estimate of €5 billion to €16 billion.

Commission fears ITER could jeopardise Horizon

The Commission fears that including ITER within the EU's general budget will jeopardise its proposed €85-billion framework programme for research, since the money would largely be extracted from the existing research proposals. The ongoing debate about future funding will track parallel negotiations on the size of the next EU budget, known as the multiannual financial framework, or MFF. The budget negotiations are set to run throughout the year, spanning the Danish and Cypriot EU presidencies.
A spokesman for the EU Commissioner for Industry and Entrepreneurship told EurActiv: “We proposed to take ITER out of the MFF because we believe this is the best way to ensure continuing financial support for ITER without exposing the EU budget to unexpected rising costs of such projects. It is now for member states to react to our proposal. Let's give them the necessary time to agree on what is the widest EU issue to negotiate, then we'll see.”

Helga Nowotny, president of the European Research Council – one of the research bodies set to enjoy a boost in funding under the Horizon proposals – told EurActiv: “Nothing is definitive in times of crisis and moreover, the figures [for Horizon] are those proposed by the European Commission. They still have to be confirmed in lengthy negotiations with the European Parliament and Council.”

Positions:
“I hope that it will be kept out of the multiannual financial framework so that people realise that it is an unrealistic endeavour,” said Belgian Green MEP Philippe Lamberts. “Our opposition to this project has been consistent. It is a white elephant, it is not expected to deliver any energy until 50 years from now, and it’s a moving target which we cannot afford.”

Next steps:
2012: Multiannual financial framework to be agreed, probably by end of year; ITER funding discussions will run in parallel.

02 JAN

TOPDOCS
Time Machine
Posted: 02 Jan 2012 04:00 AM PST

From the creation of the highest mountains to the opening of a flower’s petals, time controls the world around us. To understand this super-powerful force on Earth, we must wrench control of time ourselves – compressing, expanding, stopping and dissecting it, to reveal how the passing of… Watch now...

UT
Guest Post: The Cosmic Energy Inventory
by Nancy Atkinson on January 2, 2012

The Cosmic Energy Inventory chart by Markus Pössel, Haus der Astronomie. Click for larger version.

Editor’s Note: Markus Pössel is a theoretical physicist turned astronomical outreach scientist. He is the managing scientist at the Centre for Astronomy Education and Outreach “Haus der Astronomie” in Heidelberg, Germany.

Now that the old year has drawn to a close, it’s traditional to take stock. And why not think big and take stock of everything there is? Let’s base our inventory on energy. And since, as Einstein taught us, energy and mass are equivalent, that means automatically taking stock of all the mass that’s in the universe, as well – including all the different forms of matter we might be interested in. Of course, since the universe might well be infinite in size, we can’t simply add up all the energy. What we’ll do instead is look at fractions: How much of the energy in the universe is in the form of planets? How much is in the form of stars? How much is plasma, or dark matter, or dark energy?

The chart above is a fairly detailed inventory of our universe. The numbers I’ve used are from the article The Cosmic Energy Inventory by Masataka Fukugita and Jim Peebles, published in 2004 in the Astrophysical Journal (vol. 616, p. 643ff.). The chart style is borrowed from Randall Munroe’s Radiation Dose Chart over at xkcd.

These fractions will have changed a lot over time, of course. Around 13.7 billion years ago, in the big bang phase, there would have been no stars at all. And the number of, say, neutron stars or stellar black holes will have grown continuously as more and more massive stars have ended their lives, producing these kinds of stellar remnants. For this chart, following Fukugita and Peebles, we’ll look at the present era. What is the current distribution of energy in the universe?

Unsurprisingly, the values given in that article come with different uncertainties – after all, the authors are extrapolating to a pretty grand scale! The details can be found in Fukugita & Peebles’ article; for us, their most important conclusion is that the observational data and their theoretical bases are now indeed firm enough for an approximate, but differentiated and consistent picture of the cosmic inventory to emerge.

Let’s start with what’s closest to our own home. How much of the energy (equivalently, mass) is in the form of planets? As it turns out: not a lot. Based on extrapolations from what data we have about exoplanets (that is, planets orbiting stars other than the sun), just one part-per-million (1 ppm) of all energy is in the form of planets; in scientific notation: 10^-6. Let’s take “1 ppm” as the basic unit for our first chart, and represent it by a small light-green square. (Fractions of 1 ppm will be represented by partially filled such squares.) Here is the first box (of three), listing planets and other contributions of about the same order of magnitude:

So what else is in that box? Other forms of condensed matter, mainly cosmic dust, account for 2.5 ppm, according to rough extrapolations based on observations within our home galaxy, the Milky Way. Among other things, this is the raw material for future planets!

For the next contribution, a jump in scale. To the best of our knowledge, pretty much every galaxy contains a supermassive black hole (SMBH) in its central region. Masses for these SMBHs vary between a hundred thousand times the mass of our Sun and several billion solar masses. Matter falling into such a black hole (and getting caught up, intermittently, in super-hot accretion disks swirling around the SMBHs) is responsible for some of the brightest phenomena in the universe: active galaxies, including ultra high-powered quasars. The contribution of matter caught up in SMBHs to our energy inventory is rather modest, though: about 4 ppm; possibly a bit more.

What else plays in the same league? The sum total of all electromagnetic radiation produced by stars and by active galaxies (the two most important sources) over the course of the last billions of years: 2 ppm. Also, neutrinos produced during supernova explosions (at the end of the life of massive stars), or in the formation of white dwarfs (remnants of lower-mass stars like our Sun), or simply as part of the ordinary fusion processes that power ordinary stars: 3.2 ppm all in all.

Then, there’s binding energy: If two components are bound together, you will need to invest energy in order to separate them. That’s why binding energy is negative – it’s an energy deficit you will need to overcome to pry the system’s components apart. Nuclear binding energy, from stars fusing together light elements to form heavier ones, accounts for -6.3 ppm in the present universe – and the total gravitational binding energy accumulated as stars, galaxies, galaxy clusters, other gravitationally bound objects and the large-scale structure of the universe have formed over the past 14 or so billion years, for an even larger -13.4 ppm. All in all, the negative contributions from binding energy more than cancel out all the positive contributions by planets, radiation, neutrinos etc. we’ve listed so far.

Which brings us to the next level. In order to visualize larger contributions, we need a change of scale. In box 2, one square will represent a fraction of 1/20,000 or 0.00005. Put differently: Fifty of the little squares in the first box correspond to a single square in the second box:

So here, without further ado, is box 2 (including, in the upper right corner, a scale model of the first box):

Now we are in the realm of stars and related objects. By measuring the luminosity of galaxies, and using standard relations between the masses and luminosity of stars (“mass-to-light ratio”), you can get a first estimate for the total mass (equivalently: energy) contained in stars. You’ll also need to use the empirical relation (“initial mass function”) for how this mass is distributed, though: How many massive stars should there be? How many lower-mass stars? Since different stars have different lifetimes (live massively, die young), this gives estimates for how many stars out there are still in the prime of life (“main sequence stars”) and how many have already died, leaving white dwarfs (from low-mass stars), neutron stars (from more massive stars) or stellar black holes (from even more massive stars) behind. The mass distribution also provides you with an estimate of how much mass there is in substellar objects such as brown dwarfs – objects which never had sufficient mass to make it to stardom in the first place.

Let’s start small with the neutron stars at 0.00005 (1 square, at our current scale) and the stellar black holes (0.00007). Interestingly, those are outweighed by brown dwarfs which, individually, have much less mass, but of which there are, apparently, really a lot (0.00014; this is typical of stellar mass distributions – lots of low-mass stars, far fewer massive ones). Next come white dwarfs as the remnants of lower-mass stars like our Sun (0.00036). And then, much more than all the remnants or substellar objects combined, ordinary, main sequence stars like our Sun and its higher-mass and (mostly) lower-mass brethren (0.00205).
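As a quick arithmetic check (a trivial sketch; the numbers are simply the ones quoted above), the five stellar entries do add up to the 0.00267 total that the next paragraph compares against the gas and plasma contributions:

    # Re-adding the stellar contributions quoted above, as fractions of the
    # universe's total energy density (Fukugita & Peebles, via the text).
    stellar = {
        "neutron stars":       0.00005,
        "stellar black holes": 0.00007,
        "brown dwarfs":        0.00014,
        "white dwarfs":        0.00036,
        "main-sequence stars": 0.00205,
    }
    print(f"stars, brown dwarfs and remnants: {sum(stellar.values()):.5f}")
    # -> 0.00267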

Interestingly enough, in this box, stars and related objects contribute about as much mass (or energy) as more undifferentiated types of matter: molecular gas (mostly hydrogen molecules, at 0.00016), hydrogen and helium atoms (HI and HeI, 0.00062) and, most notably, the plasma that fills the void between galaxies in large clusters (0.0018) add up to a whopping 0.00258, while stars, brown dwarfs and remnants add up to 0.00267.

Further contributions of about the same order of magnitude are survivors from our universe’s most distant past: The cosmic background radiation (CMB), remnant of the extremely hot radiation interacting with equally hot plasma in the big bang phase, contributes 0.00005; the lesser-known cosmic neutrino background, another remnant of that early equilibrium, contributes a remarkable 0.0013. The binding energy from the first primordial fusion events (formation of light elements within those famous “first three minutes”) gives another contribution in this range: -0.00008.

While, in the previous box, the matter we love, know and need was not dominant, it at least made a dent. This changes when we move on to box 3. In this box, one square corresponds to 0.005. In other words: 100 squares from box 2 add up to a single square in box 3:

Box 3 is the last box of our chart. Again, a scale model of box 2 is added for comparison: All that’s in box 2 corresponds to one-square-and-a-bit in box 3.

The first new contribution: warm intergalactic plasma. Its presence is deduced from the overall amount of ordinary matter (which follows from measurements of the cosmic background radiation, combined with data from surveys and measurements of the abundances of light elements) as compared with the ordinary matter that has actually been detected (as plasma, stars, and so on). From models of large-scale structure formation, it follows that this missing matter should come in the shape (non-shape?) of a diffuse plasma, which isn’t dense (or hot) enough to allow for direct detection. This cosmic filler substance amounts to 0.04, or 85% of ordinary matter, showing just how much of a fringe phenomenon those astronomical objects we usually hear and read about really are.

The final two (dominant) contributions come as no surprise for anyone keeping up with basic cosmology: dark matter at 23% is, according to simulations, the backbone of cosmic large-scale structure, with ordinary matter no more than icing on the cake. Last but not least, there’s dark energy with its contribution of 72%, responsible both for the cosmos’ accelerated expansion and for the 2011 physics Nobel Prize.

Minority inhabitants of a part-per-million type of object made of non-standard cosmic matter – that’s us. But at the same time, we are a species that, its cosmic fringe position notwithstanding, has made remarkable strides in unravelling the big picture – including the cosmic inventory represented in this chart.

__________________________________________

Here is the full chart for you to download: the PNG version (1200×900 px, 233 kB) or the lovingly hand-crafted SVG version (29 kB). The chart “The Cosmic Energy Inventory” is licensed under Creative Commons BY-NC-SA 3.0. In short: You’re free to use it non-commercially; you must add the proper credit line “Markus Pössel [www.haus-der-astronomie.de]”; if you adapt the work, the result must be available under this or a similar license.

Technical notes: As is common in astrophysics, Fukugita and Peebles give densities as fractions of the so-called critical density; in the usual cosmological models, that density, evaluated at any given time (in this case: the present), is critical for determining the geometry of the universe. Using very precise measurements of the cosmic background radiation, we know that the average density of the universe is indistinguishable from the critical density. For simplicity’s sake, I’m skipping this detour in the main text and quoting all of F & P’s numbers as “fractions of the universe’s total energy (density)”.

For the supermassive black hole contributions, I’ve neglected the fraction ?n in F & P’s article; that’s why I’m quoting a lower limit only. The real number could theoretically be twice the quoted value; it’s apparently more likely to be close to the value given here, though. For my gravitational binding energy, I’ve added F & P’s primeval gravitational binding energy (no. 4 in their list) and their binding energy from dissipative gravitational settling (no. 5). The fact that the content of box 3 adds up not quite to 1, but to 0.997, is an artefact of rounding not quite consistently when going from box 2 to box 3. I wanted to keep the sum of all that’s in box 2 at the precision level of that box.
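One detail the technical notes leave implicit is the value of the critical density itself. The standard Friedmann-model expression (textbook material, not a formula given in the article) is

    \rho_c = \frac{3 H_0^2}{8 \pi G} \approx 9 \times 10^{-27}\ \mathrm{kg\,m^{-3}} \quad \text{for } H_0 \approx 70\ \mathrm{km\,s^{-1}\,Mpc^{-1}},

so the 23% dark-matter share quoted above, for example, corresponds to an average density of roughly 2 × 10^-27 kg per cubic metre of space.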

Nancy Atkinson is Universe Today's Senior Editor. She also is the host of the NASA Lunar Science Institute podcast and works with the Astronomy Cast and 365 Days of Astronomy podcasts. Nancy is also a NASA/JPL Solar System Ambassador.

lcrowell: This chart is worth printing out and putting on an office wall. LC

Stephen B: When they realise the universe has been around forever, goes forever and will be here forever, to put it simply, this only applies to our own backyard. In a few years as technology gives us more reach to further points in the great "out there", it'll be so exciting to see how the equations keep true to the reality of it all.

Baksa Péter: typo negative: "in scientific notation: 10-6" is 4.

DavidPaul54: Trying to find "GOD" is fun, interesting and amazing. "HE" is hiding well, ...huh.?...lol.

Steve Dougherty: Very informative and useful article. Any idea or way of predicting the degree or range of accuracy of all this? And what might these numbers be solved to 100 years from now?

MLU
Life, the Universe, and Everything: What are the Odds?
Monday, 02 January 2012
by Dan Falk
Story Source

Have you ever wondered how likely – or unlikely – it is that you exist? Although it may sound pie-in-the-sky, it’s really a scientific problem, though you don’t have to be a scientist to be captivated by it. Take, for example, the wonderfully-named Cosmicomics of 20th-century Italian writer Italo Calvino – a collection of whimsical, science-fiction-flavoured short stories. One of the stories, called “How Much Shall We Bet,” involves two characters, the narrator (with the unpronounceable name “Qfwfq”) and someone named “Dean (k)yK.” The two men seem to have existed since before the beginning of the universe – somehow separate from the universe, whatever that could mean – and they seem to be immortal. All they do is make an endless series of bets regarding what sorts of things will happen in their cosmos.

As you might imagine, the series of events that they bet on, and the series of events that actually unfold, are rather familiar: They seem to resemble the actual events that have unfolded in the history of our own universe. Their first bet is on the formation of atoms; the narrator bets for it, while Dean bets against it. They go on betting on the formation of various chemical elements, and, looking billions of years ahead, they bet on whether the Assyrians will invade Mesopotamia. We’re told that Dean always bets no, “not because he believed the Assyrians wouldn’t do it, but because he refused to think there would ever be Assyrians and Mesopotamia and the Earth and a human race.”

Let’s begin with the big philosophical questions: First there’s the issue of determinism – roughly, whether the “stuff that happens” in the universe is largely, or perhaps completely, determined by what came before. This is something that thinkers have wrestled with for 2,500 years, and I won’t attempt to add to that discussion here; but it is worth mentioning that most versions of determinism seem to place free will in jeopardy, making them rather unappetizing (though not necessarily wrong). (But I would say that, wouldn’t I, if I were destined to say it?)

Secondly, assuming that the future is not fully determined by the present, there’s the string of probabilities associated with each development along the way to “us.” Thinking again of Calvino’s story: Before you can have Assyrians, you have to have human beings, and before you can have human beings you have to have life, and before you can have life you have to have a habitable planet orbiting a star at just the right distance… it does sound like a leaning tower of improbabilities, doesn’t it? In my next blog post, I’ll explore what I think is the weakest link in that chain – the appearance of intelligent life. But first, let’s have some more fun with the ideas and the numbers.

Certainly, the more specific the outcome, the more improbable it seems. If you consider some particular state of affairs, and then ask what the odds are, starting from today and going back even a short time (let alone the 3.8 billion years to when life first appeared on this planet), that particular state will seem extraordinarily unlikely. For example, imagine turning the clock back five years. From that perspective, what were the odds that, on this particular day, you would be sitting in this particular room, in this city, reading this particular sentence? And what about something even more basic – say, your own existence?

A couple of months ago, a “probability chart” produced by Harvard Law School blogger Ali Binazir went somewhat viral, encouraging people to contemplate this very question. In the chart, Binazir calculates just how improbable it was that the right sperm from your father hooked up with the right egg produced by your mother – by his estimate, it’s about one chance in 400 quadrillion (that number seems only slightly more tame in scientific notation: 4 x 10^17). And that’s hardly the whole battle: To even get to that stage, all of your ancestors, going all the way back to the beginning of life on Earth, had to survive to reproductive age.
Multiplying the string of probabilities together, he concludes that the odds of your existence are an astronomical one in 10^2,685,000. (As you can imagine, not everyone in the blogosphere was kind to Binazir; one asked if it was painful to pull those numbers out of you-know-where.) To be sure, we can quibble about the precise figures. But I’m sure we can agree that the chances of anything specific happening, viewed from a remote enough point in the past, seem absurdly low.

And yet, for some reason, we often weave stories in which historical events have a flavour of inevitability to them. Think how many science fiction stories you’ve read on the theme of time travel, in which the time traveller attempts to “change history,” only to find that what was going to happen, happens anyway. Push history, and it pushes back. If you’re a Stephen King fan, you’ll know that his latest book, 11/22/63, involves a time traveller who attempts to prevent the Kennedy assassination (which of course took place on the date that gives the book its title). As you might guess, even with several years’ lead time, preventing the fatal shot from being fired from the Dallas book depository is no simple task. As filmmaker Errol Morris puts it in his review of King’s book: “What if history is too forceful to redirect? What if jiggering the engine produces no favourable outcome – merely a postponement of the inevitable? If he had lived, Kennedy might not have escalated the war in Vietnam, and might have kept America out of a bloody mire. But we don’t know. What if we were headed there anyway? Then our tampering might only make things worse. It is not historical inevitability, but something close.”

These kinds of questions, about the inevitability (or otherwise) of history, have made their way into our popular culture, so I’m happy to give the last word to Lisa Simpson. I’m thinking of a Halloween episode in which Lisa had lost a tooth; as part of an experiment for a science fair project, she leaves the tooth in a glass of cola overnight. Sure enough, the next morning she sees a peculiar mould growing on it; and looking through her microscope, she sees that she’s created little cave men. Some hours later she looks again, and the little people are undergoing what appears to be the Renaissance; soon, one of the little people is seen nailing something to the cathedral door. She gasps: “I’ve created Lutherans!” More on the likelihood of life – and intelligent life in particular – next time.

A super-resolution microscope has been built
Monday, 02 January 2012
Story Source

Optical microscopes are still second to none when it comes to analyzing biological samples. However, their low resolution, improved only in recent years in STED microscopes, continues to be a problem. A device of this type, one of the first in Poland, has been constructed by a student of the Faculty of Physics, University of Warsaw.

Due to the diffraction limit, optical microscopes will never be able to discern details smaller than 200 nanometres – or so it was believed only a dozen or so years ago. In recent years, scientists have managed to overcome this limit and build super-resolution devices, including, for example, STED confocal microscopes. A prototype device of this type has recently been built at the Faculty of Physics, University of Warsaw (FUW), as part of Joanna Oracz’s MA thesis. As of next year, the new microscope will be used not only for research in the field of optics but also to analyze biological samples.

There are many imaging techniques with a resolution of the order of nanometres (billionths of a metre) known to science, for example, electron or atomic force microscopy. These techniques require special preparation of samples and make it possible to observe only the surface itself. When it comes to samples of biological origin, not infrequently living ones, optical microscopy is still second to none. One of its advantages is the possibility of observing the spatial structure of the sample. A major disadvantage, however, is a low resolution. An optical microscope makes it possible to discern details no smaller than half the wavelength of the light illuminating the sample. This limit is due to diffraction, which makes it impossible to focus the beam of light onto a point. As a result, if we use a red light source with a wavelength of 635 nanometres, we can, at best, see details around 300 nanometres in size.

In 1994 Stefan W. Hell from the Max-Planck-Institut für biophysikalische Chemie in Göttingen proposed a theoretical way to overcome the diffraction limit in optical microscopy by means of stimulated emission depletion (STED). Five years later he built the first super-resolution STED fluorescence microscope.

In standard fluorescence confocal microscopy, a laser beam scans a biological sample and locally excites dye molecules introduced into the sample earlier. Upon excitation, the molecules begin to emit light. The light is passed through a filter and recorded by a detector located behind a confocal aperture. Due to the size of the aperture, light from out-of-focus planes is eliminated, increasing the contrast of the image. The dye itself is selected in such a way that it accumulates in those parts of a living cell that are of interest to researchers.

An additional laser beam – the depletion beam – is used in STED microscopy. Given its wavelength, the beam induces stimulated emission in the dye molecules it illuminates. Molecules that have lost energy as a result of stimulated emission are no longer able to fluoresce. Therefore, their light (similarly to the light from stimulated emission) will not pass through the filter in front of the detector, and they will not be visible on the recorded image. The essence of the STED method lies in the fact that the depletion beam is donut-shaped. If a beam of this shape is properly synchronized in time and space with the illuminating beam, fluorescence will occur first and foremost in the area of the sample located in the centre of the depletion beam.
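The numbers in this article follow from two textbook formulas: the Abbe diffraction limit, d ≈ λ/(2·NA), which for NA = 1 reduces to the “half the wavelength” rule of thumb used above (635 nm → roughly 300 nm), and Stefan Hell’s STED resolution scaling, d ≈ λ/(2·NA·√(1 + I/I_sat)). A minimal sketch; the numerical aperture and saturation factor are illustrative assumptions, not the Warsaw group’s actual parameters:

    import math

    # Textbook formulas only; NA and the saturation factor are illustrative
    # assumptions, not parameters of the FUW prototype described above.
    wavelength_nm = 635.0       # red laser, as in the article's example
    numerical_aperture = 1.4    # typical oil-immersion objective
    saturation = 10.0           # depletion-beam intensity I / I_sat

    d_limit = wavelength_nm / (2 * numerical_aperture)   # Abbe limit
    d_sted = d_limit / math.sqrt(1 + saturation)         # STED scaling

    print(f"diffraction-limited detail: ~{d_limit:.0f} nm")  # ~227 nm
    print(f"with STED depletion:        ~{d_sted:.0f} nm")   # ~68 nm

Raising the depletion intensity shrinks the effective fluorescent spot further, which is how STED setups push toward the 60 nm target mentioned below.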
“Thanks to the second beam, the area of the sample emitting light as a result of fluorescence is distinctly smaller than the diameter of the laser beams. The effect is as if the illuminating beam were better focused, meaning that we can scan the sample with a higher resolution,” explains Joanna Oracz, adding that when she began working on her device a year ago, there was only one STED microscope in Poland, purchased for a million and a half euros.

The confocal microscope with a STED setup was built at the Faculty of Physics, University of Warsaw, using commercially available elements. The greatest problem was to ensure that both laser beams overlapped. “In order to observe the STED effect, both beams need to be ideally aligned – the minimum of the depletion beam needs to closely overlap with the centre of the excitation beam,” says Oracz.

The prototype microscope at FUW has a resolution of about 100 nm, over two times higher than that of a standard confocal microscope. Work is still underway to increase the resolution. “The advantage of our microscope is the possibility of controlling all parameters and studying the physics of the optical phenomena occurring,” stresses Oracz, currently a PhD student at the Ultrafast Phenomena Lab of the Institute of Experimental Physics FUW. The aim is to reach a resolution of about 60 nm. It would make it possible to observe details as minute as dendritic spines of neurons.

“It would not have been possible to construct such a sophisticated device without collaboration with other scientific institutions,” stresses Prof. Czesław Radzewicz, head of the Ultrafast Phenomena Lab of the Institute of Experimental Physics FUW. The scientists relied, among other things, on the experience gained during the construction of a confocal microscope at the Laser Centre of the Institute of Physical Chemistry, Polish Academy of Sciences and the Faculty of Physics, University of Warsaw. The samples were dyed at the Nencki Institute of Experimental Biology, Polish Academy of Sciences.

KAI
The coming war on general computation
January 2, 2012 by Editor

[+] Cory Doctorow's talk at the Chaos Computer Congress

The coming century will be dominated by war against the general-purpose computer, and the stakes are the freedom, fortune and privacy of the entire human race, said Cory Doctorow in his “The coming war on general computation” keynote talk at the Chaos Computer Congress in Berlin.

“The last 20 years of Internet policy have been dominated by the copyright war, but the war turns out only to have been a skirmish,” he said. “The coming century will be dominated by war against the general purpose computer, and the stakes are the freedom, fortune and privacy of the entire human race.

“The problem is twofold: first, there is no known general-purpose computer that can execute all the programs we can think of except the naughty ones; second, general-purpose computers have replaced every other device in our world. There are no airplanes, only computers that fly. There are no cars, only computers we sit in. There are no hearing aids, only computers we put in our ears. There are no 3D printers, only computers that drive peripherals. There are no radios, only computers with fast ADCs and DACs and phased-array antennas. Consequently anything you do to “secure” anything with a computer in it ends up undermining the capabilities and security of every other corner of modern human society.

“And general purpose computers can cause harm — whether it’s printing out AR15 components, causing mid-air collisions, or snarling traffic. So the number of parties with legitimate grievances against computers is going to continue to multiply, as will the cries to regulate PCs.

“The primary regulatory impulse is to use combinations of code-signing and other “trust” mechanisms to create computers that run programs that users can’t inspect or terminate, that run without users’ consent or knowledge, and that run even when users don’t want them to. The upshot: a world of ubiquitous malware, where everything we do to make things better only makes it worse, where the tools of liberation become tools of oppression.

“Our duty and challenge is to devise systems for mitigating the harm of general purpose computing without recourse to spyware, first to keep ourselves safe, and second to keep computers safe from the regulatory impulse.”

V.VIDEO: The coming war on general computation

Related Site Content:
UCB students’ solar vehicle to compete in world’s premier solar car race | May 9, 2011
Mind vs. Machine | February 14, 2011
The World’s Technological Capacity to Store, Communicate, and Compute Information | February 11, 2011
Flexible paper computer morphs into smartphone or tablet | May 5, 2011
Bottlenose social-media dashboard launches | December 13, 2011

Anonymous: beyond the mask
January 2, 2012 by Editor

[+] Anonymous (credit: Jim Merithew/Wired.com)

What is Anonymous? Wired Threat Level answers in an in-depth series, “Anonymous: Beyond the Mask.”
Anonymous 101: Introduction to the Lulz
Anonymous 101 Part Deux: Morals Triumph Over Lulz

Related Site Content:
The Anonymous threat to ‘erase’ the NYSE | October 5, 2011
Breakthrough medical gadgets: the future of healthcare hardware | November 7, 2011
Startup ducks immigration law with ‘Googleplex of the sea’ | December 29, 2011
Bionic glasses for poor vision | July 6, 2011
50 companies team to create open source EV | November 2, 2011

V. VIDEO: Anonymous: beyond the mask

2012

KAI/HEALTHCRUNCH
Six big health tech ideas that will change medicine in 2012
January 2, 2012
Source: TechCrunch

FutureMed (credit: Singularity University) “In the future we might not prescribe drugs all the time, we might prescribe apps,” says Singularity University’s executive director of FutureMed, Daniel Kraft M.D. AI, big data, 3-D printing, social health networks, new communication platforms, and smartphone connections to your healthcare record and for tracking medical metrics will help you get better medical care, says Kraft, who believes that by analyzing where the field is going, we have the ability to reinvent medicine and build important new business models. 6 Big HealthTech Ideas That Will Change Medicine In 2012

Josh Constine posted yesterday

“In the future we might not prescribe drugs all the time, we might prescribe apps.” Singularity University’s executive director of FutureMed, Daniel Kraft M.D., sat down with me to discuss the biggest emerging trends in HealthTech. Here we’ll look at how AI, big data, 3D printing, social health networks and other new technologies will help you get better medical care. Kraft believes that by analyzing where the field is going, we have the ability to reinvent medicine and build important new business models. For background, Daniel Kraft studied medicine at Stanford and did his residency at Harvard. He’s the founder of StemCore systems and inventor of the MarrowMiner, a minimally invasive bone marrow stem cell harvesting device. The following is a rough transcript of the 6 big ideas Kraft outlined for me at the Practice Fusion conference. Artificial Intelligence Siri and IBM’s Watson are starting to be applied to medical questions. They’ll assist with diagnostics and decision support for both patients and clinicians. Through the cloud, any device will be able to access powerful medical AI. For example, an X-ray gun in remote Africa could send shots to the cloud, where an artificial-intelligence-augmented physician could analyze them. Pap smears and some mammograms are already read with some AI or elements of pattern recognition. This has the potential to disintermediate some fields of medicine like dermatology, which is a pattern-based field — I look at the rash and I know what it is. Soon every primary care doctor is going to have an app on their phone that can send photos to the cloud. They’ll be analyzed by AI to determine “oh, that mole looks like a dangerous melanoma” or “it’s normal”. So the referral pattern to the dermatologist will slow down. On the plus side, there are consumer apps like Skin Scan where for $5 you can take a picture of a lesion and send it to the cloud, and it will at least give you an idea if it’s dangerous or not. If it is, it can help you

find a nearby doctor, which could help dermatologists get more business. Many fields are going to change because of artificial intelligence, pattern recognition, and cheaper tests. Big Data We’re gaining the ability to get more and more data at lower and lower price points. The primary example is the human genome and genomic sequencing. It cost a billion dollars or more 10 years ago to get a complete human sequence. However, the cost and speed of getting that data has dropped faster than Moore’s law, to the point where it’s less than $5,000 when ordered online. From 23andMe you can now get a cheap SNP test, and it has a pilot program at $999 for a whole exome. Maybe there were 10,000 patients sequenced last year. Next year it could be 100,000 and soon millions. A genome sequence could be the cost of a blood count today. When that information becomes queryable in a crowdsourced and cloud-sourced way, we can be more predictive about what you’re likely to get based on your genomics. You can then take preventative steps or get screened more often. So we’re pulling in huge data sets, from low-cost genomics to proteomics (analyzing the proteins in the blood) to the quantified self. The challenge is to make sense of that data and turn it into actionable information without overwhelming the patient or doctor. I think we need to make smart dashboards like they have for fighter pilots. They would piece together data from ubiquitous sensors, like those made by GreenGoose, and the Microsoft Kinect, which can measure your activity around the house. It would be like OnStar for your body: it could give you clues about when you’re about to get in trouble, and it could call for help or guide you to appropriate therapy.
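Kraft's claim that sequencing costs fell faster than Moore's law is easy to sanity-check with the rough figures quoted above (about $1 billion a decade ago, under $5,000 online now). A back-of-the-envelope comparison, treating Moore's law as a halving of cost every two years:

```python
# Back-of-the-envelope check of "faster than Moore's law" using the rough
# figures from the interview (illustrative, not precise NHGRI cost data).
start_cost = 1_000_000_000      # ~$1B for a human genome, circa 2001
years = 10
moores_prediction = start_cost / 2 ** (years / 2)   # cost halves every ~2 years
actual_cost = 5_000             # the ~$5,000 online price quoted above

print(f"Moore's law alone would predict ~${moores_prediction:,.0f}")  # ~$31,250,000
print(f"Actual: ~${actual_cost:,}, i.e. {moores_prediction / actual_cost:,.0f}x cheaper")
```

On these numbers, sequencing beat the Moore's-law trajectory by a factor of several thousand, which is what makes the "genome for the cost of a blood count" extrapolation plausible.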

3D Printing 3D printing has been around for a while, but now it’s being applied to medicine in ways such as being able to scan the remaining leg of a patient who lost one in an accident. It can then build a prosthetic leg whose size and skin tone match. 3D printing is integrating with the fast-moving world of stem cells and regenerative medicine, with 3D ink being replaced by stem cells. In the future we’ll probably use 3D printing and stem cells to make libraries of replacement parts. It will start with simple tissues, and eventually maybe we’ll be printing organs. Social Health Network Social networks have the ability to change our behavior. When your wireless weight scale shares metrics with your friends, you get praised for success and pressured if you’re not maintaining your diet. Social networks are also quite powerful for tracking and predicting disease. James Fowler, co-author of the book Connected, is now working with Facebook to look at health data. Not surprisingly, the more friends you have, the earlier in the flu season you’ll get influenza. This could help predict when you’ll get the flu and let you take steps to avoid it. We’re in the Facebook era, and are more open to sharing information in the healthcare spectrum. Individuals will share their whole history through services including PatientsLikeMe and CureTogether

where patients with similar problems, from migraines to Lou Gehrig’s disease, will consolidate health information. This will enable improvements in clinical trials. Genomera is trying to allow low-cost, web-based clinical trials around any question. Practice Fusion can also crowdsource that data from its electronic medical records. By collecting data from all the patients within a hospital or a region you can see trends and almost run clinical studies on the fly. For example, you could see all the patients that have this gene and are taking this drug, and determine whether that drug is effective for them or not. Communication With Doctors New communication platforms similar to Skype or FaceTime will help you communicate differently with your clinician. Many of these things are basically already here. The challenge is often not the technology but the regulatory and reimbursement markets around them. If you’re going to be talking with your clinician on your iPhone, you may need to do that in a HIPAA privacy-protected way. The physician is also going to want to be paid for that in some way. They’re not going to want to get all your data every time you have a hiccup, or look at your iPhone pictures of your rash, unless there’s a way to get paid. The regulatory system needs to adapt toward Accountable Care Organizations, which reward clinicians and healthcare plans for keeping patients healthy, as opposed to paying them to do extra procedures. This contrasts with a fee-for-service model: putting in stents and doing things after a problem has already progressed. Incentives need to be aligned and reimbursement needs to change to enable some of these new technologies to actually enter the clinic. Mobile The ability to have your phone tie into your healthcare record and track medical metrics will have vast repercussions. Though some aren’t cleared for sale in the US yet, devices like the AliveCor electrocardiogram can monitor your heart in real time, send the data to the cloud, and allow your cardiologist to look at it instantly. Other devices are turning phones into otoscopes for looking in your ears, or glucometers for monitoring blood sugar.

Quantified-self devices like the Fitbit, the Jawbone Up, and more medically themed devices will take what you used to do in a clinic or hospital and bring it home. This will allow therapies to be tuned much more effectively than scribbling data on a piece of paper and bringing it in to your doctor months later. Eventually these devices will converge into the equivalent of a Star Trek tricorder that can perform a wide variety of medical functions. There’s even a $10 million X Prize proposed to reward the inventor of the first functional tricorder. Unfortunately, the strict regulatory system and entrenched interests of the United States are pushing innovation offshore. A lot of the work on using mobile phones for health care is happening in Africa and India. Since there are few physicians in some of these areas, mobile health and telemedicine are taking off. For example, microfluidics allows multiple tests to be done on a small chip at pennies per test, with the ability to connect to the web for analysis. The US will need to find a way to solve these regulatory problems while keeping patients safe; otherwise jobs and revenue could slip abroad. – To learn more about what’s happening next in healthtech, check out Singularity University’s FutureMed 2020 program, watch Daniel Kraft’s TED Talk, and browse our healthtech channel. [Image Credits: Guiacirugiaestetica.com, shopping.com]


Eugenia Loli-Queru I'm writing this after battling IBS-D (and a whole host of other problems) for 10 years. None of the medicines of the last decade, nor these "6 Big HealthTech Ideas That Will Change Medicine In 2012," would have helped with my problem. In the end, all I needed was to follow the Paleo diet, which is actually a health diet rather than a weight-loss one. I'm 4 months into the diet now, and went from half-dead to very, very alive. I have documented all these things on my blog, but I won't link so it doesn't look like an infomercial or something. But I swear by the healing properties of the Paleo diet; even TED had a recent talk by an actual doctor with multiple sclerosis who got the best care and couldn't get better -- but she did with Paleo. Check the TED talk on YouTube; it's from Nov 30 2011. Basically, what I'm saying is that medical advancements are good, but not when simpler solutions are sidestepped that much and viewed as "fads". Lucas Rayala · Top Commenter · Hamline University In his defense, it IS a tech blog ;) … But you totally got me to look up the Paleo diet... Misha Chellam · Top Commenter · San Francisco, California A health social networking site like CureTogether or PatientsLikeMe might have helped you find the link between IBS-D and the Paleo diet?

Dave Chase · Subscribe · Top Commenter · CEO at Avado Daniel is a great speaker and thought provoker. One of the areas that I'm tracking is how the explosion of personal biometric devices is creating a proliferation of data that many of us are unable to make much sense of. While AI holds great promise, I am expecting some MDs will provide a service where they are expert at finding the "signal" in what may be "noise" for many...particularly those with chronic conditions that lend themselves to these biometric devices. Currently, there are two separate and parallel universes - the medical world and the consumer device space. There'll be opportunity in bridging that divide. Chaz Chacra · Anoka-Ramsey Community College I like this article because it helps expose the promise and challenges of moving forward with technology as a help, not a hindrance. Yet as the TED talk on YouTube mentions things like the Paleo Diet, I think the most promising prospect for overall medical and optimal health potential is found in the integrative approach, especially where specific high-tech systems can include and encourage the best and latest in Naturopathic modalities and the potential behind some of the best in neurotherapeutic contributions. Robin Raskin · Subscribe Totally impressed by the way Daniel nailed the top trends. I also think preventative medicine -- body monitoring, brain sprints, etc. -- is going to be more important as the population ages. And thanks Eugenia.. my daughter is a cyclical vomiter so I'm going to the Paleo diet site now. Please attend http://digitalhealthsummit.com at CES if you can. If not, watch our videos from the conference. Charlie Smigelski · University of Vermont Robin, you might look at Tong Ren for your daughter. It is non-invasive energy healing, with amazing utility in cases where it seems aberrant nerve signals are the problem. I have friends and patients with MS and Parkinson's doing well on Tong Ren. Read about it at www.tomtam.com [email protected] Matty Van · Swinburne In terms of communications with doctors, check out ringadoc (www.ringadoc.com), already available on iPhone and Android. Iit Delhi · Indian Institute of Technology (IIT), Delhi Telemedicine is also one of the big ideas. For jobs in the IT sector, visit http://joblagao.com/blog/. Norman Tobias · Subscribe · New York, New York Star Trek tricorder is on its way... $10 million X Prize proposed to reward the inventor of the first functional tricorder. Conan Tobias Star Trek tricorder ha ha beam me up... David Albert · Duke University Thanks for the mention and excellent article, with well-earned kudos to my friend Daniel Kraft. Gene Powell · Ottawa Hills, Ohio Proofread much? Dan Munro · Founder & CEO at IPatient, Inc. Nothing but great admiration and respect for Daniel Kraft - whom I've seen speak (Rock Health Bootcamp in SFO) and then had the chance to share an ad-hoc breakfast with at the recent mHealth Summit in D.C.... but... two things about this article. The title suggests this list will "Change Medicine in 2012." In all fairness, many (if not most) of the technologies you reference are very futuristic and will most certainly NOT change medicine in 2012. I think Daniel would also agree that 2012 isn't realistic for many of the exciting advances he references in this article. I applaud the X-Prize $10M Tricorder effort - but we are years away from that one. A key requirement is the ability to diagnose a patient with equal accuracy to 10 Board Certified Physicians.
I'm not sure we'll see that one in my lifetime (although I do hope we see it next year!) ;-). The Jawbone UP was hastily released - and then removed from the market - and the Fitbit (as reported by TC) had significant data breaches, including the ability for people to Google the "sexual activity" of Fitbit users. Yikes! Daniel is absolutely correct about the future capabilities - and they are all *very* exciting - but many of the ones listed in this article certainly won't appear in 2012 - or even 2015. Ed Botsko · Delray Beach, Florida This article is both enlightening and frustrating. We had a remote diagnostic system built and on the market in the early 90s that worked within the capability of remote diagnostic devices. It would permit a physician at a central location to consult with and diagnose patients. The biggest problem was that physicians refused to use it on a regular basis. In addition to being able to capture and display output from a variety of medical devices, it also had "doctor friendly" features like touch-screen tablets for handwriting and highlighting and a choice of typing or voice input. All the patient information was kept together and was networkable and secure. Two applications that were "ready" at the time were military, where the patient could be kept in or near the battle and diagnosed by a remote medical team, and remote clinics able to have access to a doctor at a university hospital or some other central location. Adam Darrow Exciting stuff. Notice no mention of cloud-based inventions for preventing the need for conventional treatments in the first place (aside from crowdsourced peer pressure). I'm relieved that I'm blind-siding the industry, but not surprised. Few people have overcome health issues as successfully as me using mostly holistic treatments AND have the set of skills needed to build something like www.AutoPilotLifestyle.com Talk about big data...imagine having an accurate account of how millions

of people spend their 24 hours each day, plus their scheduling changes (implied trade-offs) and how that impacted their stress, overall health and ability to achieve life goals. It's coming early 2012... Alex Butler · Subscribe · Founder and Managing Director at The Social Moon Thanks Daniel, really high quality article. I agree completely with your predictions and the capacity for technology and healthcare to move away from pharmaceuticals in the pure sense over the next decade. The intersection between technology and medicine is certainly an exciting place to be. If people are interested in these subjects they can listen to the podcast Digitally Sick, already the most popular digital health marketing/technology podcast: http://digitallysick.com/. There is also a curated topic, Healthcare Social Media, to be found here => http://www.scoop.it/t/health-care-social-media. Have a fantastic, happy and healthy 2012. a damon lynn app Great Article! Something as simple as improving patient-to-doctor communication, particularly with regard to chronic conditions, can save the system so much time and money. Many pain specialists are expected to read minds when they ask their patients how they've felt the past month. Not my doctors! I created a pain tracking app to document my RSD/CRPS. ChronicPainApp.com Recovery Record Great distillation of current trends, but no mention of mental health? NIMH estimates that 1 in 4 Americans suffer from a diagnosable mental illness, costing the US economy approximately $317 billion, yet very few innovators are working in this space. Call to action to the tech community! Deniz Kural · Top Commenter · Brighton, Massachusetts By the way, it is not a whole exam, it is an EXOME, which means sequencing the ~1-2% of your genome that codes for protein. SNP chips (the cheap option) query for known variants; the exome can discover new variants inside that 1-2%, far from whole. Simon Tucker · Subscribe · Saint Louis, Missouri Excellent article. I would imagine it is already feasible to develop just about every technology mentioned here. I hope to see some of these making their way out in 2012. Frank Manson (signed in using Hotmail) The only things that could transform healthcare are universal healthcare, a cure for cancer, a cure for AIDS and a cure for aging. Rock Health Exciting piece on digital health in 2012 by Rock Health advisor Daniel Kraft! Lucien Engelen · Subscribe · Director Radboud REshape & Innovation Center at Radboud University Nijmegen Medical Centre Great overview from my friend Daniel Kraft! Also nice to see our common thoughts about weighing scales and videoconferencing (like FaceTalk, our spin-out of the UMC St Radboud from our Radboud REshape & Innovation Centre) added to the list. It was great to have Daniel on stage at TEDx Maastricht; on April 2nd we will run the second edition! Although this article and TechCrunch take a tech view, the most important change in medicine is missed in this article: patients. We make a great deal out of it, and with the added changes in this article a true revolution is started where patients will be acting as partners in their OWN health(care). (http://www.slideshare.net/lucienengelen/patients-as-partners-10756051) More on this can be found at http://www.tedxmaastricht.nl/inspiration/videos/ and with this year's line-up of speakers things like big data will be addressed. You might, for example,
want to read the blog post by Paul Grundy (IBM's Global Director of Healthcare Transformation): http://www.tedxmaastricht.nl/2011/12/now-you-have-healthcare-data-what-are-you-going-to-do-with-it/ Lucien Engelen Founder & Curator @zorg20 on twitter Dead? Social media’s explosive growth is only beginning January 2, 2012 Source: ReadWriteWeb The period of rapid growth for social media is over, says Vivek Wadhwa in predictions for 2012 published in the Washington Post. But ReadWriteWeb's Marshall Kirkpatrick argues that “given the huge growth of data input that is likely just around the corner, it makes no sense to me that investors and start-ups don’t have plenty of room to make money in social still.” Read original article Topics: Social Networking/Web 2.0

KAI/READWRITEWEB Dead? Social Media's Explosive Growth is Only Beginning By Marshall Kirkpatrick / December 31, 2011 8:00 AM Social media, types of media where everyday people can publish and subscribe to what one another publishes, have changed the world. At least in the United States, though, their rapid expansion through acquisition of new users may be over. Facebook specialist Eric Eldon published a compilation of statistics from around the web this week on TechCrunch that pointed towards US and Canadian market saturation this past year for Facebook. Surely Facebook represents the forward line of all social media. Academic and tech industry analyst Vivek Wadhwa posted a set of predictions for 2012 in the Washington Post last night, starting with a prediction that the period of rapid growth for social media is over. In the future it will be a feature, not a product, he argues. To startups and investors, Wadhwa says "It's time to jump on the next bandwagon, folks."

"No matter how you slice the data," Vivek Wadhwa said on Twitter, "the exponential growth in Social Media is no more. Just gradual growth now." Wadhwa is an astute observer of long-term technology trends and is likely correct within a particular understanding of the situation. For one thing, I can't

help but imagine raw user numbers still have a long, long way to go in many parts of the world just beginning to come online. Even within the US and the rest of the West, though, such conclusions require the assumption that the key metric is the total number of new users. "Instead of raw user growth," Eldon argues on TechCrunch, "the numbers to watch going forward will be around engagement." What might that look like? I'd like to present two possibilities for major continued growth in social media. Afterwards, Vivek Wadhwa's response. The Instrumentation of Everyday Life One way to understand engagement with social networks is not just time on site, but data provided as input. Mark Zuckerberg sees it this way: he argues that the amount of information people share doubles every year. Facebook's Open Graph API allows all kinds of websites to push user activity into their Facebook newsfeeds. The roll-out of Open Graph, widely referred to as Frictionless Sharing, has just barely begun. It's already super controversial. I believe it has been implemented in a way that puts the whole kit and caboodle at risk, unfortunately. Have you noticed how much more prominent music has become in Facebook since the introduction of the Open Graph on Spotify, Rdio and other services? Now imagine that rolling out to everything you do online. It's already begun to enter into news reading and video viewing. Facebook is sure to do it better the next time around when they roll it out to shopping again, after the Beacon debacle several years ago. Meals eaten? Hours of sleep slept? Distances traveled? TV shows watched and books read? There are many more parts of our lives that can be wired up to Facebook or other social networks. The instrumentation of everyday life may sound frightening to many people, but so did posting photos of yourself online or using a debit card (at all) just a few years ago.
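Mechanically, the frictionless-sharing actions described here were published through the Graph API with one authenticated HTTP POST naming an app-defined action and object. A rough sketch of the shape of such a call as the API looked in late 2011 (the myapp:listen namespace, song URL, and token are placeholders; treat the details as an assumption rather than a recipe):

```python
# Approximate shape of a 2011-era Open Graph action publish: an app reports
# that the user performed a custom action (here, a hypothetical myapp:listen)
# on an object. Namespace, URL, and token below are placeholders.
import requests

response = requests.post(
    "https://graph.facebook.com/me/myapp:listen",
    data={
        "song": "http://example.com/songs/42",  # URL of the Open Graph object
        "access_token": "USER_ACCESS_TOKEN",    # per-user token with publish permission
    },
)
print(response.json())  # an id for the created action instance, if accepted
```

The controversy Kirkpatrick alludes to is that apps fire this call automatically as you listen, read, or watch, rather than waiting for an explicit "share" click.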

If it's done well, with privacy protections, security, user education, informed consent and delighted users - then this type of engagement with social media could represent a huge and desirable period of growth in the industry. "Just as location-based applications became a 'feature' rather than the 'big thing,' social media will live on and become an integral part of what we do," Wadhwa writes. "But the party's over for investors and start-ups in this space. The big growth is behind us. Revenues from social media have not lived up to the promises, and the vast majority of those thousands of start-ups are either dying or on the ropes. It's time to jump on the next bandwagon, folks." Given the huge growth of data input that is likely just around the corner, it makes no sense to me that investors and start-ups don't have plenty of room to make money in social still. The Web of Things Connected devices, many of which you might not even consider connecting to the Web today, are expected to facilitate fundamental changes in human life over the next few years. Hans Vestberg, CEO of Ericsson, predicts that the world's nearly 5 billion mobile phone subscribers today will be surpassed by 50 billion connected non-phone devices in 10 years. What are all those devices going to do? Wireless industry analyst Chetan Sharma says they will be connected to entirely new forms of electronics and will disrupt entire industries like consumer packaged goods. Imagine cereal boxes that detect when you are about to run out of cereal and automatically order more from the cereal maker. Maybe that cuts out retail altogether. What does this have to do with social media? Quite simply, what do you think people will be doing while they ride in the driverless car that picked them up at home to take them to work? They'll be Facebooking and Tweeting, of course. What will happen after your 50th automatic re-supply of Cap'n Crunch? You'll win a Super Fan badge on your social media profile, I'm sure. Blippy, the social network that publishes every debit card transaction you perform out into your social network of fellow exhibitionist friends, may never take the world by storm. But Mint.com's aggregate financial data and benchmarking is much more likely to. Real-time data about local spending sounds like social media to me. Social media in the age of instrumentation and connected devices may be more about aggregate social activity than about the lone voice, blogging and Tweeting. The intersection of people, machines and passively monitored objects (the cheapest input of all!) all combine to form an entirely new world of opportunity. That may be the biggest opportunity yet. As Mark Roberti, founding editor of the publication RFID Journal, wrote this spring: "This change - enabling computers to see and understand what is happening in the real world - is enormous. Most people have yet to grasp it, seeing RFID as a more expensive alternative to bar codes. They don't comprehend that when computers can automatically collect information regarding what is happening in the world, new insights and business strategies then become possible. And the companies that leverage these capabilities most effectively will be the big winners in the century ahead." Cloud-scale information gathering regarding what is happening in the world we live in, leading to entirely new insights and business strategies. That sounds like social media to me.
I expect that this kind of information is going to make the number of photos we all pro-actively upload to Facebook look like a drop in the ocean. Let's hope this vision of the future gets built in a way that's equitable and pro-freedom. Those are key concerns here at the early morning, just after the dawn, of social media. Wadhwa's reply I was fortunate enough to catch Vivek Wadhwa on Twitter last night and sent him this post before publication. This was his response. "I don't disagree with you. But I maintain that this segment will lose its sizzle--just like eCommerce did in the early days of the Internet. We overhyped this, invested in too many of the same startups, and portrayed this as a destination rather than a means. Facebook and to a lesser extent, Twitter will become platforms from which other, deeper, services are built. But gone are the days of the silly me too social media startups--the Twitter and Facebook clones. "Look at 'location based services'--the insane hype that TechCrunch created around this. This has just become a feature that we take for granted and build other meaningful applications on. Social media will go the same way. It will persist and grow, but in depth and value rather than just numbers and hype.

"I expect the excitement and hype in 2012 to be in the social game companies, newfangled B2B technology plays, and cloud computing. These will be the next bubble. Soon after, we'll see the Big Data bubble. All of this is good because it spurs investment and innovation. That's the beauty of Silicon Valley--it moves from one fad to another as if nothing ever happened." See Also


atimoshenko · Top 100 Yes and no, I think. You definitely give many interesting potential uses for tracking and recording (and maybe even making accessible, on occasion) more of one's activities and interactions. But this is more about identity – about me – than it is about socialisation and sharing with friends. The limits we are hitting are not technological or demographic, but in the amount of time, in the percentage of our day, that we want to spend in the company of all of our friends. Why will we be Facebooking or Tweeting in our driverless cars, for instance, instead of reading a novel, or learning a new skill through publicly available education materials, or prototyping a new pair of shoes for our 3D printers? Or even just thinking. Why would our desire for "me time" disappear? In fact, I would even argue that our present obsession with viewing everything through a "social media" lens is something that is hindering the popularity of a lot of these new services. Take Spotify's Facebook integration, for instance – maybe I'm not representative, but I would value a service whose first focus is to log everything I listen to (so that I could, for instance, check what I was listening to on this day 5 years ago) much more than a service whose first focus is to tell all of my friends what I am currently listening to, and to tell me what all of my friends are currently listening to. Ditto for Blippy – that does not mean the data should be locked, but sharing should be a secondary use. Likewise, I would much more appreciate earning a discount on my Cap'n Crunch from the company for my loyalty than I would earning and displaying a Cap'n Crunch badge or seeing the Colgate badge of some friend. Mass, broadcast socialisation is only one of the things we enjoy doing. Social media's explosive growth is ending because it's used up most of the time and attention we're willing to dedicate to it. Steve Ardire So it's a good time to be a #socbiz #socialanalytics #socialbi #socialintelligence #gamification #startup President Obama signs indefinite detention bill into law

January 2, 2012

The National Defense Authorization Act (NDAA), signed by President Obama into law Dec. 31, contains a sweeping worldwide indefinite detention provision. “The statute is particularly dangerous because it has no temporal or geographic limitations, and can be used by this and future presidents to militarily detain people captured far from any battlefield,” said ACLU executive … more… Hackers plan space satellites to combat censorship January 2, 2012 Source: BBC News Computer hackers plan to take the Internet beyond the reach of censors by putting their own communication satellites into orbit and creating the Hackerspace Global Grid, including ground stations to track and communicate with the satellites. The Grid would also provide a fallback infrastructure to stay connected in case of natural and economic disaster. Longer term, they hope to help put an amateur astronaut on the Moon. See also: Hackerspace Global Grid Read original article Topics: Internet/Telecom | Social Networking/Web 2.0 | Social/Ethical/Legal | Space

(REPEATED FROM LAST WEEK!!!) Hackers plan space satellites to combat censorship

By David Meyer Technology reporter

50 years after Russia's first piloted mission, hackers plan to send their own people beyond orbit

Computer hackers plan to take the internet beyond the reach of censors by putting their own communication satellites into orbit. The scheme was outlined at the Chaos Communication Congress in Berlin. The project's organisers said the Hackerspace Global Grid will also involve developing a grid of ground stations to track and communicate with the satellites. Longer term they hope to help put an amateur astronaut on the moon. Hobbyists have already put a few small satellites into orbit - usually only for brief periods of time - but tracking the devices has proved difficult for low-budget projects. The hacker activist Nick Farr first put out calls for people to contribute to the project in August. He said that the increasing threat of internet censorship had motivated the project. "The first goal is an uncensorable internet in space. Let's take the internet out of the control of terrestrial entities," Mr Farr said. Beyond balloons He cited the proposed Stop Online Piracy Act (Sopa) in the United States as an example of the kind of threat facing online freedom. If passed, the act would allow for some sites to be blocked on copyright grounds. Whereas past space missions have almost all been the preserve of national agencies and large companies, amateur enthusiasts have in recent years sent a few payloads into orbit. These devices have mostly been sent up using balloons and are tricky to pinpoint precisely from the ground. According to Armin Bauer, a 26-year-old enthusiast from Stuttgart who is working on the Hackerspace Global Grid, this is largely due to lack of funding. "Professionals can track satellites from ground stations, but usually they don't have to because, if you pay a large sum [to send the satellite up on a rocket], they put it in an exact place," Mr Bauer said. In the long run, a wider hacker aerospace project aims to put an amateur astronaut onto the moon within the next 23 years. "It is very ambitious so we said let's try something smaller first," Mr Bauer added. Ground network

The Berlin conference was the latest meeting held by the Chaos Computer Club, a decades-old German hacker group that has proven influential not only for those interested in exploiting or improving computer security, but also for people who enjoy tinkering with hardware and software. When Mr Farr called for contributions to Hackerspace, Mr Bauer and others decided to concentrate on the communications infrastructure aspect of the scheme.

Mr Bauer says the satellites could help provide communications to help put an amateur into space. He and his teammates are working on their part of the project together with Constellation, an existing German aerospace research initiative that mostly consists of interlinked student projects. In the open-source spirit of Hackerspace, Mr Bauer and some friends came up with the idea of a distributed network of low-cost ground stations that can be bought or built by individuals. Used together in a global network, these stations would be able to pinpoint satellites at any given time, while also making it easier and more reliable for fast-moving satellites to send data back to Earth. "It's kind of a reverse GPS," Mr Bauer said. "GPS uses satellites to calculate where we are, and this tells us where the satellites are. We would use GPS co-ordinates but also improve on them by using fixed sites in precisely-known locations." Mr Bauer said the team would have three prototype ground stations in place in the first half of 2012, and hoped to give away some working models at the next Chaos Communication Congress in a year's time. They would also sell the devices on a non-profit basis. "We're aiming for 100 euros (£84) per ground station. That is the amount people tell us they would be willing to spend," Mr Bauer added. Complications Experts say the satellite project is feasible, but could be restricted by technical limitations. "Low Earth orbit satellites, such as those launched by amateurs so far, do not stay in a single place but rather orbit, typically every 90 minutes," said Prof Alan Woodward from the computing department at the University of Surrey. "That's not to say they can't be used for communications but obviously only for the relatively brief periods that they are in your view. It's difficult to see how such satellites could be used as a viable communications grid other than in bursts, even if there were a significant number in your constellation." This problem could be avoided if the hackers managed to put their satellites into geostationary orbits above the equator. This would allow them to match the Earth's movement and appear to be motionless when viewed from the ground. However, this would pose a different problem. "It means that they are so far from Earth that there is an appreciable delay on any signal, which can interfere with certain Internet applications," Prof Woodward said. "There is also an interesting legal dimension in that outer space is not governed by the countries over which it floats. So, theoretically it could be a place for illegal communication to thrive. However, the corollary is that any country could take the law into their own hands and disable the satellites." Need for knowledge Apart from the ground station scheme, other aspects of the Hackerspace project that are being worked on include the development of new electronics that can survive in space, and the launch vehicles that can get them there in the first place.
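The "reverse GPS" Bauer describes above is ordinary multilateration: ground stations at precisely known coordinates each measure their range to the satellite (from signal travel time, say), and a least-squares fit recovers the satellite's position. A toy version of that computation, with invented station coordinates and noise (this is a sketch of the idea, not the Hackerspace Global Grid's actual software):

```python
# Toy multilateration: recover a satellite position from noisy range
# measurements taken by ground stations at precisely known locations.
import numpy as np
from scipy.optimize import least_squares

stations = np.array([[6371.0, 0.0, 0.0],        # hypothetical station positions, km
                     [0.0, 6371.0, 0.0],
                     [4000.0, 4000.0, 2000.0],
                     [-3000.0, 5000.0, 1000.0]])
sat_true = np.array([4500.0, 3000.0, 5000.0])   # position we pretend not to know

# Simulated range measurements with ~10 m of noise.
ranges = np.linalg.norm(stations - sat_true, axis=1) + np.random.normal(0, 0.01, 4)

def residuals(pos):
    # Mismatch between predicted and measured station-to-satellite distances.
    return np.linalg.norm(stations - pos, axis=1) - ranges

fit = least_squares(residuals, x0=np.array([1000.0, 1000.0, 7000.0]))
print(fit.x)  # approximately sat_true, to within the measurement noise
```

With four or more stations the solution is overdetermined, which is why a distributed network of cheap 100-euro receivers can substitute for one expensive professional tracking station.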

Until now launching communications satellites has proved to be too expensive for amateur groups According to Mr Farr, the "only motive" of the Hackerspace Global Grid is knowledge. He said many participants are frustrated that no person has been sent past low Earth orbit since the Apollo 17 mission in 1972. "This [hacker] community can put humanity back in space in a meaningful way," Farr said. "The goal is to get back to where we were in the 1970s. Hackers find it offensive that we've had the technology since before many of us were born and we haven't gone back." Asked whether some might see negative security implications in the idea of establishing a hacker presence in space, Farr said the only downside would be that "people might not be able to censor your internet". "Hackers are about open information," Farr added. "We believe communication is a human right."

Private spaceflight: up, up, and away

January 2, 2012

In 2012 privately funded human spaceflight will advance from promises and one-off stunts to serious flight-testing of spaceships. Governments will be the biggest customers, with unmanned systems possibly docking with the International Space Station (ISS) this year and perhaps eventually taking the place of the retired U.S. space shuttles. Meanwhile, spacecraft designed to give well-heeled … more…

3-D chips grow up

January 2, 2012

Chipmakers are pursuing a pair of innovations in performance and power consumption by building up and into the third dimension at the level of both the individual transistor and the full microchip. In 2012, the chip will start to become the cube.

Stanford online course on natural language processing

January 2, 2012

Stanford University is offering a course on Natural Language Processing free and online to students worldwide, January 23 to March 18. Students will have access to screencast lecture videos, quiz questions, assignments and exams; receive regular feedback on progress; and can participate in a discussion forum, with a certificate of successful completion. Taught by Professors … more… LATEST KURZWEIL COLLECTION POSTS

Q&A with filmmaker Jason Silva as he preaches the philosophy of the Singularity

Source: Singularity Hub — December 30, 2011 | Aaron Saenz There are many futurists and techno-optimists in the world, but there is only one Jason Silva. The former host of Current TV and fledgling documentary filmmaker is a force of personality and energy that is storming through the Singularity community. He recently spoke at this year’s Singularity Summit in New York, and is popping up all … more… Read full article here

Time Machine Posted: 02 Jan 2012 04:00 AM PST

From the creation of the highest mountains to the opening of a flower’s petals, time controls the world around us. To understand this super-powerful force on Earth, we must wrench control of time ourselves – compressing, expanding, stopping and dissecting it, to reveal how the passing of … Watch now...

PO British government to fund 3D laser cameras for highway crash site investigations January 2, 2012 by Bob Yirka

Image: Wikipedia (PhysOrg.com) -- One of the banes of modern existence is surely the time spent in traffic backups. Oftentimes these backups occur as the result of accidents and the investigative work that goes on before cleanup can commence. Such work must be done in order to verify what occurred during an accident, for both legal and financial reasons; thus, there is little chance of simply doing away with these investigations. There does appear to be hope of developing new ways to do that detective work, though, as new technology is developed to help

speed things along. One of these new technologies involves the use of laser-equipped 3D cameras and computer technology, instead of old-fashioned photography and legwork. The way things are done now, investigative officers use measuring tape or string to calculate the distance between crashed vehicles, the length of skid marks, etc. They then take photographs of the scene; afterwards, the data is analyzed and graphs and reports are made. The use of new laser technology, however, can reduce the time it takes to do all of these things. The laser camera, mounted on a tripod, is panned slowly over a portion of the scene, during which objects in the scene are automatically measured for distance and multiple line segments are created to replicate what is found, resulting in a 360-degree high-resolution image. Such a system is far more accurate (to within millimeters) than hand measuring, a single sweep takes only about four minutes to complete, and the typical crash scene generally

requires only four sweeps, which means the whole operation can be done in just fifteen or twenty minutes. Because of this, the British government has announced that it is providing £2.7 million in funding to several police districts for the purchase of 37 of the laser camera systems, which should, the government says, cut backup times by an average of 39 minutes. The camera systems were developed independently by the Austrian company RIEGL and the Swiss company Leica Geosystems. The two types of laser camera systems offer slightly different features, such as differences in the size of the beam deployed and the use of GPS to precisely pinpoint the accident locale. One system typically costs approximately £50,000. Many people who study technology trends expect that such camera systems will soon become the norm for accident investigations in most countries and that new features will be added, such as using the data recovered to create animations that demonstrate very clearly what went on prior to, and during, a crash, thus removing all doubt. © 2011 PhysOrg.com bugmenot23 A unit cost of 50,000 GBP! Fail. A Kinect hacker has managed to achieve a similar scanning solution, which is being used to create 3D scans of archaeological digs. http://technabob....ogy-dig/ Nik_2213 Uh, that Kinect version is still 'indoors only'. Also, given that those lidar results must stand up in court, a lot of the cost is software and hardware validation. infidel Why not get tech students to modify a few Kinects and save £49,600 for whichever police districts need them? NS Ghouls on film: Paranormal photography goes digital

02 January 2012 by David Hambling For similar stories, visit the Christmas Science, The Human Brain and Death Topic Guides

New Scientist's field guide to spotting ghostly apparitions in your photos – and whence they may have arisen ARE the souls of the departed becoming fidgety in the great beyond? "I certainly get more ghostly photos sent to me now compared with 10 years ago," says Caroline Watt of the Koestler Parapsychology Unit at the University of Edinburgh in the UK. Phil Hayes, an investigator at Paranormal Research UK, agrees. Last year, he reported record numbers of spooky images reaching his inbox. So what's going on? A century ago, spirit photography was all the rage, tantalising the public imagination with what purported to be hard evidence of wispy apparitions. However, its resurgence may have less to do with the paranormal than with the increasing use of social media and cameraphones. Serious ghost hunters might be expected to welcome such technology. Yet, according to Dave Wood from the Association for the Scientific Study of Anomalous Phenomena, low-cost cameraphones are making the business of spirit photography even more fraught. Many mass-produced phones don't use high-quality lens systems and this can lead to optical artefacts that look "spooky" to the untrained eye. Photographers are also shooting more images, raising the likelihood that one or two will feature something extraordinary. "Take a few hundred snaps in a dusty old building at night and you are sure to capture a few odd photos," says Wood. So what exactly are modern ghost hunters catching on camera? Are the images truly mysterious or are technological slip-ups and wishful thinking to blame? Let New Scientist be your spirit guide... See more: You won't believe your eyes in our ghostly gallery, "Ghouls on film: Ghost or glitch? You decide" David Hambling is a writer based in London

01 JAN

THE SCIENTIST

January 2012 » Features

Animal Mind Control

Examples of parasites that manipulate the behavior of their hosts are not hard to come by, but scientists have only recently begun to understand how they induce such dramatic changes.

By Jef Akst | January 1, 2012


Scott Youtsey/Miracle studios

A normally insatiable caterpillar suddenly stops eating. A quick look inside its body reveals the reason: dozens of little wasp larvae gnawing and secreting digestive enzymes to penetrate its body wall. They have been living inside the caterpillar for days—like little vampires, feeding on its “blood”—and are finally making their exodus to build their cocoons on its bright-green exterior.

In the caterpillar’s brain, a massive immune reaction is taking place—the invertebrate equivalent of a cytokine storm—and among the factors being released is an invertebrate neurohormone called octopamine. “It’s a very important compound for controlling behavior in insects,” says invertebrate behavioral physiologist Shelley Adamo of Dalhousie University in Halifax, Nova Scotia. “Octopamine levels go up, and that plays a role in shutting off feeding.”

But the parasitic larvae don’t stop there. They also inhibit the host’s ability to break down the substance. “Octopamine levels remain high for days, and this caterpillar never really eats again,” Adamo explains. “Basically, it starves to death.” This plays the important role of preventing the caterpillar from picking off the cocoons, one by one, and eating the metamorphosing larvae alive. Simply killing their host isn’t an option, Adamo says, because if the caterpillar dies, its body will

become overrun with fungal pathogens—unwelcome visitors to a wasp nursery. Plus, non-eating caterpillars retain their defensive reflexes, which protect both them and the young wasps from arthropod predators. “They’ve turned their host from being a meal ticket [into] their bodyguard,” Adamo says.

Although researchers have observed countless examples of parasites hijacking the autonomy of their hosts, only now are they beginning to understand how the parasites tinker with numerous systems within the host, ultimately changing the host’s behavior in grotesque and horrific ways. Taking a proteomics approach, for example, scientists have compared the proteins expressed in the brains of infected and uninfected animals to gain clues about which molecules might be involved in the manipulation. And more directed neurological approaches have flagged certain brain regions and particular neurotransmitters, such as serotonin and dopamine, as likely culprits.

“The real nuts and bolts have yet to be figured out for any system,” says Adamo. “But we have some hints—good hints.”

Pet

RĂZVAN URSULEANU - PAȘAPORT PENTRU ȘTIINȚĂ (Passport for Science), 05 JANUARY 2012 GREAT ANNIVERSARIES …01

GREAT ANNIVERSARIES 2011: 45 YEARS OF STAR TREK

09 SEPTEMBER 2011 STAR TREK 45

Celebrate Star Trek’s 45th anniversary with this awesome infographic Sep. 9, 2011 (5:45 pm) By: Jennifer Bergen It’s hard to believe that Star Trek aired for the first time 45 years ago, just five years after the Soviet Union launched the first person, Yuri Gagarin, into space. This was just the beginning of human space exploration, but it also marked the launch of a franchise that would continue on for nearly half a century while gaining more and more fans as time went on. To date, there have been eleven Star Trek movies and six TV series, including an animated one, and you can bet your Federation Credits that there will be many more. The website Space.com has put together a rather fascinating timeline of all the basic Star Trek info you need to know, along with major events in the history of space exploration. Most Trekkies will already know everything in this timeline, but we’re sure you’ll still find it interesting to relive the past 45 years of Star Trek history. For those who aren’t as familiar with the Enterprise’s history, this infographic is sure to surprise you with some fascinating tidbits. Star Trek wasn’t initially a smash hit. When Gene Roddenberry first submitted the pilot in 1964, NBC rejected it, thinking it was “too cerebral.” However, the network let him modify the show to add more action and adventure. When Star Trek first aired on NBC in 1966, it got rather mixed ratings, and was even rumored to be pulled off the air after season two. NBC apparently received more than 100,000 letters supporting the show, and announced that it would return for a third season in 1968. The very next year, Apollo 11 launched and we set foot on the moon. The history of the Star Trek enterprise (no pun intended) is long, so we suggest you peruse the infographic below to make sure you know it all:

via SPACE.com


Countdown: The Top 10 Star Trek Technologies SPACE.com Staff Date: 06 September 2011 Time: 01:39 PM ET

Credit: SPACE.com Classic Star Trek contributed more to the modern world than phrases like "Beam me up, Scotty!" Many of the devices we saw decades ago are now available for use in the real world; we thank the engineers who made real these ten Star Trek technologies. - Bill Christensen, Technovelgy.com Star Trek popularized the idea of a communicator that could instantly connect two crew members on different parts of a planet. To answer the device, you just flipped it open and started talking. Of course, everyone recognizes this device today as a cell phone. Amateur electronics wizards have occasionally made replica Star Trek communicators available on eBay; they use Bluetooth technology to piggyback on your cell phone service.

Credit: Bluetooth Star Trek Communicator When Enterprise crew members became sick, Dr. McCoy was able to diagnose the problem in record time, usually thanks to his medical tricorder. Today's physicians make use of Magnetic Resonance Imaging (MRI) and CAT scans in much the same way. For smaller bugs, NASA has actually tested a similar kind of device on the space station. The LOCAD-PTS is able to detect and identify within minutes environmental pathogens (fungi or bacteria) that could adversely affect the health of crew members.

Credit: VoxTec The Enterprise constantly dealt with intelligent beings throughout the galaxy. When different languages were encountered, the Universal Translator was there to help bring different cultures together. In the real world, the US military is using the Phraselator in Iraq for speech translation and Internet juggernaut Google, among others, can translate Web sites to suit user needs. Also, just this month, NEC announced the first cell phone with speech translation.


When the crew of the Enterprise received a well-deserved shore leave, they needed some kind of money to buy goods and services. The science fiction standby of "credits" was usually brought into the picture. Today, however, real-life astronauts can use colorful QUID's (Quasi Universal Intergalactic Denomination), which are specially designed for use in space.

Credit: Business Wire

The Enterprise's transporter was able to zero in on the exact location of an individual crew member from thousands of miles away. Although we're still working on teleportation (see USAF Looks Into Teleportation), we've pretty much got the location technology down pat. It's called the Global Positioning System - GPS. One such satellite, one of Europe's planned Galileo network, is shown above.

Credit: ESA

Whenever Spock beamed down to a planetary surface, there was one thing he always took with him - his trusty tricorder. This handy pocket-sized device could do things like analyze the minerals in soil and look for life signs. NASA is ready to send similar sensors to Mars in coming years, like the Raman spectrometer shown above.

This surgical technique is a non-invasive way to destroy unwanted masses within the body (like uterine fibroids) without harming the surrounding tissues. I seem to recall Dr. McCoy touting the advantages of doing surgery without using knives decades ago. On one occasion, he saved Chekov with a nifty little non-invasive surgery device (see photo), saying "Put away your butcher knives and let me save this patient before it's too late!"

Credit: Paramount Pictures

Transparent aluminum armor (aluminum oxynitride - ALON) is being tested by the military as a lighter and stronger alternative to traditional materials. ALON is a ceramic compound with very high compressive strength and durability; it offers better performance than traditional materials consisting of bonded glass. In extensive testing, ALON has performed well against multiple hits of armor-piercing

rounds. Trek fans fondly recall how the formula for (science-fictional) transparent aluminum came to our time; Scotty blabbed it to an engineer (see photo).

Credit: Surmet Several prototype PHASR weapons are being tested by the US military. The Personnel Halting and Stimulation Response device is under development at the Air Force Research Laboratory's Directed Energy Directorate. The PHASR has been designed as a non-lethal, man-portable deterrent weapon. It uses a laser system with two different wavelengths to blind (temporarily!) the enemy. The clever acronym for this device is obviously back-formed to resemble its original - the phaser rifle from Star Trek, which actually looks very similar (see another photo).

A robotic rover called Zoe is the first robot to remotely detect the presence of life. On a NASA-sponsored mission in the harsh Atacama desert in Chile, Zoe was able to detect life by looking for natural fluorescence from lichens and bacteria. Life detection is all the rage now; the European Space Agency will be using the Urey Life Detector on an upcoming Mars mission (see photo). These devices mimic the function of the long range sensors from Star Trek, which could detect life from unreasonably long distances.

Credit: NASA

