
Sunday, February 24, 2008

SCIENCE - Brain alcohol

Dartmouth researchers are learning more about the effects of alcohol on the brain. They've discovered more about how the brain works to mask or suppress the impact that alcohol has on motor skills, like reaching for and manipulating objects. In other words, the researchers are learning how people process visual information in concert with motor performance while under the influence of alcohol.

"We found that the brain does a pretty good job at compensating for the effect that alcohol has on the brain's ability to process the visual information needed to adjust motor commands," says John D. Van Horn, a research associate professor of psychological and brain sciences and the lead author on the paper. "Alcohol selectively suppresses the brain areas needed to incorporate new information into subsequent and correct motor function."

For the study, eight people, ranging in age from 21-25, were asked to maneuver a joystick both while sober and while experiencing a blood alcohol level of 0.07 percent (just below the legal definition of intoxication). Brain activity during this task was captured using functional magnetic resonance imaging, known as fMRI. The study was published online in the journal NeuroImage on March 6, 2006.

The study found that alcohol selectively suppresses cognitive activity in the frontal and posterior parietal brain regions; these are the regions most commonly associated with the brain's ability to monitor and process visuomotor feedback. Van Horn explains that this study is one of the first to directly demonstrate this suppression effect in the human brain using neuroimaging.

"We know that alcohol has a global effect on the brain. This study was unique in that it isolated the specific network that underlies the processing and translation of visual and motor commands. The poor coordination one feels after a couple of drinks is due to the poor feedback processing in brain areas critical for updating the mental models for motor action. While this idea is not entirely new, our demonstration, using functional neuroimaging is a first and likely the start of a more extensive neuroimaging research paradigm into the effects of alcohol on the brain," says Van Horn.

Laser trapping of erbium

Physicists at the National Institute of Standards and Technology (NIST) have used lasers to cool and trap erbium atoms, a "rare earth" heavy metal with unusual optical, electronic and magnetic properties. The element has such a complex energy structure that it was previously considered too wild to trap. The demonstration, reported in the April 14 issue of Physical Review Letters, might lead to the development of novel nanoscale devices for telecommunications, quantum computing or fine-tuning the properties of semiconductors.

Laser cooling and trapping involves hitting atoms with laser beams of just the right color and configuration to cause the atoms to absorb and emit light in a way that leads to controlled loss of momentum and heat, ultimately producing a stable, nearly motionless state. Until now, the process has been possible only with atoms that switch easily between two energy levels without any possible stops in between. Erbium has over 110 energy levels between the two used in laser cooling, and thus has many ways to get "lost" in the process. NIST researchers discovered that these lost atoms actually get recycled, so trapping is possible after all.


The NIST team heated erbium to over 1300 degrees C to make a stream of atoms. Magnetic fields and six counter-propagating purple laser beams were then used to cool and trap over a million atoms in a space about 100 micrometers in diameter. As the atoms spend time in the trap, they fall into one or more of the 110 energy levels, stop responding to the lasers, and begin to diffuse out of the trap. Recycling occurs, though, because the atoms are sufficiently magnetic to be held in the vicinity by the trap's magnetic field. Eventually, many of the lurking atoms fall back to the lowest energy level that resonates with the laser light and are recaptured in the trap.

The erbium atoms can be trapped at a density that is high enough to be a good starting point for making a Bose-Einstein condensate, an unusual, very uniform state of matter used in NIST research on quantum computing. Cold trapped erbium also might be useful for producing single photons, the smallest particles of light, at wavelengths used in telecommunications. In addition, trapped erbium atoms might be used for "doping" semiconductors with small amounts of impurities to tailor their properties. Erbium--which, like other rare earth metals, retains its unique optical characteristics even when mixed with other materials--is already used in lasers, amplifiers and glazes for glasses and ceramics. Erbium salts, for example, emit pastel pink light.

Where is everybody?

With this essay by Steven Soter, Astrobiology Magazine presents the first in our series of 'Gedanken', or thought, experiments - musings by noted scientists on scientific mysteries in a series of "what if" scenarios. Gedanken experiments, which have been used for hundreds of years by scientists and philosophers to ponder thorny problems, rely on the power of one's imagination to project these scenarios to logical conclusions. They do not involve lab equipment or, often, even experimental data. They can be thought of as focused daydreams. Yet, as in the famous case of Einstein's Gedanken experiments about what it would be like to hitch a ride on a light wave, they have often led to important scientific breakthroughs.

Soter is Scientist-in-Residence in the Center for Ancient Studies at New York University, where he teaches a seminar on Scientific Thinking and Speculation, and a Research Associate in the Department of Astrophysics at the American Museum of Natural History.

In this essay, Soter examines the Drake Equation, which asks how many technically advanced civilizations exist in our galaxy. He also looks at the Fermi Paradox, which questions why, if there are other technological civilizations nearby, we haven't heard from them.

If civilizations exist in our galaxy with levels of technology at least equal to our own, we might be able to detect some of them using radio telescopes. And if civilizations exist with technologies far in advance of our own, we might expect them to have colonized millions of habitable worlds in the Milky Way, and even to have visited our own planet. Yet there is no evidence in the astronomical, geological, archaeological, or historical records that extraterrestrial civilizations exist or that visitors from other worlds have ever been to Earth. Does that mean, as some have concluded, that ours is the only civilization in the galaxy? Or could there be a natural self-regulating mechanism that limits the intensive colonization of other worlds?

In 1961 radio astronomer Frank Drake devised an equation to express how the hypothetical number of observable civilizations in our galaxy should depend on a wide range of astronomical and biological factors, such as the number of habitable planets per star, and the fraction of inhabited worlds that give rise to intelligent life. The Drake Equation has led to serious studies and encouraged the search for extraterrestrial intelligence (SETI). It has also provoked ridicule and hostility. Novelist Michael Crichton recently denounced the equation as "literally meaningless," incapable of being tested, and therefore "not science." The Drake equation, he said, also opened the door to other forms of what he called "pernicious garbage" in the name of science, including the use of mathematical climate models to characterize global warming.

Crichton rightly pointed out that any numerical "answers" produced by the Drake Equation can be no more than guesses, since most of the terms in the equation are quantitatively unknown by many orders of magnitude. But he is utterly wrong to claim that the equation is "meaningless." An equation describes how the elements of a problem are logically related, whether or not we know their numerical values. Astronomers understand perfectly well that the Drake Equation cannot prove anything. Instead, we regard it as the most useful way to organize our ignorance of a difficult subject by breaking it down into manageable parts. This kind of analysis is a standard and valued technique in scientific thinking. As new observations and insights emerge, the Drake Equation can be modified as needed or even replaced altogether. But it provides the necessary place to start.

When Drake first proposed his equation, we had no way to estimate any of its terms beyond the first one, representing the rate of star formation in our galaxy. Then in 1995, astronomers began to discover planets in orbits around other stars. These results now promise to sharpen our estimates for the second term in the equation, denoting the number of habitable worlds per star. Who knows what unforeseen discoveries will tell us about the other terms in the equation?

In Classical antiquity, when Aristarchus conceived the heliocentric view of the solar system and Democritus developed an atomic theory of matter, they had no possible way to test their ideas. The necessary observational tools and data would not exist for another two thousand years. Of course, the Crichtons of antiquity denounced such speculations as pernicious. But when the time finally came, the ancient ideas were still there, quietly waiting to inspire and encourage Copernicus and Galileo, and the pioneers of modern atomic theory, who took the first steps to test the theories. It may take centuries, but eventually the Drake Equation and all its elements will be testable.

We can express the Drake Equation in several ways, all of which are more or less equivalent. Here is one form:
N = Rs × nh × fl × fi × fc × L

where N is the number of civilizations in our galaxy, expressed as the product of six factors: Rs is the rate of star formation, nh is the number of habitable worlds per star, fl is the fraction of habitable worlds on which life arises, fi is the fraction of inhabited worlds with intelligent life, fc is the fraction of intelligent life forms that produce civilizations, and L is the average lifetime of such civilizations.

The rate of star formation in our galaxy is roughly ten per year. We can define habitable worlds conservatively as those with liquid water on the surface. Many more worlds probably have liquid water only below the surface, but any subterranean life on such worlds would not be likely to produce an observable civilization. Recent discoveries of other planetary systems suggest that habitable worlds are common and that nh is at least one habitable planet per hundred stars.

The remaining terms in the equation depend on the biology and social development of other worlds, and here we are profoundly ignorant. Our local experience may provide some guidance, however. We know that life on Earth arose almost as soon as conditions allowed - as soon as the crust cooled enough for liquid water to persist. This fact suggests that conditions for the origin of life on other habitable worlds are not restrictive, and that the value of fl is closer to one than to one in a thousand. But that is merely a guess. No one knows how life began on Earth, and we cannot generalize from a single case.

The conditions for intelligent life are probably more restrictive. On Earth this step first required the evolution of complex animals, which began about three billion years after the origin of life, and then the development of brains capable of abstract thought, which took another half billion years. Among the millions of animal species that have lived on Earth, probably only one ever had intelligence sufficient to understand the Drake Equation. This suggests that fi might be a small fraction.

The probability that intelligent life develops a civilization depends on the evolution of organs to manipulate the environment. On Earth, whales and dolphins may well have intelligence sufficient for abstract thought, but they lack the means to make tools. Humans, with dexterous hands, began making tools over a million years ago. Starting about ten thousand years ago, civilizations based on agriculture arose several times independently, in Mesopotamia, Egypt, China, Mexico, Peru, and New Guinea. This suggests that the value of fc is large, but again we should not generalize from the experience of only one intelligent and manipulative species.

We now come to the most intriguing term, the average lifetime L of a civilization. The Drake Equation assumes that, whatever the other factors, the number of civilizations presently in our galaxy is simply proportional to their average lifetime. The longer they live, the more civilizations exist at any given time. But what is the life expectancy of a civilization? On Earth, dozens of major civilizations have flourished and died within the last ten thousand years. Their average lifetime is about four centuries. Few if any civilizations on Earth have ever lasted as long as two thousand years.

History and archaeology show that the collapse of any given civilization causes only a temporary gap in the record of civilizations on Earth. Other civilizations eventually arise, either from the ruins of the collapsed one or independently and elsewhere. Those civilizations also eventually collapse, but new ones continue to emerge.

For example, in the eastern Mediterranean at the end of the Bronze Age, the prevailing Mycenaean civilization suffered widespread catastrophic collapse around 1100 BC. During a few centuries of "darkness" that followed, the population was illiterate, impoverished and relatively small -- but not extinct. Classical civilization gradually arose and flourished, and gave rise to the Roman Empire, which itself collapsed in the fifth century AD. Another period of impoverished Dark Ages followed, but eventually trade and literacy revived, leading to the Renaissance. Each revival of civilization was stimulated in part by the survival of relics from the past.

Our global technological civilization, with its roots in the Mediterranean Bronze Age, is now arguably headed for collapse. But that will not be the end of civilization on Earth -- not as long as the human species survives. And the biological lifetime of our species is likely to be several million years, even if we do our worst.

We should therefore distinguish between the longevity of a single occurrence of civilization and the aggregate lifetime of a sequence of civilizations. Almost all discussions of the Drake Equation have overlooked this distinction and therefore significantly underestimated L.

The proper value of L is not the average duration of a single episode of civilization on a planet, which for Earth is about 400 years. Rather, L is much larger, being the sum of recurrent episodes of civilization, and constitutes a substantial fraction of the biological lifetime of the intelligent species. The average species lifetime for mammals is a few million years. Suppose the human species lasts another million years and our descendants have recurrent episodes of civilization for more than 10 percent of that time. Then the average effective lifetime of civilization on Earth will exceed 100,000 years, or 250 times the duration of a single episode. Other factors being the same, this generally neglected consideration should increase the expected number of civilizations in our galaxy by at least a hundredfold.

While the aggregate lifetime of civilization on a planet may be only a hundred thousand years, we should allow the possibility that a small minority of intelligent life forms, say one in a thousand, has managed to use their intelligence and technology to survive for stellar evolutionary timescales -- that is, on the order of a billion years. In that case, the average effective lifetime of civilizations in our galaxy would be about a million years.

If we now insert numbers in the Drake Equation that represent the wide range of plausible estimates for the various terms, we find that the number N of civilizations in our galaxy could range anywhere from a few thousand to about one in ten thousand. The latter (pessimistic) case is equivalent to finding no more than one civilization in ten thousand galaxies, so that ours would be the only one in the Milky Way. In the former (optimistic) case, the nearest civilization might be close enough for us to detect its radio signals. Estimates for N thus range all over the map. While this exasperates critics who demand concrete answers from science, it does not invalidate the conceptual power of the Drake Equation.
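
To make that range concrete, here is a minimal sketch that evaluates the equation at two sets of illustrative guesses. Every value below is an assumption chosen to bracket the essay's optimistic and pessimistic cases, not a measurement.

```python
# Evaluating N = Rs * nh * fl * fi * fc * L with illustrative guesses.
# All term values are assumptions bracketing the essay's range.

def drake(Rs, nh, fl, fi, fc, L):
    """Number of observable civilizations in the galaxy."""
    return Rs * nh * fl * fi * fc * L

# Optimistic: life and civilization are common, aggregate L ~ 1 million years.
optimistic = drake(Rs=10, nh=0.01, fl=1.0, fi=0.1, fc=0.5, L=1_000_000)

# Pessimistic: every biological step is rare, single-episode L ~ 400 years.
pessimistic = drake(Rs=10, nh=0.01, fl=0.01, fi=0.001, fc=0.25, L=400)

print(f"optimistic  N ~ {optimistic:,.0f}")  # ~5,000 civilizations
print(f"pessimistic N ~ {pessimistic:.0e}")  # ~1e-04: one per ten thousand galaxies
```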

If many civilizations have arisen in our galaxy, we might expect that some of them sent out colonies, and some of those colonies sent out still more colonies. The resulting waves of colonization would have spread out across the Milky Way in a time less than the age of our galaxy. So where are all those alien civilizations? Why haven't we seen them? The physicist Enrico Fermi first posed the question in 1950. Many answers have since been proposed, including (1) ours is the first and only civilization to arise in the Milky Way, (2) the aliens exist but are hiding, and (3) they have already been here and we are their descendants. In his book Where is Everybody?, Stephen Webb considers fifty proposed solutions to the so-called "Fermi Paradox", but he leaves out the most thought-provoking explanation of all, one that I call the Cosmic Quarantine Hypothesis.

In 1981, cosmologist Edward Harrison suggested a powerful self-regulating mechanism that would neatly resolve the paradox. Any civilization bent on the intensive colonization of other worlds would be driven by an expansive territorial impulse. But such an aggressive nature would be unstable in combination with the immense technological powers required for interstellar travel. Such a civilization would self-destruct long before it could reach for the stars.

The unrestrained territorial drive that served biological evolution so well for millions of years becomes a severe liability for a species once it acquires powers more than sufficient for its self-destruction. The Milky Way may well contain civilizations more advanced than ours, but they must have passed through a filter of natural selection that eliminates, by war or other self-inflicted environmental catastrophes, those civilizations driven by aggressive expansion. That is, the acquisition of powerful technology ultimately selects for wisdom.

However, suppose an alien civilization somehow finds a way to launch the aggressive colonization of other planetary systems while avoiding self-destruction. It would only take one such case, and our galaxy would have been overrun by the reproducing colonies of the civilization. But Harrison proposed a plausible backup mechanism that comes into play in the event that the self-regulating control mechanism fails. The most evolved civilizations in the galaxy, he suggested, would notice any upstart world that showed signs of launching a campaign of galactic conquest, and they would nip it in the bud. Advanced intelligence might regard any prospect of the exponential diffusion throughout the Milky Way of self-replicating colonies very much as we regard the outbreak of a deadly viral epidemic. They would have good reason, and presumably the ability, to suppress it as a measure of galactic hygiene.

There may be many highly evolved civilizations in our galaxy, and some of them may even be the interstellar colonies of others. They may control technologies vastly more powerful than ours, applied to purposes we can scarcely imagine. But Harrison's regulatory mechanisms should preclude any relentless wave of colonization from overrunning and cannibalizing the Milky Way.

By most appearances, the dominant civilization on our planet is of the expansive territorial type, and is thus headed for self-destruction. Only if we can intelligently regulate our growth-obsessed and self-destructive tendencies is our civilization likely to survive long enough to achieve interstellar communication.


Genetics Today

Genetics began by being ignored. Now it has the opposite problem. Mendel was dismissed because his work seemed unimportant, but today genes are everywhere and the public is fascinated by their promises and disturbed by their threats. Scientists have been quick to emphasize both. Not for nothing has it been said that the four letters of the genetic code have become H, Y, P and E.

The last decade's advances have been amazing. We have the complete sequence of the DNA letters of the 60,000 or so working genes needed to make a human being, and will soon have that of all the so-called "junk" DNA sequence (which may reveal that it does more than its name implies). Some 10,000 different diseases have an inherited component, and - in principle at least - we know the genes involved.

That raises both hopes and fears. For diseases controlled by single genes, such as sickle-cell anaemia or cystic fibrosis, it has become easier to identify both carriers and foetuses at risk. Because any gene can be damaged in many ways - for example, there are more than 1,000 known mutations for cystic fibrosis - the tests are not straightforward, and often the best that will be possible is to tell people that they are carriers, rather than to reassure them that they are not. The decisions as to whether to become pregnant or to continue with a pregnancy will, however, become somewhat easier as the tests become less ambiguous.

Tests are commercially available for genes predisposing to cystic fibrosis and breast cancer; and the development of DNA "chips" that can screen many genes at once means that more will soon be on sale. Medicine will have to deal more and more with those who have - rightly or wrongly - diagnosed themselves as at risk.

Most people, we now realize, die of a genetic disease, or at least of a disease with a genetic component. For some, it will become possible to tell them of their plight - but why should we want to do so? Sometimes, the information is helpful. Those who inherit a disposition towards certain forms of colon cancer, for example, can be helped by surgery long before the disease appears. For other illnesses, people at high risk can be warned to avoid an environment dangerous to them. Smoking is dangerous, but a few smokers get away with it. However, anyone who carries a changed form of an enzyme involved in clearing mucus from the lungs will certainly drown in their own spit if they smoke - and that might be enough to persuade them not to. However, knowledge can be dangerous, particularly when health insurance gets involved.

The most successful kind of medicine has always been prevention rather than cure. Genetics is no different, and the hope of replacing damaged DNA by gene therapy is still around the corner, where it has been for the past ten years. Genetic surgery - the ability to snip out pieces of DNA and move them to new places - has done remarkable things, but so far has done little to cure disease.

It might, though, help prevent the world's population from starving, at least according to enthusiasts for genetically modified (GM) foods. They may be right. It has proved remarkably easy to move plant genes around. Already there are crops that have been altered to make them resistant to parasites, or to artificial weedkillers (which means that the fields can be sprayed, leaving the crop unharmed). Commercial optimism has, in Europe if not the United States, been matched by public concern about health risks. Why people are worried by the remote risk that GM foods might be dangerous to eat, when they are happy to eat cheeseburgers that definitely are, mystifies scientists; but science is less important than what consumers are willing to accept. Unless attitudes change, the hope of putting genes for, say, essential nutrients into Third World crops will probably not be fulfilled.

If interfering with plants alarms society, to do the same with animals outrages a vocal part of it. We still know rather little about how a fertilized egg turns into an adult, with hundreds of different kinds of tissue, each bearing exactly the same genetic message but with jobs as different as brain cells and bone. Although it has long been possible to grow adult plants and even frogs from single cells, the notion that it might be possible to do so with mammals seemed a fantasy - until the birth of Dolly the sheep in 1997. Then, with the simple trick of inserting the nucleus from an adult cell into an emptied egg and allowing it to develop inside a foster-mother, a sheep was made without sex: it was cloned.

Cloned sheep or cows might be important in farming, and might be used to make multiple copies of animals with inserted human genes for proteins such as growth hormone (which are already used in "pharming", the production of valuable drugs in milk). The publicity that followed Dolly led to immediate condemnation of the idea of human cloning, often without much thought as to quite why it should be so horrific. After all, we are used to identical twins (who are clones of each other), so why should an artificial version cause such horror? In the end, again, public opinion moulds what science can do, and the prospect of cloning a human being seems remote.

And why might anyone want to do it? Claims of an army of identical Saddam Husseins verge on the silly, and others of replicating a loved child who died young also seem unlikely. However, the technique has great promise in medicine. Cells of the very early embryo (stem cells, as they are called) have the potential to divide into a variety of tissues, and can be grown - cloned - in the laboratory, or even manipulated with foreign genes. Perhaps they could make new skin or blood cells, or, in time, even whole organs. Because this involves the use of very early embryos, made perhaps by artificial fertilization in the laboratory and not needed for implantation into a mother, it has become mixed up with the abortion debate. In the United States, the "Pro-Life" lobby has succeeded in denying funds from government sources for such work.

Genetics is always mixed up with politics. It has been used both to blame and to excuse human behaviour. The claim (in the end not confirmed) of a "gay gene" led to two distinct responses among the homosexual community. Some feared that the gene would be used to stigmatize them, but most welcomed the idea that their behaviour might be coded into DNA, as it meant that they could not be accused of corrupting those not already "at risk". Such opposing views apply just as much to the supposed genes that predispose to crime - are they evidence that the criminal cannot be reformed and must be locked away for ever, or should they be used in mitigation to argue that he was not acting according to his own free will?

Science has no answer to such questions, and in the end the most surprising result of the new genetics may be how little it tells us about ourselves.

Consciousness

Consciousness is widely viewed as the last frontier of science. Modern science may have split the atom and solved the mystery of life, but it has yet to explain the source of conscious feelings. Eminent thinkers from many areas of science are turning to this problem, and a wide range of theories are currently on offer. Yet sceptics doubt whether consciousness can be tamed by conventional scientific techniques, and others whether its mysteries can be understood at all.

What is Consciousness?

The best way to begin is with examples rather than definitions. Imagine the difference between having a tooth drilled without a local anaesthetic and having it drilled with one. The difference is that the anaesthetic removes the conscious pain ... assuming the anaesthetic works!

Again, think of the difference between having your eyes open and having them shut. When you shut your eyes, what disappears is your conscious visual experience.

Sometimes consciousness is explained as the difference between being awake and being asleep. But this is not quite right. Dreams are conscious too. They are sequences of conscious experiences, even if these experiences are normally less coherent than waking experiences. Indeed, dream experiences, especially in nightmares or fantasies, can consciously be very intense, despite their lack of coherence - or sometimes because of this lack. Consciousness is what we lose when we fall into a dreamless sleep or undergo a total anaesthetic.

The Indefinability of Consciousness

The reason for starting with examples rather than definitions is that no objective, scientific explanation seems able to capture the essence of consciousness.

For example, suppose we try to define consciousness in terms of some characteristic psychological role that all conscious states play - in influencing decisions, perhaps, or in conveying information about our surroundings.

Or we might try to pick out conscious states directly in physical terms, as involving the presence of certain kinds of chemicals in the brain, say.

Any such attempted objective definition seems to leave out the essential ingredient. Such definitions fail to explain why conscious states feel a certain way.

Couldn't we in principle build a robot which satisfied any such scientific definition, but which had no real feelings?

Imagine a computer-brained robot whose internal states register "information" about the world and influence the robot's "decisions". Such design specifications alone don't seem to guarantee that the robot will have any real feelings.

The lights may be on, but is anyone at home? The same point applies even if we specify precise chemical and physical ingredients for making the robot.

Why should an android become conscious, just because it is made of one kind of material rather than another?

There is something ineffable about the felt nature of consciousness. We can point to this subjective element with the help of examples. But it seems to escape any attempt at objective definition.

Louis Armstrong (some say it was Fats Waller) was once asked to define jazz.

"Man, if you gotta ask, you're never gonna know."

We can say the same about attempts to define consciousness. ...

The rest of the book gives a comprehensive guide to the current state of consciousness studies. It starts with the "hard problem" of the philosophical relation between mind and matter, explains the historical origins of this problem, and traces scientific attempts to explain consciousness in terms of neural mechanisms, cerebral computation and quantum mechanics. Along the way, readers are introduced to zombies and Chinese Rooms, ghosts in machines and Schrödinger's cat.

Nanotechnology

Manufactured products are made from atoms. The properties of those products depend on how those atoms are arranged. If we rearrange the atoms in coal we can make diamond. If we rearrange the atoms in sand (and add a few other trace elements) we can make computer chips. If we rearrange the atoms in dirt, water and air we can make potatoes.

Today's manufacturing methods are very crude at the molecular level. Casting, grinding, milling and even lithography move atoms in great thundering statistical herds. It's like trying to make things out of LEGO blocks with boxing gloves on your hands. Yes, you can push the LEGO blocks into great heaps and pile them up, but you can't really snap them together the way you'd like.

In the future, nanotechnology will let us take off the boxing gloves. We'll be able to snap together the fundamental building blocks of nature easily, inexpensively and in almost any arrangement that we desire. This will be essential if we are to continue the revolution in computer hardware beyond about the next decade, and will also let us fabricate an entire new generation of products that are cleaner, stronger, lighter, and more precise.

It's worth pointing out that the word "nanotechnology" has become very popular and is used to describe many types of research where the characteristic dimensions are less than about 1,000 nanometers. For example, continued improvements in lithography have resulted in line widths that are less than one micron: this work is often called "nanotechnology." Sub-micron lithography is clearly very valuable (ask anyone who uses a computer!) but it is equally clear that lithography will not let us build semiconductor devices in which individual dopant atoms are located at specific lattice sites. Many of the exponentially improving trends in computer hardware capability have remained steady for the last 50 years. There is fairly widespread confidence that these trends are likely to continue for at least another ten years, but then lithography starts to reach its fundamental limits.

If we are to continue these trends we will have to develop a new "post-lithographic" manufacturing technology which will let us inexpensively build computer systems with mole quantities of logic elements that are molecular in both size and precision and are interconnected in complex and highly idiosyncratic patterns. Nanotechnology will let us do this.

When it's unclear from the context whether we're using the specific definition of "nanotechnology" (given here) or the broader and more inclusive definition (often used in the literature), we'll use the terms "molecular nanotechnology" or "molecular manufacturing."

Whatever we call it, it should let us:

* Get essentially every atom in the right place.
* Make almost any structure consistent with the laws of physics and chemistry that we can specify in atomic detail.
* Have manufacturing costs not greatly exceeding the cost of the required raw materials and energy.

There are two more concepts commonly associated with nanotechnology:

* Positional assembly.
* Self-replication.

Clearly, we would be happy with any method that simultaneously achieved the first three objectives. However, this seems difficult without using some form of positional assembly (to get the right molecular parts in the right places) and some form of self-replication (to keep the costs down).

The need for positional assembly implies an interest in molecular robotics, e.g., robotic devices that are molecular both in their size and precision. These molecular scale positional devices are likely to resemble very small versions of their everyday macroscopic counterparts. Positional assembly is frequently used in normal macroscopic manufacturing today, and provides tremendous advantages. Imagine trying to build a bicycle with both hands tied behind your back! The idea of manipulating and positioning individual atoms and molecules is still new and takes some getting used to. However, as Feynman said in a classic talk in 1959: "The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom." We need to apply at the molecular scale the concept that has demonstrated its effectiveness at the macroscopic scale: making parts go where we want by putting them where we want!

The requirement for low cost creates an interest in self-replicating manufacturing systems, studied by von Neumann in the 1940s. These systems are able both to make copies of themselves and to manufacture useful products. If we can design and build one such system, the manufacturing costs for more such systems and the products they make (assuming they can make copies of themselves in some reasonably inexpensive environment) will be very low.
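
The cost argument can be seen with a toy doubling model; all of the figures below are invented purely for illustration.

```python
# Toy model of why self-replication keeps manufacturing costs low:
# a machine that copies itself each cycle grows exponentially in number,
# so any fixed up-front cost is spread across a vast output.
# Both numbers below are invented for the example.

design_cost = 1e9        # assumed one-off cost of the first replicator ($)
cycles = 30              # assumed replication cycles, doubling each time
machines = 2 ** cycles   # ~1.07 billion machines after 30 doublings

print(f"machines built:   {machines:,}")
print(f"cost per machine: ${design_cost / machines:.2f}")  # about $0.93
```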

Key Concepts for the 21st Century

'Key Concepts for the 21st Century' brings together a selection of leading scientific concepts concerning the formation of the universe. Introduced here, they summarise the incredible progress achieved through scientific development. These advances have made an impact on all our lives and have challenged the way we look at the world.

Newton's Mechanics

The theory of motion presented by Sir Isaac Newton in his great Principia (1687). It consists of a set of mathematical laws describing the rigidly deterministic motion of objects under the action of forces against the backdrop of an absolute space and absolute time. Newtonian mechanics governed the way in which scientists described the physical world for more than two centuries, until it was overthrown by experimental and theoretical developments in the early part of the 20th Century.

Quantum Theory

Quantum theory describes the behaviour of matter on very small scales. The quantum world essentially comprises two distinct notions. One of these is that matter and energy are not smoothly distributed but are to be found in discrete packets called quanta. The other is that the behaviour of these quanta is not predictable as in Newton's theory, but that only probabilities can be calculated.

The Big Bang

The Big Bang is a term, originally coined by Sir Fred Hoyle, that describes the standard picture of the cosmos and how it evolves. Currently expanding and cooling, the universe was hotter and denser in the past. Clues to its high-energy phase can be found in its expansion, in the relic radiation that pervades all space, and in the trace quantities of light atoms cooked in the primordial nuclear furnace. The early stages of the Big Bang are used by particle cosmologists to study the character of the fundamental forces of nature. The Big Bang model breaks down at the very beginning of space and time because of the existence of a singularity. It is therefore seriously incomplete, and will remain so unless and until a quantum theory of gravity has been worked out.

Black Holes

Black holes are regions of space-time where the effect of gravity is so strong that light cannot escape. Black holes are thought to exist in nature, but though the evidence for them is compelling, it remains circumstantial. For theorists, black holes provide natural test cases in which to try to explore the consequences of fitting Einstein's general theory of relativity together with the principles of quantum mechanics. Stephen Hawking showed that quantum effects can allow black holes to radiate, so that they are not entirely black.

Relativity

Albert Einstein developed the theory of relativity in a series of monumental papers in the early part of the 20th century, beginning with the publication of the special theory of relativity in 1905 and culminating in the general theory of 1915. Relativity theory is a theory of space and time. It deprived physics of the absolute meaning of these concepts that was embedded in Newtonian mechanics. Dealing not with space and time separately, but with a hybrid concept called space-time (which can be curved and warped), relativity replaced Newton's law of gravity with a theory of how space can be distorted by the presence of mass.

Singularities

A singularity is a point or region of space-time where the mathematical equations of a theory break down because some quantity becomes infinite. The centre of a black hole is an example of such a singularity in the general theory of relativity, as is the origin of the universe in the Big Bang model. Penrose and Hawking have proved a number of theorems about the nature and occurrence of these singularities. Their existence in Einstein's theory suggests that general relativity may be incomplete. A quantum theory of gravity is required to describe the properties of matter at the enormous densities that pertain at the Big Bang or in a black hole.

Unified Theories

As physics has grown through the 20th century, it has brought more and more disparate phenomena within the scope of unified theories. The first major step in this programme was the unification of the theories of electricity and magnetism by James Clerk Maxwell, to produce a theory of electromagnetism. Theories now exist in which electromagnetism and the nuclear forces can be described in terms of a single set of mathematical formulae. Physicists would like to include the one force missing from this treatment so far - gravity - but this force has so far eluded attempts to include it. If and when gravity is unified, a 'Theory of Everything' would be the result.

Quantum Gravity

The 'missing link' in the chain of reasoning leading to a Theory of Everything is a mathematical description that combines the general theory of relativity with the ideas of quantum mechanics. Although much effort has been expended in the search for such a theory, formidable mathematical difficulties have defeated many attempts. Only in a few special cases have gravity and quantum theory been combined in an intelligible way.

NASA sails to the stars

NASA is setting sail for the stars - literally. NASA's Marshall Space Flight Center in Huntsville, Ala., is developing space sail technology to power a mission beyond our solar system.

"This will be humankind's first planned venture outside our solar system," said Les Johnson, manager of Interstellar Propulsion Research at the Marshall Center. "This is a stretch goal that is among the most audacious things we've ever undertaken."

Towards Alpha Centauri

The interstellar probe will travel over 23 billion miles - 250 astronomical units - beyond the edge of the solar system. The distance from Earth to the Sun, 93 million miles, is one astronomical unit. For perspective, if the distance from Earth to the Sun equaled one foot, Earth would be a mere 6 inches from Mars, 38 feet from Pluto, 250 feet from the boundaries of the solar system, and a colossal 51 miles from the nearest star system, Alpha Centauri.
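
The scale model is easy to check. The sketch below redoes the arithmetic; the distances in astronomical units are rough textbook values assumed for the illustration.

```python
# Checking the article's one-foot-per-AU scale model. Distances in AU are
# approximate values assumed for illustration (1 AU = 93 million miles).

FEET_PER_MILE = 5280

distances_au = {                        # approximate distance from Earth
    "Mars (at closest approach)": 0.5,  # 0.5 ft = 6 inches
    "Pluto": 38,
    "edge of the solar system": 250,
    "Alpha Centauri (~4.3 light-years)": 271_000,
}

for body, au in distances_au.items():
    feet = au                           # one foot per astronomical unit
    print(f"{body}: {feet:,.1f} ft (~{feet / FEET_PER_MILE:.1f} mi)")
# Alpha Centauri comes out near 51 miles, matching the article's figure.
```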

This first step beyond our solar system en route to the stars has an estimated trip time of 15 years.

Proposed for launch in a 2010 time frame, an interstellar probe - or precursor mission, as it's often called - will be powered by the fastest spacecraft ever flown. Zooming toward the stars at 58 miles per second, it will cover the distance from New York to Los Angeles in less than a minute. It's more than 10 times faster than the Space Shuttle's on-orbit speed of 5 miles per second.

Traveling five times faster than Voyager - a spacecraft launched in 1977 to explore our solar system's outer limits - an interstellar probe launched in 2010 would pass Voyager in 2018, going as far in eight years as Voyager will have journeyed in 41 years.
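
The speed claims also follow from quick arithmetic; in the sketch below, the New York-to-Los Angeles distance is an assumed ~2,450 miles, while the other figures come from the article.

```python
# Rough arithmetic behind the speed comparisons.

probe_mps   = 58     # interstellar probe speed, miles per second
shuttle_mps = 5      # Space Shuttle on-orbit speed, miles per second
ny_to_la    = 2450   # assumed great-circle distance, miles

print(f"NY to LA:       {ny_to_la / probe_mps:.0f} seconds")     # ~42 s, under a minute
print(f"vs the Shuttle: {probe_mps / shuttle_mps:.1f}x faster")  # ~11.6x

# At five times Voyager's speed, eight years of flight covers roughly
# 8 * 5 = 40 Voyager-years of distance; by 2018, Voyager (launched 1977)
# will have flown 41 years, which is when the probe would overtake it.
```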

Johnson says transportation is quite possibly the toughest challenge with interstellar missions because they have to go so far, so fast. "The difficulty is that rockets need so much fuel that they can't push their own weight into interstellar space. The best option appears to be space sails, which require no fuel," he said.

Thin, reflective sails could be propelled through space by sunlight, microwave beams or laser beams - just as the wind pushes sailboats on Earth. Rays of light from the Sun would provide tremendous momentum to the gigantic structure. The sail will be the largest spacecraft ever built, spanning 440 yards - twice the diameter of the Louisiana Superdome.

"Nothing this big has ever been deployed in space. We think we know how to do it, but we're in the beginning phases of turning a concept into a real design," Johnson said.

Researchers are optimistic about recent breakthroughs with strong, lightweight composite materials. A leading candidate for sails is a carbon fiber material whose density is less than one-tenth ounce per square yard - the equivalent of flattening one raisin to the point that it covers a square yard. In space the material would unfurl like a fan when it's deployed from an expendable rocket.

About the Marshall Center

The Marshall Center is leading NASA's transportation research for interstellar probes. Engineers at Marshall are conducting laboratory experiments to evaluate and characterize materials for space sails. Materials will be exposed to harsh conditions in a simulated space environment to test their performance and durability in extremely hot and cold temperatures. The emphasis of the current research effort is on the interstellar precursor missions designed to set the stage for missions to other star systems later this century.

Marshall is partnering with NASA's Jet Propulsion Laboratory in Pasadena, Calif. The Jet Propulsion Laboratory has overall responsibility for NASA's interstellar missions and the Marshall Center is responsible for developing transportation systems for the missions. Marshall's effort is part of its Advanced Space Transportation Program, NASA's core technology program for all space transportation. The Advanced Space Transportation Program is pushing technologies that will dramatically increase the safety and reliability and reduce the cost of space transportation.

SETI@home

Are we alone in the Universe? This is the question that has baffled and fascinated mankind for centuries. One only has to look at the plethora of popular science-fiction TV shows and films to see that we are fascinated by the idea of other intelligent life "out there". We still have no conclusive proof that we are, or are not, alone. Discounting the thousands of unsubstantiated UFO reports, as far as we know, E.T. has not dropped in, and Mr Spock has not popped by to see if we are living long and prospering.

But now mankind has the technology to search the heavens, if on a somewhat limited basis - but then, there is rather a lot of ground to cover. This search has a name: SETI, the Search for Extraterrestrial Intelligence, a scientific effort to determine whether there is intelligent life out in the universe. SETI teams use many methods in the search. Many scan the billions of radio frequencies that flood the universe, looking for another civilization that might be transmitting a radio signal. Others look for signals in pulses of light emanating from the stars.

And now anyone with a humble desktop computer can take part in the search, thanks to a project called "SETI@home". This project, based at UC Berkeley in the USA, uses the world's largest single-dish radio telescope, at Arecibo in Puerto Rico, to search for any possible radio signal from another world. The telescope is 305 m (1,000 feet) in diameter, 167 feet deep, and covers an area of about twenty acres. But the telescope records an enormous amount of data; how can it all possibly be analysed? The answer: break it up into small chunks and distribute it to as many computers as possible...

The UC Berkeley SETI team has discovered that there are already thousands of computers that might be available for use. Most of these computers sit around most of the time with screensavers accomplishing absolutely nothing and wasting electricity to boot. This is where SETI@home (and you!) come into the picture. The SETI@home project hopes to convince you to allow them to borrow your computer when you aren't using it and to help them "...search out new life and new civilizations." This is accomplished with a screen saver that fetches a chunk of data from the SETI team over the internet, analyzes that data, and then reports the results back. When you need your computer back, the screen saver instantly gets out of the way and only continues its analysis when you are finished with your work.
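
The chunk-and-distribute idea is simple enough to sketch. The toy below is a hypothetical illustration of the scheme, not SETI@home's real pipeline; the chunk size and detection threshold are invented for the example.

```python
# Toy sketch of the SETI@home scheme: carve a big recording into small work
# units, analyze each one independently, and report candidates to a server.
# Hypothetical illustration only; chunk size and threshold are invented.
import random

def make_work_units(samples, unit_size=500):
    """Split the raw signal into independent chunks for distribution."""
    return [samples[i:i + unit_size] for i in range(0, len(samples), unit_size)]

def looks_like_signal(unit):
    """Stand-in analysis: flag a chunk whose peak towers over its mean."""
    return max(unit) > 5 * (sum(unit) / len(unit))

recording = [random.random() for _ in range(10_000)]  # fake radio noise
for i, unit in enumerate(make_work_units(recording)):
    if looks_like_signal(unit):
        print(f"work unit {i}: candidate signal, report to the server")
# Pure noise rarely trips the threshold, consistent with no detections so far.
```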

Should you be fortunate enough to be the first to discover a signal from another world, then no doubt instant fame and possibly fortune will follow.

No conclusively extraterrestrial signal has yet been discovered, but who knows - it could be you! Be sure to visit the SETI@home website to download the screensaver and find other related info.

The Theory of Natural Selection

On the Origin of Species By Means of Natural Selection is the grand theory of the age of grand theories. As Darwin himself expressed it, it was a theory about all organisms throughout all time "by which all living and extinct beings are united by complex, radiating and circuitous lines of affinities into one grand system". Darwin took the analogy of a tree to symbolise his vision:

"The green and budding twigs may represent existing species. At each period of growth, all the growing twigs have tried to branch out and on all sides, and to overtop and kill the surrounding twigs and branches, in the same manner as species and groups of species have tried to overmaster other species in the great battle for life."

Darwin established the argument for natural selection by first pointing to artificial selection, the kind engaged in by the pigeon fancier and stockbreeder, who produce better breeds by working on the variation naturally and randomly occurring within species. Natural selection works on the same variations in the wild, in the context of the struggle for existence, where more organisms are born than can survive and reproduce. Those better adapted, the fitter, are more likely to survive and leave more offspring.

Where Malthus used the metaphor of the struggle for existence in relation to collective activity, that of tribes, Darwin saw the struggle taking place at the level of the individual: "individuals having any advantage, however slight, over others, would have the best chance of surviving and of procreating ... On the other hand, we may feel sure that any variation in the least degree injurious would be rigidly destroyed". A secondary mechanism, sexual selection, is added to the struggle for food and survival. In sexual selection the struggle is for mates and reproductive success. Natural selection means an increase in the frequency of those best adapted; their characteristics spread through a whole population, until the average character of a species changes.

All life is genealogically connected by the process of "descent with modification." Small changes that are continually underway, it is assumed, will eventually add up to the major developments, the appearance of new forms of life. To accommodate his theory Darwin needed huge amounts of geological time, both to permit natural selection to operate and to house the fossil record of this evolutionary process. His discussion of the geological record, despite the acknowledged gaps, points out that more general and linking forms are found lower, hence earlier, in the fossil record, with more specialised forms higher and hence later; life never moves back on itself. Natural selection occurs within geographic distribution across specific environments; the fitter survivors are those best adapted to take advantage of their environment.

"From the war of nature, from famine and death, the most exalted object which we are capable of conceiving, namely, the production of the higher animals, directly follows. There is a grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being evolved."

A grand unifying theory obviously had to include mankind. "The subject of man and his place in nature was so woven into Darwin's thought that it forms an indispensable part of the nature of his beliefs." The first passage in Darwin's notebooks that clearly enunciates the idea of natural selection and applies it to man was written on November 27, 1838. In another of his notebooks Darwin noted, "I will never allow that because there is a chasm between man ... and animals that man has a different origin". What Darwin would not allow he took a very long time to get around to saying. "Light will be thrown on the origin of man and his history," Darwin had written in Origin. Both the intensely Christian Charles Lyell (Antiquity of Man, 1863) and Huxley (Man's Place in Nature, 1863) published before Darwin, so The Descent of Man could have been no surprise when it was finally published.

Darwin left plenty of scope for those who would interpret natural selection as theistic evolutionism, even creationism. "As natural selection works solely by and for the good of each being, all corporeal and mental endowments will tend to progress towards perfection." Intellectual adjustment along this line had been underway since the Reformation. Experts argue vociferously over whether Darwin himself remained a believer; maybe he did, maybe he did not - most likely he devolved into a vestigial agnosticism. Darwin explicitly defended the idea that evolution by natural selection did not have an intentional design, which would invoke the old idea of creation by design that his entire theory sought to replace. But he very clearly permitted the progressive, upward-escalator idea of progress that had long been the understanding of God's providential purpose in the creation of natural law. His imagery of nature red in tooth and claw could nevertheless be understood as all being for the best in the best of all possible worlds.