Analog SFF, September 2008


by Dell Magazine Authors


  Let's look more closely at self-assembly.

  Diffusion brings components into proximity—within the tiny volume of an E. coli bacterium, it's been calculated that every two organic molecules meet about once per second—and then non-bonding electrostatic attraction may assemble something larger. Randomly colliding components won't always orient properly to connect, but soon enough the flagellum does come together.

  As another example, kinesins are motor proteins that pull themselves—and cargo—along intracellular tracks called microtubules. The kinesin molecules exploit thermal motion within the cell to take two to three hundred “steps” per minute. This is one of nature's neat tricks: using random thermal motion to drive motion in one direction. Symmetry-breaking mechanisms—effectively like a ratchet and pawl—favor selected impacts, giving rise to a directional bias.

  Biological motors can be quite powerful. Keith Firman of the University of Portsmouth (UK) showed a video of a 20-nm device, driven by a light-powered proton pump, crossing 70 microns. That's 3,500 times its own length.
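
  The arithmetic behind that ratio is easy to check. Here is a back-of-the-envelope sketch in Python (my own check, not Firman's):

    # Distance covered relative to device size, from Firman's demo figures
    device_length_nm = 20
    distance_nm = 70 * 1_000                 # 70 microns expressed in nanometers
    print(distance_nm / device_length_nm)    # 3500.0 -- 3,500 times its own length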

  Naturally occurring biomachines may serve as nanocomponents, but so might engineered biomachines. Walking and twisting motors built from DNA strands have been simulated.

  Perhaps mechanical and biological mechanisms will be combined. For example, carbon nanotubes might serve as tracks for kinesins.

  It's unlikely that naturally occurring proteins will fit our every need. Then what?

  Custom-designing a protein is a hard problem. Proteins consist of multiple peptides—each peptide, in turn, composed of multiple amino acids, themselves complex. A protein's function depends less on its composition—so many carbons, oxygens, hydrogens, etc.—than on the precise structure and shape of the molecule. The protein folds as it assembles, subject to the complex interplay of electrostatic interactions (and quantum uncertainties) among its many parts. Modeling the folding of a single protein can consume weeks of computing time on an IBM Blue Gene/L, one of the fastest supercomputers in the world.[13]

  Rather than design proteins from scratch, we may modify existing ones. Replacing one or more binding sites changes the protein's bonding behavior without altering its overall shape—hence leaving the protein's structural properties intact.

  Peptides often connect with single bonds, and single bonds can (and do) twist. Twisting is much of why peptide chains take so long to settle into a stable configuration, and why protein formation is hard to model. Christian Schafmeister of Temple University has built artificial peptides, called bis-peptides, which form double bonds. Bis-proteins assembled from bis-peptides don't twist.[14] Bis-proteins assemble more quickly than natural proteins. The behaviors of bis-proteins are more readily simulated than those of natural proteins, whose shapes remain mostly unknown.

  How might protein-based components, whatever their source, be mass-produced? Schafmeister discussed an electrochemical synthesis vat, in which a computer mediates every step in the self-assembly of complex structures. The PC sends electrical activation signals to an array of catalysts. One catalyst enables each assembly step. Other catalysts drive disassembly of unwanted byproducts and excess intermediate components. Of course first we need to design and synthesize the catalysts....
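
  To make the computer-mediated control idea concrete, here is a purely hypothetical sketch in Python. The catalyst names, the sequence, and the activate function are my inventions for illustration, not Schafmeister's design:

    # Hypothetical controller for a computer-mediated synthesis vat.
    # Each "catalyst" is modeled as an electrode address that can be pulsed.
    assembly_steps = ["catalyst_A", "catalyst_B", "catalyst_C"]   # invented names
    cleanup_steps = ["scavenger_X"]                               # disassembles byproducts

    def activate(catalyst: str, seconds: float) -> None:
        # Stand-in for sending an electrical activation pulse to one electrode.
        print(f"pulse {catalyst} for {seconds} s")

    for step in assembly_steps:
        activate(step, 0.5)     # enable exactly one assembly step at a time
    for step in cleanup_steps:
        activate(step, 2.0)     # drive disassembly of unwanted intermediates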

  So: The good news is that research offers several promising approaches. It will take a while to sort out which methods work best for which tasks. Until nanocomponent technologies are more settled, we're unlikely to see with much clarity to the later stops along the road.

  * * * *

  Hey! What Happened to Replicators?

  Chemistry (gazillions of jostling molecules) and biology (gazillions of reproducing cells) operate with massive parallelism. In that long-ago Analog article and, in more detail, in Engines of Creation, we were given another vision of parallelism: the universal assembler.

  Assemblers were nanoscale machines that could build—well, anything. Anything, that is, for which: raw materials were at hand, energy was available, and directions were onboard.

  Most products, like that sports car I covet, would require that assemblers cooperate. Lots of assemblers. And how would we get all those assemblers? From assemblers assembling more assemblers, assembling more assemblers ... until there were enough for the job.

  Fiction has a sexier name for self-assembling robots (both nanoscale and larger): replicators. With sunlight for energy and the entire world for raw materials, it's easy to construct entertainment—or alarmist scenarios—in which replicators just keep replicating. Before you know it, there's nothing but replicators. And in these scenarios, what of humans?

  Worst case, we're raw materials.

  You will have noticed: Nothing even vaguely assemblerlike appears in the roadmap. Avoidance of negative press may have influenced the choice of direction, but there is a more basic reason. To reap the benefits of nanotechnology does not require anything as complex as an all-purpose, self-replicating, autonomous assembler.

  Two decades ago, Drexler envisioned teams of assemblers at work within synthesis vats and tabletop factories—not in the wild. Regardless, in the wild and living off the land is how much of the public (and some fiction) perceives assemblers. Asked at the recent conference about replicators, Drexler characterized that early description of nanotech as a biological analogy and proof by example. He said he considers Engines of Creation obsolete.

  Forget living in the wild—useful nanofactories won't even need vats of mobile assemblers. Chris Phoenix, director of the Center for Responsible Nanotechnology, laid out a case against mobile assemblers in a 2004 interview.[15] Stationary assemblers—think: tiny robotic arms—are simpler and more productive than free-floating assemblers. It's simpler to deliver energy and raw materials to fixed assemblers. Fixed assemblers are easier to physically orient to the task than are free floaters. Prepositioned assemblers are easily coordinated by a central computer through permanent comm lines (perhaps carbon-nanotube wires). It is much harder to maintain communications with mobile assemblers, which would not trail cables lest they tangle.

  A fixed ‘bot that's taken out of its factory won't harm anything.

  Basically, replicators are a hard problem, more complex than alternate nanoscale manufacturing approaches, scary to much of the public, and off the community's roadmap.

  But are mobile, independent, self-replicating assemblers possible? In theory, sure.

  Fear of gengineered superbugs predates most qualms about nanobots replicating. Bioengineers allayed many concerns through a program of community self-governance informed by dialogue with ethicists. Ethical guidelines, peer reviews, and funding reviews have kept genetic engineering safe and beneficial.

  Nanotech researchers are taking the same approach. The Foresight Nanotech Institute and the Institute for Molecular Manufacturing have proposed ethical guidelines for the development of nanotech.[16] Among other topics, the guidelines address alternatives to autonomous, self-replicating assemblers. Should autonomous, self-replicating assemblers be developed anyway, the guidelines argue for layered security measures to counter accidental or purposeful release. A few examples of safeguards:

  Replicators can be designed to be dependent—say, for their fuel—on chemicals unavailable in natural environments. This is like designing a genetically modified bacterium to starve outside the lab for lack of a rare trace element.

  Material-scavenging functions can be limited to specific materials not found in nature or in living cells.

  Onboard programming can defend against software errors—analogous to mutations in a bacterium—by means such as error-correcting codes.

  But could someone intentionally design replicators to survive in the wild? Again, sure—given lots of money, because replicators are a tough problem. If it's any consolation, there are probably much easier ways to do us harm.

  Even replicating nanobots designed with a malicious intent may not prove fatal. We manage to share Earth with a lot of ever-mutating, mindlessly replicating bacteria.

  * * * *

  Are we out of the woods?

  Let us suppose we have sidestepped, at least for the near future, the gray-goo scenario of replicators run amok. Does that mean nanotechnology is safe?

  To be determined.

  Nanotech's big uncertainty is a matter for toxicologists. Materials often exhibit different properties, chemically and biologically, at the nanoscale than in bulk—and yet some nanomaterials have been introduced to factories and mass-market products without any new health testing. To infer a substance's safety from bulk-level testing while touting the differences manifest only at nanoscale ... does that strike anyone else as a gamble?

  Toxicologists are starting to consider effects unique to the nanoscale. For example, it's been noted that materials long deemed safe could never before—until formed into nanoparticles—reach the most sensitive parts of the lungs.[17] The Environmental Protection Agency recently asserted the authority to regulate—as pesticides—the nanoscale silver particles some new clothes washers release to kill bacteria.[18]

  During a conference lunch, an insurance-industry representative brought up asbestos. Asbestos is a useful material whose health implications (asbestosis and mesothelioma) went unrecognized for decades. Related health science was behind the curve. Insurers were taken entirely by surprise. Class-action suits bred like, well, poorly designed replicators. Dozens of companies somehow linked to asbestos went bankrupt, some perhaps deservedly, but others perhaps the victims of junk science and improper litigation.[19]

  Let's hope nanopollutants aren't this century's version of the asbestos surprise.

  * * * *

  On the sunny side of the nanotech street

  I'm not an every-silver-lining-has-a-cloud sort of guy. Let's turn our attention to plucking some low-hanging fruit along the nanotech road.

  The roadmap focuses on energy and healthcare as opportunities for early benefits. We'll start with a few energy-related opportunities. Fuel cells that use catalyst nanoparticles to boost efficiency. Photovoltaic cells embedded with quantum dots to extract energy from wavelengths too short to be tapped by conventional solar cells. Cables and wires with fewer molecular-level defects—less electrical resistance—to distribute power with lower losses. Ultracapacitors, using the very high capacitance of nanoporous electrolytes, to replace batteries in many applications.

  Some short-term healthcare opportunities?[20] Nanoshells for improved control over drug delivery. Nanoparticle contrast agents for more sensitive MRI scans. Engineered protein sensors for earlier detection of tumors. Engineered proteins that attack tumor cells. Engineered protein sensors that detect and report (fluoresce or change colors in the presence of) biohazards.

  Nanotech is receiving a lot of public funds. It may face a headwind of public skepticism. An emphasis on applications with obvious public benefit and visibility is sound strategy.

  * * * *

  Stronger and smarter

  Let's be a bit more speculative.[21]

  Fair warning: This isn't a comprehensive forecast—nanotech will change everything. That said, how might nanotech influence science fiction in the next few years, and our lives over the next few decades?

  Many payoffs will come from structural nanotech: vastly improved materials.

  Today's metals are rife with voids, cracks, and impurities. Metals without defects will be stronger by about two orders of magnitude. Imagine how much weight can be taken out of cars and trucks (and rockets) with no reduction in safety—and with major gains in fuel efficiency.[22]

  So there you have the nanotech-enabled auto of the future: electrical, powered by ultracapacitors or fuel cells, and weighing perhaps a few hundred pounds.

  An SF standby is the electromagnetic cannon. These weapons come in two variations, rail-guns and coil-guns. Their peaceful counterpart is the electromagnetic mass driver. All three use EM pulses, rather than chemical reactions, to launch projectiles. It's a very elegant notion, compellingly demonstrated decades ago.[23] So why don't we have them?

  Because the harder one hurls the projectile, the fiercer the electromagnetic forces exerted on the launcher itself. Today, those forces tear the launcher apart. But defect-free copper is approximately eighty times stronger than the natural stuff. Maybe we'll yet see rapid-fire rail-guns like those on Stargate Atlantis and Battlestar Galactica.

  Stronger is good, but we can also make material smarter. Materials coated or embedded with nanosensors can announce defects. Such sensors may be as simple as molecules that, when under mechanical stress, generate a voltage by piezoelectric effect or change their color. Nanoparticles may migrate within smart material to seal tiny cracks before they can grow.

  David Leigh (recall ultraviolet light chasing a water droplet uphill) commented, “Nature uses controlled molecular motion for everything.” How about using nanotech to give materials active surfaces? Paint hulls with bistable (hydrophobic/hydrophilic) molecules, and the ocean itself will push/pull ships. Nothing says only UV can flip such surfaces. To steer or change speed, merely tune control fields (electric or magnetic) or heaters in different parts of the hull.

  * * * *

  Moore's Law marches on

  We've noted that a state-of-the-art semiconductor foundry makes chips with 45-nm features. We've discussed some of the experimentation leading the way to yet smaller features.

  It's not a big leap to predict that our computers will use denser, higher-capacity random-access memories. But RAM in the future may not be wholly electronic. Fraser Stoddart of UCLA has demonstrated electrical readout of biomechanical RAM—an array of bistable molecular switches—with a density of 10^11 bits per square centimeter. You read that correctly: 100 gigabits per square centimeter.
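
  A quick sanity check on that density figure (my own arithmetic, assuming a uniform square grid of bits; nothing here comes from the published work):

    # Convert the reported density and estimate the implied footprint per bit.
    bits_per_cm2 = 10**11
    print(bits_per_cm2 / 1e9, "gigabits per square centimeter")   # 100.0

    nm2_per_cm2 = (1e7) ** 2            # 1 cm = 1e7 nm, so 1 cm^2 = 1e14 nm^2
    area_per_bit = nm2_per_cm2 / bits_per_cm2
    print(area_per_bit, "nm^2 per bit")                # 1000.0
    print(round(area_per_bit ** 0.5, 1), "nm pitch")   # ~31.6 nm between bits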

  Or, as suggested by the previously cited work at the University of New South Wales, memory may reside in the spin states of single atoms implanted into a semiconductor lattice.

  * * * *

  A hydrogen abstraction tool tip. Image courtesy of Nanorex, Inc.

  * * * *

  Processor chips will continue to get smaller and faster, and again not necessarily by familiar means. No more futuristic than nanotech, but tiptoeing into quite different (and also potentially revolutionary) research ... quantum computers manipulate quantum bits. A memory bit in a conventional computer takes the value zero or one; a qubit may be zero or one—or both simultaneously. In quantum-mechanical terms, that “don't know” situation is called a superposition of states.

  * * * *

  A six-way junction nanotube. Image courtesy of Nanorex, Inc.

  * * * *

  Ten qubits, to pick an arbitrary number, all of whose states are in superposition, simultaneously encompass 1,024 (2 to the 10th power) possibilities. Thirty qubits can simultaneously encompass more than one billion (2 to the 30th power) possibilities.
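
  The counting is just powers of two; each added qubit doubles the number of states a register in superposition can encompass. A tiny Python sketch, purely illustrative:

    # Simultaneous possibilities encompassed by n qubits in superposition
    for n in (10, 30):
        print(f"{n} qubits -> {2**n:,} simultaneous possibilities")
    # 10 qubits -> 1,024
    # 30 qubits -> 1,073,741,824 (a bit over one billion)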

  It would take an entire article to properly introduce quantum computing[24] or to explain why the spin states of single atoms implanted into a semiconductor lattice are candidates for implementing qubits in practical quantum computers.[25] It's worth noting, however, that the massive parallelism of fully realized quantum computing is well-suited to the computational challenges of further developing nanotech and to cracking the encryption schemes that underlie today's secure electronic communications.[26]

  * * * *

  To sum up

  Nanotechnology is no longer coming—it's here.

  Thoughtful researchers and practitioners have attempted to predict how nanotech will—and how it should—evolve. They see several paths forward and new applications soon to come within our grasp.

  Atomically precise control of matter is a very young technology. It is, almost certainly, a transformative technology. The industrial age, the computer age, and next up—the nanotech age...

  The one forecast I make with confidence is this: We will be surprised—often pleasantly, sometimes not—with the changes nanotech will bring in the coming decades.

  Copyright (c) 2008 Edward M. Lerner

  * * * *

  For further reading

  The National Nanotechnology Initiative: www.nano.gov

  National Institutes of Health: nihroadmap.nih.gov/nanomedicine/

  Department of Energy: www.energy.gov/sciencetech/nanotechnology.htm

  The Foresight Nanotech Institute: www.foresight.org

  The Institute for Molecular Manufacturing: www.imm.org

  Nanotechnology Now (general news): www.nanotech-now.com

  * * * *

  About the author

  A physicist and computer scientist, Edward M. Lerner toiled in the vineyards of high tech for thirty years. Then, suitably intoxicated, he began writing SF full time. His recent books are Moonstruck (2005), Creative Destruction (2006), and, with Larry Niven, Fleet of Worlds (2007). His short fiction and occasional fact articles appear most frequently in Analog. His website is www.sfwa.org/members/lerner/

  * * * *

  [Footnote 1: The concept of nanotech predates Engines by a bit. Experts cite a 1959 address by physicist Richard Feynman to the American Physical Society. (See “There's Plenty of Room at the Bottom,” a copy of which is posted at www.zyvex.com/nanotech/feynman.html.)]

  [Footnote 2: www.nano.gov/html/res/IntStratDevRoco.htm]

  [Footnote 3: Atoms don't have precise boundaries, due to pesky quantum-mechanical considerations. Quoted sizes for atoms derive from interatomic separations within molecules.]

  [Footnote 4: Held October 9-10, 2007, in Arlington, VA.]

  [Footnote 5: www.itrs.net]

  [Footnote 6: Gordon Moore remarked in 1965 that the number of transistors on a chip had roughly doubled every two years. He boldly predicted the trend would continue. In 1965, cramming sixty transistors onto a chip was a research project. In 2006, Intel (an apt example, since Moore cofounded Intel) introduced its Core 2 Duo microprocessor line with 291 million transistors on a chip. 2006 - 1965 = 41 years, in which Moore's Law would suggest 20.5 doublings. 60 × 2^20.5 ≈ 88 million. Not bad, considering that virtually every aspect of transistor manufacture has changed, some repeatedly, in 41 years. Chips didn't get appreciably larger, meaning that individual transistors have gotten a lot smaller over time.]
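
  A quick Python check of Footnote 6's arithmetic, using the footnote's own assumptions (a sixty-transistor baseline in 1965 and a two-year doubling time); this is my own back-of-the-envelope verification:

    # Moore's Law estimate from Footnote 6
    baseline_1965 = 60                      # transistors on a 1965 research chip
    doublings = (2006 - 1965) / 2           # 41 years -> 20.5 doublings
    estimate = baseline_1965 * 2 ** doublings
    print(f"{estimate:,.0f}")               # about 88-89 million, vs. 291 million actual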

 
