The Perfectionists

by Simon Winchester


  Machines capable of performing such a task, a process known as photolithography, were already available. Letterpress printers, for example, were employing the idea when, at around this time, they began switching to the use of polymer plates. Instead of using hand-assembled forms of lead characters, a printer could now simply type in a page of work and feed it into a photolithography engine, and out would come the page reproduced as a sheet of flexible polymer. All the letters and other characters, all the p’s and q’s, would now stand type-high above the polymer plate’s surface, ready to be impressed onto paper with a platen press, say, which would give the resulting sheet the same look and feel as an old-fashioned piece of handmade letterpress work. Why not modify such a machine to print imagery, of circuitry rather than literature, onto not polymer or paper but silicon wafers?

  The mechanics of actually doing such a thing turned out to be formidable—all the imagery was tiny, all the work necessarily demanded the highest precision and the closest tolerances, and the results were minute in aspect and, at first, imperfect almost every time. Yet, after months of work in the early 1960s, Robert Noyce, Gordon Moore, and their team at Fairchild eventually managed to assemble the congregated devices and to make them planar—to flatten them and reduce their volume and their power consumption and their heat emission, and to place them together on a flat substrate and market them as integrated circuits.

  This was the true breakthrough. Lilienfeld had been first with the idea, in the 1920s; Shockley and his team of Nobel laureates at Bell Labs had taken the first shaky steps; and then, with Hoerni’s invention of the planar transistor, with the internals being arranged in thin layers rather than as discrete crystals, suddenly it became possible to miniaturize the circuitry, to make electronics of ever-increasing speed and power and ever-decreasing size.

  The transistors in these circuits, with just the application of tiny bursts of power, could be switched on and off, ceaselessly and very swiftly. These new minute baubles of silicon thus became crucial to the making of computers, which perform all their digital calculations on the basis of a transistor’s binary state, on or off—and if the transistors are numerous enough and swift enough in the performance of this task, they can render a computer very powerful, extremely quick, and enticingly cheap. So the making of integrated circuits led inexorably to the making of the personal computer—and to scores upon scores of other devices at the heart of which were ever-smaller and ever-quicker pieces of circuitry, conceived and designed initially by the clever group at Fairchild.
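
  A computer is, at bottom, just such switching composed into logic. As a toy illustration (mine, not Winchester’s), here is a “half adder,” the simplest arithmetic circuit, sketched in Python with booleans standing in for transistor states:

```python
# A toy sketch of binary arithmetic built from nothing but on/off states.
# The function name is invented for illustration.

def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
    """Return (sum_bit, carry_bit) for two one-bit inputs."""
    return a != b, a and b  # XOR yields the sum bit, AND yields the carry

# Adding 1 + 1: sum bit off, carry bit on, i.e. binary 10, decimal two.
print(half_adder(True, True))   # (False, True)
print(half_adder(True, False))  # (True, False) -> 1 + 0 = 1
```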

  Financially, though, Fairchild performed dismally, not least because rivals such as Texas Instruments* had the extra cash or a generous parent to allow them to expand into the emerging market. It was their frustration at Fairchild’s inability to compete that led the most ambitious of the company’s founders to leave yet again and establish their own firm anew, one that would solely design and manufacture semiconductors. This company, founded by Gordon Moore and Robert Noyce—the “Fairchildren,” they were called—was incorporated in July 1968 as Intel Corporation.

  Within three years of incorporation, the first-ever commercially available microprocessor (a computer on a chip) was officially announced. It was the Intel 4004, the famous “forty-oh-four.” And as an indication of the new kind of precision being brought to bear on this new kind of technology, it is worth remembering that buried within the inch-long processor was a tiny die of silicon, just twelve square millimeters in area, on which was engraved a marvel of integrated circuitry printed with no fewer than 2,300 transistors. In 1947, a transistor was the size of a small child’s hand. In 1971, twenty-four years later, the transistors in a microprocessor were just ten microns wide, a tenth of the diameter of a human hair. Hand to hair. Minute had now become minuscule. A profound change was settling on the world.

  Initially, Intel’s 4004 chip was created privately for a Japanese calculator-making firm named Busicom, which was struggling somewhat financially and, needing to lower its production costs, thought to introduce computer chips into its calculating engines—and so approached Intel. It is part of Intel company lore that at a brainstorming session in a hotel in the old Japanese city of Nara, a woman whose name has since been forgotten designed the basic internal architecture of the calculator in such a way as to positively require Intel, with its unique new miniaturizing abilities, to make the necessary little processing unit.

  The calculating machine was eventually created, and launched in November 1971, with advertisements describing it as the world’s first desktop machine to use an integrated circuit, a processing chip with the power of one of the legendary ENIAC room-size computers at its heart. A year later, Busicom asked Intel to lower its prices for the chips—they were then priced at about twenty-five dollars apiece. Intel said yes, but on condition that it take back the rights to sell its invention on the open market, a stipulation to which the Japanese firm reluctantly agreed. The 4004 was thereafter incorporated into a Bally computer-augmented pinball machine, and was reputedly, but wrongly, said to be aboard NASA’s Pioneer 10 space probe. NASA had thought about using it, but had decided it was too new—and the resulting chipless spacecraft spent thirty-one years after its launch in 1972 wandering through the solar system, its power supply finally giving out in 2003, seven billion miles from home.

  The repute of the 4004 spread, and Intel decided that the firm’s core business from then on would be to make microprocessors, guided by Gordon Moore’s insistence (first published in 1965, six years before his company actually made the first 4004, which hints at a certain prescience) that every year the transistors on such chips would halve in size and the chips’ speed and power would double. The minuscule would, in other words, become the microscopic, and then the submicroscopic, and then, perhaps, the atomic. Moore revised his prediction after seeing the workings and the challenges of designing the 4004, insisting now that the changes would occur every two years, not one. It is a prophecy that has proved almost precisely self-fulfilling in all the years since 1971.
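
  The revised prediction is easy to state as arithmetic. Here is a back-of-the-envelope sketch (my own illustration, with an invented function name, not anything of Intel’s) of a two-year doubling starting from the 4004’s 2,300 transistors:

```python
# Moore's revised law as compound doubling: transistor counts double
# every two years from a 1971 baseline of 2,300.

def projected_transistors(year: int, base_year: int = 1971,
                          base_count: int = 2_300,
                          doubling_period: float = 2.0) -> float:
    """Projected transistors per chip under a fixed doubling period."""
    return base_count * 2 ** ((year - base_year) / doubling_period)

for year in (1971, 1989, 2016):
    print(year, f"{projected_transistors(year):,.0f}")
# 1971 -> 2,300; 1989 -> ~1.2 million; 2016 -> ~13.6 billion,
# the same order of magnitude as the chips described below.
```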

  And so the near-exponential process of chips becoming ever tinier and ever more precise got under way—with two decided advantages recognized by the accountants of every company that chose to make chips, Intel of course included: the smaller the chips became, the cheaper they were to make. They also became more efficient: the smaller the transistor, the less electricity needed to make it work, and the faster it could operate—and so, on that level, its operations were cheaper, too.

  No other industry with a fondness for the small (the makers of wristwatches being an example) equates tininess with cheapness. A thin watch is likely to be much costlier to make than a fat one, but because of the exponentiality inherent in chip making, because the number of transistors that can be crammed onto a single line is automatically squared once you extend that line into the two dimensions of a chip, each individual transistor becomes less costly to manufacture. Place a thousand transistors along a single line of silicon, then square it, and without significant additional cost you produce a chip with a million transistors. It is a business plan without any obvious disadvantage.
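
  Put as a formula (the notation is mine, not the book’s): if a process can place $n$ transistors along one edge of a square die, the die as a whole holds

$$N = n^{2}, \qquad n = 1000 \implies N = 1{,}000{,}000,$$

so each halving of feature size, which doubles $n$, quadruples the number of transistors available at roughly the same cost.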

  The measure of a chip is usually expressed by what is confusingly called its process node, which, very crudely put, is the distance between any two of its adjoining transistors, or a measure of the time taken for an electrical impulse to travel from one transistor to another. Such a measure is more likely to give semiconductor specialists a realistic picture of the power and speed of the circuitry. For an observer outside the industry, it is still the number of transistors on the chip that offers the somewhat more dramatic illustration, even though a substantial number of those transistors are there to perform functions that have nothing to do with the chip’s computational performance.

  Node size has shrunk almost exactly as Gordon Moore predicted it would. In 1971, the transistors on the Intel 4004 were ten microns apart—a space only about the size of a droplet of fog separated each one of the chip’s 2,300 transistors. By 1985, the nodes on an Intel 80386 chip had come down to one micron, the diameter of a typical bacterium. By the end of the decade, processors typically carried more than a million transistors. And yet still more, more, were to be found on ever-newer generations of chips—and down, down, down the node distances came. Chips with names such as Klamath in 1997, Coppermine in 1999, Wolfdale, Clarkdale, Ivy Bridge, and Broadwell during the first fifteen years of the new millennium—all took part in what seemed to be a never-ending race.

  With all these last-named chips, measuring the nodes in microns had become quite valueless—only nanometers, units one thousand times smaller, billionths of a meter, now made sense. By the time the Broadwell family of chips appeared, in the middle of the 2010s, node size was down to a previously inconceivable fourteen-billionths of a meter (the size of the smallest of viruses), and a single chip could contain no fewer than seven billion transistors. The Skylake chips being made by Intel at the time of this writing have transistors dozens of times smaller than the wavelength of visible light, and so are literally invisible (whereas the transistors in a 4004 could quite easily be seen through a child’s microscope).

  There are still ever-more-staggering numbers in the works, ever more transistors and ever-tinier node sizes yet to come—and all still fall within the parameters suggested by Moore in 1965. The industry, half a century old now, is doing its level best, egged on by the beneficial economics of the arrangement, to keep the law firmly in its sights, and to achieve it, or to better it, year after year for the foreseeable future. A confident Intel executive once remarked that the number of transistors on a chip made in 2020 might well exceed the number of neurons in the human brain—with all the incalculable implications such a statistic suggests.

  ENORMOUS MACHINES SUCH as the fifteen that started to arrive at Intel’s Chandler fab from Amsterdam in 2018 are employed to help secure this goal. The machines’ maker, ASML—the firm was originally called ASM Lithography, a joint venture between Philips and Advanced Semiconductor Materials International—was founded in 1984 and spun out in part from Philips, the Dutch company initially famous for its electric razors and lightbulbs. The lighting connection was key, as the machine tools the company was established to make in those early days of the integrated circuit used intense beams of light to etch traces in photosensitive chemicals on the chips; the firm then went on to employ lasers and other intense sources as the dimensions of the transistors on the chips became ever more diminished.

  Beginning with the Intel 4004 integrated circuit, which crammed 2,300 transistors onto a sliver of silicon twelve square millimeters in area, and proceeding to today’s chips, which contain upwards of 10 billion transistors on slivers not much bigger, this graph displays the relentless truth of Moore’s law.

  Graph courtesy of Max Roser/Creative Commons BY-SA-2.0.

  IT TAKES THREE months to complete a microprocessing chip, starting with the growing of a four-hundred-pound, very fragile, cylindrical boule of pure smelted silicon, which fine-wire saws will cut into dinner plate–size wafers, each an exact two-thirds of a millimeter thick. Chemicals and polishing machines will then smooth the upper surface of each wafer to a mirror finish, after which the polished discs are loaded into the ASML machines to begin the long and tedious process of becoming operational computer chips.

  Each wafer will eventually be cut along the lines of a grid that will extract a thousand chip dice from it—and each single die, an exactly cut fragment of the wafer, will eventually hold the billions of transistors that form the nonbeating heart of every computer, cellphone, video game, navigation system, and calculator on modern Earth, and every satellite and space vehicle above and beyond it. What happens to the wafers before the chips are cut out of them demands an almost unimaginable degree of miniaturization. Patterns of newly designed transistor arrays are drawn with immense care onto transparent fused silica masks, and then lasers are fired through these masks and the beams directed through arrays of lenses or bounced off long reaches of mirrors, eventually to imprint a highly shrunken version of the patterns onto an exact spot on the gridded wafer, so that the pattern is reproduced, in tiny exactitude, time and time again.

  After the first pass by the laser light, the wafer is removed, carefully washed and dried, and then brought back to the machine, whereupon the process of having another submicroscopic pattern imprinted on it by a laser is repeated, and then again and again, until thirty, forty, as many as sixty infinitesimally thin layers of patterns (each layer and each tiny piece of each layer a complex array of electronic circuitry) have been engraved, one on top of the other. When the final etching is done and the wafer emerges, presumably now exhausted from its repeated lasering and etching and washing and drying, it is barely any thicker than when it entered as a virgin wafer three months before, such is the fineness of the work the machine has performed upon it.

  Cleanliness is of paramount importance. Imagine what might occur if the tiniest fragment of dust were to settle momentarily on top of the mask where the pattern was to be drawn, at the moment the laser was fired through it. Though the dust particle might well be invisible to the human eye, smaller than a wavelength of visible light, once its shadow passed through all the lenses, by way of all the mirrors, it would become a massive black spot on the wafer, with the result that hundreds of potential chips would have been ruined, thousands of dollars’ worth of product lost forever. This is why everything that goes on within the ASML boxes does so in warehouse-size rooms that are thousands of times cleaner than the world beyond.

  There are well-known and internationally agreed standards of cleanliness for various manufacturing processes, and while one might suppose that the clean room at the Goddard Space Flight Center in Maryland, where NASA engineers assembled the James Webb Space Telescope, was clean, it was in fact clean only to a standard known as ISO class 7, which allows as many as 352,000 particles of half a micron or larger in every cubic meter of air. Rooms within the ASML facility in Holland are very much cleaner than that. They are clean to the far more brutally restrictive demands of ISO class 1, which permits only 10 particles of one-tenth of a micron per cubic meter, and virtually none of any larger size. A human being existing in a normal environment swims in a miasma of air and vapor that is five million times less clean. Such are the demands of the modern integrated-circuit universe, where precision seems to be reaching into the world of the entirely unreal, and the near incredible.
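
  Both of the class limits quoted above follow from the particle-concentration formula in the ISO 14644-1 standard, and checking them takes only a few lines. A minimal sketch (the function name is mine; the formula is the standard’s):

```python
import math

# ISO 14644-1 class limit: the maximum count of particles per cubic meter
# at or above diameter d (in microns), for cleanliness class n, is
#   10**n * (0.1 / d) ** 2.08,
# rounded to three significant figures.

def iso_class_limit(n: int, d_microns: float) -> float:
    raw = 10 ** n * (0.1 / d_microns) ** 2.08
    magnitude = math.floor(math.log10(raw))
    return round(raw, -(magnitude - 2))  # keep three significant figures

print(iso_class_limit(7, 0.5))  # 352000.0 -- Goddard's ISO 7, half-micron particles
print(iso_class_limit(1, 0.1))  # 10.0     -- ASML's ISO 1, tenth-micron particles
```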

  With the latest photolithographic equipment at hand, we are able to make chips today that contain multitudes: seven billion transistors on one circuit, a hundred million transistors corralled within one square millimeter of chip space. But with numbers like this comes a warning. Limits surely are being reached. The train that left the railhead in 1971 may be about to arrive, after a journey of almost half a century, at the majesty of the terminus. Such a reality seems increasingly probable, not least because as the space between transistors diminishes ever further, it fast approaches the diameter of individual atoms. And with spaces that small, leakage of some property of one transistor (whether electric, electronic, atomic, photonic, or quantum-related) into the field of another will surely soon be experienced. There will be, in short, a short circuit—maybe a sparkless and unspectacular short circuit, but a misfire nonetheless, with consequences for the efficiency and utility of the chip and of the computer or other device at the heart of which it lies.

  The main mirror of the James Webb Space Telescope. At more than twenty-one feet in diameter, it will, from its location a million miles from Earth, vastly increase our ability to peer toward the very edge of the universe, and back to the time when the universe was forming. It is due to be launched in 2019.

  Thus is the tocsin being sounded. And yet, to a true chipaholic—or to a true believer that the world will be a better place if Moore’s law is rigidly observed and its predictions are followed to the letter—the mantra is a familiar one: “Just one more. One more try.” One more doubling of power, one more halving of size. Let impossible be a word that in this particular industry goes unspoken, unheard, and unheeded. Molecular reality may be about to try to impose new rules, but these are rules that fly in the face of everything that has passed before, and their observance would deny the computer world its defining ambition: that of having its reach extend, as it has for all the years of its existence, well beyond its grasp.

  And so the chip-making-machine makers (particularly those in Holland, who have invested billions in this industry and keenly want and need to preserve their investment) are now doing their level best to comply, to fulfill the wishes of chip makers whose dreams some might think technically impossible. Their new generation of devices does appear to have the ability to let the chip makers go even smaller, beyond what seems to be possible, or prudent, or both.

  The new machines no longer employ conventional laser light but what is known as extreme ultraviolet (EUV) radiation, at a specific wavelength of 13.5 billionths of a meter. This would enable, in theory, the making of transistors down to near-atomic scale, to edge-of-the-seat, leading-edge, bleeding-edge, ultrasubmicroscopic precision, while maintaining some kind of commercial edge, too.

  Dealing with EUV radiation is far from easy. It is radiation that travels only in a vacuum. It cannot be focused by lenses, and it will not work with mirrors as mirrors are generally known, but only with costly, many-layered devices known as Bragg reflectors. Moreover, EUV radiation is best produced from a plasma, a superheated, ionized state of matter, which can best be procured by firing a conventional high-powered laser at a suitable metal.
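
  The principle behind those many-layered reflectors is the classical Bragg condition, and a quick calculation (my own illustrative sketch, not ASML’s design data) shows why each layer must itself be only nanometers thick:

```python
import math

# Bragg condition for constructive reflection from a multilayer stack:
#   m * wavelength = 2 * d * cos(theta), with theta measured from the normal.
# Solve for the layer period d at the 13.5 nm EUV wavelength quoted above.
# Near-normal incidence is an assumption made for illustration.

wavelength_nm = 13.5
theta_deg = 0.0
m = 1  # first-order reflection

d = m * wavelength_nm / (2 * math.cos(math.radians(theta_deg)))
print(f"required layer period: {d:.2f} nm")  # ~6.75 nm per layer pair
# Dozens of such nanometer-scale layer pairs, stacked one atop another,
# are needed to reflect a useful fraction of the EUV beam.
```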

  An American company (which ASML subsequently bought) had already developed a unique means of producing this particular and peculiar type of EUV radiation. Some said the company’s method verged on the insane, and it is easy to see why.

 
