by Paul Dueweke
“Do you allow the computer total freedom as to where it stores various kinds of data?”
“In a way, yes. But only after I’ve taught the machine certain basic principles about the interrelationship of data types; that is, how the descriptive parameters of objects interact. For example, I’d teach it about the functional relationships between a baseball glove and a hockey stick. You see, the baseball glove interacts with the player in the same way as—”
“I see, Dr. Planck, so what you’re saying is you teach the computer basic functional relationships among the various data sets, and then you just turn it loose?”
“Well … it’s not quite that simple. I have to create certain optimization parameters that guide the location of data with other data that may be similar in certain ways. I start the computer off with a suggested list of data-type correlations and their transfer functions, but then the computer refines that list and even restructures the transfer functions among the data types. Finally, it will be totally free to modify the data types in any way that best optimizes the output. You see, that’s the beauty of a neural network. It learns from its mistakes just as you and I do.”
“That sounds very much like the neural networks other researchers are developing.”
“That’s exactly right! But what I’ve done is to give the computer the freedom to design its own neural networks and to replicate them and intertwine them in such a way as to optimize the basic parameters and transfer functions of the system. That’s just the way the human brain works. Another thing I’m working on is the integration of digital processing and neural processing to achieve the benefits of both.”
“You have used the expression transfer function a number of times. Could you explain just what you mean by that?”
“Yes, of course. The transfer function is at the heart of a neural network computer. You can picture a neural network as a web of neurons with billions of intersections, which we call nodes—not to be confused with the nodes of digital computers. As an electrical pulse of information, which is analogous to the electro-chemical pulses in your brain, reaches each node, a decision must be made as to how much of the received signal is distributed into the connecting neurons. Two or more transfer functions define each of these inter-neuron distributions. Keep in mind that the neural network of my device is an electrical analog of the electro-chemical neural network known as your brain.”
“I see. The transfer functions are the mathematical equivalents of the inter-synaptic weighting factors of the brain.”
“Yes,” replied Dr. Planck. “I didn’t want to make it any more complex than necessary. It’s like the subtle interactions among basketball players to let the teammates know whether the play will be a drive to the paint or a kick-out to three-point range. That kind of—”
“Thank you for that consideration, Dr. Planck. Could you explain what a cellular automaton is?”
“Yes, of course. That’s simply the smallest unit of autonomous, replicable code. It’s much like the human cell. It’s the building block of my digital/neural network hybrid. I designed a whole family of cellular automata, thousands of them. Each one was a block of computer code that would perform a specific function. They were similar to subroutines of the old days.”
“What do you mean—were?”
“That’s the interesting thing about this computer. Those cellular automata that I designed probably don’t even exist any more. I designed each one to seek other cellular automata that perform functions that are similar, in sometimes quite subtle ways, to the one it performs. Of course, each cellular automaton had several built-in docking sites—sort of analogous to receptors in molecular biology. After link-up and testing, they’d make a decision about whether to continue the relationship. If it was positive, then a fracture would occur in the second cellular automaton, and some functional part of it, maybe even all of it, would split off and adhere to the first one. Then the second one would go looking for a docking partner. This is a way for the system to evolve toward a higher function without the cellular automata growing excessively. I didn’t want any particular type of cellular automaton to become too complex and dominate others or to grow without constraints. That’d be sort of like cancer. So you see, after a while the cellular automata might look and behave quite differently from the ones I created.”
“That’s very exciting. It’s an evolution somewhat analogous to what takes place between proteins as they make subtle changes to human cells.”
“Right.”
“Now, if we could switch gears for a moment, I’d like to talk about an application. Why has the medical profession not accepted your diagnostic program in light of its enormous success over the last few years of testing?”
“They’re afraid. That’s all, just afraid.”
“Could you elaborate on that, please?”
“I always get in trouble with this question.” Dr. Planck paused while he stared at the ceiling for a moment. “We’ve performed over a hundred diagnoses with this system. In every case, my system plus a single physician performed faster and more accurate diagnoses with far less patient testing than any of the teams we’ve gone up against. In the few cases where neither the team nor the machine was able to develop a correct diagnosis, teaming the machine with the medical team finally yielded the right answer. I can only assume that the medical profession is afraid of the technology, that they can’t see it for what it is, a helper not a threat.”
“In the past, you’ve had much more to say about it, using such expressions as ‘blue collar doctors’. Have you changed your mind?”
Dr. Planck looked directly into the camera and said very deliberately, “My opinion is not a variable. If you want a better answer, take your question to the medical community. You might ask them how job security figures into their rejection of this diagnostic system. It seems they have learned much from the teaching community in this respect.”
The interview continued, addressing several military and economic applications for AIPs, and then ended amicably. As a final comment, Dr. Planck picked up a piece of paper from his desk and read, “I have been engaged in artificial life and other advanced computer concepts for over thirty years, and the computer research community is finally beginning to appreciate my work. I have the satisfaction of knowing I was able to point the way toward the ultimate development of the greatest machines ever created by man. I believe I will live to see the day when these machines will demonstrate their ultimate capability to infallibly and diligently serve the interests of man. My three decades of exhausting work have occupied my time to the exclusion of many personal activities such as antique car restoration and auto and motorcycle racing. I have gladly forgone these interests for the sake of my profession; however, there is one that I must attend to. I have been writing a book that will detail this great human experiment with computer evolution. For its sake, I have chosen to retire from the Institute and spend my time catching up on my book and other interests. Thank you.”
The next day Dr. Planck accepted, with no media attention, a position at COPE. There he found visionary thinkers who were willing to push the envelope of performance and applications to unprecedented heights. It would be an opportunity to develop an automated management system that he felt would be the model for corporations and institutions everywhere. He became the Associate Director for Data Services and single-handedly took over the role of computer-system advanced development while the existing Data Services staff continued with the day-to-day operations role.
Dr. Planck was a hybrid: his body was born of human parents, but his mind evolved more like the computers he nurtured than like the son of mortals. He was endowed with the passion of a human being, yet he was single-minded in his career. He didn’t just believe in computers; he was one with them. He approached computer development as a father would teach his son where the secret pools were with the most ancient trout and how to tie the perfect fly that would dance across the water to excite those docile leviathans with the vigor of youth. Dr. Planck had married twice, and each marriage lasted long enough for his wife to discover and capitulate to his lifelong mistress. In both cases, he felt betrayed by weak humans.
He did, however, have one great passion beyond computers—speed. The outlet for this passion was a 1956 Corvette with chrome heads and glass pack duals. At a time when the muscle cars, like most other motor vehicles, were equipped with silent electric motors and electronic synthesizers to replicate some engineer’s version of the sound of machismo, he had a mint-condition historic car that could outperform any other car, both on the pavement and in the testosteronic realm.
He’d become a more conservative driver since losing his driver’s license twice in five years, but he still operated at the edge of the law. He’d had confrontations with judges, none of whom shared his perspective. His position was that since he was a superior driver with racing skills honed by track experience that others could only envy, and since his machine was far superior to others, he should be allowed to supplement the legal limits with his own judgment. He came to court with data and trophies and charts, and the judges invariably failed to appreciate his genius.
Dr. Planck would drive his Vette to work each morning, never taking it out of third gear since he didn’t want to labor the engine at low RPMs. He parked exactly astride a yellow line and once used his influence to have a woman fired for daring to park next to him.
His passion for speed and cars still took a distant back seat to his communion with computers. Within two years at COPE, Dr. Planck had constructed a new coprocessor to supplement the computer he had inherited. It would be a hybrid of hybrids, including a traditional digital, electronic computer; an analog, electronic parallel-processor; and an analog, optical, neural-network parallel-processor with ten times the learning capacity and a hundred times the computational power of the original central processor. This new coprocessor would be under the control of the original computer.
Jenner was a major user of computer time so she was one of a small group invited to tour the new computer before its formal debut. Dr. Planck conducted the tour himself as he led the group through a scrubber portal into a room about a hundred feet square with a ring-shaped platform eight feet high and forty feet in diameter at its center. The base of the platform was surrounded by white panels. Occasionally one of the panels would open to allow a white-coated technician to pass through. A cathedral of massively delicate, black tubing supported a pair of mirror-like objects twenty feet above the center of the platform. It had more the appearance of an astronomical observatory than a computer center. The lighting in the room was of ordinary brightness but had an eerie red tone to it.
“You might have noticed stepping over a strip in the concrete floor before you entered the scrubber. Since the heart of my processor is a very delicate optical system, it was necessary to isolate it from the vibrations of our imperfect world. The main inner pad floats in a gelatinous material contained by the outer pad, which is supported by active mechanical isolators. We are now standing on the outer pad. The inner pad is never violated by humans except during maintenance.
“I have reconfigured the original computer that I inherited when I joined COPE to fit around the base of this platform. It provides all the necessary inputs to the new optical processor and interprets and distributes the outputs. The optical processor is a massively parallel neural network. I developed the technology for this device at the Institute for Research on Artificial Life; however, the device I have built here has approximately a billion times as many artificial neurons as that early processor. IRAL had neither the foresight nor the budget to attempt what I have accomplished here. I surveyed all the major computer centers throughout the world before choosing COPE as having a sufficiently powerful mainframe computer along with the required mindset and funding to make this historic leap in machine/hominid evolution.”
He paused and looked admiringly at the enormous machine before his select group to give them a chance to do the same. Jenner’s eyes roamed over the main display console as the tour guide’s fingers caressed the small nameplate on the chassis attached to the console, which read in raised letters “MATTHEW I. PLANCK II.”
Just then, a tall, slender, strawberry blond woman entered and sat at a terminal across the room. Dr. Planck turned toward her. “Dr. Alvarez, would you please come here?” The woman arose and walked toward the group. “I’d like you all to meet Dr. Alvarez from IRAL, my most trusted consultant. She has helped develop some of the most innovative concepts used here in my lab. Dr. Alvarez, maybe you could explain what you are working on at this moment.”
“Yes, of course, Dr. Planck. I’d be most happy to. One of the greatest challenges we face is to interface with a particular neuron in an accurate and timely way. Much of the computer’s attention is focused on this seemingly straightforward task. The problem is that it is really quite a horrendous job from both the bookkeeping and the I/O—input/output—points of view. It thus takes a lot of computer power and slows down the other computer functions. To streamline this interface, we are developing an ASNI, an application-specific neural interface. The problem is that this device must be very flexible, and thus very complex, to have the benefit we envision. First of all, it must contain both electronic and optical circuits. Next, it must be real-time programmable by the mainframe computer as the neural network reconfigures itself continuously to meet its evolving missions. And last, but certainly not least, it must be extremely tiny and inexpensive since we will need possibly a billion of them, depending on just how many functions we can stuff into the little critters. So, you see, the emphasis is really on—”
“Yes. Thank you very much, Dr. Alvarez,” Dr. Planck said as he shifted nervously from one foot to the other. “That was most illuminating, but you have probably already exceeded the ability of our lay audience to cope—that is, to understand—such detailed concepts.”
Dr. Planck lured their attention back to the console whose nameplate he continued to fondle while Dr. Alvarez returned to her work. “When I joined COPE, the computer needed two very important ingredients before it could claim its pivotal role in history. The first was far more parallel processing power, the solution to which you see before you today. The second was more subtle but probably even more important in the long view of computer history. That is, a whole new approach to software design and development using artificial life concepts like the ones I developed while at IRAL. That part of the equation is in process as we speak, but I won’t be ready to present it until it is somewhat closer to operational. What I can say about it is that it is resident in a partitioned domain of the main digital computer.”
Jenner inspected the hardware racks and wondered about that part of the computer, speculating just how partitioned it really was. She guessed that it was quite isolated and that she’d have no way of hacking her way into it.
“I have prepared a short video that explains the computer’s operation in a way that lay people such as yourselves can begin to grasp. After you view it, I will take you up on the OPL, the Optical Processing Level, so you may experience today how machines will think in the future … everywhere else.” With that he nodded, and the wall behind the group came to life with a trained voice accompanying exquisite graphics.
“The COPE computer under development by the internationally preeminent, award-winning authority, Dr. Matthew I. Planck, is actually two computers in one. The main computer is a traditional digital serial processor that manages all the inputs and outputs for the whole system. The second computer is a very non-traditional hybrid analog/digital parallel processor normally referred to as an artificial neural network, or ANN, since it attempts to reproduce the operation of the neural network of the human brain. The ANN, however, is designed to go well beyond human performance.
“The main computer controls a pair of deep UV lasers. Those two beams interact by means of the optics on the structure looming above you to form a hologram. This hologram is the main computer’s way of presenting all of the system inputs, translated into an optical format, to the input of the ANN. This ANN input is the most remarkable computer element ever devised. The intensity pattern of the hologram contains all the information about whatever problem or set of problems the machine is trying to solve. Suppose the computer is tasked to predict a set of trends of all the political candidates at all levels who use the promise of Government funding from a myriad of sources to appeal to their electorates. And suppose this must incorporate our most advanced VERM—that’s voter empathetic response model. And suppose there are a dozen similar tasks plus all the day-to-day operational issues inherent in managing an organization as complex as COPE with its 127,000 employees, each with their own hundred-variable motivation predictor and over ten thousand media fulfillment and optimization interfaces on top of that. All of this information is formatted appropriately and presented to the ANN as an instantaneous holographic pattern.”
Jenner watched the evolving pitch before her, wondering if one of the many operational problems might be to determine the optimal scenario for negating some unlucky person who had managed to get on the wrong list at COPE. The computer technology being presented to her, however, even in this watered-down format, was far too exciting for her to be sidetracked by a civil-rights consideration.