Three Scientific Revolutions: How They Transformed Our Conceptions of Reality

by Richard H. Schlagel


  Strings can interact by splitting and rejoining, thus creating the interactions we see among electrons and protons in atoms. In this way, through string theory, we can reproduce all the laws of atomic and nuclear physics. The “melodies” that can be written on strings correspond to the laws of chemistry. The universe can now be viewed as a vast symphony of strings. (p. 197)

  He goes on to show how string field theory can explain Einstein’s special and general theories of relativity and even provide a possible explanation of the “riddle of dark matter” and “black holes.” But while he asserts that string theory can “effortlessly explain” the creation and interaction of the basic physical particles and many other puzzles confronting physics, one can understand why none of it has ever been confirmed. To me it reads like scripture, in which declarations are presented with a kind of doctrinal authority, but based on mathematics rather than revelation.

  Not only does it seem unlikely that minuscule vibrating strings can be the source of the universe; the theory also requires a multi-dimensional hyperspace for their existence.

  Only in ten- or eleven-dimensional hyperspace do we have “enough room” to unify all the forces of nature in a single elegant theory. Such a fabulous theory would be able to answer the eternal questions: What happened before the beginning? Can time be reversed? Can dimensional gateways take us across the universe? (Although its critics correctly point out that testing this theory is beyond our present experimental ability, there are a number of experiments currently being planned that may change this situation. . . . (p. 185; italics added)

  He goes on to discuss refinements in the theory, such as “supersymmetry,” “M-theory,” “heterotic string theory,” and the “Brane World,” along with the possible experiments being planned to confirm it, though no supporting empirical evidence that I know of has been announced since 2005 when Kaku’s book was published.

  Yet despite his somewhat optimistic assessment of the theory, he seems to have the same reservations about it that I have expressed, as the following quotation indicates. Recalling Pauli’s version of the unified field theory, developed with Werner Heisenberg and described by Niels Bohr as “crazy” but not “crazy enough,” Kaku states:

  One theory that clearly is “crazy enough” to be the unified field theory is string theory, or M-theory. String theory has perhaps the most bizarre history in the annals of physics. It was discovered quite by accident, applied to the wrong problem, relegated to obscurity, and suddenly resurrected as a theory of everything. And in the final analysis, because it is impossible to make small adjustments without destroying the theory, it will either be a “theory of everything” or a “theory of nothing.” (pp. 187–88)

  Given how much is still unknown or conjectured about the universe, how likely is it that we are close to a “final theory of everything” that would resemble string theory, or that such a theory is even attainable? I have recently read Marcelo Gleiser’s work titled The Island of Knowledge, published in 2014 after I had written my book, so I did not have the benefit of reading his exceedingly informed and, in my opinion, correct interpretation of the current controversy in physics as to whether quantum mechanics represents a realistic and final account of physical reality. His conclusion is that “Unless you are intellectually numb, you can’t escape the awe-inspiring feeling that the essence of reality is unknowable” (p. 193), although there is no sounder method of inquiry now than science. While one can concede that quantum mechanics is in a sense correct, in that it largely agrees with the current experimental evidence, this does not mean that it is true and thus a final theory of reality. I strongly recommend reading Gleiser’s book to anyone interested in the prospects of quantum mechanics.

  In addition to the question of whether it is presumptuous or realistic to suppose that finite creatures living in this infinitesimal speck and moment of the universe will ever arrive at a final theory, there is the additional problem of whether we can afford the tremendous costs of further research. The discovery of the Higgs or Higgs-like boson cost 10 billion dollars, involved 6,000 researchers, and required a 17-mile circular tunnel, lined with thousands of magnets, under the border of France and Switzerland. The international fusion mega-project now under construction in southern France is estimated to cost 23 billion dollars, and its completion is projected to take a decade. Even continuing research on whether WIMPs exist will ultimately depend upon the costs, as well as on experimental and theoretical ingenuity.

  As examples of how difficult it has become to finance such projects: in 1993 the US Congress discontinued financing of the Superconducting Super Collider, on the grounds that the cost of completing the project was too great, after 2 billion dollars had already been spent digging a 15-mile tunnel in Texas to house it; and, given the current economy, President Obama has requested a 16 percent budget cut for our fusion research, to 248 million dollars, a foreboding sign of the future.

  I am not suggesting that funding scientific research has not paid off; one need only consider all the technological, economic, social, medical, and intellectual benefits derived from scientific inquiry to see the opposite. Everything we know about the universe and human existence, and all the economic, educational, social, and medical improvements and advances in our standard of living, we owe entirely to the genius and dedication of scientists. But one can’t help wondering whether the cost of delving further into the universe will at some point outstrip our financial assets and/or capacities. Thus, though I admire most of what he says in his very stimulating book, I question Alex Rosenberg’s confident assertion that “Physics is causally closed and causally complete. The only causes in the universe are physical. . . . In fact, we can go further and confidently assert that the physical facts fix all the facts.”128 If true, this would confirm Einstein’s worldview. I wish I could be so confident.

  Having expressed my reservations about whether attaining a final theory of the universe is within reach or even possible, I will conclude this study of the major transitions in our conceptions of reality and way of life by citing the amazing scientific and technological advances that are predicted to take place by the end of this century or the next, based on the knowledge already attained or anticipated. This, fortunately, has also been comprehensively described by Michio Kaku in his prophetic book previously cited, Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100. As described on the back cover of the book:

  Renowned theoretical physicist Michio Kaku details the developments in computer technology, artificial intelligence, medicine, and space travel that are poised to happen over the next hundred years . . . interview[ing] three hundred of the world’s top scientists—already working in their labs on astonishing prototypes. He also takes into account the rigorous scientific principles that regulate how quickly, how safely, and how far technologies can advance . . . forecast[ing] a century of earthshaking advances in [science and] technology that could make even the last centuries’ leaps and bounds seem insignificant.129 (brackets added)

  An unexpected and exceptional added attraction of the book is his occasional indication of how the extraordinary modern scientific and technological achievements have often replicated the divine exploits attributed to the gods in ancient mythologies and current religions, such as effecting miraculous cures and performing marvelous feats like conferring on humans supernatural powers and eternal life.

  The challenge is to present as briefly, clearly, and objectively as possible the range of these incredible developments, greater than those of the Industrial Revolution, that are predicted to radically change the conditions and nature of human existence in this century or the next, and to try to discriminate between the fanciful and the possible, along with the beneficial and harmful outcomes. According to Kaku, one of the basic factors driving this process is the rapidity of the development of computers and how this has altered our lives, owing to what is called Moore’s law.

  The driving source behind . . . [these] prophetic dreams is something called Moore’s law, a rule of thumb that has driven the computer industry for fifty or more years, setting the pace for modern civilization like clockwork. Moore’s law simply says that computer power doubles about every eighteen months. First stated in 1965 by Gordon Moore . . . this simple law has helped to revolutionize the world economy, generated fabulous new wealth, and irreversibly altered our way of life. (p. 22; brackets added)
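  To give a rough sense of what that doubling rule implies, here is a minimal back-of-the-envelope sketch in Python. It is my own illustration, not anything from Kaku’s text; the 1965 baseline of 1.0 is an arbitrary unit chosen purely for comparison.

```python
# Back-of-the-envelope sketch of Moore's law as quoted above:
# computing power doubling roughly every eighteen months.
# The baseline of 1.0 is an arbitrary unit chosen for illustration.

DOUBLING_PERIOD_YEARS = 1.5  # eighteen months

def relative_power(years_elapsed, baseline=1.0):
    """Computing power relative to the baseline after a given number of years."""
    return baseline * 2 ** (years_elapsed / DOUBLING_PERIOD_YEARS)

for years in (15, 30, 45):
    print(f"After {years} years: about {relative_power(years):,.0f}x the starting power")
```

  Even under this crude assumption, forty-five years of doubling yields roughly a billionfold increase, which is the kind of growth Kaku has in mind when he credits Moore’s law with setting the pace of modern civilization.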

  The technological developments that were most instrumental in creating the computer revolution apparently were the following: (1) by relying on electrical circuits, computers can operate at close to the speed of light, which permits nearly instantaneous transmission and communication with the rest of the world; (2) these electrical connections were further enhanced by the development of miniaturized transistors, or switches; and (3) the creation of the computer chip, a silicon wafer the size of one’s fingernail that can be etched with millions of tiny transistors to form integrated units, made it possible to carry out almost instantaneously enormously intricate calculations that would otherwise have taken years, decades, or even centuries.

  Turning now to Kaku’s account of the various conceptions and predictions of the future developments that will be brought about by the computer revolution, the one I find the most startling and threatening is based on computerized artificial intelligence and the creation of robots that, in the most extreme case, could, it is predicted, replace human beings or convert them into computerized robots, as indicated by the title of the initial section of chapter 2 of his book, “The End of Humanity?” (p. 75).

  As of now the most advanced robot is ASIMO, created by the Japanese, “that can walk, run, climb stairs, dance, and even serve coffee” and “is so lifelike that when it talked, I half expected the robot to take off its helmet and reveal the boy who was cleverly hidden inside” (p. 77). In addition, there “are also robot security guards patrolling buildings at night, robot guides, and robot factory workers. In 2006, it was estimated that there were 950,000 industrial robots and 3,540,000 service robots working in homes and buildings” (pp. 87–88). But while these are remarkable achievements, they are not indications that the robot has attained any control over, or initiates any of, its behavior. Everything ASIMO does has been preprogrammed, so that its actions are entirely beyond its control. It of course has no conscious awareness of its surroundings or any feelings, since every action it performs is computerized. In some cases it is controlled by a person who directs its actions from the images on a computer thousands of miles away, similar to controlling a drone.

  More remarkable was the event in 1997 when “IBM’s Deep Blue accomplished a historic breakthrough by decisively beating world chess champion Garry Kasparov. Deep Blue was an engineering marvel, computing 11 billion operations per second” (p. 80). Nonetheless, Deep Blue cannot take credit for the achievement, which has to be attributed to the intelligence of the gifted programmers who devised all the correct moves to beat Kasparov.

  This fact was not lost on the artificial intelligence (AI) researchers, who then began attempting to “simulate” conscious awareness by installing object recognition, the expression of inner emotional states and feelings through facial expressions, and the initiation of intelligent actions. Thus, instead of the top-down approach of treating robots like digital computers with all the rules of intelligence preprogrammed from the very beginning, they began imitating the brain’s bottom-up approach. They tried to create an artificial neural network with the capacity to learn from experience, which would require conscious awareness of the environment, along with the emotions and affective feelings that are the source of value judgments, such as whether things are beneficial or harmful.
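  To make the contrast concrete, here is a minimal sketch of the bottom-up idea: a single artificial neuron whose connection weights are adjusted from examples rather than preprogrammed as rules. This is my own illustration, not anything from Kaku’s text or the researchers’ actual systems, and the chosen task, learning the logical OR function, is an arbitrary assumption for demonstration.

```python
# Minimal sketch of "bottom-up" learning: a single artificial neuron
# adjusts its weights from examples instead of following preprogrammed rules.
# The task (learning the logical OR function) is an illustrative assumption.

import random

examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)
learning_rate = 0.1

def predict(inputs):
    """Fire (return 1) if the weighted sum of inputs plus bias is positive."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if activation > 0 else 0

# Repeated exposure to examples gradually corrects the weights ("experience").
for _ in range(100):
    for inputs, target in examples:
        error = target - predict(inputs)
        weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
        bias += learning_rate * error

print([predict(inputs) for inputs, _ in examples])  # expected: [0, 1, 1, 1]
```

  Nothing in this toy example approaches conscious awareness, of course; it merely shows the sense in which behavior can emerge from adjustment to experience rather than from rules installed “from the very beginning.”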

  In addition to attempting to replicate the learning process of human beings, they would have had to install such mental capacities as memory, conceptualizing, imagining, speaking, learning languages, and reasoning, all of which exceed merely following electronic rules. Given that the brain is an organ with unique neuronal and synaptic connections, composed of biomolecular components and directed by numerous chemicals that produce a great deal of flexibility, the challenge of trying to duplicate this with just an electrical, digital network proved formidable.

  Unlike a computer program, the brain has evolved into various areas representing evolutionary transitions responsible for lesser or more advanced anatomical structures and functions. These include the reptilian area near the base of the brain, which is the source of basic instincts, automatic bodily processes, and behavioral functions; the limbic system or mid-brain, comprising the amygdala, hippocampus, and hypothalamus, which together are responsible for memory, emotions, and learning, including much of the hormonal activity of more highly socialized mammals and primates; and the newest, most important convoluted gray matter called the cerebral cortex or cerebrum, divided into the frontal, parietal, temporal, and occipital lobes, which produces such human capacities as language acquisition, learning, reasoning, and creativity.

  That Kaku is aware of these differences between computers and human capabilities is indicated in the following statement.

  Given the glaring limitations of computers compared to the human brain, one can appreciate why computers have not been able to accomplish two key tasks that humans perform effortlessly: pattern recognition and common sense. These two problems have defied solution for the past half century. This is the main reason why we do not have robot maids, butlers, and secretaries. (pp. 82–83)

  But, as he adds, programmers have been able to overcome these obstacles to some extent. One robot developed at MIT scored higher on object recognition tests than humans, even performing as well as or better than Kaku himself. Another robot, named STAIR and developed at Stanford University while still relying on the top-down approach, was able to pick out different kinds of fruit, such as an orange, from a mixed assortment, a task that seems simple enough to us yet is very difficult for robots because of its dependence on object recognition. Yet the best result was achieved at New York University, where a robot named LAGR was programmed to follow the human bottom-up approach, enabling it to identify objects in its path and gradually “learn” to avoid them with increased skill (cf. p. 86).

  Furthermore, an MIT robot named KISMET was programmed to respond to people in a lifelike way, with facial expressions that mimicked a variety of emotions (and have since been programmed into dolls), yet “scientists have no illusion that the robot actually feels emotions” (p. 98). While programmers are striving to overcome these differences, they still have a long way to go, as Kaku indicates.

  On one hand, I was impressed by the enthusiasm and energy of these researchers. In their hearts, they believe that they are laying the foundation for artificial intelligence, and that their work will one day impact society in ways we can only begin to understand. But from a distance, I could also appreciate how far they have to go. Even cockroaches can identify objects and learn to go around them. We are still at the stage where Mother Nature’s lowliest creatures can outsmart our most intelligent robots. (p. 87)

  Apparently there are two major approaches to resolving this problem. As indicated previously, Kaku identified two crucial capacities that robots lack and that prevent their simulating human behavior: pattern recognition and common sense, both of which require the conscious awareness that humans possess and that computers and robots entirely lack. One way of solving the problem is to try to endow a computer or robot with consciousness, using a method called “reverse engineering of the human brain.” Instead of attempting to “simulate” the function of the brain with an artificial intelligence, it involves trying to reproduce human intelligence by replicating the neuronal structure of the brain neuron by neuron and then installing the result in a robot.

  This new method, “called optogenetics, combines optics and genetics to unravel specific neural pathways in animals” (p. 101). Determining by optical means the neural pathways in the human brain presumably would enable optogeneticists not only to detect which neural pathways determine specific bodily and mental functions, but also to duplicate them. At Oxford University Gero Miesenböck and his colleagues


  have been able to identify the neural mechanisms of animals in this way. They can study not only the pathways for the escape reflex in fruit flies but also the reflexes involved in smelling odors. They have studied the pathways governing food-seeking in roundworms. They have studied the neurons involved in decision making in mice. They found that while as few as two neurons were involved in triggering behaviors in fruit flies, almost 300 neurons were activated in mice for decision making. (p. 102)

  But the problem is that identifying a neuron’s function is not the same as reproducing it. The intended purpose was to model the entire human brain using two different approaches. The first approach was to “simulate” the vast number of neurons and their interconnections in the brain of a mouse with a supercomputer named Blue Gene, constructed by IBM. Computing “at the blinding speed of 500 trillion operations per second . . . Blue Gene was simulating the thinking process of a mouse brain, which has about 2 million neurons (compared to the 100 billion neurons that we have)” (p. 104). But the question remains whether simulating is equivalent to reproducing.

  This success was rivaled by another group in Livermore, California, which built a more powerful version of Blue Gene called “Dawn.” At first, in “2006 it was able to simulate 40 percent of a mouse’s brain. In 2007, it could simulate 100 percent of a rat’s brain (which contains 55 million neurons, much more than the mouse brain)” (p. 105). Then, progressing very rapidly, in 2009 it “succeeded in simulating 1 percent of the human cerebral cortex . . . containing 1.6 billion neurons with 9 trillion connections” (p. 105).
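  Taking these quoted figures at face value, a quick back-of-the-envelope calculation, my own arithmetic rather than Kaku’s, conveys the scale gap they imply:

```python
# Rough arithmetic from the figures quoted above (an illustrative sketch only).
mouse_neurons = 2_000_000             # "about 2 million neurons" (mouse brain)
rat_neurons = 55_000_000              # "55 million neurons" (rat brain)
human_neurons = 100_000_000_000       # "100 billion neurons that we have"
cortex_slice_neurons = 1_600_000_000  # "1 percent of the human cerebral cortex"

# The simulated 1 percent slice of cortex already dwarfs an entire rat brain...
print(cortex_slice_neurons / rat_neurons)    # about 29 times a rat brain
# ...yet the whole human brain is still vastly larger than that slice.
print(human_neurons / cortex_slice_neurons)  # about 62 times larger
```

  In other words, even the most impressive of these simulations remains orders of magnitude short of the full human brain, quite apart from the conceptual question raised next.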

  Although this convinced optogeneticists that simulating the human brain was not only possible but inevitable, once again the crucial question is whether “simulating” is equivalent to “reconstructing” or “reproducing”; it seems to me that the distinction has been overlooked and the two assumed to be the same. Significantly, in addition to meaning “imitating,” the term “simulate” has the additional adverse connotations of feigning, pretending, and faking.

 
