Theater of the World

by Thomas Reinertsen Berg


  This mounting pressure caused the Americans to declare that they would launch their satellite on 6 December. Millions watched the events unfolding live on television as the engines started, spewing out flames and billowing smoke, before the rocket rose exactly 1.2 metres into the air and tipped over onto its side, exploding on the launch pad. The tiny satellite lay on the ground beside the wreckage, mournfully sending out its signals.

  The next morning, von Braun–with a touch of Schadenfreude–read newspapers filled with headlines such as ‘Kaputnik’, ‘Flopnik’, ‘Goofnik’ and ‘Oopsnik’. The navy’s failure gave him an opportunity–and on 31 January 1958, von Braun and his team put the first American satellite–Explorer 1–into orbit.

  The navy and APL launched the Transit satellite from Cape Canaveral, Florida, the following year, and the satellite passed over the Atlantic Ocean for twenty-five minutes before it dropped into the sea just off the coast of Ireland. The rocket’s third stage had failed to ignite, but despite this the mood was positive–during its short flight, the satellite had emitted its signals exactly as planned.

  Many test satellites had to be launched before the Transit navigation programme could finally be put into operation. The project’s engineers discovered one thing after another that disturbed both the satellites’ signals and their trajectories, including the Earth’s irregular shape, which results in variations in gravity as the satellites pass above it. The satellites jumped around without the engineers understanding why, until they discovered that the Earth was more uneven than first thought, in addition to being slightly more pointed in shape in the north than in the south. The engineers had to combine technology with geodesy–the study of the Earth’s shape and size–and studied the work of Marie Tharp and Bruce Heezen, which showed how the Earth’s surface is in constant motion due to continental drift, and how this affects gravity, which in turn affects the satellites. All irregularities had to be mapped and programmed.

  Right from the experimental stage the scientists realised that localisation using satellites was more accurate than old-fashioned triangulation–discovering, for example, that ordinary maps put the islands of Hawaii several kilometres away from their true location.

  With the successful launch of Transit 5c1 in June 1964, the system was complete–three satellites orbiting the Earth, sending out signals that were received by the navy’s ships and submarines. Three years later, Transit was made available for both public and commercial use, and Norway was one of the system’s early adopters. During the first year in which the country had access to the system, Transit was used to determine positions on the mainland; in 1971 the Norwegian Polar Institute was able to determine positions on Svalbard, and measurements taken of Jan Mayen in 1979 showed that the island was situated 350 metres closer to Norway than previously believed. The Continental Shelf Office used the system to precisely calculate the dividing line between Norway and Scotland out in the North Sea, and to determine the exact positions of oil platforms. Even leisure boats installed the Transit system when receivers became smaller and more affordable. The system had a margin of error of just twenty-five metres.

  SPY MAPS | In the shadow of the space race, the American military developed spy satellites. Although they had equipped their U-2 aircraft with cameras in 1954–cameras that could photograph objects measuring just a metre across from altitudes of over 21,000 metres–they knew it was only a matter of time before the Soviets would manage to shoot the aircraft down. The Soviets detected and shot down a U-2 in 1960, but that same summer the Americans were also able to recover a roll of film containing images taken by a satellite.

  The images taken by digital cameras at the time were nowhere near detailed enough for reconnaissance purposes, and so the spy satellites had to use rolls of film. At 9,600 metres in length these were somewhat longer than usual, and dropped from the satellite in what was known as a ‘film bucket’–a capsule that plummeted 140 kilometres before the heat shield released at 18 kilometres above ground level, triggering a parachute. A plane was then sent out to catch the falling bucket mid-flight–if the plane was unsuccessful, searches were initiated on the ground or in the sea.

  The satellites travelled at a speed equivalent to eight kilometres per second at ground level. Each image covered an area of 16 × 190 kilometres, and so the satellites took one photograph every other second–enabling the entire enemy territory to be photographed in just a day or two. The first roll of film contained images of sixty-four Soviet airports and twenty-six launch sites for anti-aircraft missiles. The Soviets launched their first spy satellite, Zenit, in 1962.

  The Cold War made its mark on the period’s maps. A Soviet map of the British city of Chatham, for example, clearly showed the dockyard where the Royal Navy was building submarines, while on the British map the same location was simply a blank space. The Soviet map also featured information about the size and load capacity of the bridges in the dockyard’s immediate vicinity.

  The two superpowers had different mapping strategies, which reflected the differences in their militaries. The Americans’ command of the air meant that maps of areas of strategic interest showing more detail than that provided at a scale of 1:250,000 were rarely necessary. The Soviet Union, on the other hand, was at the forefront of tank warfare and possessed the world’s largest army, so required detailed maps that provided information such as road widths, the load capacity of bridges, river depths and forest topography. Many of them also featured meteorological information. The Soviets therefore mapped some parts of the world right down to building level, and Soviet military maps of western Europe and American terrain were often more detailed than those possessed by the countries themselves. They were also fiercely guarded–personnel were required to sign out any maps needed for exercises, and if a map was destroyed, the pieces had to be returned. At the same time, the Soviet Union’s civil maps were almost useless–not only did they lack detail, but they were also deliberately distorted using a special projection process that resulted in random variations. Famous landmarks such as rivers and cities were included, but the specified coordinates, directions and distances were completely wrong. The point was to prevent western spies from being able to get hold of an accurate map of Soviet territory from any local newsagents or kiosk. The cartographer who developed this system received an award from Stalin for his efforts.

  A map of Montreal and the surrounding areas from 1972. The map provides an overview of the various soil types, where orange and yellow denote areas suitable for agriculture, green and white indicate less suitable areas and red areas are unsuitable. The turquoise areas are unidentified.

  After Stalin’s death in 1953 the Soviet military had global ambitions–Khrushchev saw fertile ground for communism in a world where former European colonies were becoming independent states. The Soviet military therefore dispatched cartographers to survey and map a broad range of developing countries, and here too they were extremely thorough in their activities–so thorough, in fact, that previously classified Soviet maps of these areas were purchased and used by telecommunications companies in setting up mobile networks. This work requires a topographical overview so that cell towers can be erected in appropriate locations, and the Soviet maps provided the best available overview of hilly regions in these parts of the world.

  The Soviet Union used maps to systematise their knowledge of the globe. Their maps were like an analogue database–much like those of the Middle Ages–and provided more than geographical information alone. The Soviets presented extensive and varied information through the creation of a visual hierarchy, in which the most important aspects were emphasised and the less important remained in the background. Their maps prefigured the digital method, used today, of organising geographical information in several layers.

  GIS | In the early 1960s, the Canadian authorities wanted to map over two and a half million square kilometres of land using similar techniques. Their aim was to create a map that provided an overview of agricultural areas, forests, areas rich in wildlife–and locations that could be promoted as tourist destinations and for other uses. A rough estimate indicated that the project would require 536 geographers, who would need to create 3,000 maps over a period of three years. The problem, however, was that Canada had a total of only sixty geographers. But in 1962, British geographer Roger Tomlinson set out a plan for how he thought the project could be completed nonetheless.

  The Kenyan authorities had previously asked Tomlinson to find an area suitable for planting trees for a new paper factory. The plantation should preferably be located on a gradual incline in an area with an appropriate climate, which could be easily accessed by the plantation’s workers. The area would also need to be free from monkeys, since they would eat the saplings, and be a safe distance from the routes taken by elephants. In order to identify such an area, Tomlinson would have to create several maps–meteorological, zoological, geological–and layer them on top of one another. This would be far too expensive, however, and so the project was dropped.

  Later on, Tomlinson had the idea of using a computer to process the information–so that he could enter the elephants’ routes on a map that showed both soil and weather conditions, for example. ‘Computers,’ said Tomlinson, ‘could become information storage devices as well as calculating machines. The technical challenge was to put maps into these computers, to convert shape and images into numbers.’ American cartographer Waldo R. Tobler had taken up this challenge three years earlier, when he programmed a computer to draw an outline of the United States in fifteen minutes using 343 punched cards–the first map ever to be drawn using a computer. ‘Automation, it would seem, is here to stay,’ wrote Tobler. ‘It seems that some basic tasks, common to all cartography, may in the future be largely automated, and that the volume of maps produced in a given time will be increased while the cost is reduced.’

  Tomlinson first tried to generate interest in digital maps among computer companies on his own initiative–without success. But he then met members of the team who were working on the Canadian mapping project. Tomlinson convinced them that the solution was to digitise the information, and together with computing company IBM, they developed a system through which the maps were converted into numbers and connected to information about conditions such as the surface area of fields, settlements, forestry matters and animal migration routes. Tomlinson called this a geographic information system–GIS–through which maps could provide a complete overview of the natural resources within a region or country.
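
  The underlying idea, converting map shapes into numbers and linking them to tables of attributes that can be overlaid and queried, can be illustrated with a short modern sketch. The example below uses invented data and plain Python; it is meant only to convey the principle, not to reconstruct Tomlinson's or IBM's actual system.

```python
# Illustrative sketch only: a toy "layered map" in the spirit of an early GIS.
# The grid, themes and values are invented for the example.

# Each thematic layer maps a grid cell (row, column) to a value for one theme.
soil = {
    (0, 0): "fertile", (0, 1): "fertile",
    (1, 0): "rocky",   (1, 1): "fertile",
}
land_use = {
    (0, 0): "forest",  (0, 1): "farmland",
    (1, 0): "forest",  (1, 1): "migration route",
}

def overlay(*layers):
    """Combine several thematic layers into one table, cell by cell."""
    cells = set().union(*(layer.keys() for layer in layers))
    return {cell: tuple(layer.get(cell) for layer in layers) for cell in cells}

# Query across layers: fertile cells that are not on an animal migration route.
combined = overlay(soil, land_use)
suitable = sorted(cell for cell, (s, use) in combined.items()
                  if s == "fertile" and use != "migration route")
print(suitable)  # [(0, 0), (0, 1)]
```

  The point of the layered structure is that new questions (which areas are both fertile and outside a migration route, for instance) can be answered by combining existing layers rather than by drawing a new map.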

  Progress was slow–in 1970 there were still only forty people in the world using the methods–but weighty institutions, such as the American National Aeronautics and Space Administration (NASA), soon became adopters. In 1972, NASA sent up Landsat 1, the first satellite specially designed to monitor the Earth’s surface. The satellite was equipped with cameras that had a poor resolution compared to those used by the military–56 × 69 metres–but they were at least completely digital.

  Norway was an active participant in the Landsat programme, and data from the satellite was used to study the ice around Svalbard, observe snow volumes in areas with hydroelectric power stations, perform geological surveys and monitor the environment. One day in 1973, Landsat 1 photographed the Finnmarksvidda plateau in northern Norway, showing areas of birch forest and moorland covered with heathers and reindeer moss. An image of the same area taken six years later showed that the areas of bare rock and destroyed vegetation had grown in size due to increasing pollution from the nearby Russian nickel works. Using the satellite images, it was possible to monitor the destruction of the plateau's natural environment year by year.

  ‘Continuous satellite coverage may also be used in the monitoring and mapping of Norwegian areas. […] The technological aids used in mapping and surveying activities are currently undergoing significant developments, characterised by new measuring instruments, the automation of cartographic processes and the extensive use of electronic data processing techniques,’ stated an official report in 1975. The study emphasised that for Norway, with its responsibility for large areas in remote regions such as the Arctic and Antarctic, and recently increased activities at sea due to North Sea oil production, satellite images would be especially useful in monitoring oil spills from ships and leaks from drilling platforms and pipelines. Images showing the temperatures in coastal areas and out at sea would also be of interest for the fishing industry.

  The geographic surveying of Norway was gradually digitised, and in 1981 computing company Norsk Data supplied four computer systems to the county mapping offices in Møre og Romsdal, Hedmark, Telemark and Rogaland. The other county offices were linked to the systems via the Datex public data network operated by Televerket, the state-owned telecommunications company, and were able to share just 543 megabytes of information–a tiny fraction of the capacity of today’s mobile phones. The aim was to build up a geographic information system that could be combined with the authorities’ data-based registers. By connecting houses on the map to the Building and Housing Register and National Registry, for example, the computers would be able to find out who lived at a certain location and their respective ages, and automatically create a map of households with children due to start at school in the autumn term.
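
  What such a register linkage might look like can be sketched as follows. The addresses, birth years and the assumption that children start school the year they turn six are invented for illustration; the sketch does not reflect the actual structure of the Building and Housing Register or the National Registry.

```python
# Hypothetical sketch of linking a digital map to a population register.
# All names, addresses and the school-starting rule are invented for illustration.

houses = {                       # digital map: address -> coordinate (east, north)
    "Storgata 1":   (1200, 450),
    "Fjellveien 8": (1315, 620),
}

registry = [                     # population register: resident, address, birth year
    {"name": "Anne", "address": "Storgata 1",   "born": 1950},
    {"name": "Per",  "address": "Storgata 1",   "born": 1975},
    {"name": "Kari", "address": "Fjellveien 8", "born": 1973},
]

def school_starters(year, starting_age=6):
    """List (name, address, coordinate) for children starting school in `year`."""
    return [(p["name"], p["address"], houses[p["address"]])
            for p in registry
            if year - p["born"] == starting_age and p["address"] in houses]

print(school_starters(1981))  # [('Per', 'Storgata 1', (1200, 450))]
```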

  WWW | In the 1400s, the art of printing enabled the Europeans to produce far greater numbers of books than previously; in the same way, digitisation meant that many more maps could be produced. The conversion of maps into computer code made it easy to connect them to other information and create thematic maps, statistical maps, topographical maps, vegetation maps or other types of maps–and to update them with no more than a few keystrokes. It also became easy to share maps between networked computers.

  During the work with the Canadian maps, Tomlinson allowed his thoughts to wander–wouldn’t it be something if there was a GIS database that everyone could connect to? One that covered the entire world down to the smallest detail? And he wasn’t the only person to be thinking along these lines–with the benefit of the increased computing power that accompanied transistors and microchips, several data engineers were working to develop computer programs that could share information with each other using an electronic network of users across the world–an Internet.

  The Internet of today is the result of work that was started by the United States Department of Defense in the late 1960s. The aim was to develop a communications network that would withstand a Soviet nuclear attack. Such a network couldn't be based on a single master station, but had to be able to function even if one of its parts was destroyed, and was therefore constructed using a flat structure in which everyone could send information to each other. Four computers in the states of California and Utah comprised the world's first computer network when they were connected on 1 September 1969; two years later, the first email was sent using the @ symbol. In 1978, the invention of the modem meant that private individuals could send information to one another without having to go via the military network, and in 1994 the foundations for the World Wide Web, www, were laid when http (hypertext transfer protocol) made it possible to organise content into websites, and url (uniform resource locator) standardised web addresses, so that any computer anywhere in the world would reach the same location when typing in an address such as http://www.verdensteater.net. The world's first online map service, mapquest.com, was launched in 1996, with Streetmap, Mappy, Multimap and Hot Maps following shortly after.

  In 1998, Vice President of the United States Al Gore took Tomlinson's dream one step further. He started a speech by highlighting how a ‘new wave of technological innovation is allowing us to capture, store, process and display an unprecedented amount of information about our planet and a wide variety of environmental and cultural phenomena. Much of this information will be “georeferenced”–that is, it will refer to some specific place on the Earth’s surface.’ Gore imagined collating all this information using a single computer program that he called Digital Earth–‘a three-dimensional representation of the planet, into which we can embed vast quantities of geo-referenced data.’ He asked his audience to imagine a child using the program: ‘After donning a head-mounted display, she sees Earth as it appears from space. Using a data glove, she zooms in, using higher and higher levels of resolution, to see continents, then regions, countries, cities, and finally individual houses, trees, and other natural and man-made objects.’ Gore admitted that this scenario might sound somewhat far-fetched–but that, if it was possible, he imagined such a program being able to help with diplomacy, the reduction of crime, the preservation of natural diversity, climate change predictions and increased global food production. Gore believed that enough pieces of the puzzle were already in place to start planning such a project–‘we should endeavour to develop a digital map of the world at one meter resolution.’

  One existing piece of the ‘Digital Earth’ puzzle was a method of locating any place in the world digitally. The Transit system had been a success, but so had 621B, Secor and Timation–positioning systems developed by other branches of the United States military independently of the navy’s programme. In 1973 it was agreed that the best aspects of all the systems would be combined to create a new one–the Global Positioning System (GPS)–for which the first satellite was launched in 1978. The Norwegian Mapping and Cadastre Authority tested the new system in 1986, and concluded that the results provided by GPS exceeded all expectations. ‘It is not a question of if satellite positioning will become a part of land surveyors’ everyday activities, but when,’ they wrote enthusiastically; and just a short time later, in 1991, the positioning equipment that had been used more or less since the time of Johan Jacob Rick and Ditlev Wibe was put away for good.

 
