
  6.5 ROVER DRIVING

The rover drivers plan rover motion using a variety of local coordinate systems. They can instruct the rover to use varying amounts of onboard intelligence to complete a drive. From least to most autonomous, the driving modes are blind driving, visual odometry (“visodom”), and autonomous navigation (“autonav”). Another mode, “guarded motion,” is a hybrid of visodom and autonav. Rover autonomy involves a trade-off: the more computing power the rover must devote to driving safely, the slower it moves. To maximize distance, a drive may therefore include segments of blind driving, then visodom, then autonav until a time limit is reached, as in the sketch below.
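To make the trade-off concrete, here is a minimal sketch (in Python, with invented names, and round-number speeds taken from the figures quoted later in this section) of how a long drive might be composed from progressively more autonomous segments. It illustrates the planning logic only; it is not rover flight software.

```python
# Hypothetical sketch of composing a long drive from progressively more
# autonomous segments. Speeds are the rough figures quoted in this chapter:
# blind ~100 m/hr, visodom ~50 m/hr, autonav ~30 m/hr.

from dataclasses import dataclass

@dataclass
class DriveSegment:
    mode: str          # "blind", "visodom", or "autonav"
    distance_m: float  # planned distance for this segment
    speed_m_per_hr: float

plan = [
    DriveSegment("blind", 50, 100),    # well-imaged terrain near the rover
    DriveSegment("visodom", 30, 50),   # slippery stretch: check progress
    DriveSegment("autonav", 40, 30),   # beyond the terrain mesh
]

def executable_distance(plan, time_limit_hr):
    """Return how far the plan gets before the drive-time limit expires."""
    elapsed, traveled = 0.0, 0.0
    for seg in plan:
        seg_time = seg.distance_m / seg.speed_m_per_hr
        if elapsed + seg_time > time_limit_hr:
            # Partial segment: drive until the clock runs out.
            traveled += (time_limit_hr - elapsed) * seg.speed_m_per_hr
            return traveled
        elapsed += seg_time
        traveled += seg.distance_m
    return traveled

print(f"{executable_distance(plan, 2.0):.0f} m in a 2-hour drive window")  # 107 m
```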

  6.5.1 Coordinate systems

Placing the rover’s scientific observations in geographic context is crucial to interpreting them. The rover has inertial measurement units to dead-reckon its position and orientation. Ideally, all rover measurements would be tied precisely to a latitude/longitude/elevation spatial frame, but this can’t happen automatically because the rover’s instantaneous knowledge of its own location is imprecise.

The quality of the rover’s position information degrades with time, for two reasons. First, the wheels slip, so the distance the rover actually travels is never quite the same as the distance commanded. If the wheels on one side slip more than those on the other, slip produces unexpected rotation as well as translation error. Second, the bumping and jostling of the rover as it travels over rough terrain accelerates the inertial measurement units in ways that can be incorrectly interpreted as distance traveled.

To help manage the uncertainty in rover position and to compartmentalize the errors, the mission keeps track of several different spatial reference frames.6 The two most commonly used are the rover frame and the site frame. The rover frame is fixed relative to the rover, with its origin at a spot on the ground between the middle wheels (assuming the rover is perfectly level); +X is forward, +Y is to the right, and +Z is down. A site frame has its origin at a fixed point on the surface of Mars, with +X pointing north, +Y pointing east, and +Z pointing downward, perpendicular to the map. The rover performs operations like camera pointing, arm activities, and drives relative to the site frame. Over time, error accumulates in the rover’s reckoning of its motion relative to the site origin, so the team periodically declares a new site origin and increments the site number. By keeping careful track of where measurements were made in the rover frame, and by precisely determining the geographic location of each site frame, the team can precisely geolocate science measurements.
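The bookkeeping between frames amounts to a rigid-body transform. Below is a minimal sketch assuming a level rover, so that only yaw (heading) matters; the real pipeline also folds in pitch, roll, and full calibration, and the function names here are hypothetical.

```python
# A minimal sketch of mapping a measurement from the rover frame into the
# current site frame, using the axis conventions described above
# (+X forward/north, +Y right/east, +Z down). Yaw-only for simplicity.

import numpy as np

def rover_to_site(point_rover, rover_position_site, yaw_deg):
    """point_rover: (x, y, z) in the rover frame, meters.
    rover_position_site: rover origin expressed in the site frame, meters.
    yaw_deg: rover heading, degrees clockwise from north (site +X).
    """
    yaw = np.radians(yaw_deg)
    # Rotation about +Z (down); positive yaw turns +X (north) toward +Y (east).
    R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                  [np.sin(yaw),  np.cos(yaw), 0.0],
                  [0.0,          0.0,         1.0]])
    return R @ np.asarray(point_rover) + np.asarray(rover_position_site)

# A target 2 m ahead of the rover, with the rover 10 m north and 5 m east of
# the site origin, heading due east (yaw = 90 degrees):
print(rover_to_site((2.0, 0.0, 0.0), (10.0, 5.0, 0.0), 90.0))
# -> approximately [10., 7., 0.]: the target is 2 m east of the rover.
```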

When the mission declares a new site origin, the spatial position is determined by comparing Navcam photos to orbital image data, but it’s harder to precisely identify the rover’s orientation in space. Curiosity’s inertial measurement units provide continuously up-to-date pitch (front-to-back tilt) and roll (side-to-side tilt) information, but the rover’s knowledge of its yaw (compass orientation) degrades over time. Curiosity periodically updates its yaw knowledge by shooting a mid- to late-afternoon photo of the Sun with the right Navcam. Even with pixel bleeding, the rover can identify the location of the Sun precisely enough to determine its yaw relative to the local coordinate system (Figure 6.6).

  6 The various reference frames are described in detail in Alexander and Deen (2015).


Figure 6.6. A typical right Navcam image of the Sun, taken to support a new site frame declared after a drive on sol 324. The horizontal line is pixel bleeding caused by overexposure. Image NRB_426264304EDR_F0060864SAPP07612M. NASA/JPL-Caltech.

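The arithmetic behind the Sun sighting described above is simple once the Sun has been found in the image: the difference between the Sun’s predicted azimuth (from the time and the landing site’s coordinates) and its observed azimuth in the rover’s drifted frame is the accumulated yaw error. A hedged sketch, with hypothetical names and the ephemeris step omitted:

```python
# Sketch of the yaw update from a Sun sighting. The predicted solar azimuth
# would come from an ephemeris calculation (e.g. via SPICE) given the time
# and site coordinates; here we just apply the correction.

def corrected_yaw(current_yaw_deg, observed_sun_az_deg, predicted_sun_az_deg):
    # Wrap the difference into [-180, 180) so the correction takes the
    # short way around the compass.
    error = (predicted_sun_az_deg - observed_sun_az_deg + 180.0) % 360.0 - 180.0
    return (current_yaw_deg + error) % 360.0

# If the rover believes it is heading 271.0 deg but sees the Sun 1.5 deg away
# from where the ephemeris predicts, its heading estimate becomes 272.5 deg.
print(corrected_yaw(271.0, observed_sun_az_deg=248.5, predicted_sun_az_deg=250.0))
```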

  6.5.2 Driving modes

  6.5.2.1 Blind driving

In a blind drive, the rover doesn’t employ any onboard intelligence to look at the landscape during the drive. Instead, the rover planners examine a 3D model of the landscape, or “terrain mesh,” calculated from Navcam and Hazcam images, and command the rover to roll its wheels a certain distance, turn through a specific number of degrees, and so on. The lengths of blind drives are limited to the distance that the rover can see well enough with the Navcams to develop a terrain mesh, usually no more than 50 meters; they can be longer if the terrain slopes upward and is benign. If the terrain is slippery (as it may be if it’s sandy or sloping), blind driving can be inaccurate. Blind driving is the fastest mode, achieving speeds of roughly 100 meters per hour.


When executing a blind drive, the rover doesn’t perform any checks to make sure it is on course. It does always perform checks to make sure that the mobility system is operating within safety limits, and will stop the drive short if (for example) there is too much tilt or too much resistance to the motion of a wheel. The rover planners may set these limits differently for each drive: a drive over smooth terrain should produce little rover tilt, so they’ll set the tilt limit lower than they would for a drive over rockier terrain.
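As an illustration of how such per-drive limits might work (the values and telemetry field names are invented, not MSL flight parameters):

```python
# Toy illustration of per-drive mobility safety limits: the drive halts early
# if tilt or wheel-drive resistance exceeds limits the planners set for that
# day's terrain. Telemetry names and thresholds are invented.

def check_mobility_limits(telemetry, tilt_limit_deg, current_limit_amps):
    """Return None if safe, or a fault string that would stop the drive."""
    if telemetry["tilt_deg"] > tilt_limit_deg:
        return f"tilt {telemetry['tilt_deg']:.1f} deg exceeds {tilt_limit_deg} deg"
    if max(telemetry["wheel_currents_amps"]) > current_limit_amps:
        return "wheel drive current over limit (possible obstruction)"
    return None

# Smooth terrain: planners set a tight tilt limit, so 7.2 deg stops the drive.
sample = {"tilt_deg": 7.2, "wheel_currents_amps": [1.1, 1.0, 1.3, 1.2, 1.0, 1.1]}
print(check_mobility_limits(sample, tilt_limit_deg=6.0, current_limit_amps=5.0))
```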

  6.5.2.2 Visual odometry

Visual odometry, or “visodom,” helps the rover maintain the course that the rover drivers set. During a drive, the rover looks to the side with its Navcams, taking stereo images at specified intervals (ranging from 50 to 150 centimeters). The rover computer matches features between successive image pairs to determine how far the rover actually moved. The rover can then re-plan its path based on how far it judges it has actually traveled, or stop if it is not making sufficient progress because of wheel slippage. Visual odometry slows the rover to roughly 50 meters per hour.
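The progress check reduces to comparing commanded wheel odometry with the visually measured displacement. A minimal sketch, with an illustrative slip threshold rather than the mission’s actual value:

```python
# Sketch of the visodom progress check: compare the visually measured
# displacement between image stations with the commanded odometry, and stop
# if too much of the commanded motion was lost to wheel slip.

def slip_fraction(commanded_m, visually_measured_m):
    """Fraction of the commanded motion lost to wheel slip."""
    return 1.0 - visually_measured_m / commanded_m

def should_stop(commanded_m, visually_measured_m, max_slip=0.6):
    return slip_fraction(commanded_m, visually_measured_m) > max_slip

# Commanded 1.0 m between Navcam stations, but feature tracking shows the
# rover advanced only 0.3 m: 70% slip, so the drive stops.
print(should_stop(1.0, 0.3))  # True
```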

  6.5.2.3 Autonomous navigation and guarded motion

Autonomous navigation, or “autonav,” is an even more sophisticated autonomous driving capability that allows the rover to drive beyond its terrain mesh. The rover drivers identify a goal, specified as a position in the local site frame coordinate system. The rover moves a short distance of 50 to 150 centimeters, snaps Hazcam images, and processes them into 3D information to update the terrain mesh. It identifies obstacles exceeding 50 centimeters in height and slopes steeper than 20°. The rover charts the “traversability” of a square of nearby terrain extending 5 meters around the rover, divided into a 20-centimeter grid. Each grid cell is assigned a “goodness” and “certainty” estimate that rolls together the rover’s determination of the safety of that patch of terrain. The rover fits models of itself into this map to find the safest path. It rolls forward by another increment of 50 to 150 centimeters, depending on how safe it perceives the terrain to be, then repeats the Hazcam imaging and evaluation process. Because of all the calculation, autonav is slow: a top speed of about 50 centimeters per minute, or about 30 meters per hour.
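A highly simplified sketch of the goodness-map idea follows, using the grid dimensions and hazard limits quoted above. The stereo processing and rover-model fitting are far beyond a few lines, so the per-cell terrain estimates are simply taken as inputs.

```python
# Much-simplified sketch of an autonav-style "goodness map": a 20 cm grid
# covering 5 m around the rover, where each cell combines hazard checks and
# a certainty estimate into a score used to pick the safest path.

import numpy as np

CELL_M = 0.20
HALF_WIDTH_M = 5.0
N = int(2 * HALF_WIDTH_M / CELL_M)  # 50 x 50 cells

def goodness_map(obstacle_height_m, slope_deg, certainty):
    """Each argument is an (N, N) array of per-cell terrain estimates."""
    # Hazard limits quoted in the text: 50 cm obstacles, 20 degree slopes.
    safe = (obstacle_height_m < 0.50) & (slope_deg < 20.0)
    # Hazardous cells score zero; safe cells score by how certain we are.
    return np.where(safe, certainty, 0.0)

# Flat, well-observed terrain with one 60-cm rock two cells ahead:
heights = np.zeros((N, N)); heights[N // 2 + 2, N // 2] = 0.60
slopes = np.full((N, N), 3.0)
certainty = np.full((N, N), 0.9)
g = goodness_map(heights, slopes, certainty)
print(g[N // 2 + 2, N // 2], g[N // 2, N // 2])  # 0.0 (hazard) vs 0.9 (safe)
```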

A related form of driving is “guarded motion,” in which the rover planners give the rover a specific path to follow using visual odometry, but instruct the rover to use autonav to verify that the path is indeed safe as it moves forward.

The use of autonav ended following discovery of the wheel degradation problem (see section 4.6.4); mitigating wheel damage required rover planners to avoid hazardous terrain on a scale finer than the 20-centimeter grid used by autonav. Autonav was re-enabled as of sol 1780, and planners now have discretion to decide whether the local terrain is benign enough to use it.


  6.5.2.4 Multi-sol driving

When Curiosity landed, it could not save the terrain maps generated on one sol and use them on the next. As part of a set of improvements included in flight software version R.11, implemented on sol 484, engineers added the ability to save onboard terrain maps during sleep, enabling the rover to use the same map to continue a drive the next day and increasing the drive distances achieved during traverse periods.
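In outline, the capability is just a checkpoint and restore of the onboard map between sols. A toy sketch (the file format and field names are invented):

```python
# Illustrative sketch of the multi-sol improvement: persist the terrain map
# at the end of one sol's drive and reload it the next sol instead of
# rebuilding it from scratch with fresh imaging.

import numpy as np

def save_terrain_map(path, goodness, site, drive):
    np.savez(path, goodness=goodness, site=site, drive=drive)

def load_terrain_map(path):
    data = np.load(path)
    return data["goodness"], int(data["site"]), int(data["drive"])

# End of sol N: checkpoint the map built during the drive.
save_terrain_map("/tmp/terrain_sol_n.npz", np.ones((50, 50)), site=60, drive=864)
# Start of sol N+1: resume the drive with the saved map.
goodness, site, drive = load_terrain_map("/tmp/terrain_sol_n.npz")
```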

  6.6 REFERENCES

Alexander D and Deen R (2015) Mars Science Laboratory Project Software Interface Specification: Camera & LIBS Experiment Data Record (EDR) and Reduced Data Record (RDR) Data Products, version 3.5

Kloos J L et al (2016) The first Martian year of cloud activity from Mars Science Laboratory (sol 0–800). Adv Space Res 57:1223–1240, DOI: 10.1016/j.asr.2015.12.040

Lemmon M T et al (2017) Dust devil activity at the Curiosity Mars rover field site. Paper presented at the 48th Lunar and Planetary Science Conference, The Woodlands, Texas, 20–24 Mar 2017

Maki J et al (2012) The Mars Science Laboratory engineering cameras. Space Sci Rev 170:77–93, DOI: 10.1007/s11214-012-9882-4

Moores J E et al (2014) Update on MSL atmospheric monitoring movies sol 100–360. Paper presented at the 45th Lunar and Planetary Science Conference, The Woodlands, Texas, 17–21 Mar 2014

7 Curiosity’s Science Cameras

  7.1 INTRODUCTION

Curiosity has five science cameras. The color Mastcams view the rover’s world at two different resolutions. The Mars Hand Lens Imager (MAHLI, pronounced “Molly”), on the turret at the end of the arm, is a wide-angle color camera that can be held close to a target or used for more distant imaging. The Mars Descent Imager (MARDI) is fixed to the rover body, pointing down, with a view of the surface as it passes under the rover. Together, these three instruments are often referred to as the “MMM” cameras; they share a common detector, electronics, and software design, and differ only in their optics. Finally, there is the laser-equipped ChemCam, which measures elemental compositions of nearby rocks and also carries the camera with the highest angular resolution on the rover, the Remote Micro-Imager (RMI). ChemCam is described in Chapter 9 with the other composition analysis instruments.

Figure 7.1 shows the locations of camera instruments and related hardware on the rover. The engineering cameras (Navcams and Hazcams, section 6.3) serve science functions as well: they provide context for science observations and perform remote sensing science observations, particularly atmospheric science. Table 7.1 compares all of Curiosity’s imaging capabilities.

  7.2 MASTCAM

The Mastcam instrument consists of two camera heads located on the mast, an electronics assembly located in the belly of the rover, and a calibration target on the rover deck. With the Mastcams, the science team investigates the geomorphology, stratigraphy, and texture of the landscape, rocks, and sediments around the rover. They also monitor atmospheric and even astronomical phenomena. They support the rover’s engineering activities and provide context images for data from other science instruments. The Mastcams were built by Malin Space Science Systems, San Diego, California. The principal investigator for the Mastcam experiment is Michael Malin of Malin Space Science Systems.


Figure 7.1. Locations of camera instrument components on the rover, as well as some devices often imaged with Mastcams. The Mastcam, Navcam, and ChemCam covers in the top image were used only during cruise and landing. Top image cropped from the Gobabeb MAHLI self-portrait mosaic, sol 1228. Bottom image taken at JPL during assembly. NASA/JPL-Caltech/MSSS/Emily Lakdawalla.

Table 7.1. Comparison of the capabilities of Curiosity’s cameras.

| | FHaz | RHaz | Navcam | ML (Mastcam-34) | MR (Mastcam-100) | RMI | MAHLI | MARDI |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CCD detector (pixels) | 1024 × 1024 | 1024 × 1024 | 1024 × 1024 | 1600 × 1200 | 1600 × 1200 | 1024 × 1024 | 1600 × 1200 | 1600 × 1200 |
| FOV (°) | 124 × 124 | 124 × 124 | 45 × 45 | 20 × 15 | 6.8 × 5.1 | 1.3 | 34.0–38.5 | 70 × 52 |
| IFOV at center (mrad/pixel) | 2.1 | 2.1 | 0.82 | 0.22 | 0.074 | 0.022 | 0.402–0.346 | 0.76 |
| Stereo? | yes | yes | yes | yes, but with different resolution/FOV in each eye | yes, but with different resolution/FOV in each eye | no | yes, with arm movement | yes, with rover movement |
| Stereo separation (cm) | 16.7 | 10 | 42.4 | 24.5 | 24.5 | – | arbitrary | arbitrary |
| Depth information from focal depth? | no | no | no | yes | yes | yes | yes | no |
| Height above surface (m) | 0.68 | 0.78 | 1.9 | 1.9 | 1.9 | 2.1 | arbitrary | 0.66 |
| Spectral bandpass (nm) | 600–800 | 600–800 | 600–800 | 395–1100 | 395–1100 | 450–950 | 420–690 | 420–690 |
| Filters | monochrome | monochrome | monochrome | 8 plus Bayer | 8 plus Bayer | monochrome | Bayer color | Bayer color |


  Figure 7.2. Parts of the Mastcam instrument. Photos of the Mastcam-100 camera head and digital electronics assembly were taken at Malin Space Science Systems before their delivery to JPL for assembly. Bottom self-portrait taken at the John Klein drill site on sol 177 by MAHLI. Inset self-portrait showing the back of the camera heads and their wire harnesses taken at Okoruso drill site, sol 1338. NASA/JPL-Caltech/MSSS/Emily Lakdawalla.


The Mastcams differ from previous lander cameras in two significant ways. First, nearly all Mastcam views are in full, human-vision-like color. Second, the two camera “eyes” have different focal lengths, which makes stereo imaging more complex than for previous missions. (Read section 1.5.8 for the history of the development of Mastcam that led to the flight of a pair of Mastcams with different focal lengths.)


Table 7.2. Mastcam facts.

| | Mastcam-34 (Mastcam-L) | Mastcam-100 (Mastcam-R) |
| --- | --- | --- |
| Boresight height above bottom of wheels | 1.97 m | 1.97 m |
| Elevation actuator axis height above bottom of wheels | 1.91 m | 1.91 m |
| Stereo separation | 24.64 cm | 24.64 cm |
| FOV (horizontal, 1600 pixels) | 20.6° | 6.8° |
| FOV (vertical, 1200 pixels) | 15° | 5.1° |
| Instantaneous field of view (IFOV) | 218 μrad | 74 μrad |
| Pixel scale at a distance of 2 meters | 450 μm | 150 μm |
| Pixel scale at a distance of 1 kilometer | 22 cm | 7.4 cm |
| Focal ratio | f/8 | f/10 |
| Effective focal length | 34 mm | 100 mm |
| In-focus range | 0.4 m to infinity | 1.6 m to infinity |
| Exposure range | 0 to 838.8 s in 0.1 ms increments | 0 to 838.8 s in 0.1 ms increments |
| Video frame rate | 5.9 to 7.7 fps at 720p (1280 × 720); 3.9 to 4.7 fps at full frame | 5.9 to 7.7 fps at 720p (1280 × 720); 3.9 to 4.7 fps at full frame |
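The two pixel-scale rows in Table 7.2 follow directly from the IFOV: pixel scale is simply the IFOV (in radians) multiplied by the range to the target. A quick check in Python:

```python
# Verify the pixel-scale rows of Table 7.2 from the IFOV values.

def pixel_scale_m(ifov_urad, range_m):
    """Linear size of one pixel's footprint at the given range, in meters."""
    return ifov_urad * 1e-6 * range_m

for name, ifov in [("Mastcam-34", 218), ("Mastcam-100", 74)]:
    print(name,
          f"{pixel_scale_m(ifov, 2.0) * 1e6:.0f} um at 2 m,",
          f"{pixel_scale_m(ifov, 1000.0) * 100:.1f} cm at 1 km")
# Mastcam-34: 436 um at 2 m and 21.8 cm at 1 km (the table quotes the
# rounded values 450 um and 22 cm); Mastcam-100: 148 um and 7.4 cm.
```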

The left Mastcam, or Mastcam-34, has the shorter focal length, lower angular resolution, and wider field of view. The right Mastcam, or Mastcam-100, has the longer focal length, higher angular resolution, and narrower field of view.

  7.2.1 How Mastcam works

  7.2.1.1 Camera heads

The Mastcams are 2-megapixel color cameras with focusable lenses and filter wheels.1 The heads contain electronics, a detector, a filter wheel assembly, a focus mechanism, and a sunshade/baffle that also serves as a mount (Figure 7.2). Each head contains two stepper motors, one to drive the filter wheel and one to drive the focus mechanism. The two Mastcams have boresights separated by 24.64 centimeters, and they are angled inward by 2.5° (1.25° each) in order to ensure that the smaller field of view of the Mastcam-100 is entirely contained within the wider field of view of the Mastcam-34 for any target located farther than 1.4 meters from the rover. The boresights cross at a distance of 2.8 meters

1 Prior to landing, there was no peer-reviewed paper describing Mastcam or MARDI. Mastcam was described in two Lunar and Planetary Science Conference abstracts: Malin et al. (2010) and Bell

 
