Autonomy in Planetary Rovers

In this update, I focus on autonomous aspects of planetary rovers, specifically Martian rovers, since no functioning rovers currently operate elsewhere in the solar system. Note, though, that Martian rovers face environmental conditions and communication demands different from those rovers elsewhere would face. A lunar rover, for example, would need far less autonomy: being much closer to Earth, the moon allows faster communication between rover and operators, and the Earth–moon distance varies much less than the Earth–Mars distance, since the two planets follow different orbits around the sun. A lunar rover would also not need to autonomously detect dust devils, as the moon has no atmosphere. At the opposite extreme, a rover on Titan, a moon of Saturn, would need far more autonomy: it would not only suffer longer delays in communication with Earth, it would also operate in a far less well-known environment.

As the examples above show, the autonomous systems of future rovers on other worlds will not simply mirror those of Martian rovers. While core algorithms for navigation, science execution, and object recognition could be similar, the purposes of those algorithms and the scheduling systems that execute them would likely differ. With these caveats against overgeneralization in place, I now review the state and future of autonomous systems on Mars.

To discuss autonomy on Martian rovers, we must first understand their schedules. Human operators schedule activities for the Mars Science Laboratory (MSL, a.k.a. Curiosity) at three levels: strategic, supratactical, and tactical. Strategic planning arranges longer-term goals with less detail; supratactical planning bridges the gap between strategic and tactical planning; and tactical planning is the daily scheduling of activities, per Martian sol (a Martian day). Humans on Earth do much of this, but humans never have the full picture: orbital imagery cannot reveal all the conditions MSL faces, and even if it could, Martian conditions change rapidly. This necessitates a baseline degree of autonomy on MSL. Even so, MSL faces large productivity challenges. To understand why, consider a Martian sol. At its end, MSL transmits data to Earth, where operators are going to bed. The next Earth morning, operators use the data to tactically plan MSL's next operations, and at the end of that Earth day, they transmit commands to MSL. But because a Martian sol is about 40 minutes longer than an Earth day, Mars local time steadily drifts relative to Earth time. When the two fall sufficiently out of phase, the rover's downlink arrives too late for operators to plan a response within the same Earth workday, and MSL must wait through a sol with no new commands. Such scenarios are called "restricted sols." Regrettably, they occur often – 41% of Martian sols are restricted [1].
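The clock drift behind restricted sols can be made concrete with a little arithmetic. A Martian sol is about 24 h 39.6 m, roughly 40 minutes longer than an Earth day; the sketch below (with that sol length as its only input) shows how quickly Mars local time laps an Earth-based operations schedule:

```python
# A rough sketch of why restricted sols recur: a Martian sol is about
# 39.6 minutes longer than an Earth day, so the rover's local time drifts
# relative to an Earth-based operations shift.

SOL_HOURS = 24.6597      # length of a Martian sol in Earth hours
DAY_HOURS = 24.0         # length of an Earth day

def mars_drift_hours(earth_days: int) -> float:
    """Cumulative drift of Mars local time vs. the Earth clock."""
    return earth_days * (SOL_HOURS - DAY_HOURS)

# After ~36 Earth days, the drift wraps a full Earth day, so the phase
# between Mars time and Earth work hours cycles continuously.
days_to_wrap = DAY_HOURS / (SOL_HOURS - DAY_HOURS)
print(f"drift per day: {SOL_HOURS - DAY_HOURS:.3f} h")
print(f"days for Mars time to lap the Earth clock: {days_to_wrap:.1f}")
```

During part of each ~36-day cycle, the phase offset makes same-day tactical planning impossible, which is consistent with a large fraction of sols being restricted.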

To address these productivity challenges, Gaines et al. [2] devise "campaign intent" – a way of expressing goal relationships to rovers. By relating activities, the rover can deduce their value within a scientific campaign. A rover with campaign-intent knowledge could then choose which goals to execute when its schedule is oversubscribed, and could decide on and achieve goals in areas unseen by human operators. Campaign intent is formalized for the rover's ASPEN planning system as a "plan campaign," which comes in three major types: goal-set campaigns, temporal campaigns, and state-based campaigns. Each campaign type is best satisfied by a different arrangement of activities, so the autonomously generated plan specializes to the human-specified plan campaign [2].
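To make the campaign types less abstract, here is a hypothetical sketch of how two of them might be represented and scored. The class names and scoring rules are my own illustration, not the actual ASPEN formalization in [2]:

```python
# Toy representations of two plan-campaign types. A goal-set campaign is
# satisfied in proportion to how many of its goals appear in the plan; a
# temporal campaign prefers a goal to recur with a desired periodicity.
from dataclasses import dataclass

@dataclass
class GoalSetCampaign:
    goals: set
    def score(self, plan: list) -> float:
        # Fraction of the campaign's goals that the plan achieves
        return len(self.goals & set(plan)) / len(self.goals)

@dataclass
class TemporalCampaign:
    goal: str
    period: int  # desired spacing between occurrences, in plan steps
    def score(self, plan: list) -> float:
        times = [i for i, g in enumerate(plan) if g == self.goal]
        if len(times) < 2:
            return 0.0
        gaps = [b - a for a, b in zip(times, times[1:])]
        # Score falls off as gaps deviate from the desired period
        return sum(1.0 / (1 + abs(g - self.period)) for g in gaps) / len(gaps)

plan = ["drill", "image", "drill", "image", "drill"]
print(GoalSetCampaign({"drill", "image"}).score(plan))   # 1.0
print(TemporalCampaign("image", period=2).score(plan))   # 1.0
```

A planner armed with scores like these can trade goals off when oversubscribed, which is the core idea behind campaign intent.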

With this framework in place, plan creation is a matter of generating multiple opportunities and then scheduling them. Gaines et al. [2] choose a branch-and-bound search: partial plans are generated, their extensions toward a fuller plan evaluated, and all provably suboptimal extensions pruned; this repeats until a full plan is created. Such an algorithm explores the full solution space, but there are other optimization options. Genetic algorithms might yield similar solutions with differing performance, and indeed a hybrid of genetic and branch-and-bound algorithms has achieved faster runtimes than branch-and-bound alone [3].
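The branch-and-bound idea can be sketched as a toy scheduling problem: choose which activities fit a sol's time budget so total science value is maximized. The activities and values below are made up, and this is a generic branch-and-bound, not the actual ASPEN search from [2]:

```python
# Minimal branch-and-bound: each node either includes or skips an activity;
# a branch is pruned when an optimistic bound says it cannot beat the best
# complete plan found so far.

def branch_and_bound(activities, budget):
    """activities: list of (name, duration, value). Returns (value, names)."""
    # Sort by value density so the fractional bound below is optimistic.
    acts = sorted(activities, key=lambda a: a[2] / a[1], reverse=True)
    best = [0.0, []]

    def bound(i, time_left):
        # Optimistic estimate: fill remaining time fractionally by density.
        total = 0.0
        for _, dur, val in acts[i:]:
            take = min(dur, time_left)
            total += val * take / dur
            time_left -= take
            if time_left <= 0:
                break
        return total

    def search(i, time_left, value, chosen):
        if value > best[0]:
            best[0], best[1] = value, chosen[:]
        if i == len(acts) or value + bound(i, time_left) <= best[0]:
            return  # prune: this branch cannot beat the incumbent
        name, dur, val = acts[i]
        if dur <= time_left:                 # branch 1: include activity i
            chosen.append(name)
            search(i + 1, time_left - dur, value + val, chosen)
            chosen.pop()
        search(i + 1, time_left, value, chosen)  # branch 2: skip it

    search(0, budget, 0.0, [])
    return best[0], best[1]

acts = [("drill", 3, 10), ("image", 1, 4), ("spectra", 2, 7), ("drive", 2, 5)]
print(branch_and_bound(acts, 5))   # (17.0, ['spectra', 'drill'])
```

The pruning step is what separates this from brute force: whole subtrees of the search are discarded without being enumerated, while optimality is preserved.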

Plan creation via campaign intent, however, is just one aspect of what Gaines et al. later describe as Self-Reliant Rover Design. The other three components are slip-aware navigation, model-based health assessment, and global localization [4]. Slip-aware navigation helps the rover assess the likelihood of slippage and adjust planned paths accordingly. It builds on the Mars Exploration Rover's GESTALT algorithm, which classifies obstructions via geometric grids and visual odometry. In addition, the SPOC algorithm classifies terrain as soil, sand, or flagstone – fascinatingly, by running deep convolutional neural networks on rover imagery [5]. I find it promising that even a rover's limited hardware can perform such classification in tests. After the results of terrain classification and visual odometry are combined, an RRT# planner arranges paths around obstructions.
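To show how these pieces might fit together, here is an illustrative sketch of fusing a SPOC-style terrain label with a slope estimate into a single cost map that a path planner such as RRT# could consume. The slip values, slope scaling, and thresholds are assumptions for illustration, not the flight parameters:

```python
# Fuse terrain class (from a classifier) and slope (from geometry) into a
# per-cell traversal cost. High predicted slip inflates cost; extreme cells
# are marked impassable so the planner routes around them.
import math

# Hypothetical slip risk per terrain class on flat ground
BASE_SLIP = {"flagstone": 0.05, "soil": 0.15, "sand": 0.40}

def cell_cost(terrain: str, slope_deg: float) -> float:
    """Traversal cost that grows with predicted slip; inf = obstruction."""
    slip = BASE_SLIP[terrain] * (1 + slope_deg / 10.0)  # toy slope scaling
    if slip >= 1.0 or slope_deg > 25:
        return math.inf
    return 1.0 / (1.0 - slip)  # higher slip -> longer expected traverse

grid = [[("flagstone", 3), ("sand", 12)],
        [("soil", 5), ("sand", 30)]]
cost_map = [[cell_cost(t, s) for t, s in row] for row in grid]
print(cost_map)
```

A planner minimizing path cost over such a map naturally prefers flagstone on gentle slopes and avoids steep sand, which is the qualitative behavior slip-aware navigation aims for.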

I would like to comment on two aspects of slip-aware navigation worth further research. First, Rothrock et al. [5] use SPOC to classify terrain in 5501 slip occurrences, then predict slippage for each terrain type using regression models of slippage as a function of slope. These regression models, however, do not accurately predict slippage on sand. It might be worth researching end-to-end prediction – that is, predicting slippage without terrain classification as an intermediate step. The probability distribution returned by the CNN in [5] may itself help predict slippage, but that information is lost if we collapse it to one of three classifications before predicting.
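A small numeric example makes the information-loss point concrete. The per-class slip-vs-slope coefficients and the example probabilities below are made up; the comparison is between committing to the argmax class (the pipeline approach) and keeping the whole class distribution:

```python
# Hypothetical per-class slip models: slip = a + b * slope
SLIP_MODEL = {
    "soil":      (0.05, 0.010),
    "sand":      (0.10, 0.030),
    "flagstone": (0.02, 0.005),
}

def slip(terrain: str, slope: float) -> float:
    a, b = SLIP_MODEL[terrain]
    return a + b * slope

def slip_argmax(probs: dict, slope: float) -> float:
    """Pipeline approach: commit to the most likely class, then predict."""
    best = max(probs, key=probs.get)
    return slip(best, slope)

def slip_expected(probs: dict, slope: float) -> float:
    """Keep the whole distribution: expected slip over all classes."""
    return sum(p * slip(c, slope) for c, p in probs.items())

# An ambiguous cell: nearly a coin flip between soil and sand.
probs = {"soil": 0.48, "sand": 0.47, "flagstone": 0.05}
print(slip_argmax(probs, slope=15))    # uses the soil model only
print(slip_expected(probs, slope=15))  # blends in the riskier sand model
```

When the classifier is nearly undecided between a safe and a risky class, the argmax prediction ignores the risky possibility entirely, while the distribution-aware estimate reflects it – the kind of signal an end-to-end predictor could exploit.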

The second area of future navigation research is bypassing the multi-step pipeline of slip-aware navigation in favor of end-to-end deep reinforcement learning. If JPL could vary its Mars Yard to present a test rover with a large number of combinations of terrain type and obstruction, the rover could train itself via reinforcement learning to navigate as well as, or better than, a slip-aware-navigated rover. Given MSL's ability to run SPOC predictions in tests, a neural network for deciding next actions may be feasible. The difference, however, is that an action-deciding network would likely need to run far more frequently than SPOC does, which might prove too computationally expensive for current rovers.

The third aspect of Self-Reliant Rover Design is model-based health assessment, a component based more on predefined models than on learning. It uses MONSID, a system that compares sensor data against a predefined model of normal system operation and detects a fault when the two disagree. MONSID then identifies where the fault occurred via a search algorithm (constraint suspension) that iterates through model components. While the rover constantly runs fault detection, it logically runs fault identification only once a fault is detected [6].
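A toy sketch can illustrate the detect-then-identify structure. The component models, tolerances, and the simplified suspension logic below are my own invention in the spirit of MONSID [6], not its actual implementation:

```python
# Each component is modeled as a constraint relating its input to its
# expected output. Detection flags any violated constraint; identification
# ("constraint suspension", here radically simplified) implicates the
# component whose constraints account for all the violations.

def motor_model(cmd_current):   # expected torque from commanded current
    return 2.0 * cmd_current

def wheel_model(torque):        # expected wheel speed from applied torque
    return 0.5 * torque

TOL = 0.05

def check(obs):
    """Fault detection: return the list of violated constraints."""
    bad = []
    if abs(motor_model(obs["cmd_current"]) - obs["torque"]) > TOL:
        bad.append("motor")
    if abs(wheel_model(obs["torque"]) - obs["wheel_speed"]) > TOL:
        bad.append("wheel")
    return bad

def diagnose(obs):
    """Fault identification, run only after detection fires."""
    violated = check(obs)
    if not violated:
        return None  # nominal: no identification needed
    for suspect in ("motor", "wheel"):
        if all(c == suspect for c in violated):
            return suspect
    return "multiple"

# Wheel jammed: torque matches the command, but wheel speed does not.
obs = {"cmd_current": 1.0, "torque": 2.0, "wheel_speed": 0.1}
print(diagnose(obs))   # "wheel"
```

Real constraint suspension removes a component from the constraint network and re-propagates values to test consistency; this sketch keeps only the core idea that identification searches over components one at a time.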

Lastly, the fourth component of Self-Reliant Rover Design is global localization, which corrects discrepancies between the position the rover believes it occupies and its actual position as seen by the HiRISE camera on the Mars Reconnaissance Orbiter. In global localization, the rover and HiRISE each image the rover's environment, and the rover compares the images' intensities and elevation maps using mutual information. The rover then aligns its map accordingly, completing global localization [4].
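The mutual-information matching step can be illustrated in one dimension: slide the rover's local intensity profile along the orbital one and keep the offset that maximizes mutual information. The grids are tiny made-up examples, and real alignment is 2-D and uses elevation as well [4]:

```python
# Mutual-information map alignment, 1-D toy version.
import math
from collections import Counter

def mutual_info(xs, ys):
    """Mutual information (nats) between two equal-length discrete sequences."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def best_offset(orbital, local, max_shift):
    """Try each candidate offset and keep the one with maximal MI."""
    scores = {d: mutual_info(orbital[d:d + len(local)], local)
              for d in range(max_shift + 1)}
    return max(scores, key=scores.get)

orbital = [0, 0, 1, 2, 2, 1, 0, 0]   # intensity profile seen from orbit
local   = [1, 2, 2, 1]               # rover's local view, offset by 2
print(best_offset(orbital, local, max_shift=4))   # -> 2
```

Mutual information is a natural choice here because it rewards any consistent statistical relationship between the two images, not just identical pixel values, which matters when rover and orbital cameras differ in lighting and resolution.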

Above I described the state of the art in autonomous rover design. It is not yet implemented on any rover, but primitive versions of its components have been or are in use. For example, the Mars Exploration Rover (MER) and MSL have used the AEGIS system to identify and sequence observations of science targets [7]. Note, though, that scientists cannot give AEGIS guidance as broad as campaign intent allows (e.g., observation periodicity in time or by class) [4]. In addition, MSL has driven autonomously using Autonav, a technology seemingly similar to the slip-aware navigation described by Gaines et al. [4], but one that does not account for terrain type [4], [8].

As autonomous exploration continues on Mars, the technologies of Self-Reliant Rover Design will improve productivity. As I noted with slip-aware navigation, however, more advanced learning algorithms might be worth researching. Unfortunately, limited computing power inhibits the use of newer "deep" algorithms. Indeed, NASA anticipates that by 2020 it will only have radiation-hardened computers with processing power comparable to a Raspberry Pi 3 [9]. This may prove too little for constant prediction with deep models on Martian rovers. If so, there is still hope: Hewlett Packard Enterprise is testing one of the first supercomputers in space on the ISS. Instead of being radiation-hardened, the computer throttles down its speed when facing a radiation hazard [9]. If this test succeeds, such technology might be applied to rovers, allowing increased autonomy on Mars and nearly fully autonomous exploration of more distant targets.



A test of self-reliant rover technologies in the JPL Mars Yard. The operators specified the orange cone as a target and provided campaign guidance for areas B, C, and D. This led the rover to autonomously plan the blue route and select the cyan targets near each area. The rover adjusted course along the green route to avoid obstructions or poor terrain conditions [4].


Reference List

[1]       D. Gaines, G. Doran, H. Justice, G. Rabideau, S. Schaffer, V. Verma, K. Wagstaff, V. Vasavada, W. Huffman, R. Anderson, R. Mackey, and T. Estlin, “Productivity challenges for Mars rover operations: A case study of Mars Science Laboratory operations,” Jet Propulsion Laboratory, California Institute of Technology, Technical Report D-97908, Jan. 2016.

[2]       D. Gaines, G. Rabideau, G. Doran, S. Schaffer, V. Wong, A. Vasavada, and R. Anderson, “Expressing Campaign Intent to Increase Productivity of Planetary Exploration Rovers,” in International Workshop on Planning and Scheduling for Space, IWPSS 2017, Pittsburgh, PA, June 15-16, 2017.

[3]       C. Pessan, J.L. Bouquard, and E. Néron, “Genetic Branch-and-Bound or Exact Genetic Algorithm?” in International Conference on Artificial Evolution, EA 2007, October 29-31, 2007.

[4]       D. Gaines, J. Russino, G. Doran, R. Mackey, M. Paton, B. Rothrock, S. Schaffer, A. Agha-mohammadi, C. Joswig, H. Justice, K. Kolcio, J. Sawoniewicz, V. Wong, K. Yu, G. Rabideau, R. Anderson, and A. Vasavada, “Self-Reliant Rover Design for Increasing Mission Productivity,” in 2018 ICAPS Workshop on Planning and Robotics, PlanRob 2018, Delft, The Netherlands, June 26, 2018. Also appears in International Symposium on Artificial Intelligence, Robotics, and Automation in Space, i-SAIRAS 2018, June 4-6, 2018.

[5]       B. Rothrock, J. Papon, R. Kennedy, M. Ono, and M. Heverly, “SPOC: Deep Learning-based Terrain Classification for Mars Rover Missions,” in AIAA SPACE Forum, AIAA SPACE 2016, Long Beach, California, USA, September 13-16, 2016.

[6]       A. Nikora, P. Srivastava, L. Fesq, S. Chung, and K. Kolcio, “Assurance of model-based fault diagnosis,” in 2018 IEEE Aerospace Conference, pp. 1-14, 2018.

[7]       T. Estlin, B. Bornstein, D. Gaines, D. Thompson, R. Castano, R.C. Anderson, C. de Granville, M. Burl, M. Judd, and S. Chien, “AEGIS Automated Targeting for the MER Opportunity Rover,” in International Symposium on Artificial Intelligence Robotics and Automation in Space, i-SAIRAS 2010, Sapporo, Japan, August 29-September 1 2010.

[8]       NASA, “NASA’s Mars Curiosity Debuts Autonomous Navigation,” Aug. 27, 2013. [Online]. Available: [Accessed: Aug. 17, 2018].

[9]       S. Vaughan-Nichols, “The Space Station’s New Supercomputer,” Aug. 14, 2017. [Online]. Available: [Accessed: Aug. 17, 2018].


  1. This was an awesome read! It never occurred to me that a lunar rover would need less autonomy because it’s so much closer and since the moon has no atmosphere, and thus no atmospheric events. I wonder how much scientists would need to know to send a rover somewhere – how exact do calculations have to be for distance and atmospheric conditions? It’s exciting to think about the possibilities of putting rovers on distant planets, like Titan, as you mentioned.

  2. wwcranford says:

    Fascinating stuff, Adam. The use of deep neural networks to identify terrain is especially interesting; it seems like neural networks and machine learning algorithms are everywhere. What is radiation hardening, and how would slowing down computing speed affect it? Could future rovers be more computationally powerful, or is there a limit?

  3. Adam Abate says:

    Hi there, thanks so much for your comment! Radiation hardening is a set of methods for protecting electronics used in environments with lots of charged particles around — space, nuclear reactors, etc. The methods could consist of putting chips on more “durable” materials, shielding the entire electronic system (though you can imagine that surrounding a spacecraft with lead might be a bit impractical), adding redundant electronic components, and more. Of course, this all gets very expensive.

    A less-expensive alternative is to use little or no shielding and instead throttle down processor speed when a radiation event happens. The intuition here deals with electromagnetism: a charged particle moving at high speeds (radiation) produces a strong magnetic field. Computers fundamentally consist of electrons moving around in circuits. On a very general level, the faster the computer operates, the faster these electrons move around (think of how processor speed is measured in GHz). But the faster the electrons move, the more they are acted on by the magnetic field produced by the radiation! So an intuitive way of addressing radiation is to make processing power inversely proportional to the amount of radiation — that is, in a general sense, what HP did with its supercomputer on the ISS. It’s not perfect, but I believe that supercomputer is still running its tests successfully a year later.

    So I suppose a rover’s computers could have more power if they used this throttling technique more than radiation hardening. But I’ve also read that Mars is exposed to more radiation than the ISS (Mars has no magnetic field; the ISS benefits a little bit from Earth’s). It would also be rather unfortunate if a multi-day radiation shower disabled the supercomputing rover for those days. But that’s not to say there’s no hope! Even radiation-hardened hardware is getting better (think Raspberry Pi level), and combining those advances with the HP throttling technique might yield more capable rovers that can use state-of-the-art machine learning.

  4. Adam Abate says:

    Thank you so much! What you bring up is part of the trick with deep space exploration! Space agencies have very low margins of error. I suppose part of the reason Huygens had no movement capabilities is because no one knew enough about Titan’s surface to justify putting wheels on the probe (though extra weight and expense were probably also major factors). And it was a good decision, too — Huygens sank a few centimeters when it landed! Turns out the surface where it was had a snow-like composition (hard on the surface, but something you sink into after enough pressure). But this is possibly less of an issue if your rover doesn’t rove on the ground. In Titan’s case, that could mean using a flying drone. The Johns Hopkins Applied Physics Lab is doing work to this end. Certainly the composition of Titan’s atmosphere varies less (at a constant altitude) than the composition of its surface. And since Titan’s atmosphere is so thick, flying a drone there would be easier than on Earth. Of course, that drone would move much greater distances than Mars rovers. Combine that with a much larger communication delay with the Earth, the need to remain stable in wind storms (I’m not sure if Titan has strong winds), and other factors, and you have much greater autonomy requirements! Luckily, radiation is of less concern on Titan than on Mars because Titan has a thicker atmosphere. So running such computationally expensive autonomy might be feasible!