Sparse Graphical Memory for Robust Planning

Michael Laskin*, Scott Emmons*, Ajay Jain*,
Thanard Kurutach, Pieter Abbeel, Deepak Pathak

* Equal contribution

UC Berkeley, FAIR

[GitHub Code] [Paper]

Overview

Sparse Graphical Memory (SGM) combines deep RL and classical planning to solve long-horizon tasks from images. We introduce a two-way consistency check for merging nodes, yielding sparse graphs over agent memory that significantly improve the robustness of long-horizon planning.
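To make the merging rule concrete, here is a minimal sketch of how a two-way consistency check might gate node aggregation when building the sparse graph. The distance estimator d(s, g) (e.g., derived from a learned goal-conditioned value function), the threshold tau, and the helper names are illustrative assumptions, not the released implementation.

# Hypothetical sketch of a two-way consistency merge rule.
# `d(s, g)` is an assumed learned distance estimate between states;
# `tau` is a merge threshold.

def two_way_consistent(s1, s2, nodes, d, tau):
    # Outgoing consistency: s1 and s2 are interchangeable as start states.
    c_out = max(abs(d(s1, w) - d(s2, w)) for w in nodes)
    # Incoming consistency: s1 and s2 are interchangeable as goals.
    c_in = max(abs(d(w, s1) - d(w, s2)) for w in nodes)
    return c_out <= tau and c_in <= tau

def add_observation(obs, nodes, d, tau):
    # Alias the new observation to an existing node if the two are
    # two-way consistent; otherwise store it as a new node.
    for node in nodes:
        if two_way_consistent(obs, node, nodes, d, tau):
            return node
    nodes.append(obs)
    return obs

Because merging requires consistency in both directions, two states are only collapsed when they are interchangeable both as starting points and as goals, which is what keeps the resulting sparse graph reliable for planning.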

Abstract

To operate effectively in the real world, artificial agents must act from raw sensory input such as images and achieve diverse goals across long time horizons. On the one hand, recent strides in deep reinforcement and imitation learning have demonstrated an impressive ability to learn goal-conditioned policies from high-dimensional image input, though only for short-horizon tasks. On the other hand, classical graphical methods like A* search can solve long-horizon tasks, but assume that the graph structure is abstracted away from raw sensory input and can only be constructed with task-specific priors. We wish to combine the strengths of deep learning and classical planning to solve long-horizon tasks from raw sensory input. To this end, we introduce Sparse Graphical Memory (SGM), a new data structure that stores observations and feasible transitions in a sparse memory. SGM can be combined with goal-conditioned RL or imitative agents to solve long-horizon tasks across a diverse set of domains. We show that SGM significantly outperforms current state-of-the-art methods on long-horizon, sparse-reward visual navigation tasks.
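The planning step can be illustrated with a short sketch: search the sparse memory graph for a sequence of waypoints, then let a short-horizon goal-conditioned policy pursue one waypoint at a time. The function names, the networkx backend, and the commented execution loop are assumptions for illustration only (the sketch uses Dijkstra via nx.shortest_path; A* applies equally).

import networkx as nx

def plan_waypoints(graph, d, start_obs, goal_obs, nodes):
    # Connect the current observation and the goal to their nearest
    # memory nodes under the learned distance d, then run shortest-path
    # search over the sparse graph of feasible transitions.
    start = min(nodes, key=lambda n: d(start_obs, n))
    goal = min(nodes, key=lambda n: d(n, goal_obs))
    return nx.shortest_path(graph, source=start, target=goal, weight="weight")

# Execution loop (hypothetical): the low-level policy only ever
# pursues the next nearby waypoint, so every sub-task stays
# short-horizon even when the overall goal is distant.
# for waypoint in plan_waypoints(graph, d, obs, goal_obs, nodes):
#     obs = rollout_goal_conditioned_policy(policy, obs, waypoint)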

BibTeX

@unpublished{laskin2020sparse,
  title={Sparse Graphical Memory for Robust Planning},
  author={Laskin, Michael and Emmons, Scott and Jain, Ajay and Kurutach, Thanard and Abbeel, Pieter and Pathak, Deepak},
  note={arXiv:2003.06417}
}