Learning to plan with uncertain topological maps

Edward Beeching¹, Jilles Dibangoye¹, Olivier Simonin¹, Christian Wolf²
¹ CITI, INRIA CHROMA, INSA Lyon
² LIRIS, CNRS, INSA Lyon

Published at ECCV, 2020

[Paper]
[Code]
[Talk]
[Slides]



Abstract

We train an agent to navigate in 3D environments using a hierarchical strategy that combines a high-level graph-based planner with a local policy. Our main contribution is a data-driven, learning-based approach for planning under uncertainty in topological maps, requiring an estimate of shortest paths in valued graphs with a probabilistic structure. Whereas classical symbolic algorithms achieve optimal results on noiseless topologies, or optimal results in a probabilistic sense on graphs with probabilistic structure, we aim to show that machine learning can overcome missing information in the graph by taking into account rich high-dimensional node features, for instance visual information available at each location of the map. Compared to purely learned neural white-box algorithms, we structure our neural model with an inductive bias for dynamic-programming-based shortest path algorithms, and we show that a particular parameterization of our neural model corresponds to the Bellman-Ford algorithm. By performing an empirical analysis of our method in simulated photo-realistic 3D environments, we demonstrate that the inclusion of visual features in the learned neural planner outperforms classical symbolic solutions for graph-based planning.
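
For reference, the classical algorithm that provides the inductive bias is the Bellman-Ford relaxation: each node's distance estimate is iteratively updated from its neighbours' estimates plus the connecting edge costs. The sketch below is only an illustration of that symbolic baseline, not the authors' neural planner; the graph representation (edges as (u, v, cost) triples) and the function name are illustrative assumptions.

# Minimal Bellman-Ford sketch (illustrative, not the paper's code).
# `edges` is a list of (u, v, cost) triples on nodes 0..num_nodes-1.
def bellman_ford(num_nodes, edges, source):
    INF = float("inf")
    dist = [INF] * num_nodes
    dist[source] = 0.0
    # At most num_nodes - 1 rounds of relaxation are needed.
    for _ in range(num_nodes - 1):
        updated = False
        for u, v, cost in edges:
            if dist[u] + cost < dist[v]:
                dist[v] = dist[u] + cost  # relax edge (u, v)
                updated = True
        if not updated:  # early exit once estimates have converged
            break
    return dist

# Example on a small valued graph:
edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 4.0)]
print(bellman_ford(3, edges, source=0))  # [0.0, 1.0, 3.0]

The paper's observation is that replacing the hard min/+ updates in this recurrence with learned, parameterized operations yields a neural model that can additionally condition on high-dimensional node features such as visual observations.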



Paper and Bibtex

[Paper]

Citation
 
Beeching, E., Dibangoye, J., Simonin, O., and Wolf, C., 2020. Learning to plan with uncertain topological maps. In Proceedings of the European Conference on Computer Vision (ECCV).

[Bibtex]
@inproceedings{beeching2020learntoplan,
  title={Learning to plan with uncertain topological maps},
  author={Beeching, Edward and Dibangoye, Jilles and
          Simonin, Olivier and Wolf, Christian},
  booktitle={European Conference on Computer Vision},
  year={2020}}


Acknowledgements

This work was funded by grant Deepvision (ANR-15-CE23-0029, STPGP479356-15), a joint French/Canadian call by ANR & NSERC. We gratefully acknowledge support from the CNRS/IN2P3 Computing Center (Lyon, France) for providing the computing and data-processing resources needed for this work.