Here are my publications in chronological order. You can also find me on Google Scholar.


  1. A Human-Inspired Controller for Fluid Human-Robot Handovers. International Conference on Humanoid Robots, 2016. Jose Medina, Felix Duvallet, Murali Karnam, Aude Billard. [PDF]

  2. Integrated Intelligence for Human Robot Teams. International Symposium on Experimental Robotics (ISER), 2016. J. Oh, T. Howard, M. Walter, D. Barber, M. Zhu, S. Park, A. Suppe, L. Navarro-Serment, F. Duvallet, A. Boularias, O. Romero, J. Vinokrov, T. Keegan, R. Dean, C. Lennon, B. Bodt, M. Childers, J. Shi, K. Daniilidis, N. Roy, C. Lebiere, M. Hebert, and A. Stentz. [PDF]

  3. Learning Qualitative Spatial Relations for Robotic Navigation. International Joint Conference on Artificial Intelligence, 2016. Abdeslam Boularias, Felix Duvallet, Jean Oh, Anthony Stentz. Best paper track. [PDF]

  4. Learning Models for Following Natural Language Directions in Unknown Environments. International Conference on Robotics and Automation (ICRA), 2015. Sachithra Hemachandra, Felix Duvallet, Thomas Howard, Nicholas Roy, Anthony Stentz, Matthew R. Walter. [PDF]

  5. Grounding Spatial Relations for Outdoor Robot Navigation. International Conference on Robotics and Automation (ICRA), 2015. Abdeslam Boularias, Felix Duvallet, Jean Oh, Anthony Stentz. Best Cognitive Robotics paper award. [PDF]

  6. Inferring Maps and Behaviors from Natural Language Instructions. International Symposium on Experimental Robotics (ISER), 2014. Felix Duvallet, Matthew R. Walter, Thomas Howard, Sachithra Hemachandra, Jean Oh, Seth Teller, Nicholas Roy, Anthony Stentz. [PDF]

  7. Imitation Learning for Natural Language Direction Following through Unknown Environments. International Conference on Robotics and Automation (ICRA), 2013. Felix Duvallet, Thomas Kollar, Anthony Stentz. Best Cognitive Robotics paper nominee. [PDF]

  8. Imitation Learning for Task Allocation. International Conference on Intelligent Robots and Systems (IROS), 2010. Felix Duvallet, Anthony Stentz. [PDF]

  9. WiFi Localization in Industrial Environments Using Gaussian Processes. International Conference on Intelligent Robots and Systems (IROS), 2008. Felix Duvallet, Ashley Tews. [PDF]

  10. Developing a Low-Cost Robot Colony. AAAI Fall Symposium: Regarding the Intelligence in Distributed Intelligent Systems, 2007. Felix Duvallet, J. Kong, E. Marinelli, K. Woo, A. Buchan, B. Coltin, C. Mar, B. Neuman. [PDF]

  11. Fun With Robots: A Student-Taught Robotics Course. International Conference on Robotics and Automation (ICRA), 2006. S. Shamlian, K. Killfoile, R. Kellogg, F. Duvallet. [PDF]

  12. Relative Localization in Colony Robots. Proceedings of the National Conference On Undergraduate Research (NCUR), 2005. [PDF]


Natural Language Direction Following for Robots in Unstructured Unknown Environments. Felix Duvallet, Ph.D. thesis, January 15, 2015. [PDF]


Robots are increasingly performing collaborative tasks with people, and with this increase in interaction comes a need for efficient communication between human and robot teammates. Natural language provides a flexible and intuitive way to issue commands to robots without requiring specialized interfaces or extensive user training. However, most existing approaches to natural language understanding require that the robot’s environment be known a priori, severely limiting the environments in which the robot can operate. Understanding natural language in unknown environments is more challenging, as the robot must now make decisions using only information about the parts of the environment it has observed so far. To date, no solution exists to the problem of real robots following natural language directions through unstructured and unknown environments.

We address this gap by formulating the problem of following directions in unknown environments as one of sequential decision making under uncertainty. In this setting, a policy reasons about the robot’s knowledge of the world so far, and predicts a sequence of actions that follow the direction, bringing the robot toward the goal.
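To make this setting concrete, here is a minimal sketch of the sense-act loop in a toy grid world. All names (`PartialMap`, `policy`, `follow`) are illustrative, not from the thesis implementation, and the greedy rule stands in for a learned policy, which would infer where to go from the instruction rather than being handed the goal directly. The key structural point survives: the policy only ever sees the robot's partial knowledge of the world.

```python
# Toy sketch: direction following as sequential decision making under
# uncertainty. The robot alternates sensing (growing its partial map)
# and acting (a policy chooses the next step using only that map).
from dataclasses import dataclass, field


@dataclass
class PartialMap:
    """The robot's knowledge of the world observed so far."""
    observed: set = field(default_factory=set)

    def update(self, position):
        # Sensing reveals the current cell and its eight neighbors.
        x, y = position
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                self.observed.add((x + dx, y + dy))


def policy(partial_map, instruction, position, goal):
    """Greedy stand-in for a learned policy: step toward the goal
    through cells the robot has already observed. (A real learned
    policy would infer the goal from the instruction.)"""
    candidates = [(position[0] + dx, position[1] + dy)
                  for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    known = [c for c in candidates if c in partial_map.observed]
    # Manhattan distance to the goal as a toy scoring function.
    return min(known, key=lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1]))


def follow(instruction, start, goal, max_steps=50):
    """Sense -> act loop over the partial map."""
    world = PartialMap()
    position = start
    trajectory = [position]
    for _ in range(max_steps):
        world.update(position)  # observe locally before deciding
        if position == goal:
            break
        position = policy(world, instruction, position, goal)
        trajectory.append(position)
    return trajectory


path = follow("go to the far corner", start=(0, 0), goal=(3, 3))
```

The sketch makes the uncertainty explicit: the policy never consults the full environment, only `world.observed`, so decisions must be made with whatever the robot has seen so far.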

We first show how robots can learn policies that reason about the uncertainty present in the environment. We describe an imitation learning approach to training policies that uses demonstrations of people giving and following directions. Building upon this work, we propose a novel view of language as a sensor, whereby we “fill in” the unknown parts of the environment beyond the range of the robot’s traditional sensors using information implicit in the given instruction. We find that this use of language as a sensor enables robots to follow navigation commands in unknown environments with performance comparable to that of operating in a fully known environment.
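The imitation-learning idea above can be sketched as a DAgger-style loop: roll out the current learner, query the expert (the demonstrator) at the states the learner actually visits, aggregate those labeled states into a dataset, and retrain. Everything below is a toy stand-in, the expert rule, the tabular learner, and the grid dynamics are hypothetical illustrations, not the thesis components.

```python
# Toy DAgger-style imitation learning loop: aggregate expert labels at
# learner-visited states, then refit the policy on the growing dataset.
from collections import Counter, defaultdict


def expert_action(state):
    """Toy expert demonstration: move right until x == 3, then move up."""
    x, y = state
    return "right" if x < 3 else "up"


class Learner:
    """Tabular policy: majority vote over aggregated expert labels."""
    def __init__(self):
        self.labels = defaultdict(Counter)

    def fit(self, dataset):
        self.labels.clear()
        for state, action in dataset:
            self.labels[state][action] += 1

    def act(self, state):
        if self.labels[state]:
            return self.labels[state].most_common(1)[0][0]
        return "right"  # arbitrary default before seeing any data


def step(state, action):
    x, y = state
    return (x + 1, y) if action == "right" else (x, y + 1)


def dagger(iterations=3, horizon=6):
    dataset, policy = [], Learner()
    for _ in range(iterations):
        state = (0, 0)
        for _ in range(horizon):
            # Label the states the *learner* visits with expert actions,
            # so training data matches the learner's own distribution.
            dataset.append((state, expert_action(state)))
            state = step(state, policy.act(state))
        policy.fit(dataset)
    return policy


policy = dagger()
```

Querying the expert along the learner's own trajectories, rather than only along expert demonstrations, is what lets the policy recover from its own mistakes in states the expert would never have reached.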