Invited Speakers

Jeremy Wyatt 

University of Birmingham, UK

Title: Autonomous learning in interactive robots

Abstract: In this talk I will give an overview of work on learning in robots that have multiple sources of input, and in particular that can use a combination of vision and dialogue to learn about their environment. I will describe the kinds of architectural problems and choices that need to be addressed to build robots that can choose learning goals, plan how to achieve those goals, and integrate evidence from different sources. To that end I will focus on a robot system called George, developed as part of the CogX project. I will also describe how methods for planning under state uncertainty can be used to drive information gathering, and thus learning, in interactive robots.

Oliver Lemon 

Heriot-Watt University, Edinburgh, UK

Title: Data-driven methods for Adaptive Multimodal Interaction

Abstract: How can we build more flexible, adaptive, and robust systems for interaction between humans and machines? I'll survey several projects which combine language processing with robot control and/or vision (for example, WITAS and JAMES), and draw some lessons and challenges from them. In particular, I'll focus on recent advances in machine learning methods for optimising multimodal input understanding, dialogue management, and multimodal output generation. I will argue that new statistical models (for example, combining unsupervised learning with hierarchical POMDP planning) offer a unifying framework for integrating work on language processing, vision, and robot control.

MLIS 2012: Machine Learning for Interactive Systems: Bridging the Gap between Language, Motor Control and Vision

ECAI Workshop 2012
August 27th, 2012
Montpellier, France