We aim to build robots that perceive the world at a higher level of abstraction than their raw sensors, and that can communicate this perception to humans via natural language. The focus of this work is to enable a robot to ground antonym adjective pairs in its own sensors. We present a system in which a robot is interactively trained by a user, grounding the robot's multimodal continuous sensor data in natural language symbols. This interactive training plays to the strengths of both sides of the asymmetric human-robot interaction: the training is intuitive for users, who understand the natural language symbols and can demonstrate the concepts to the robot, while the robot can use rapid data sampling and state-of-the-art feature extraction to accelerate the learning. Such training allows the robot to reason not only about the learned concepts but also about the spaces in between them. We show a sample interaction dialog in which a user interactively grounds antonym adjective pairs with the robot, and data showing the state of the trained model.
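To make the idea concrete, here is a minimal sketch of grounding one antonym pair in a single sensor channel. Everything in it is an illustrative assumption, not the paper's actual method: the pair ("light"/"heavy"), the demo readings, the averaged anchors, and the linear interpolation used to describe the space between the two learned concepts.

```python
# Hypothetical sketch: grounding the antonym pair "light"/"heavy" in a
# scalar weight-sensor channel. Demo values, the per-adjective anchor
# model, and the linear degree scale are all illustrative assumptions.

def fit_anchor(samples):
    """Average the labeled demonstrations into one anchor value."""
    return sum(samples) / len(samples)

# Interactive training: the user shows objects and names the concept;
# the robot samples its weight sensor (kg) for each demonstration.
light_anchor = fit_anchor([0.10, 0.20, 0.15, 0.25])
heavy_anchor = fit_anchor([1.80, 2.10, 2.00, 1.90])

def describe(reading):
    """Map a new sensor reading to a graded position between the antonyms.

    Returns (label, degree), where degree is 0.0 at the "light" anchor
    and 1.0 at the "heavy" anchor, so the robot can also talk about
    readings that fall between the two learned concepts.
    """
    degree = (reading - light_anchor) / (heavy_anchor - light_anchor)
    degree = min(max(degree, 0.0), 1.0)  # clamp outside the anchors
    if degree < 0.25:
        return "light", degree
    if degree > 0.75:
        return "heavy", degree
    return "between light and heavy", degree
```

With these assumed anchors, a 1.0 kg object lands mid-scale and is described as "between light and heavy", which is the kind of reasoning about the space between learned concepts that the system targets.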
I presented this paper at the HRI 2014 Workshop on Asymmetric Interactions.