Ripley the Robot: The world's first conversational robot with capabilities comparable to the Token Test for 3-Year-Old Children
Ripley is a manipulator-arm robot with grasping, vision, and dialogue capabilities, aiming to become a conversational tabletop assistant. He is equipped with an innovative cognitive architecture based on Grounded Situation Models, which enables him to learn the meanings of the words he uses through experience, and to hold multiple gradations of uncertainty about aspects of the current situation. His cognitive architecture also enables fluid bidirectional translation between language and the senses: Ripley can "imagine" situations that are described to him, even ones he has never seen before, turning verbal descriptions into sensory expectations that he can later verify with his own eyes. Furthermore, Ripley has an event-based episodic memory, which allows him to resolve temporal references, remember, and answer questions about the past. Thus, Ripley's abilities become comparable to those required for passing the Token Test, a test of language-sensorimotor coordination administered to three-year-old children. Enjoy the videos!
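To make the ideas above concrete, here is a minimal Python sketch, not Ripley's actual implementation, of two of the ingredients described in the paragraph: a Grounded Situation Model that stores graded (probabilistic) beliefs about objects, whether they come from vision or from a verbal description, and an event-based episodic memory that supports simple questions about the past. All class and method names (GroundedSituationModel, ObjectBelief, imagine_from_description, and so on) are hypothetical illustrations, not part of the real system.

```python
# Illustrative sketch only: graded uncertainty + episodic memory,
# in the spirit of a Grounded Situation Model. Names are hypothetical.
import time
from dataclasses import dataclass, field


@dataclass
class ObjectBelief:
    """Belief about one tabletop object, with graded attribute distributions."""
    name: str
    # e.g. {"color": {"red": 0.8, "green": 0.2}} -- uncertainty as a distribution
    attributes: dict = field(default_factory=dict)

    def most_likely(self, attribute: str) -> str:
        """Return the most probable value of an attribute (e.g. its color)."""
        dist = self.attributes.get(attribute, {})
        return max(dist, key=dist.get) if dist else "unknown"


@dataclass
class Event:
    """A timestamped episodic-memory entry, e.g. 'saw bean_1'."""
    timestamp: float
    description: str


class GroundedSituationModel:
    """Holds current beliefs (from vision or from language) plus an event history."""

    def __init__(self):
        self.objects = {}           # name -> ObjectBelief
        self.episodic_memory = []   # list of Event

    def update_from_percept(self, name, attribute, distribution):
        """Sensor-driven update: vision proposes a distribution over values."""
        obj = self.objects.setdefault(name, ObjectBelief(name))
        obj.attributes[attribute] = distribution
        self.episodic_memory.append(Event(time.time(), f"saw {name}"))

    def imagine_from_description(self, name, attribute, value):
        """Language-driven update: 'imagine' a described object as a sensory
        expectation that later perception can confirm or refute."""
        obj = self.objects.setdefault(name, ObjectBelief(name))
        obj.attributes[attribute] = {value: 1.0}
        self.episodic_memory.append(Event(time.time(), f"was told about {name}"))

    def what_happened_before(self, t):
        """Answer a simple temporal question by filtering episodic memory."""
        return [e.description for e in self.episodic_memory if e.timestamp < t]


if __name__ == "__main__":
    gsm = GroundedSituationModel()
    gsm.imagine_from_description("bean_1", "color", "red")          # described, not yet seen
    gsm.update_from_percept("bean_1", "color", {"red": 0.9, "brown": 0.1})
    print(gsm.objects["bean_1"].most_likely("color"))               # -> "red"
    print(gsm.what_happened_before(time.time()))                    # events so far
```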
Public Appearances and Demonstrations
Ripley has been demonstrated at numerous MIT Media Lab Open House events up to 2006, where more than 200 visitors in total have interacted with him.
Selected Papers and Other Material
N. Mavridis, "Grounded Situation Models for Situated Conversational Assistants", PhD thesis at MIT, available online at MIT DSpace Digital Thesis Repository (readable, non-printable), for printable version click here (pdf)
N. Mavridis, D. Roy, "Grounded Situation Models for Robots: Where words and percepts meet", IEEE IROS 2006 (>30 citations so far) (click here for pdf)