Controlling Pepper’s nonverbal behaviors

Embodied Conversational Agents (ECAs) are virtual entities with a human-like appearance that communicate with humans both verbally and nonverbally. They serve as interfaces in human-machine interaction, taking on roles such as assistant, tutor, or companion. The Greta/VIB platform allows controlling the multimodal behaviors of ECAs. It takes as input a text to be said by the agent, enriched with information on how the text ought to be said (i.e., with which communicative intentions). The behavioral engine selects the multimodal behaviors to display and synchronizes the agent’s verbal and nonverbal behaviors.
While the Greta/VIB platform has been developed for virtual characters, its modular architecture makes it portable to humanoid robots such as Pepper. The animation modules for the humanoid robot and the virtual agent are driven by symbolic commands of the type ‘move the right arm forward with the palm up’. Both command languages must be made compatible so that they account for the limits of the robot’s movement capabilities and produce equivalent movements on the robot and on the virtual agent.
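As a rough illustration of such a mapping, the sketch below translates a symbolic command into joint targets for Pepper’s right arm. The joint names are Pepper’s actual right-arm joints, but the command vocabulary, the pose table, and the angle values are assumptions made for illustration; they are not the Greta/VIB or NAOqi command set.

```python
# Hypothetical sketch: map a symbolic gesture command to Pepper joint targets.
# The command vocabulary and angle values are illustrative assumptions.

# Pepper's right-arm joint names (as documented in the NAOqi API).
RIGHT_ARM_JOINTS = ["RShoulderPitch", "RShoulderRoll", "RElbowYaw",
                    "RElbowRoll", "RWristYaw"]

# Illustrative poses in radians, keyed by (limb, direction, palm orientation).
POSES = {
    ("right_arm", "forward", "palm_up"):   [0.0, -0.2, 1.2, 0.5, -1.5],
    ("right_arm", "forward", "palm_down"): [0.0, -0.2, 1.2, 0.5, 1.5],
}

def command_to_joint_targets(limb, direction, palm):
    """Translate a symbolic command into a {joint: angle} mapping."""
    angles = POSES.get((limb, direction, palm))
    if angles is None:
        raise ValueError("no pose for %s/%s/%s" % (limb, direction, palm))
    return dict(zip(RIGHT_ARM_JOINTS, angles))

# 'move the right arm forward with the palm up'
targets = command_to_joint_targets("right_arm", "forward", "palm_up")
```

On the real robot, such targets would then be sent through the motion API (e.g., via ROS or NAOqi), with the virtual agent’s animation module consuming the same symbolic command.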
The purpose of the internship is to adapt the Greta/VIB platform to the humanoid robot Pepper.

Intended tasks:
• Create a repertoire of multimodal behaviors
• Design synchronization mechanisms to ensure gesture timing with speech
• Develop the behavior realization module for Pepper
• Evaluate the communicative behaviors of Pepper
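To give an idea of the synchronization task, the sketch below schedules a gesture so that its stroke phase lands on a chosen word, given word onset times such as a TTS engine might provide. The phase names follow the common preparation/stroke/retraction decomposition of gestures; the function, default durations, and timings are assumptions for illustration, not the Greta/VIB scheduling algorithm.

```python
# Illustrative speech-gesture synchronization: align the gesture stroke
# with the onset of a target word. Durations are illustrative assumptions.

def schedule_gesture(word_onsets, target_word, prep_dur=0.4, stroke_dur=0.3):
    """Return start times (seconds) for the preparation, stroke, and
    retraction phases so that the stroke begins at the target word's onset."""
    stroke_start = word_onsets[target_word]
    # Preparation starts early enough to finish at the stroke onset,
    # clamped so it never starts before the utterance begins.
    prep_start = max(0.0, stroke_start - prep_dur)
    retraction_start = stroke_start + stroke_dur
    return {"preparation": prep_start,
            "stroke": stroke_start,
            "retraction": retraction_start}

# Example: align a gesture stroke with the word "right".
onsets = {"move": 0.0, "the": 0.35, "right": 0.5, "arm": 0.9}
plan = schedule_gesture(onsets, "right")
```

In practice the scheduler would also have to respect the robot’s actuator speed limits, which is one reason the robot’s and the virtual agent’s command languages need to be made compatible.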
Required skills:
• Programming skills: Java, ROS
• Robotics

ISIR - Sorbonne Université
Catherine Pelachaud
Mohamed Chetouani
University advisor: 
2019
