Expressive nonverbal behavior model

Abstract:
This internship is part of the French ANR national project ENHANCER, which aims to develop an embodied conversational agent platform for interaction with healthy persons and with persons suffering from schizophrenia.

Embodied conversational agents can take on a human appearance and can communicate verbally and non-verbally (Lugrin et al., 2021). They can serve as an interface in human-machine interaction, playing roles such as assistant, teacher, guide, or companion, and interacting with humans through both verbal and non-verbal means of communication.

Non-verbal behavior can be characterized by its shape (e.g., a facial expression, a hand shape), its trajectory (linear, sinusoidal), its timing (in correlation with speech), and its manner of execution (speed of movement, acceleration). The latter is referred to as behavior expressivity. Laban annotation (Laban and Ullmann, 1988) describes expressive dance movements along four dimensions (time, weight, space, and flow). Several of these behavioral characteristics have been used to develop computational models controlling virtual agents. Laban's model was implemented in virtual agents (Durupinar et al., 2017). On the other hand, to characterize emotional body movements, Wallbott and Scherer (1986) defined a set of six expressivity parameters, namely: spatial extent, temporal extent, fluidity, power, repetition, and overall activation. These parameters have been implemented to control the dynamic quality of virtual agents' behavior (Hartmann et al., 2005), and an extension was proposed by Huang and Pelachaud (2012). Lately, data-driven approaches have been applied to model expressive gaze and gait (Randhavane et al., 2019), facial expressions of emotion (Ferstl & McDonnell, 2018), and gestures (Neff, 2016).
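
To make the role of these parameters concrete, the minimal sketch below shows how expressivity parameters could modulate a single gesture given as wrist keyframes: spatial extent scales the amplitude around a rest pose, temporal extent rescales the timing, and fluidity smooths the trajectory. The names (Expressivity, apply_expressivity) and the modulation rules are hypothetical; this only illustrates the idea and is not the implementation of Hartmann et al. (2005) or Huang and Pelachaud (2012).

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Expressivity:
    """Hypothetical container for the six expressivity parameters."""
    spatial_extent: float = 1.0      # amplitude of the movement
    temporal_extent: float = 1.0     # overall duration (larger = slower)
    fluidity: float = 0.0            # smoothness of the trajectory (0..1)
    power: float = 1.0               # acceleration of the stroke (unused here)
    repetition: int = 1              # number of stroke repetitions (unused here)
    overall_activation: float = 1.0  # propensity to produce behaviors (unused here)

def apply_expressivity(keyframes, times, rest_pose, expr):
    """Modulate a gesture given as (N, 3) wrist keyframes and N timestamps.

    Illustrative sketch only: amplitude is scaled around the rest pose,
    time is rescaled, and a moving average stands in for real smoothing.
    """
    keyframes = np.asarray(keyframes, dtype=float)
    scaled = rest_pose + expr.spatial_extent * (keyframes - rest_pose)
    new_times = np.asarray(times, dtype=float) * expr.temporal_extent
    if expr.fluidity > 0:
        kernel = np.ones(3) / 3.0
        alpha = min(expr.fluidity, 1.0)
        for axis in range(scaled.shape[1]):
            smoothed = np.convolve(scaled[:, axis], kernel, mode="same")
            scaled[:, axis] = (1 - alpha) * scaled[:, axis] + alpha * smoothed
    return scaled, new_times
```

With such a scheme, for instance, a subdued agent could be animated with a low spatial extent and a large temporal extent, while a more energetic one would use the opposite settings.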

Internship Objectives:
The objective of the internship is to manipulate the behavior expressivity of the agent. The agent will be able to perform non-verbal behaviors with different expressivities throughout the interaction. Expressivity acts on the dynamics and amplitude of the behaviors as well as on their number of occurrences. This will allow us to create agents that perform few behaviors with low expressivity or more behaviors with higher expressivity. To this aim, several steps are foreseen:

- expand the current behavior expressivity model, in which six parameters are implemented (Huang & Pelachaud, 2012), so that it acts globally over the whole interaction or over a specific time span;
- make use of the Emilya database of expressive movements (Fourati & Pelachaud, 2016), which contains motion capture data of 11 persons performing 7 actions with 8 emotions, to characterize values of the behavior expressivity parameters (a possible feature-extraction sketch is given after this list);
- evaluate the model through objective measures and through an experimental study measuring the naturalness and perceived expressivity of the agent's behavior.
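
As a rough illustration of the second step, the sketch below computes simple per-clip proxies for some expressivity parameters from motion capture data (bounding-box volume for spatial extent, mean jerk for fluidity, etc.) and averages them per emotion label. The function names, the feature definitions, and the assumed data layout (an (N, 3) array of wrist positions per clip) are assumptions made for the example; they are not the measures used in Fourati & Pelachaud (2016).

```python
import numpy as np

def expressivity_features(wrist_positions, fps=120.0):
    """Heuristic per-clip proxies for some expressivity parameters.

    `wrist_positions` is an (N, 3) array of wrist positions from one clip.
    These are illustrative measures, not the official Emilya feature set.
    """
    pos = np.asarray(wrist_positions, dtype=float)
    dt = 1.0 / fps
    velocity = np.gradient(pos, dt, axis=0)
    acceleration = np.gradient(velocity, dt, axis=0)
    jerk = np.gradient(acceleration, dt, axis=0)
    return {
        "spatial_extent": float(np.prod(pos.max(axis=0) - pos.min(axis=0))),   # swept volume
        "temporal_extent": len(pos) * dt,                                      # clip duration (s)
        "power": float(np.linalg.norm(acceleration, axis=1).max()),            # peak acceleration
        "fluidity": float(-np.linalg.norm(jerk, axis=1).mean()),               # less jerk = smoother
        "overall_activation": float(np.linalg.norm(velocity, axis=1).mean()),  # mean speed
    }

def emotion_profiles(clips_by_emotion):
    """Average the per-clip features over all clips sharing an emotion label."""
    profiles = {}
    for emotion, clips in clips_by_emotion.items():
        feats = [expressivity_features(clip) for clip in clips]
        profiles[emotion] = {k: float(np.mean([f[k] for f in feats])) for k in feats[0]}
    return profiles
```

The resulting per-emotion profiles could then be normalized across emotions and mapped onto the expressivity parameters driving the agent, either globally or over a chosen time span.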

References:
Durupinar, F., Kapadia, M., Deutsch, S., Neff, M., & Badler, N. I. (2017). PERFORM: Perceptual approach for adding OCEAN personality to human motion using Laban movement analysis. ACM Transactions on Graphics, 36(1), 6.

Ferstl, Y., & McDonnell, R. (2018). A perceptual study on the manipulation of facial features for trait portrayal in virtual agents. In Proceedings of the 18th International Conference on Intelligent Virtual Agents (pp. 281–288). ACM.

Fourati, N., & Pelachaud, C. (2016). Perception of emotions and body movement in the Emilya database. IEEE Transactions on Affective Computing, 9(1), 90–101.

Hartmann, B., Mancini, M., & Pelachaud, C. (2005). Implementing expressive gesture synthesis for embodied conversational agents. In International Gesture Workshop (pp. 188-199). Springer.

Huang, J., & Pelachaud, C. (2012). Expressive body animation pipeline for virtual agent. In International Conference on Intelligent Virtual Agents (pp. 355-362). Springer.

Laban, R., & Ullmann, L. (1988). The Mastery of Movement. Plymouth: Northcote House.

Lugrin, B., Pelachaud, C., & Traum, D. (Eds.) (2021). The Handbook on Socially Interactive Agents: 20 Years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics. Volume 1: Methods, Behavior, Cognition. ACM / Morgan & Claypool.

Neff, M. (2016). Hand gesture synthesis for conversation. In B. Müller & S. I. Wolf (Eds.), Handbook of Human Motion. Springer.

Randhavane, T., Bera, A., Kapsaskis, K., Sheth, R., Gray, K., & Manocha, D. (2019). EVA: Generating emotional behavior of virtual agents using expressive features of gait and gaze. In ACM Symposium on Applied Perception (SAP '19), September 19–20, 2019.

Wallbott, H. G., & Scherer, K. R. (1986). Cues and channels in emotion recognition. Journal of Personality and Social Psychology, 51, 690–699.

Location: ISIR
Topics:
Supervisor: Catherine Pelachaud
University referent: n/a
Descriptive file:
Assigned: No
Year: 2023
