Modeling the perceptual, cognitive and nonverbal behavior in embodied artificial social agents

Saturday, October 24th, 2 pm to 5 pm, UK BST

Registration via EventBrite:

https://www.eventbrite.co.uk/e/social-ai-cdt-tutorial-tickets-124295070903?utm_campaign=post_publish&utm_medium=email&utm_source=eventbrite&utm_content=shortLinkNewEmail

A Zoom link for participating in the event will be sent via email to everyone who registers through the link above.

For participants who have not registered, the Zoom link is as follows:

https://uofglasgow.zoom.us/j/95128203671?pwd=ZGdBeFJ6cjdLdHNRMEpvenJLaFF2Zz09

Meeting ID: 951 2820 3671
Passcode: 529617

The goal of the tutorial is to introduce the three main components of socially intelligent artificial agents: perception (the ability to perceive the social signals of users), cognition (the ability to process the social signals users display), and action (the ability to display social signals). Special attention will be paid to the interdisciplinary aspects of the field, including the integration of psychological findings into the development of artificial agents and the use of computational methodologies in the study of human behaviour. Furthermore, the tutorial will provide examples of applications ranging from the automatic analysis of personality traits to the automatic generation of behaviour and the development of embodied agents for health interventions. This tutorial is being conducted by the University of Glasgow’s UKRI Centre for Doctoral Training in Socially Intelligent Artificial Agents.

Program

14.00-14.50: Alessandro Vinciarelli: An Introduction to Social Perception in Artificial Agents.

The goal of the talk is to introduce Social Signal Processing, the computing domain aimed at the modelling, analysis and synthesis of nonverbal behaviour in human-human and human-machine interactions. After presenting the main principles of the field, the talk will use the automatic analysis of conflict as an example of how it is possible to infer social and psychological phenomena from data. The focus will be on the main technological steps that lead from the definition of a phenomenon of interest to the automation of social perception and possible insights about social and psychological phenomena.

14.50-15.00: Break.

15.00-15.50: Monika Harvey: Cognition for Social Agents.

Over the last 10 years, big strides have been made in the biological plausibility of artificial social agents, including their ability to emulate human capabilities. In this tutorial I will describe some of the brain mechanisms underpinning our understanding and design of social agents. I will focus in particular on the different brain pathways driving perception and action, and on a specific model of brain function put forward by Goodale and Milner in 1992 (revised in 2018). This model constitutes a bridge between Psychology/Neuroscience and Social Robotics, allowing specific predictions and hypothesis testing in relation to our perception of, interaction with, and design of socially intelligent agents. I will argue that the vast majority of our communication with social agents will activate the ‘ventral’ visual brain pathway, and that the processing taking place in this pathway is object-centred and slow. I will conclude by mapping the challenges, and possible solutions, of special populations (such as older adults or stroke patients) interacting with artificial social agents, and how models of brain processing can predict how such populations may communicate with social agents and how such communication may be improved.

15.50-16.00: Break.

16.00-16.50: Stacy Marsella: Nonverbal Behaviour for Artificial Social Agents.

Computational models of human behavior are used in a wide range of technologies. At a large scale, social simulations are being used, for example, to model people’s response to a natural disaster. At the highly detailed individual scale, virtual replicas of humans are being crafted. These virtual humans are facsimiles of people that can engage people using the same verbal and nonverbal behavior people use in face-to-face interaction, including facial actions, postural shifts, gestures and gaze. The designs of these various models heavily leverage psychological theories and data. Psychology and the social sciences, in turn, are increasingly using these computational artifacts as a means to formulate, test and explore theories about human behavior. In this presentation, I will first give a brief overview of my group’s work in social simulation and virtual humans. Then I will provide my perspective on the synergy between psychology and the engineering of these artifacts, as well as illustrate this perspective using my group’s work on the computational modeling of nonverbal behaviors for virtual humans.

16.50-17.00: Conclusions.