Cohort 4 (2022-2026)

Max Christou
During my Psychology with Neuroscience BSc at the University of Glasgow, I took up research opportunities to integrate explainable AI techniques into current facial expression recognition models, working alongside Prof Rachael Jack’s FaceSyntax lab.
Ethical, transparent, and inclusive AI is crucial for bringing social technology into everyday life. A key component of my research is therefore challenging the host of Western-centric biases introduced into machine learning and facial expression recognition systems.
Coming from a background in psychology and statistics, I now focus on integrating state-of-the-art computational methods with contemporary social psychological theory and the cutting-edge research conducted across the Social AI CDT.
Towards a Culturally Inclusive Facial Expression Recognition System
Motivation and Novelty
The automatic recognition of facial expressions of the six classic basic emotions from images and videos is considered a mature technology. However, the entire paradigm assumes that such facial expressions are culturally universal. In contrast, growing evidence shows high systematic variability in how these facial expressions are represented across cultures (Jack et al., 2012; Chen et al., 2021; see Jack, 2013, for a review). Such findings challenge the validity, accuracy, and fairness of existing Facial Expression Recognition (FER) models. Consequently, it is now crucial to investigate exactly how facial expressions vary across cultures, how to model and quantify such differences, and, finally, how to incorporate this knowledge into a culturally flexible and inclusive FER system.
Aims
The objectives of this project are to:
- Enable an objective understanding of how facial expressions vary across cultures
- Build a culturally flexible and inclusive FER system
Methodology
This project will use a data-driven, computational approach to achieve the above objectives. Starting with a database of facial expressions produced across cultures (e.g., image- or video-based), we will develop a visual information retrieval system to discover the most (dis)similar facial expressions across cultures (Phase I). Using state-of-the-art methodologies from psychology and perception science, we will formally characterize the dynamic facial expression signals within and across cultures to identify the specific facial signals that support accurate cross-cultural communication and those that give rise to confusion. Insights from this and the retrieval results will inform the development of an inclusive FER system (Phase II).
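As a rough, hedged illustration of the Phase I retrieval idea, the sketch below ranks cross-cultural pairs of expression clips by cosine similarity between feature embeddings. The embeddings and group sizes are random placeholders; the actual project would obtain them from the cross-cultural database and a trained encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for expression embeddings (e.g., from a pretrained
# video encoder): one matrix per cultural group, one row per expression clip.
culture_a = rng.normal(size=(50, 128))   # 50 clips, 128-dim features
culture_b = rng.normal(size=(60, 128))

def cosine_matrix(x, y):
    """Pairwise cosine similarity between rows of x and rows of y."""
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    y = y / np.linalg.norm(y, axis=1, keepdims=True)
    return x @ y.T

sim = cosine_matrix(culture_a, culture_b)  # shape (50, 60)

# Rank all cross-cultural clip pairs from most to least similar.
order = np.argsort(sim, axis=None)[::-1]
pairs = np.dstack(np.unravel_index(order, sim.shape))[0]
most_similar = pairs[:5]     # candidate shared expressions
least_similar = pairs[-5:]   # candidate culture-specific expressions

for i, j in most_similar:
    print(f"clip A{i} ~ clip B{j}: similarity {sim[i, j]:.2f}")
```

In this sketch, the most similar pairs would flag candidate shared expressions, while the least similar pairs would flag culture-specific signals worth formal characterisation in the perception experiments.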
Impact
The project addresses the considerable limitations of current FER systems that are based on the long-held assumption that facial expressions are culturally universal, thereby questioning their validity, accuracy, and fairness. Successful completion of the project therefore has the potential for far-reaching impact in the fields of Psychology, Affective Computing, and HCI, by contributing to our understanding of both cross-cultural facial expressions and the validity and utility of Facial AI. In practice, the project outcomes also have the potential to directly improve the experience of the broad range of users who interact with FER-related applications and to expand their utility and marketability.
References
Jack, R. E., Garrod, O. G. B., Yu, H., Caldara, R., & Schyns, P. G. (2012). Facial expressions of emotion are not culturally universal. Proceedings of the National Academy of Sciences of the USA, 109(19), 7241–7244. https://doi.org/10.1073/pnas.1200155109
Chen, C., Garrod, O. G., Schyns, P. G., & Jack, R. E. (October 2020). Dynamic Face Movement Texture Enhances the Perceived Realism of Facial Expressions of Emotion. In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents (pp. 1–3). https://doi.org/10.1145/3383652.3423912
Xu, T., et al. (2020). Investigating bias and fairness in facial expression recognition. In ECCV 2020 Workshops.

Raphael Cunningham
My research interests lie primarily at the intersection of Computer Science and Neuroscience. This was the focus of my undergraduate degree at Keele University, where I split my study evenly between the two subjects, focusing on artificial intelligence, regenerative neuroscience, and sensory processing. I continued this trend at postgraduate level, completing my Master of Research (MRes) in Neurotechnology at Imperial College London.
My MRes project compared machine learning algorithms and evaluated the performance benefits of multimodal approaches for the intuitive control of myoelectric prosthetics in multiple degrees of freedom. The interdisciplinary nature of the course and project gave me the opportunity to study a wide range of subjects, from biomechanics to brain-computer interfaces. In the process I gained experience with multiple modalities, including electromyography, accelerometry, and infrared motion capture.
It was during this time that I decided to undertake a PhD and apply to the Social AI CDT, which represents a fantastic opportunity to study within an interdisciplinary research environment. Ultimately, I aim to develop predictive ML models of prognosis and recovery in small-sample patient studies of stroke survivors using multimodal MRI data.
Neural network models to predict individual behaviour from multimodal MRI data
Understanding and predicting individual complex behaviour in healthy and pathological conditions is a key goal in neuroscience and psychology. Magnetic Resonance Imaging (MRI) allows us to image functional and structural human brain properties in vivo and to relate them to behavioural performance. Furthermore, multiple MRI modalities, such as functional MRI (fMRI), Diffusion Tensor Imaging (DTI) and Multiparameter Mapping (MPM), can be acquired in the same session, capturing different brain tissue properties (Lazari et al., 2021). However, multimodal MRI data remains underused for exploring brain-behaviour relationships. The majority of unimodal MRI studies use simple correlation methods, either voxel-wise or based on regions of interest (ROIs).
Machine Learning (ML) methods have seen major breakthroughs in the last decade in the domain of natural image understanding and are now making their way into medical image analysis. So far, most ML-based MRI analyses use large datasets with hundreds or thousands of individuals, making them less than ideal for MRI-based studies that, with some exceptions (e.g., the Human Connectome Project and Biobank), rely on small sample sizes. Recent advances in ML, in an attempt to address the criticism that such methods are ‘data-hungry’, focus on learning from smaller datasets through approaches such as self-supervised/unsupervised learning and data augmentation.
In this PhD project, the student will leverage multimodal MRI (task and resting-state fMRI, DTI, MPM, T1-weighted) using ML methods to perform data augmentation and to discover participant-specific attributes (biomarkers) that relate to performance in different cognitive-motor tasks in healthy individuals in small-sample studies.
The main objectives of this project are as follows:
- To identify the biomarkers from multimodal MRI that relate to behaviour/impairment in small sample studies
- To determine how to effectively fuse information from multiple modalities in order to achieve objective 1
Building on this initial project, the student will then develop models to predict impairment in stroke survivors from multimodal MRI.
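As a minimal sketch of how the modality-fusion objective and the small-sample constraint might be handled together, the snippet below concatenates hypothetical modality-specific features and evaluates a regularised linear predictor with leave-one-out cross-validation. All data, feature dimensions, and the choice of ridge regression are illustrative assumptions, not the project's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 24  # hypothetical small-sample study: 24 participants

# Hypothetical modality-specific features per participant.
fmri_feats = rng.normal(size=(n, 30))   # e.g., resting-state connectivity
dti_feats  = rng.normal(size=(n, 20))   # e.g., tract FA values
mpm_feats  = rng.normal(size=(n, 10))   # e.g., ROI myelin markers
behaviour  = rng.normal(size=n)         # e.g., motor task score

# Simple late fusion: concatenate modalities, standardise, and fit a
# heavily regularised linear model evaluated with leave-one-out CV.
X = np.hstack([fmri_feats, dti_feats, mpm_feats])
model = make_pipeline(StandardScaler(), RidgeCV(alphas=np.logspace(-2, 4, 25)))
pred = cross_val_predict(model, X, behaviour, cv=LeaveOneOut())
print("LOO prediction r =", np.corrcoef(pred, behaviour)[0, 1])
```

More elaborate fusion schemes (e.g., learned embeddings per modality or data augmentation before fitting) would slot into the same cross-validated evaluation loop.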
Expected outcome/impact
This project will develop tools and models that can be applied to small-sample studies to understand individual differences in complex behaviour, and to patient studies, which are typically small, to make predictions about prognosis and recovery. The resulting predictive models can potentially be used to understand how brain traits relate to individual behavioural and learning characteristics.
References
Lazari, A., Salvan, P., Cottaar, M., Papp, D., Jens van der Werf, O., Johnstone, A., Sanders, Z.B., Sampaio-Baptista, C., Eichert, N., Miyamoto, K., Winkler, A., Callaghan, M.F., Nichols, T.E., Stagg, C.J., Rushworth, M.F.S., Verhagen, L., Johansen-Berg, H., 2021. Reassessing associations between white matter and behaviour with multimodal microstructural imaging. Cortex 145, 187-200.
Doersch, C., & Zisserman, A. (2017). Multi-task self-supervised visual learning. IEEE International Conference on Computer Vision (ICCV), 2070–2079.
Shorten, C., & Khoshgoftaar, T. M. (2019). A survey on image data augmentation for deep learning. Journal of Big Data, 6, 60.

Harry Dafas
I come from a computing background with a BSc in Computer Science and an MSc in Artificial Intelligence, both from Cardiff University. My undergraduate dissertation was on detecting echo chambers in social media, while my MSc dissertation focused on using AI to automatically transcribe drums in music.
I am excited to join the Social AI CDT, as my research interests over the years have been quite varied and the programme’s interdisciplinary nature fosters the study of a range of fields. My PhD project will work towards improving human and quadruped robot interaction; this involves a mix of AI, robotics, and psychology.
I look forward to applying the knowledge I gain of these disparate subjects to help normalise the presence of social robots in everyday life.
Improving Human and Quadruped Robot Interaction
Over the coming decade, quadruped robots, which offer great mobility, are expected to become widely adopted across different sectors, including industry, logistics, hospitality, and health and social care. People across these sectors will be expected to work closely alongside such robots. However, it remains unknown (1) how humans can best interact and collaborate with quadruped robots; and (2) how to establish and maintain social bonds between humans and quadruped robots.
In this project, we aim to tackle these two challenges. We plan to conduct experiments in the advanced motion capture laboratory with state-of-the-art quadruped and animal-like robots (e.g., Spot from Boston Dynamics, MiRo, and Aibo). We will also bring pet dogs into the laboratory for experiments. We aim to understand what physical behaviours people want to see in quadruped robot companions; what physical behaviours will lead to building social relationships between humans and companion robots; and the extent to which these physical behaviours can be implemented on quadruped robots (e.g., Spot, MiRo, Aibo).
Proposed methods
From a computing science perspective, the student will engage with system development and integration (developing operational models of animal behaviours and implementing them on quadruped robot platforms). From a psychology/social science perspective, the student will measure human-robot interaction via qualitative measures (such as questionnaires and participatory design interviews), non-invasive mobile brain imaging (recording human brain activity while people interact with the quadruped robots), response times, and pupillometry (eye tracking while people engage with these robots).
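Purely as an illustrative assumption of what an "operational model of animal behaviour" could look like at its simplest, the sketch below encodes dog-like companion behaviours as a probabilistic state machine; a real implementation would map each state onto platform-specific motion primitives on Spot, MiRo, or Aibo, driven by sensed human behaviour rather than chance.

```python
import random

# Hypothetical, highly simplified operational model of companion-dog-like
# behaviours, expressed as a probabilistic state machine.
TRANSITIONS = {
    "idle":        {"approach": 0.4, "idle": 0.5, "play_bow": 0.1},
    "approach":    {"gaze_follow": 0.6, "idle": 0.4},
    "gaze_follow": {"play_bow": 0.3, "idle": 0.7},
    "play_bow":    {"idle": 1.0},
}

def step(state: str) -> str:
    """Sample the next behaviour given the current one."""
    nxt, probs = zip(*TRANSITIONS[state].items())
    return random.choices(nxt, weights=probs)[0]

state = "idle"
for _ in range(10):
    print(state)
    state = step(state)
```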
Expected outcomes and impact
The outcome of this project will be new knowledge, strategies, and technologies to support harmonious interaction between humans and quadruped robots. The topic spans Psychology, Neuroscience, and Computing Science. Empirical research papers and demos will be published in journals and conferences that reach a wide academic audience, e.g., Psychological Science and Cognition (for a psychology/neuroscience audience), ACM Transactions, ACM/IEEE HRI and ACM CHI (for a computing science audience), and PNAS (for a multidisciplinary audience).
References
[Hua 2021a] Huang, L., Meng, Z., Deng, Z., Wang, C., Li, L., & Zhao, G. (2021, October). Extracting human behavioral biometrics from robot motions. In Proc. 27th Annual International Conference on Mobile Computing and Networking (MobiCom 2021). DOI: 10.1145/3447993.3482860
[Hua 2021b] Huang, L., Meng, Z., Deng, Z., Wang, C., Li, L., & Zhao, G. (2021, October). Towards verifying the user of motion-controlled robotic arm systems via the robot behavior. IEEE IoT Journal, Special Issue on Security, Privacy, and Trustworthiness in Intelligent Cyber-Physical Systems and Internet-of-Things. DOI: 10.1109/JIOT.2021.3121623
[Cro 2021] Cross, E. S. & Ramsey, R. (2021). Mind meets machine: Toward a cognitive science of human-machine interactions. Trends in Cognitive Sciences. 25(3), 200-212.
[Rid 2020] Riddoch, K. A. & Cross, E.S. (2020, May 15). Investigating the effect of heart rate synchrony on prosocial behaviour toward a social robot. preprint: https://psyarxiv.com/eyjv7
[Rid 2021] Riddoch, K. A. , Hawkins, R. D. & Cross, E.S. (2021, September 3). Exploring behaviours perceived as important for human-dog bonding and their translation to a robotic platform. preprint: https://psyarxiv.com/5xds4/

Monica Duta
I am a Psychology graduate from the University of Aberdeen. During my undergraduate degree I discovered my interest in studying social signals derived from facial expressions of emotion. My internship, funded by the Experimental Psychology Society, and my undergraduate dissertation both focused on the use of prediction in emotion categorisation. I took my interest in studying facial expressions further when I completed my MSc in Research Methods in Psychology. My MSc dissertation focused on contrasts and similarities between data-driven models of culturally valued emotional facial expressions. I therefore have a keen interest in working with facial expressions.
My project focuses on building a formal, 3D generative model of hand-over-face gestures, i.e., the combination of facial expressions and hand gestures. This will be done by building a complex dataset of hand-over-face gestures and deriving social perceptions of these gestures within and across cultures. The purpose of this project is to explore the perception of hand-over-face gestures and to transfer this complex knowledge to digital agents.
My background is psychology-focused, which is why I am excited to gain an insight into computing science and explore interdisciplinary perspectives on AI. The goal of my project is to create a widely applicable hand-over-face gesture model which has the potential to equip digital agents with social perception capabilities, such as inferring mental states from facial expressions and gestures.
Thoughtful gestures: Designing a formal framework to code and decode hand-over-face gestures
Human faces provide a wealth of social information for non-verbal communication. From observing the complex dynamic patterns of facial expressions and associated gestures, such as hand movements, both human and non-human agents can make myriad inferences about the person’s emotions, mental states, personality traits, cultural background, or even certain medical conditions. Specifically, people often place their hands on their face as a gesture during social interaction and conversation, which could provide information over and above what is provided by their facial expressions. Indeed, some specific hand-over-face gestures serve as salient cues for the recognition of cognitive mental states, such as thinking, confusion, curiosity, frustration, and boredom (Mahmoud et al 2016). Such hand gestures therefore provide a valuable additional channel for multi-modal inference (Mahmoud & Robinson, 2011).
Knowledge gap/novelty
However, the systematic study of hand-over-face gestures (i.e., the complex combination of face and hand gestures) remains limited due to two main empirical challenges: 1. the lack of large, objectively labelled datasets, and 2. the demands of coding signals that have a high number of degrees of freedom. Thus, while early studies have provided initial quantitative analyses and interpretations of hand-over-face gestures (Mahmoud et al., 2016; Nojavanasghari et al., 2017), these challenges have hindered the development of coding models and interpretation frameworks.
Aims/objectives
This project aims to address this gap by designing and implementing the first formal objective model for hand-over-face gestures by achieving three main aims:
- Build a formal naturalistic synthetic dataset of hand-over-face gestures by extending generative models of dynamic social face signals (Jack & Schyns, 2017) and modelling dynamic hand-over-face gestures as an additional social cue in combination with facial expressions and eye and head movements (Year 1)
- Use state-of-the-art methodologies from human perception science to systematically model the specific face and hand gesture cues that communicate social and emotion signals within and across cultures, e.g., Western European vs. East Asian populations (Year 2)
- Produce the first formal objective model for coding hand-over-face gestures (Year 3)
Methods
The project will build on and extend state-of-the-art 3D modelling of dynamic hand gestures and facial expressions to produce an exhaustive dataset of hand-over-face gestures based on natural expressions. It will also use theories and experimental methods from the study of emotion to run human perception experiments that identify taxonomies and coding schemes for these gestures and validate their interpretations within and across cultures.
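To make the perception-science step concrete, here is a hedged, simplified sketch of a reverse-correlation-style analysis: gesture parameters are sampled at random on each trial, an observer's categorisation responses are collected (simulated here), and a simple model estimates which parameters drive a given mental-state percept. The parameterisation and the simulated observer are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Hypothetical parameterisation of a hand-over-face gesture on each trial:
# e.g., hand position, contact area, movement speed, plus a few facial AUs.
n_trials, n_params = 500, 8
stimuli = rng.uniform(-1, 1, size=(n_trials, n_params))

# Simulated observer: responds "thinking" when parameters 0 and 3 are high
# (a stand-in for a real participant's categorisation responses).
logit = 2.0 * stimuli[:, 0] + 1.5 * stimuli[:, 3] + rng.normal(0, 1, n_trials)
responses = (logit > 0).astype(int)

# Reverse-correlation-style analysis: which gesture parameters predict the
# "thinking" percept? The fitted weights sketch the observer's internal model.
model = LogisticRegression().fit(stimuli, responses)
for p, w in enumerate(model.coef_[0]):
    print(f"param {p}: weight {w:+.2f}")
```

Repeating such analyses per culture would indicate which gesture cues are shared and which drive culture-specific interpretations.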
Outputs/impact/industrial interests
The formal objective framework produced from this project will serve as a vital building block for vision-based facial expression and gesture inference systems and applications in many domains, including emotion recognition, mental health applications, online education platforms, robotics, and marketing research.
References
Mahmoud, M. and Robinson P. (2011). Interpreting hand-over-face gestures. International Conference on Affective Computing and Intelligent Interaction (ACII).
Mahmoud, M., Baltrušaitis, T., & Robinson, P. (2016). Automatic analysis of naturalistic hand-over-face gestures. ACM Transactions on Interactive Intelligent Systems, 6(2).
Nojavanasghari B., Hughes C.E., Baltrušaitis T., Morency L.P. (2017). Hand2Face: Automatic synthesis and recognition of hand over face occlusions. International Conference on Affective Computing and Intelligent Interaction (ACII).
Jack, R.E., & Schyns, P.G. (2017). Toward a social psychophysics of face communication. Annual review of psychology, 68, 269-297.
Jack, R.E., Garrod, O.G.B., Yu, H., Caldara, R., & Schyns, P.G. (2012). Facial expressions of emotion are not culturally universal. Proceedings of the National Academy of Sciences, 109(19), 7241-7244.
Jack, R.E., & Schyns, P.G. (2015). The human face as a dynamic tool for social communication. Current Biology, 25(14), R621-R634

Elena Minucci
My PhD project will focus on building a persuasive AI to help people adopt more sustainable lifestyles. I chose this project because I am passionate about researching ways to bridge the belief-behaviour gap around sustainable habits, and I am fascinated by the potential of AI in producing behaviour change interventions which are both individually tailored and scalable.
Prior to my PhD studies, I undertook a BA in Psychology and an MSc in Marketing at the University of Strathclyde, where I cultivated an interest in persuasion theories and their applications in real-world contexts. This culminated in my recent MSc dissertation exploring how different conspicuous symbols in advertising persuade different demographics to consume less.
Between my studies, I worked as a Marketing Co-ordinator for a Scottish SME specialising in circular and AI-integrated lighting. Through this experience, I became acquainted with common barriers to the adoption of sustainable systems, as well as some of the opportunities and challenges of human-centred technology in business settings. Through my PhD studies, I look forward to building on this knowledge by exploring how interactive agents can help users make choices based on their individual preferences, circumstances, and motivations.
I am very excited to join the Social AI CDT. I am looking forward to developing my skills further while being part of a vibrant, supportive, and diverse student community.
Sustainable Me: A persuasive social AI for adopting sustainable lifestyles
Adopting a sustainable lifestyle in modern society is challenging because it requires not only changing the status quo but also digesting a wealth of information that spans many domains, for example diet, shopping, transport, waste management, heating, and leisure activities. Persuasive technology has been studied for its role in behaviour change [1] and decision-making [2], where the technology is seen as a persuasive social actor that can exploit physical, psychological, language, and social cues to persuade people to take a desired action [3]. This research is set against a background of human-human persuasion and decision-making, which has a long history of research and study. Individual changes can have a dramatic impact on sustainability, for example by reducing meat consumption, switching to more environmentally friendly transportation, or making changes to everyday practices, all of which can reduce a person’s carbon footprint. However, a ‘one-size-fits-all’ approach to behaviour change does not work well for people in different circumstances, with different preferences and motivations. This can be addressed by building an AI system that communicates with an individual as a social actor [4] and suggests the most suitable sustainable decisions and behaviours in a personalised conversation with the user. Previous research in the energy domain has shown the feasibility of this approach [5,6].
Main aims and objectives
In this PhD project, we set out to develop an interactive AI system that is tailored to individual preferences and maximises individual behaviour change while customising the interaction with the user. As part of this work, the PhD student will gather background on persuasive technology and human-human persuasion to build a conceptual framework for persuasive social interactions that can be applied to adopting a more sustainable lifestyle. The student will design and implement an AI agent that builds a user model of an individual’s circumstances and preferences, and then instantiates and customises the persuasive dialogue with the user to generate individualised interventions. A major aspect of this work will be evaluating the acceptability and usability of this AI agent and its effects on decision-making and behaviour change.
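As an illustrative, deliberately simplified sketch of the two components described above (a user model plus tailored suggestions), the snippet below encodes a hypothetical user model and a rule-based policy for choosing which sustainable-behaviour suggestion to raise in dialogue; the real agent would learn and update this model through conversation rather than relying on fixed rules.

```python
from dataclasses import dataclass, field

# Hypothetical user model of circumstances and preferences.
@dataclass
class UserModel:
    owns_car: bool
    diet: str                                         # e.g., "omnivore", "vegetarian"
    motivations: list = field(default_factory=list)   # e.g., ["cost", "climate"]

def next_suggestion(user: UserModel) -> str:
    """Pick a tailored sustainable-behaviour suggestion (illustrative rules only)."""
    if user.owns_car and "cost" in user.motivations:
        return "Cycling short trips could cut both fuel costs and emissions."
    if user.diet == "omnivore":
        return "Trying one or two meat-free days a week is a low-effort start."
    return "Switching to a green energy tariff fits your current habits."

print(next_suggestion(UserModel(owns_car=True, diet="omnivore",
                                motivations=["cost"])))
```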
Proposed methods
This PhD will draw on a theoretical basis of behaviour change and persuasion to build a practical system which can be empirically evaluated. It will combine skills in AI development, the design of interactive systems/HCI and experimental studies.
Likely outputs and impacts
This PhD will contribute to a better understanding of how to influence decision-making and change behaviour. It will provide a conceptual framework for building and evaluating persuasive AI agents that make personalised suggestions and engage effectively in a dialogue with the user. It will demonstrate how to design and implement such a socially interactive and persuasive AI. Finally, it will provide guidelines for developing effective socially interactive and persuasive AI. We envisage that this AI system could be made available to individuals, households, local communities, and local government to reduce greenhouse gas emissions and help keep the global average temperature rise below 1.5°C, as pledged at COP26 in Glasgow.
References
[1] Thaler RH and Sunstein CR (2008) Nudge: improving decisions about health wealth and happiness. Yale University Press.
[2] Lages M, and Jaworska K (2012) How predictable are “spontaneous decisions” and “hidden intentions”. Comparing classification results based on previous responses with multivariate pattern analysis of fMRI BOLD signals. Frontiers in Psychology, 3, 56.
[3] Fogg BJ (2002) Persuasive technology: using computers to change what we think and do. Ubiquity 2002, December: 5:2.
[4] Crosswhite J, Fox J, Reed C, Scaltsas T, and Stumpf S (2004) Computational Models of Rhetorical Argument. In Argumentation Machines: New Frontiers in Argument and Computation, Chris Reed and Timothy J. Norman (eds.). Springer Netherlands, Dordrecht, 175–209.
[5] Mogles N, Padget J, Gabe-Thomas E, Walker I, and Lee JH (2018) A computational model for designing energy behaviour change interventions. User Modeling and User-Adapted Interaction 28, 1: 1–34.
[6] Skrebe S and Stumpf S (2017) An exploratory study to design constrained engagement in smart heating systems. In Proceedings of the 31st British Human Computer Interaction Conference.

Niina Seittenranta
I am excited to join the Social AI CDT programme. In my PhD project, I will employ deep learning feature extraction and fMRI to understand how the human brain creates predictions about social interaction. I am fascinated by the brain processes that underlie social interaction, as social situations are complex phenomena with several possible outcomes. Therefore, the brain must update its predictions efficiently based on incoming sensory information that involves not only the person themselves but also other people, whose behaviour can be challenging to predict. I have a keen interest in brain function and computational research methods, and I am excited to pursue these topics during my PhD project here at the University of Glasgow.
Before starting at the University of Glasgow, I completed Bachelor’s and Master’s degrees in Cognitive Science at the University of Helsinki. My previous studies were interdisciplinary, including neuroscience, psychology, statistics and programming. During my previous degrees and after graduation, I worked as a research and technical assistant in several research projects. The projects have concerned flow experience during visuomotor learning, social interaction and trust, and cognitive and emotional mechanisms of technology-mediated human learning.
Deep Learning feature extraction for social interaction prediction in movies and visual cortex
While watching a movie, a viewer is immersed in the spatiotemporal structure of the movie’s audiovisual and high-level conceptual content [Raz19]. The nature of movies induces a natural waxing and waning of more and less socially immersive content. This immersion can be exploited during brain imaging experiments to emulate everyday human experience as closely as possible, including the brain processes involved in social perception. The human brain is a prediction machine: in addition to receiving sensory information, it actively generates sensory predictions. It does this by creating internal models of the world, which are used to predict upcoming sensory inputs. This basic but powerful concept is used in several studies in Artificial Intelligence (AI) to perform different types of prediction: from intermediate video frames for video interpolation [Bao19], to irregularity detection [Sabokrou18], to future sound prediction [Oord18]. Although several AI studies have focused on how to use visual features to detect and track actors in a movie [Afouras20], it is not clear how cortical networks for social cognition recruit layers of the visual cortex to process the social interaction cues occurring between actors. Several studies suggest that biological motion recognition (the visual processing of others’ actions) is central to understanding interactions between agents and combines top-down social cognition with bottom-up visual processing.

We will use cortical-layer-specific fMRI at ultra-high field to read brain activity during movie stimulation. Using the latest advances in Deep Learning [Bao19, Afouras20], we will study how the interaction between two people in a movie is processed, analysing the predictions that occur between frames. The comparison between the two sets of representations, one derived from analysing the movie video with Deep Learning and the other measured in the brain, will be carried out via model comparison with Representational Similarity Analysis (RSA) [Kriegeskorte08]. The work and its natural extensions will help clarify how the early visual cortex guides attention in social scene understanding.

The student will spend time in both domains: studying and analysing state-of-the-art methods in pose estimation and scene understanding in Artificial Intelligence, and, in brain imaging, learning how to perform an fMRI study, from data collection and understanding to analysis methods. Together, these two fields will provide a solid background in both brain imaging and artificial intelligence, teaching the student to transfer skills and draw conclusions across domains.
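The RSA comparison at the heart of the analysis can be sketched in a few lines: build a representational dissimilarity matrix (RDM) from deep-network features of the movie segments, build another from the fMRI responses, and rank-correlate the two [Kriegeskorte08]. The feature and voxel matrices below are random placeholders standing in for the real data.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_clips = 40  # hypothetical number of movie segments

# Hypothetical matrices: one row per movie segment.
dnn_features = rng.normal(size=(n_clips, 512))      # deep-network layer activations
voxel_responses = rng.normal(size=(n_clips, 200))   # fMRI responses in one cortical layer/ROI

def rdm(x):
    """Representational dissimilarity matrix as a condensed vector (1 - Pearson r)."""
    return pdist(x, metric="correlation")

# Second-order comparison: rank-correlate the two RDMs.
rho, p = spearmanr(rdm(dnn_features), rdm(voxel_responses))
print(f"model-brain RSA: Spearman rho = {rho:.3f} (p = {p:.3g})")
```

Repeating this comparison across network layers and cortical depths is what would reveal where predictive, socially relevant information is represented.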
References
[Afouras20] Afouras, T., Owens, A., Chung, J. S., & Zisserman, A. (2020). Self-supervised learning of audio-visual objects from video. European Conference on Computer Vision (ECCV 2020).
[Bao19] Bao, W., Lai, W. S., Ma, C., Zhang, X., Gao, Z., & Yang, M. H. (2019). Depth-aware video frame interpolation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 3703-3712).
[Kriegeskorte08] Kriegeskorte, N., Mur, M., & Bandettini, P. A. (2008). Representational similarity analysis-connecting the branches of systems neuroscience. Frontiers in systems neuroscience, 2, 4.
[Oord18] Oord, A. V. D., Li, Y., & Vinyals, O. (2018). Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748.
[Raz19] Raz, G., Valente, G., Svanera, M., Benini, S., & Kovács, A. B. (2019). A Robust Neural Fingerprint of Cinematic Shot-Scale. Projections, 13(3), 23-52.
[Sabokrou18] Sabokrou, M., Pourreza, M., Fayyaz, M., Entezari, R., Fathy, M., Gall, J., & Adeli, E. (2018, December). Avid: Adversarial visual irregularity detection. In Asian Conference on Computer Vision (pp. 488-505). Springer, Cham.

Ana Carolina Torres Cresto
I am interested in designing a Human-Computer Interface adaptable to the limits of every user. I wish to investigate brain phenomena that limit perception and adapt the interface in terms of processing speed and ease of use in view of a fully inclusive design.
While an undergraduate in Biomedical Engineering at the Federal University of Uberlandia in Brazil, I was drawn to how technology has revolutionised healthcare assessment, particularly in relation to neuroscience. During that time, I worked with grip force control, motion tracking devices, and neural engineering.
Later, during my MSc in Brain and Cognition at the Pompeu Fabra University, I had the opportunity to work in the computational neuroscience field investigating functional brain dynamics in Alzheimer’s Disease.
I am excited to be joining the Social AI CDT cohort and to be part of a group of people with diverse backgrounds and interests.
Brain Based Inclusive Design
It is clear to everybody that people differ widely, but the underlying assumption of current technology design is that all users are equal. The large cost of this is the exclusion of users who fall far from the average that technology designers use as their ideal abstraction (Holmes, 2019). In some cases the mismatch is evident (e.g., a mouse typically designed for right-handed people is more difficult to use for left-handers) and attempts have been made to accommodate the differences. In other cases the differences are more subtle and difficult to observe, and, to the best of our knowledge, no attempt has yet been made to take them into account. This is the case, in particular, for change blindness (Rensink, 2004) and inhibition of return (Posner & Cohen, 1984), two brain phenomena that limit our ability to process stimuli presented too closely in space and time.
The overarching goal of the project is thus to design Human-Computer Interfaces capable of adapting to the limits of every user, in view of a fully inclusive design capable of putting every user at ease, i.e., enabling them to interact with technology according to their own processing speed rather than the speed imposed by technology designers.
The proposed approach includes four steps:
- Development of methodologies for the automatic measurement of the phenomena described above through their effects on EEG signals (e.g., changes in the P1 and N1 components; McDonald et al., 1999) and on behavioural performance (e.g., increased/decreased accuracy and reaction times), as sketched after this list;
- Identification of the relationship between the phenomena above and observable factors such as age, education level, computer familiarity, etc. of the user;
- Adaptation of the technology design to the factors above;
- Analysis of the improvement of the users’ experience.
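As referenced in the first step above, here is a minimal sketch of how P1/N1 amplitude differences between conditions could be quantified from epoched EEG. The data are simulated and the electrode, sampling rate, and latency windows are conventional assumptions rather than the project's final choices.

```python
import numpy as np

rng = np.random.default_rng(4)
sfreq = 500                                # Hz (hypothetical recording)
times = np.arange(-0.1, 0.5, 1 / sfreq)    # epoch from -100 ms to +500 ms

# Hypothetical epoched EEG at one posterior electrode (trials x time points),
# standing in for cued vs uncued conditions in an inhibition-of-return task.
cued = rng.normal(0, 2, size=(120, times.size))
uncued = rng.normal(0, 2, size=(120, times.size))

def mean_amplitude(epochs, tmin, tmax):
    """Average voltage in a latency window, across trials and samples (µV)."""
    mask = (times >= tmin) & (times <= tmax)
    return epochs[:, mask].mean()

# Conventional-ish latency windows for P1 and N1 (assumed for illustration).
for name, (t0, t1) in {"P1": (0.08, 0.13), "N1": (0.13, 0.20)}.items():
    diff = mean_amplitude(cued, t0, t1) - mean_amplitude(uncued, t0, t1)
    print(f"{name} cued-minus-uncued amplitude: {diff:+.2f} µV")
```

The same window-based measures, computed per user, are what the adaptive interface could consume to estimate an individual's processing-speed limits.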
The main expected outcome is that technology will become more inclusive and capable of accommodating the individual needs of its users in terms of processing speed and ease of use. This will be particularly beneficial for those groups of users that, for different reasons, tend to be penalised in terms of processing speed, in particular older adults and special populations (e.g., children with developmental issues, stroke survivors, and related cohorts).
The project is of great industrial interest because, ultimately, improving the inclusiveness of technology design greatly increases user satisfaction, a crucial requirement for every company that aims to commercialise technology.
References
Holmes, K. (2019). Mismatch, MIT Press.
McDonald, J., Ward, L. M., & Kiehl, A. H. (1999). An event-related brain potential study of inhibition of return. Perception & Psychophysics, 61, 1411–1423.
Posner, M.I. & Cohen, Y. (1984). “Components of visual orienting”. In Bouma, H.; Bouwhuis, D. (eds.). Attention and performance X: Control of language processes. Hillsdale, NJ: Erlbaum. pp. 531–56.
Rensink, R.A. (2004). Visual Sensing without Seeing. Psychological Science, 15, 27-32.

Amelie Voges
Prior to starting my PhD, I completed a bachelor’s degree in Psychology with a specialism in Neuroscience at the University of Glasgow. There, I was first introduced to the field of social robotics and was immediately fascinated by how it combines social cognition and social neuroscience research with the study of new and emerging technologies. I went on to complete an MSc in Psychological Research at the University of Edinburgh, where my thesis investigated how navigational affordances are extracted from real-life scenes, using EEG. At the University of Edinburgh, I also became involved with the Edinburgh Open Science Initiative, which taught me the value of open and reproducible research. As a Social AI CDT PhD student, I am excited to be part of a cohort of transdisciplinary researchers and to contribute novel insights to the study of how the appearance of artificial agents shapes how we interact with them.
You Never get a Second Chance to Make a First Impression - Establishing how best to align human expectations about a robot’s performance based on the robot’s appearance and behaviour
Main aims and objectives
A major aim of social robotics is to create embodied agents that humans can instantly and automatically understand and interact with, using the same mechanisms that they use when interacting with each other. While considerable research attention has been invested in this endeavour, it is still the case that when humans encounter robots, they need time to understand how the robot works; in other words, people need time to learn to read the signals the robot generates. People consistently hold expectations of artificial agents that are far too high, which often leads to confusion and disappointment.
If we can better understand human expectations about robot capabilities based on the robot’s appearance (and/or initial behaviours) and ensure that those expectations are aligned with the robot’s actual abilities, this should accelerate progress in human-robot interaction, specifically in the domains of human acceptance of robots in social settings and cooperative task performance between humans and robots. This project will combine expertise in robotic design and the social neuroscience of how we perceive and interact with artificial agents to develop a socially interactive robot designed for use in public spaces that requires (little or) no learning or effort for humans to interact with while carrying out tasks such as guidance, cooperative navigation, and interactive problem solving.
Proposed methods
Computing Science: System development and integration (Developing operational models of interactive behaviour and implementing them on robot platforms); deployment of robot systems in lab-based settings and in real-world public spaces
Psychology/Brain Science: behavioural tasks (questionnaires and measures of social perception, such as the Social Stroop task) and non-invasive mobile brain imaging (functional near-infrared spectroscopy) to record human brain activity when encountering the artificial agent in question.
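As a hedged illustration of the kind of behavioural measure listed above, the snippet below computes a Stroop-style congruency effect from simulated per-participant reaction times; the numbers are placeholders, not pilot data, and the analysis is a generic paired comparison rather than the project's planned pipeline.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(5)
n_participants = 30  # hypothetical sample size

# Simulated per-participant mean reaction times (ms) in a Stroop-style
# social-perception task: congruent vs incongruent trials.
congruent = rng.normal(520, 40, n_participants)
incongruent = congruent + rng.normal(35, 20, n_participants)  # interference cost

effect = incongruent - congruent
t, p = ttest_rel(incongruent, congruent)
print(f"mean congruency effect: {effect.mean():.1f} ms, "
      f"t({n_participants - 1}) = {t:.2f}, p = {p:.3g}")
```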
Likely outputs
Empirically-based principles for social robot design to optimize alignment between robot’s appearance, user expectations, and robot performance, based on brain and behavioural data.
A publicly available, implemented, and validated robot system embodying these principles.
Empirical research papers detailing findings for a computing science audience (e.g., ACM Transactions on Human-Robot Interaction), a psychology/neuroscience audience (e.g., Psychological Science, Cognition), and a general audience that draws on the multidisciplinary aspects of the work (PNAS, Current Biology), as well as papers at appropriate conferences and workshops such as Human-Robot Interaction, Intelligent Virtual Agents, CHI, and similar.
References
Foster, M. E.; Gaschler, A.; and Giuliani, M. Automatically Classifying User Engagement for Dynamic Multi-party Human–Robot Interaction. International Journal of Social Robotics. July 2017.
Foster, M. E.; Alami, R.; Gestranius, O.; Lemon, O.; Niemelä, M.; Odobez, J.; and Pandey, A. K. The MuMMER project: Engaging human-robot interaction in real-world public spaces. In Proceedings of the Eighth International Conference on Social Robotics (ICSR 2016), November 2016.
Cross, E. S., Riddoch, K. A., Pratts, J., Titone, S., Chaudhury, B. & Hortensius, R. (2019). A neurocognitive investigation of the impact of socialising with a robot on empathy for pain. Philosophical Transactions of the Royal Society B.
Hortensius, R. & Cross, E. S. (2018). From automata to animate beings: The scope and limits of attributing socialness to artificial agents. Annals of the New York Academy of Sciences: The Year in Cognitive Neuroscience.