Cohort 2 (2020-2024)

Morgan Bailey
I am a current PhD student within the Social AI CDT at the University of Glasgow. My research focuses on social intelligence and how we can use it when building human-AI teams, and will be carried out in collaboration with Qumodo, whose main goal is to advance human-AI teaming. Qumodo have previously developed software for the Home Office, which drew me to this project as I have a passion for conducting research with real-world applications that ultimately improve quality of life.
Prior to joining the Social AI CDT, I completed a BSc in Psychology and an MSc in Psychological Research Methods at the University of Plymouth. My undergraduate dissertation looked at Psycholinguistics, with a focus on the relationship between phonemic awareness and reading ability in the general population; having been diagnosed with dyslexia at the age of 8, language acquisition has always fascinated me. My love of Psycholinguistics continued into my MSc, but as I wanted to expand my knowledge and challenge myself with different areas of research, I chose to focus my MSc dissertation on the effects of pitch on social cooperation between humans and robots. During my literature review, I discovered how complex and intriguing the relationship between humans and AI is, which motivated me to look for future opportunities to complete more research into improving the relationships between humans and AI.
In my opinion, the most appealing aspect of the Social AI CDT is the unique opportunity to work with a group of academics who have such a diverse set of skills and interests; it is inspiring to be working as part of a team who can all help train and support each other.
Social Intelligence towards Human-AI Teambuilding
Visions of the workplace-of-the-future include applications of machine learning and artificial intelligence embedded in nearly every aspect (Brynjolfsson & Mitchell, 2017). This “digital transformation” holds promise to broadly increase effectiveness and efficiency. A challenge to realising this transformation is that the workplace is substantially a human social environment and machines are not intrinsically social. Imbuing machines with social intelligence holds promise to help build human-AI teams, and current approaches to teaming one human with one machine appear reasonably straightforward to design. However, when more than one human and more than one system work together, the complexity of social interactions increases and we need to understand the society of human-AI teams. This research proposes to take a first step in this direction by considering the interaction of triads containing humans and machines.
Our proposed testbed will be concerned with automatic image classification; we chose this domain because identity and location recognition is a primary work context of our industrial partner Qumodo. Moreover, many image classification systems have recently shown the ability to approach or exceed human performance. There are two scenarios we would like to examine involving human-AI triads, which we term the sharing problem and the consensus problem:
In the sharing problem we examine two humans teamed with the same AI and examine how the human-AI team is influenced by the learning style of the AI, which after initial training can either learn from a single trainer or from multiple trainers. We will examine how trust in the classifier evolves depending upon the presence/absence of another trainer and the accuracy of the other trainer(s). To obtain precise control the “other” trainer(s) could either be actual operators or simulations obtained by parametrically modifying accuracy based on ground truth. Of interest are the questions of when human-AI teams benefit from pooling of human judgment and if pooling can lead to reduced trust.
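As a concrete illustration of the parametric control described above, the “other” trainer can be simulated as a labeller whose agreement with ground truth is set by a single accuracy parameter. The sketch below is a minimal, hypothetical example in Python (it is not Qumodo's classifier or training pipeline), and the pooling rule of keeping only trials on which the two trainers agree is just one possible choice for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_trainer(ground_truth, accuracy, n_classes=2, rng=rng):
    """Return labels that match ground truth with probability `accuracy`,
    otherwise a uniformly random incorrect class."""
    labels = ground_truth.copy()
    flip = rng.random(len(ground_truth)) > accuracy
    for i in np.where(flip)[0]:
        wrong = [c for c in range(n_classes) if c != ground_truth[i]]
        labels[i] = rng.choice(wrong)
    return labels

# Ground truth for 1000 images, binary decision (e.g. match / no match).
truth = rng.integers(0, 2, size=1000)

human_a = simulated_trainer(truth, accuracy=0.90)  # the participant's own labels (here also simulated)
human_b = simulated_trainer(truth, accuracy=0.70)  # the "other" trainer, parametrically less accurate

# One possible pooled training signal: keep only trials where the trainers agree.
agree = human_a == human_b
pooled_accuracy = np.mean(human_a[agree] == truth[agree])
print(f"agreement rate: {agree.mean():.2f}, accuracy of pooled labels: {pooled_accuracy:.2f}")
```

Varying the accuracy parameter of the simulated co-trainer is then the experimental manipulation: the same classifier can be trained on pooled labels of known quality while the participant's trust is measured.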
In the consensus problem we use the scenario of a human manager who must reach a consensus view based on input from a pair of judgments (human-human, human-AI). This consensus will be reached either with or without “explanation” from the two judgments. To make the experiment tractable we will consider the case of a binary decision (e.g. two facial images are of the same person or a different person). Aspects of the design will be taken from a recent paper examining recognition of identity from facial images (Phillips, et al., 2018).
In addition to these experimental studies we also wish to conduct qualitative studies involving surveys or structured interviews in the workplace to ascertain whether the experimental results are consistent or not with people’s attitudes towards the scenarios depicted in the experiments.
As industry moves further towards AI automation, this research will have substantial impact on future practices within the workplace. Even as AI performance increases, in most scenarios a human is still required to be in the loop. There has been very little research into what such a human-AI integration/interaction should look like. Therefore this research is of pressing importance across a myriad of different sectors moving towards automation.
References
[BRY17] Brynjolfsson, E., & Mitchell, T. (2017). What can machine learning do? Workforce implications. Science, 358(6370), 1530-1534.
[PHI18] Phillips, P. J., Yates, A. N., Hu, Y., Hahn, C. A., Noyes, E., Jackson, K., … & Chen, J. C. (2018). Face recognition accuracy of forensic examiners, superrecognizers, and face recognition algorithms. Proceedings of the National Academy of Sciences, 115(24), 6171-6176.

Jacqueline Borgstedt
I am a current Social AI CDT PhD Student interested in how socially intelligent artificial systems can be shaped and adopted in order to improve mental health and well-being of humans. My doctoral research explores how multimodal interaction between robots and humans can facilitate reduction of stress or anxiety and may foster emotional support. Specifically, I am investigating the role of different aspects of touch during robot-human interaction and its potential for improving psychological well-being.
Prior to my PhD studies, I completed an MA in Psychology at the University of Glasgow. During my undergraduate studies, I was particularly interested in the neural circuitry underlying emotion recognition and regulation. As part of my undergraduate dissertation, I thus investigated emotion recognition abilities and sensitivity to affective expressions in epilepsy patients. Further research interests include the evaluation and development of interventions for OCD, anxiety disorders and emotion regulation difficulties.
During my PhD I am looking forward to integrating my knowledge of psychological theories within the development and evaluation of socially assistive robots. Furthermore, I hope to contribute to novel solutions for the application of robots in mental health interventions as well as the enhancement of human-robot interaction.
Multimodal Interaction and Huggable Robot
The aim of the project is to investigate the combination of Human Computer Interaction and social/huggable robots for care, the reduction of stress and anxiety, and emotional support. Existing projects, such as Paro and the Huggable, focus on very simple interactions. The goal of this PhD project will be to create more complex feedback and sensing to enable a richer interaction between the human and the robot.
The plan would be to study two different aspects of touch: thermal feedback and squeeze input/output. These are key aspects of human-human interaction but have not been studied in human-robot settings where robots and humans come into physical contact.
Thermal feedback has strong associations with emotion and social cues [Wil17]. We use terms like ‘warm and loving’ or ‘cold and distant’ in everyday language. By investigating different uses of warm and cool feedback we can facilitate different emotional relationships with a robot. (This could be used alongside more familiar vibration feedback, such as purring). A series of studies will be undertaken looking at how we can use warming/cooling, rate of change and amount of change in temperature to change responses to robots. We will study responses in terms of, for example, valence and arousal.
We will also look at squeeze interaction from the device. Squeezing in real life offers comfort and support. One half of this task will look at squeeze input, with the human squeezing the robot. This can be done with simple pressure sensors on the robot. The second half will investigate the robot squeezing the arm of the human. For this we will need to build some simple hardware. The studies will look at human responses to squeezing, the social acceptability of these more intimate interactions, and emotional responses to them.
The output of this work will be a series of design prototypes and UI guidelines to help robot designers use new interaction modalities in their robots. The impact of this work will be to enable robots to have a richer and more natural interaction with the humans they touch. This has many practical applications for the acceptability of robots for care and emotional support.
References
[Wil17] Wilson, G., and Brewster, S.: Multi-moji: Combining Thermal, Vibrotactile & Visual Stimuli to Expand the Affective Range of Feedback. In Proceedings of the 35th Conference on Human Factors in Computing Systems – CHI ’17, ACM Press, 2017.

Robin Bretin
From the deep forests of Sologne in France, I grew to become an extremely curious person, sensitive to my environment and the beings living in it. This curiosity and sensitivity led me, one way or another, to the National Graduate School of Cognitive Engineering (ENSC) in Bordeaux, France. Cognitics aims to understand and improve human-machine symbiosis, in terms of performance, substitution, safety, ease and comfort, and to augment humans through technology.
I am passionate about our future and the infinite possibilities that are presented to us. Being part of the Social AI CDT programme as a PhD student is the first step in what I hope will be a great journey toward the integration of new technologies in our society, designed around and for humanity.
Flying Robot Friends: Studying Social Drones in Virtual Reality
Designing and implementing autonomous drones to assist and interact with people in social contexts, so-called “social drones”, is an emerging area of robotics. Human-drone Interaction (HDI) applications range from supporting users in exercising and recreation [1], to providing navigation cues [2] and serving as flying interfaces [3]. To truly make drones “social”, we must understand how humans perceive them and behave around them. Thus, researchers have traditionally run experiments that allow observing users in direct contact with drones. However, such studies can be difficult, expensive and inflexible. For example, it can be difficult, infeasible, or even dangerous to conduct an experiment in the real world to study the impact of different drone sizes or flying altitudes on the user’s behavior. On the other hand, if valid experiments can be conducted in immersive virtual reality (VR), researchers can reach a larger number of, and potentially more diverse, participants, and control the environment variables to a greater degree. For example, changing the size of a drone in VR involves merely changing a variable’s value. Similarly, drones in VR are not bound by the physical limitations of the world. But if VR-based studies of human-drone interactions are to be used as a springboard for informing our understanding of HDI in the real world, it is imperative to understand the extent to which findings generalize to in situ HDI.
This project aims to explore the use of immersive virtual reality (VR) as a test bed for studying human behavior around social drones. The main objectives are to understand whether results from studies conducted in VR would match results from corresponding real-world settings, and to use VR to inform embodied/in-person HDI studies. As prior work suggests [5], it is expected that some behaviors will be similar across VR and the real world, thereby allowing researchers to use VR as an alternative to real-world studies in some contexts. However, understanding the limitations of VR for developing social drones will be vital as well.
To this end, the project will involve studying and comparing human proxemic behavior around drones both in VR and in the real world. While proxemic behavior has been investigated for human-robot interaction [4], it has never been studied when interacting with drones. It is expected that attributes of drones, such as their flying altitude or their size, impact how people distance themselves from them. As has also been studied in HRI proxemics, the extent to which people’s preferred distance to social drones differs based on first- and third-person viewpoints is also of interest. The results from the real-world and VR studies will be compared to allow an assessment of the opportunities, challenges, and limitations of using virtual reality to conduct experiments on introducing drones in close quarters to people to serve social purposes, and to provide guidelines that can help researchers decide whether to employ VR in their experiments and what factors to account for if doing so.
References
[1] Florian “Floyd” Mueller and Matthew Muirhead. 2015. Jogging with a Quadcopter. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15). Association for Computing Machinery, New York, NY, USA, 2023–2032.
[2] Pascal Knierim, Steffen Maurer, Katrin Wolf, and Markus Funk. 2018. Quadcopter-Projected In-Situ Navigation Cues for Improved Location Awareness. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). Association for Computing Machinery, New York, NY, USA, Paper 433, 1–6.
[3] Markus Funk. 2018. Human-drone interaction: let’s get ready for flying user interfaces! interactions 25, 3 (May-June 2018), 78–81.
[4] Jonathan Mumm and Bilge Mutlu, “Human-robot proxemics: Physical and psychological distancing in human-robot interaction,” 2011 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Lausanne, 2011, pp. 331-338.
[5] Ville Mäkelä, Rivu Radiah, Saleh Alsherif, Mohamed Khamis, Chong Xiao, Lisa Borchert, Albrecht Schmidt, and Florian Alt. 2020. Virtual Field Studies: Conducting Studies on Public Displays in Virtual Reality. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20). Association for Computing Machinery.

Christopher Chandler
I am a PhD student on the Social AI CDT interested in societies of humans and machines. The focus of my research is on developing formal models of trust between humans and autonomous robots, using data harvested from a mobile game and techniques in formal verification to test model assumptions.
As a philosophy undergraduate at the University of Aberdeen, I was interested in modal metaphysics, gravitating to logic and issues in the foundations of mathematics in the later stages of my degree. My dissertation investigated the status of axiomatic set theory as fundamental ontology for mathematics given controversies in the search for large cardinal axioms. However, I started out studying for a joint honours in philosophy and psychology and so gained fair experience in the latter. I subsequently completed a masters in software development at the University of Glasgow with a dissertation on model checking applied to gain insight into user engagement for a mobile game, spending a couple of years thereafter developing analytics for a company in the commercial HVAC and refrigeration space.
I look forward to working with a diverse team of talented people, challenging discussions and the inevitable maturation of thought that follows. It is my hope to make a novel contribution to the development of practical and reliable autonomous systems capable of integrating into human society.
Game-based techniques for the investigation of trust for autonomous robots
Trustworthiness is a property of an agent or organisation that engenders trust in others. Humans rely on trust in their day-to-day social interactions, be they in the context of personal relationships, commercial negotiation, or organisational consultation (with healthcare providers or employers for example). Social success therefore relies on the evaluation of the trustworthiness of others, and our own ability to present ourselves as trustworthy. If autonomous agents are to be used in a social environment, it is vital that we understand the concept of trustworthiness in this context [DEV18].
Some formal models of trust for autonomous systems have been proposed (e.g. [BAS16]), but these models are geared specifically towards autonomous vehicles. Any proposed model must be evaluated by testing. In many cases this would involve deploying complex hardware in sufficiently realistic scenarios in which trust would be a consideration. However, it is also possible to investigate trust in other scenarios. For example, it has been shown that different interfaces to an automatic image classifier change the calibration of human trust towards the classifier [ING20]. Of particular relevance to social processing, [GAL19] examined trust via the use of videos: the responses of human participants to videos involving an autonomous robot in a range of scenarios were used to investigate different aspects of trust.
Another way to generate user data to test formal models is using mobile games. In a recent paper [KAV19], a model of the way that users play games was used to investigate a concept known as game balance. A software tool known as a probabilistic model checker [KWI17] was used to predict user behaviour under the assumptions of the model. Subsequently the game has been released to generate user data in order to evaluate the credibility of the model used.
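For intuition about what such a model can look like, the sketch below simulates a toy probabilistic model of player behaviour in which a player's trust in a game character evolves with the character's reliability. This is an invented illustration, not the model from [KAV19]; a probabilistic model checker of the kind surveyed in [KWI17] would compute properties of such a model exhaustively, rather than by the Monte Carlo estimate used here.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_player(n_rounds=20, reliability=0.8, trust0=0.5,
                    gain=0.1, loss=0.2, rng=rng):
    """Toy discrete-time model: each round a robot character gives advice that
    is correct with probability `reliability`; the player follows the advice
    with probability equal to their current trust, and trust moves up after
    correct advice and down after incorrect advice."""
    trust = trust0
    followed = 0
    for _ in range(n_rounds):
        advice_correct = rng.random() < reliability
        if rng.random() < trust:            # player follows the advice
            followed += 1
        trust += gain if advice_correct else -loss
        trust = min(1.0, max(0.0, trust))   # keep trust in [0, 1]
    return trust, followed / n_rounds

# Estimate, over many simulated players, a quantity a model checker could
# compute exactly, e.g. "expected trust after 20 rounds".
results = np.array([simulate_player()[0] for _ in range(5000)])
print(f"mean final trust: {results.mean():.2f} (sd {results.std():.2f})")
```

Fitting the free parameters of such a model (here the assumed gain and loss terms) to real player data, and checking whether the model then predicts behaviour in unseen scenarios, is the kind of evaluation loop the project envisages.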
In this PhD project you will use a similar technique to evaluate trust for autonomous systems. The crucial aspects are the formal models of trust and the question of how to design a suitable game so that the way users respond to different scenarios reflects how much they trust the (autonomous robot or animated) characters in the game. You will:
- Develop and evaluate models of trust for autonomous robots
- Devise a mobile game for which players will respond according to their trust in autonomous robot or animated characters
- Use an automatic technique such as model checking or simulation to determine player behaviour under the assumptions of your trust models
- Analyse how well player behaviour matches that predicted using model checking
References
[DEV18] Trustworthiness of autonomous systems – K. Devitt, Foundations of Trusted Autonomy, Studies in Systems, Decision and Control, 2018
[BAS16] Trust dynamics in human autonomous vehicle interaction: a review of trust models – C. Basu et al. AAAI 2016.
[ING20] Calibrating trust towards an autonomous image classifier: a comparison of four interfaces – M. Ingram et al., submitted.
[GAL19] Trusting Robocop: Gender-Based Effects on Trust of an Autonomous Robot – D. Gallimore et al. Frontiers in Psychology 2019
[KAV19] Balancing turn-based games with chained strategy generation – W. Kavanagh, A. Miller et al. IEEE Transactions on Games 2019.
[KWI17] Probabilistic model checking: advances and applications – M. Kwiatkowska et al. Formal System Verification 2017.

Radu Chirila
My research focuses on designing an artificial smart e-Skin with embedded microactuators, able to convey expressions and emotions biomimetically. To achieve this, we first need to understand the stimuli behind human emotions and how these link to the various kinematic aspects of the human body.
I come to the Social AI CDT with a diverse technical background. I have worked as a software engineer in the United States of America (focusing on developing low-level neural network nodes) and as an electronic systems consulting intern in the UK. My academic journey started with a Bachelor of Engineering (BEng) degree in Electronics and Electrical Engineering at the University of Glasgow. Joining the Social AI CDT was a natural step in my technical evolution, given the increasing capabilities of machine intelligence algorithms and the essential role social robotics plays in today’s world.
Soft eSkin with Embedded Microactuators
Research on tactile skin or e-skin has attracted significant interest recently as it is the key underpinning technology for safe physical interaction between humans and machines such as robots. Thus far, eSkin research has focussed on imitating some of the features of human touch sensing. However, skin is not just designed for feeling the real world; it is also a medium to express feeling through gestures. For example, the skin on the face, which can fold and wrinkle into specific patterns, allows us to express emotions such as varying degrees of happiness, sadness or anger. Yet, this important role of skin has not received any attention so far. Here, for the first time, this project will explore the emotion signal generation capacities of skin by developing programmable soft e-skin patches with embedded microactuators that will emulate real skin movements. Building on the flexible and soft electronics research in the James Watt School of Engineering and the social robotics research in the Institute of Neuroscience & Psychology, this project aims to achieve the following scientific and technological goals:
- Identify suitable actuation methods to generate simple emotive features such as wrinkles on the forehead
- Develop a soft eSkin patch with embedded microactuators
- Use dynamic facial expression models for specific movement patterns in the soft eSkin patch
- Develop an AI approach to program and control the actuators (a minimal illustrative sketch follows below)
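The sketch below illustrates the kind of mapping the last two goals imply: a time course of facial Action Unit (AU) intensities, taken from a dynamic expression model, is converted into drive levels for a small set of embedded microactuators. Everything here (the AU set, the coupling matrix, the actuator count and drive range) is assumed purely for illustration; in the project the mapping would be designed and learned rather than fixed by hand.

```python
import numpy as np

# Hypothetical illustration: map AU intensity trajectories from a dynamic
# expression model onto drive levels for microactuators in a forehead patch.
n_aus, n_actuators, n_frames = 3, 8, 50

# Assumed coupling matrix: how strongly each AU recruits each actuator
# (rows normalised so drive levels stay comparable across actuators).
coupling = np.abs(np.random.default_rng(2).normal(size=(n_actuators, n_aus)))
coupling /= coupling.sum(axis=1, keepdims=True)

# Toy dynamic expression: the first AU (e.g. a brow movement) ramps up and decays.
t = np.linspace(0, 1, n_frames)
au_traj = np.zeros((n_aus, n_frames))
au_traj[0] = np.clip(np.sin(np.pi * t), 0, 1)          # AU intensity in [0, 1]

# Actuator commands per frame, clipped to an assumed safe drive range [0, 1].
drive = np.clip(coupling @ au_traj, 0.0, 1.0)           # shape: (n_actuators, n_frames)
print("peak drive per actuator:", np.round(drive.max(axis=1), 2))
```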

Serena Dimitri
I started my journey with a BSc in Psychology at the University of Pavia, where I learned to appreciate individual differences. Following this, I undertook a joint Master's degree at the University of Pavia and the University School of Advanced Studies, where I specialized in Neuroscience with a growing desire to understand more about the brain. During my BSc and MSc, I took part in exchange programs at the University Complutense of Madrid, Trinity College Dublin and the University of Plymouth, where I encountered different ways of doing research. I worked as a researcher on a three-year project, which culminated in my Master's dissertation: “Neuroscience, Psychology and Computer Science: An Innovative Software on Aggressive Behaviour”. My research interests follow my academic path and my personality: I am mainly captivated by the exploration of how the brain and individuals react to technology, and how to shape technology to interact effectively with individuals. I am now a PhD student at the University of Glasgow, where I have found the perfect harmony between my psychology background, my neuroscience studies and the world of AI and computer science. These three are, for me, my knowledge, my specialization and my greatest interest.
The Project: Testing social predictive processing in virtual reality
Virtual reality (VR) is a powerful entertainment tool allowing highly immersive and richly contextual experiences. At the same time, it can be used to flexibly manipulate the 3D (virtual) environment, allowing behavioural experiments to be tailored systematically. VR is particularly useful for social interaction research, because the experimenter can manipulate rich and realistic social environments, and have participants behave naturally within them [RB18].
While immersed in VR, a participant builds an inner map of the virtual space and stores multiple expectations about the mechanics of the environment (i.e., where objects or rooms are and how they can be interacted with), but also about the physical and emotional properties of virtual agents (e.g. theory of mind). Using this innovative and powerful technology, it is possible to manipulate both the virtual space and the virtual agents within the virtual world, to test participants’ internal expectations and register their reactions to predictable and unpredictable scenarios.
The phenomenon of “change blindness” demonstrates the surprising difficulty observers have in noticing unpredictable changes to visual scenes [SR05]. When presented with two almost identical images, people can fail to notice small changes (e.g. in object colour) and even large changes (e.g. object disappearance). This arises because the brain cannot attend to the entire wealth of environmental signals presented to our visual systems at any given moment, and instead uses attentional networks to selectively process the most relevant features whilst ignoring others. Testing which environmental attributes drive the detection of changes can give useful insights into how humans use predictive processing in social contexts.
In this PhD the student will run behavioural and brain imaging experiments in which they will use VR to investigate how contextual information drives predictive expectations in relation to changes to the environment and agents within it. They will investigate if change detection is due to visual attention or to a social cognitive mechanism such as empathy. This will involve testing word recognition whilst taking the visuospatial perspective of the agents previously seen in the VR (e.g. [FKS18]). The student will examine if social contextual information originating in higher brain areas modulates the processing of visual information. In brain imaging literature, an effective method to study contextual feedback information is the occlusion paradigm [MPM19]. Cortical layer specific fMRI is possible with 7T brain imaging; the student will test how top-down signals during social cognition activate specific layers of cortex. This data would contribute to redefining current theories explaining the predictive nature of the human brain.
The student will also develop quantitative models in order to assess developed theories. In recent work [PMT19], model checking was proposed as a simple technology to test and develop brain models. Model checking [CHVB18] involves building a simple, finite state model, and defining temporal properties which specify behaviour of interest. These properties can then be automatically checked using exhaustive search. Model checking can replace the need to perform thousands of simulations to measure the effect of an intervention, or of a modification to the model.
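To make the model-checking idea concrete, the sketch below builds a deliberately tiny finite-state model of a single change-blindness trial and checks an invariant over all reachable states by exhaustive search. The model and the property are invented for this sketch only and are far simpler than the models discussed in [PMT19] or the techniques covered in [CHVB18].

```python
from collections import deque

# Toy explicit-state model of one change-blindness trial.
# A state is (phase, attended_at_change, detected).
def successors(state):
    phase, attended, detected = state
    if phase == "pre":
        # at the moment of the change the observer is either attending or not
        return {("post", a, detected) for a in (True, False)}
    if phase == "post":
        # detection can only follow attention at the moment of the change
        outcomes = {("done", attended, False)}
        if attended:
            outcomes.add(("done", attended, True))
        return outcomes
    return set()                                   # "done" is terminal

initial = ("pre", False, False)

# Invariant to check by exhaustive search of the reachable state space:
# "any detected change was attended at the moment it occurred".
invariant = lambda s: (not s[2]) or s[1]

def model_check(initial, invariant):
    seen, queue = {initial}, deque([initial])
    while queue:
        s = queue.popleft()
        if not invariant(s):
            return False, s                        # counterexample state
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return True, None

print(model_check(initial, invariant))             # (True, None) for this toy model
```

A checker reports either that the property holds on every reachable state or a counterexample state, which is what makes the approach a compact alternative to running thousands of simulations.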
References
[MPM19] Morgan, A. T., Petro, L. S., & Muckli, L. (2019). Scene representations conveyed by cortical feedback to early visual cortex can be described by line drawings. Journal of Neuroscience, 39(47), 9410-9423.
[SR05] Simons, D. J., & Rensink, R. A. (2005). Change blindness: Past, present, and future. Trends in cognitive sciences, 9(1), 16-20.
[RB18] de la Rosa, S., & Breidt, M. (2018). Virtual reality: A new track in psychological research. British Journal of Psychology, 109(3), 427-430.
[FKS18] Freundlieb, M., Kovács, Á. M., & Sebanz, N. (2018). Reading Your Mind While You Are Reading—Evidence for Spontaneous Visuospatial Perspective Taking During a Semantic Categorization Task. Psychological science, 29(4), 614-622.
[PMT19] Porr, B., Miller, A., & Trew, A. (2019). An investigation into serotonergic and environmental interventions against depression in a simulated delayed reward paradigm. Adaptive behaviour, (online version available).
[CHVB18] Clarke, E. M., Henzinger, T. A., Veith, H., & Bloem, R. (2018). Handbook of model checking. Springer.

Andreas Drakopoulos
It is a pleasure to be joining the Social AI CDT at the University of Glasgow as a PhD student. My research is concerned with how humans perceive virtual and physical space, the simultaneous modelling of the two, and determining whether they are represented by different areas in the brain.
I come to the centre from a mathematical background: I completed a BSc and MSc in Mathematics at the Universities of Glasgow and Leeds respectively, gravitating towards pure mathematics. My undergraduate dissertation was on Stone duality, a connection between algebra and geometry expressed in the language of category theory; my master’s dissertation focused on the Curry-Howard correspondence, which is the observation that aspects of constructive logic harmonise with aspects of computation (e.g. proofs can be viewed as programs).
I developed my academic skills by studying abstract mathematics, and I am excited to now have the opportunity to use them in an applied setting. I am also particularly looking forward to being part of a group with diverse backgrounds and interests, something that drew me to the CDT in the first place.
Optimising Interactions with Virtual Environments
Virtual and Mixed Reality systems are socio-technical applications in which users experience different configurations of digital media and computation that give different senses of how a “virtual environment” relates to their local physical environment. In Human-Computer Interaction (HCI), we recently developed computational models capable of representing physical and virtual space, solving the problem of how to recognise virtual spatial regions starting from the detected physical position of the users [BEN16]. The models are bigraphs [MIL09] derived from the universal computational model introduced by Turing Award Laureate Robin Milner. Bigraphs encapsulate both the dynamic and spatial behaviour of agents that interact and move among each other, or within each other. We used the models to investigate cognitive dissonance, namely the inability or difficulty to interact with the virtual environment.
How the brain represents physical versus virtual environments is also an issue very much debated within Psychology and Neuroscience, with some researchers arguing that the brain makes little distinction between the two [BOZ12]. Yet, more in line with Sevegnani’s work, Harvey and colleagues have shown that different brain areas represent these different environments and that they are further processed on different time scales [HAR12; ROS09]. Moreover, special populations struggle more with virtual than with real environments [ROS11].
The overarching goal of this PhD project is, therefore, to adapt the computational models developed in HCI and apply them to psychological scenarios, to test whether environmental processing within the brain differs as proposed. This information will then be used to refine the HCI model and ideally allow its application to special populations.
References
[BEN16] Benford, S., Calder, M., Rodden, T., & Sevegnani, M., On lions, impala, and bigraphs: Modelling interactions in physical/virtual spaces. ACM Transactions on Computer-Human Interaction (TOCHI), 23(2), 9, 2016.
[BOZ12] Bozzacchi, C., Giusti, M.A., Pitzalis, S., Spinelli, D., & Di Russo, F., Similar Cerebral Motor Plans for Real and Virtual Actions. PLOS One, 7(10), e47783, 2012.
[HAR12] Harvey, M. and Rossit, S., Visuospatial neglect in action. Neuropsychologia, 50, 1018-1028, 2012.
[MIL09] Milner, R., The space and motion of communicating agents. Cambridge University Press, 2009.
[ROS11] Rossit, S., Malhotra, P., Muir, K., Reeves, I., Duncan G. and Harvey, M., The role of right temporal lobe structures in off-line action: evidence from lesion-behaviour mapping in stroke patients. Cerebral Cortex, 21 (12), 2751-2761, 2011.
[ROS09] Rossit, S., Malhotra, P., Muir, K., Reeves, I., Duncan, G., Livingstone, K., Jackson H., Hogg, C., Castle P., Learmonth G. and Harvey, M., No neglect-specific deficits in reaching tasks. Cerebral Cortex, 19, 2616-2624, 2009.

Thomas Goodge
My PhD research will be looking at Human-Car interactions in the context of autonomous vehicles, with a focus on the point of handover of control between the driver and the car.
I started studying Psychology at the University of Nottingham, with an interest in visual perception, decision making and interaction with technology. I then worked as a Research Assistant with the Transport Research in Psychology group at Nottingham Trent University. Here, I was involved in various projects looking at hazard perception across different presentation formats and in different vehicle types. Our focus was on understanding the strategies drivers use to decide what a hazard is and the risk associated with it, and then developing training interventions to try and impart these skills to new drivers. During this time, I also studied for an MSc in Information Security with Royal Holloway, University of London, which looked at the decisions and attitudes people form about their personal data depending on their environment.
I am excited to be joining the Social AI CDT cohort and to be working with a diverse group of academics across Computer Science and Psychology. I am particularly looking forward to developing the work I have been conducting over previous years as well as learning how incorporating artificial agents for drivers to engage with can further assist and improve driver safety.
Human-car interaction
The aim of this project is to investigate, in the context of social interactions, the interaction between a driver and an autonomous vehicle. Autonomous cars are sophisticated agents that can handle many driving tasks. However, they may have to hand control to the human driver in different circumstances, for example if sensors fail or weather conditions are bad [MCA16, BUR19]. This is potentially difficult for the driver, as they may not have been driving the car for a long period and have to quickly take control [POL15]. This is an important issue for car companies as they want to add more automation to vehicles in a safe manner. Key to this problem is whether the interface would benefit from conceptualizing the exchange between human and car as a social interaction.
This project will study how best to handle handovers, from the car indicating to the driver that it is time to take over, through the takeover event, to the return to automated driving. The key factors to investigate are: situational awareness (the driver needs to know what the problem is and what must be done when they take over), responsibility (whose task is it to drive at which point), the in-car context (what is the driver doing: are they asleep, talking to another passenger), and driver skills (is the driver competent to drive or are they under the influence).
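As a rough illustration of that handover sequence and the role of the four factors, the sketch below encodes the driver-car exchange as a small state machine. The states, checks and triggering conditions are assumptions made purely for illustration; the studies themselves will determine what sensing and interaction should gate each transition.

```python
from enum import Enum, auto

class Mode(Enum):
    AUTOMATED = auto()
    HANDOVER_REQUESTED = auto()
    MANUAL = auto()

# Hypothetical handover logic: the checks stand in for whatever measures of
# situational awareness, responsibility, in-car context and driver skills
# the experiments eventually support.
def step(mode, driver_ready, driver_fit, hazard_cleared):
    if mode is Mode.AUTOMATED and not hazard_cleared:
        return Mode.HANDOVER_REQUESTED        # car signals that a takeover is needed
    if mode is Mode.HANDOVER_REQUESTED:
        if driver_ready and driver_fit:       # aware of the situation and competent to drive
            return Mode.MANUAL                # responsibility passes to the driver
        return Mode.HANDOVER_REQUESTED        # keep prompting / escalate the cue
    if mode is Mode.MANUAL and hazard_cleared:
        return Mode.AUTOMATED                 # return control to the car
    return mode

mode = Mode.AUTOMATED
for ready, fit, clear in [(False, True, False), (True, True, False), (True, True, True)]:
    mode = step(mode, ready, fit, clear)
    print(mode)
```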
We will conduct a range of experiments in our driving simulator to test different types of handover situations and different types of multimodal interactions involving social cues to support the four factors outlined above.
The output will be experimental results and guidelines that can help automotive designers know how best to communicate and deal with handover situations between car and driver. We currently work with companies such as Jaguar Land Rover and Bosch and our results will have direct application in their products.
References
[MCA16] McCall, R., McGee, F., Meschtscherjakov, A. and Engel, T., Towards A Taxonomy of Autonomous Vehicle Handover Situations, in Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications, pp. 193–200, 2016.
[BUR19] Burnett, G., Large, D. R. & Salanitri, D., How will drivers interact with vehicles of the future? Royal Automobile Club Foundation for Motoring Report, 2019.
[POL15] Politis, I., Pollick, F. and Brewster, S., Language-based multimodal displays for the handover of control in autonomous cars, in Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications, pp. 3–10, 2015.

Casper Hyllested
I am a PhD student in the Social AI CDT and my research focuses primarily on understanding, deconstructing, and measuring the individual differences in people and data. The primary current aim is to implement this understanding to create both generalizable and individualized user models, allowing virtual agents to adapt to the dynamic state and trait profiles that can influence a person’s behaviors and responses.
During my psychology undergraduate at the University of Glasgow I was particularly interested in the patterns which could be attributed to the plethora of individual differences that underlie most research in psychology. I specialized largely in reliability in research and generalizability theory, and my dissertation was predominantly focused on how responses could vary dynamically over time. Since then I have begun to explore the versatility of situated measures, having participants respond to a set of situations rather than generalized items on a questionnaire, once more with the same focus of uncovering the variance in data that forms the composition of any individual or group-wide responses.
Exploring both situational and individual differences, not to mention the unlimited pool of potential facets and confounds that amalgamate to generate just a single response or behavior, is a daunting task. Using programming and virtual agents can simultaneously allow for easier collection and analysis of the required data. In turn, categorizing underlying personal mechanics may allow virtual agents to better tailor themselves to, and understand, the individuals they encounter. It will still require copious amounts of time and expertise from a multitude of different fields, which is why I am particularly looking forward to collaborating in a cohort of people with very different backgrounds in a highly interdisciplinary setting.
A framework for establishing situated and generalizable models of users in intelligent virtual agents
Aims
Increasing research suggests that intelligent virtual agents are most effective and accepted when they adapt themselves to individual users. One way virtual agents can adapt to different individuals is by developing an effective model of a user’s traits and using it to anticipate dynamically varying states of these traits as situational conditions vary. The primary aims of the current work are to develop: (1) empirical methods for collecting data to build user models, (2) computational procedures for building models from these data, (3) computational procedures for adapting these models to current situations. Although the project’s primary goal is to develop a general framework for building user models, we will also explore preliminary implementations in digital interfaces.
Novel Elements
One standard approach to building a model of a user’s traits—Classical Test Theory—uses a coherent inventory of measurement items to assess a specific trait of interest (e.g., stress, conscientiousness, neuroticism). Typically, these items measure a trait explicitly via a self-report instrument or passively via a digital device. Well-known limitations of this approach include its inability to assess the generalizability of a model across situations and occasions, and its failure to incorporate specific situations into model development. In this project, we expand upon the classic approach by incorporating two new perspectives: (1) Generalizability Theory, (2) the Situated Assessment Method. Generalizability Theory will establish a general user model that varies across multiple facets, including individuals, measurement items, situations, and occasions. The Situated Assessment Method replaces standard unsituated assessment items with situations, fundamentally changing the character of assessment.
Approach
We will develop a general framework for collecting empirical data that enables building user models across many potential domains, including stress, personality, social connectedness, wellbeing, mindfulness, eating, daily habits, etc. The data collected—both explicit self-report and passive digital—will assess traits (and states) relevant to a domain across facets for individuals, measurement items, situations, and occasions. These data will be analysed with Generalizability Theory and the Situated Assessment Method to build user models and establish their variance profiles. Of particular interest will be how well user models generalize across facets, the magnitude of individual differences, and clusters of individuals sharing similar models. Situated and unsituated models will both be assessed to establish their relative strengths, weaknesses, and external validity. Once models are built, their ability to predict a user’s states on particular occasions will be assessed, using procedures from Generalizability Theory, the Situated Assessment Method, and autoregression. Prediction error will be assessed to establish optimal model building methods. App prototypes will be developed and explored.
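As a minimal sketch of the Generalizability Theory side of this analysis, the code below simulates a crossed persons x situations design of the kind the Situated Assessment Method produces, estimates variance components from the two-way ANOVA mean squares, and computes a generalizability coefficient for person-level scores averaged over situations. The simulated numbers and the one-observation-per-cell design are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated situated assessments: n_p people each rate (say) their stress in
# n_s situations. True variance components are chosen for the simulation only.
n_p, n_s = 60, 12
sigma_p, sigma_s, sigma_e = 1.0, 0.5, 0.8          # person, situation, residual SDs
person = rng.normal(0, sigma_p, size=(n_p, 1))
situation = rng.normal(0, sigma_s, size=(1, n_s))
scores = 5 + person + situation + rng.normal(0, sigma_e, size=(n_p, n_s))

# Variance component estimates for a crossed persons x situations design,
# from the expected mean squares of a two-way ANOVA without replication.
grand = scores.mean()
ms_p = n_s * np.sum((scores.mean(axis=1) - grand) ** 2) / (n_p - 1)
ms_s = n_p * np.sum((scores.mean(axis=0) - grand) ** 2) / (n_s - 1)
resid = scores - scores.mean(axis=1, keepdims=True) - scores.mean(axis=0, keepdims=True) + grand
ms_e = np.sum(resid ** 2) / ((n_p - 1) * (n_s - 1))

var_e = ms_e
var_p = max(0.0, (ms_p - ms_e) / n_s)
var_s = max(0.0, (ms_s - ms_e) / n_p)

# Generalizability of person-level scores averaged over the n_s situations.
g_coefficient = var_p / (var_p + var_e / n_s)
print(f"person: {var_p:.2f}  situation: {var_s:.2f}  residual: {var_e:.2f}  G: {g_coefficient:.2f}")
```

Comparing the person, situation and residual components (and, in richer designs, item and occasion facets) is what lets the framework say how far a user model generalizes and how large individual differences are.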
Outputs and Impact
Generally, this work will increase our ability to construct and understand user models that virtual agents can employ. Specifically, we will develop novel methods that: (1) collect data for building user models, (2) assess the generalizability of models; (3) generate state-level inferences in specific situations. Besides being relevant for the development of intelligent social agents, this work will contribute to deeper understanding of classic assessment instruments and to alternative situated measurement approaches across multiple scientific domains. More practically, the framework, methods, and app prototypes we develop are of potential use to clinicians and individuals interested in understanding both functional and dysfunctional health behaviours.
References
[BLO12] Bloch, R., & Norman, G. (2012). Generalizability theory for the perplexed: A practical introduction and guide: AMEE Guide No. 68. Medical Teacher, 34, 960–992.
[PED20] Pedersen, C.H., & Scheepers, C. (2020). An exploratory meta-analysis of the state-trait anxiety inventory through use of generalizability theory. Manuscript in preparation.
[DUT19] Dutriaux, L., Clark, N., Papies, E. K., Scheepers, C., & Barsalou, L. W. (2019). Using the Situated Assessment Method (SAM2) to assess individual differences in common habits. Manuscript under review.
[LEB16] Lebois, L. A. M., Hertzog, C., Slavich, G. M., Barrett, L. F., & Barsalou, L. W. (2016). Establishing the situated features associated with perceived stress. Acta Psychologica, 169,119–132.
[MAR14] Stacy Marsella and Jonathan Gratch. Computationally Modeling Human Emotion. Communications of the ACM, December, 2014.
[MIL11] Lynn C. Miller, Stacy Marsella, Teresa Dey, Paul Robert Appleby, John L. Christensen, Jennifer Klatt and Stephen J. Read. Socially Optimized Learning in Virtual Environments (SOLVE). The Fourth International Conference on Interactive Digital Storytelling (ICIDS), Vancouver, Canada, Nov. 2011.

Gordon Rennie
Before starting at the University of Glasgow I completed an undergraduate degree in Psychology and then an MSc in Human Robot Interaction at Heriot-Watt University. Initially I was drawn to psychology because of the sheer number of unknowns in the science and the possibility of discovering new knowledge, while also being aware that it could have a real impact on improving people’s lives. My MSc continued this by taking psychological knowledge and attempting to apply it to real-world computing technologies targeted at improving people’s lives. There I began working with Conversational Agents, computer programs which attempt to speak with users using natural language.
My MSc project took one such agent, Alana, created by Heriot-Watt’s Interaction Lab, and attempted to enable it to speak with multiple users at once. This was one improvement to the agent, building on top of the brilliant work done on it previously, and it gave me insight into how difficult it is to improve such systems.
The PhD I am now studying for, via the Social AI CDT, offered me the chance to continue this vein of research by working on other areas where current conversational agents fail; specifically, understanding conversational occurrences such as ‘uh’, ‘ah’, and laughter. I find conversational agents a particularly exciting area of research because of the future it promises. Imagine: a computer stops working for some unknowable reason – a common occurrence for even the most technically literate. Imagine also that you could ask it why and how to fix the issue, in plain English, without navigating a myriad of menus. That’s the dream of voice interaction: of every user being able to interact with computers in the most natural way possible.
Language Independent Conversation Modelling
According to Emmanuel Schegloff, one of the most important linguists of the 20th Century, conversation is the “primordial site of human sociality”, the setting that has shaped human communicative skills from neural processes to expressive abilities [TUR16]. This project focuses on the latter and, in particular, on the use of nonverbal behavioural cues such as laughter, pauses, fillers and interruptions during dyadic interactions. Specifically, the project targets the following main goals:
- To develop approaches for the automatic detection of laughter, pauses, fillers, overlapping speech and back-channel events in speech signals;
- To analyse the interplay between the cues above and social-psychological phenomena such as emotions, agreement/disagreement, negotiation, personality, etc.
The experiments will be performed over two existing corpora. One includes roughly 12 hours of spontaneous conversations involving 120 persons [VIN15] that have been fully annotated in terms of the cues and phenomena above. The other is the Russian Acted Multimodal Affective Set (RAMAS), the first multimodal corpus in the Russian language, including approximately 7 hours of high-quality close-up video recordings of faces, speech, motion-capture data and physiological signals such as electro-dermal activity and photoplethysmogram [PER18].
The main motivation behind the focus on nonverbal behavioural cues is that these tend to be used differently in different cultural contexts, but they can still be detected independently of the language being used. In this respect, an approach based on nonverbal communication promises to be more robust when applied to data collected in different countries and linguistic areas. In addition, while the importance of nonverbal communication is widely recognised in social psychology, the way certain cues interplay with social and psychological phenomena still requires full investigation [VIN19].
From a methodological point of view, the project involves the following main aspects:
- Development of corpus analysis methodologies (observational statistics) for the investigation of the relationships between nonverbal behaviour and social phenomena;
- Development of signal processing methodologies for the conversion of speech signals into measurements suitable for computer processing;
- Development of Artificial Intelligence techniques (mainly based on deep networks) for the inference of information from raw speech signals (a simplified illustrative sketch follows below).
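As a deliberately simplified stand-in for the signal-processing and inference steps above, the sketch below extracts frame-level features (log energy and zero-crossing rate) from a synthetic waveform and trains a logistic-regression classifier to separate speech frames from pauses. The real work would use the corpora described above, richer representations such as log-mel spectrograms, deep networks rather than logistic regression, and proper train/test separation; everything in this snippet is assumed for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
sr, frame = 16000, 400                       # 16 kHz audio, 25 ms frames

# Synthetic stand-in for a conversation recording: alternating "speech"
# (louder, noisier) and "pause" segments.
speech = rng.normal(0, 0.30, sr * 2)
pause = rng.normal(0, 0.02, sr * 1)
signal = np.concatenate([speech, pause, speech, pause])
labels_per_sample = np.concatenate([np.ones(sr * 2), np.zeros(sr), np.ones(sr * 2), np.zeros(sr)])

def frame_features(x):
    """Frame the signal and compute log energy and zero-crossing rate per frame."""
    n = len(x) // frame
    frames = x[: n * frame].reshape(n, frame)
    log_energy = np.log(np.mean(frames ** 2, axis=1) + 1e-10)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return np.column_stack([log_energy, zcr])

X = frame_features(signal)
y = labels_per_sample[: len(X) * frame].reshape(len(X), frame).mean(axis=1).round().astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(f"frame-level speech/pause accuracy (on training data): {clf.score(X, y):.2f}")
```

Detectors for laughter, fillers, overlapping speech and back-channels follow the same frame-and-classify pattern, only with richer features, deep models and annotated conversational data.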
From a scientific point of view, the impact of the project will be mainly in Affective Computing and Social Signal Processing [VIN09] while, from an industrial point of view, the impact will be mainly in the areas of Conversational Interfaces (e.g., Alexa and Siri), multimedia content analysis and, in more general terms, Social AI, the application domain encompassing all attempts to make machines capable of interacting with people as people do with one another. For this reason, the project is based on the collaboration between the University of Glasgow and Neurodata Lab, one of the top companies in Social and Emotion AI.
References
[PER18] Perepelkina O., Kazimirova E., Konstantinova M. RAMAS: Russian Multimodal Corpus of Dyadic Interaction for Affective Computing. In: Karpov A., Jokisch O., Potapova R. (eds) Speech and Computer. Lecture Notes in Computer Science, vol 11096. Springer, 2018.
[TUR16] S.Turkle, “Reclaiming conversation: The power of talk in a digital age”, Penguin, 2016.
[VIN19] M.Tayarani, A.Esposito and A.Vinciarelli, “What an `Ehm’ Leaks About You: Mapping Fillers into Personality Traits with Quantum Evolutionary Feature Selection Algorithms“, accepted for publication by IEEE Transactions on Affective Computing, to appear, 2019.
[VIN15] A.Vinciarelli, E.Chatziioannou and A.Esposito, “When the Words are not Everything: The Use of Laughter, Fillers, Back-Channel, Silence and Overlapping Speech in Phone Calls“, Frontiers in Information and Communication Technology, 2:4, 2015.
[VIN09] A.Vinciarelli, M.Pantic, and H.Bourlard, “Social Signal Processing: Survey of an Emerging Domain“, Image and Vision Computing Journal, Vol. 27, no. 12, pp. 1743-1759, 2009.

Tobias Thejll-Madsen
I am a PhD student with the Social AI CDT at the University of Glasgow. My research focuses on facial expressions in social signaling and on using this knowledge to autogenerate effective humanlike facial expressions on virtual agents. To do this, we need to understand how expressions link to underlying emotional states and social judgements and translate this to models that the computer can use. I’m excited to work with a range of people in both psychology and computer science.
Previously, I completed an MA in Psychology and an MSc in Human Cognitive Neuropsychology at the University of Edinburgh. There I focused on cognitive psychology, most recently looking at active learning in a social setting, and I am very curious about social inference and cognition in general. However, like many others, I find it hard to have just one interest, so, in no particular order, I enjoy: moral philosophy/psychology, statistics and methodology, education/pedagogy (prior to my MSc, I worked developing educational resources and research), reinforcement learning, epistemology, philosophy of science, outreach and science communication, cooking, stand-up comedy, roleplaying games, staying hydrated, and basically anything outdoors.
Effective Facial Actions for Artificial Agents
Face signals play a critical role in social interactions because humans make a wide range of inferences about others from their facial appearance, including emotional, mental and physiological states, culture, ethnicity, age, sex, social class, and personality traits (e.g., see Jack & Schyns, 2017). These judgments in turn impact how people interact with others, oftentimes with significant consequences such as who is hated or loved, hired or fired (e.g., Eberhardt et al., 2006). However, identifying which face features drive these social judgments is challenging because the human face is highly complex, comprising a large number of different facial expressions, textures, complexions, and 3D shapes. Consequently, no formal model of social face signalling currently exists, which in turn has limited the design of artificial agents’ faces to primarily ad hoc approaches that neglect the importance of facial dynamics (e.g., Chen et al., 2019). This project aims to address this knowledge gap by delivering a formal model of face signalling for use in socially interactive artificial agents.
Specifically, this project will a) model the space of 3D dynamic face signals that drive social judgments during social interactions, b) incorporate this model into artificial agents and c) evaluate the model in different human-artificial agent interactions. The result promises to provide a powerful improvement in the design of artificial agents’ face signalling and social interaction capabilities with broad potential for applications in wider society (e.g., social skills training; challenging stereotyping/prejudice).
Modelling of the face signals will use methods from human psychophysical perception studies (e.g., see Jack & Schyns, 2017) that extend the work of Dr Jack to include a wider range of social signals used in social interactions (e.g., empathy, agreeableness, skepticism). Face signals that go beyond natural boundaries, such as hyper-realistic or super stimuli, will also be explored. The resulting model will be incorporated into artificial agents using the public domain SmartBody (Thiebaux et al., 2008) animation platform, with possible extension to other platforms. Finally, the model will be evaluated in human-agent interaction using the SmartBody platform, possibly in combination with other modalities including head and eye movements, hand/arm gestures, and transient facial changes such as blushing, pallor, or sweating (e.g., Marsella et al., 2013).
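The psychophysical modelling step can be pictured with a toy reverse-correlation sketch: random combinations of facial Action Units are presented and an observer's social judgments are used to estimate which AUs drive the percept. In the sketch below the observer, the AU set and the underlying weights are all simulated, and the analysis is the simplest possible classification-image estimate; the actual work uses the far more sophisticated 3D dynamic methods of Jack & Schyns (2017).

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy reverse correlation: on each trial a random subset of Action Units is
# animated on an agent's face and the (simulated) observer judges, e.g.,
# "trustworthy" vs "not trustworthy".
aus = ["AU6 cheek raiser", "AU12 lip corner puller", "AU4 brow lowerer",
       "AU9 nose wrinkler", "AU1 inner brow raiser"]
true_weights = np.array([1.0, 1.5, -1.2, -0.8, 0.2])    # ground truth for the simulation only

n_trials = 4000
stimuli = rng.integers(0, 2, size=(n_trials, len(aus)))  # which AUs are active on each face
p_yes = 1 / (1 + np.exp(-(stimuli @ true_weights - 0.3)))
responses = (rng.random(n_trials) < p_yes).astype(int)

# Classification-image style estimate: mean AU activation on "yes" trials
# minus mean activation on "no" trials reveals which AUs drive the judgment.
estimate = stimuli[responses == 1].mean(axis=0) - stimuli[responses == 0].mean(axis=0)
for name, w in sorted(zip(aus, estimate), key=lambda p: -p[1]):
    print(f"{name:>22}: {w:+.2f}")
```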
Although there is no current industrial partner, we expect the work to be very relevant to companies interested in the use of virtual agents for social skills training, such as Medical CyberWorld, and companies working on realistic humanoid robots, such as Furhat and Hanson Robotics. Jack and Marsella have pre-existing relationships with these companies.
References
Jack, R. E., & Schyns, P. G. (2017). Toward a social psychophysics of face communication. Annual review of psychology, 68, 269-297.
Eberhardt, J. L., Davies, P. G., Purdie-Vaughns, V. J., & Johnson, S. L. (2006). Looking deathworthy: Perceived stereotypicality of Black defendants predicts capital-sentencing outcomes. Psychological science, 17(5), 383-386.
Chen, C., Hensel, L. B., Duan, Y., Ince, R., Garrod, O. G., Beskow, J., Jack, R. E. & Schyns, P. G. (2019). Equipping Social Robots with Culturally-Sensitive Facial Expressions of Emotion Using Data-Driven Methods. In: 14th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2019), Lille, France, 14-18 May 2019, (Accepted for Publication).
Stacy Marsella, Yuyu Xu, Margaux Lhommet, Andrew Feng, Stefan Scherer, and Ari Shapiro, “Virtual Character Performance From Speech”, in Symposium on Computer Animation , July 2013.
Stacy Marsella and Jonathan Gratch, “Computationally modeling human emotion”, Communications of the ACM , vol. 57, Dec. 2014, pp. 56-67.
Marcus Thiebaux, Andrew Marshall, Stacy Marsella, and Marcelo Kallmann, “SmartBody Behavior Realization for Embodied Conversational Agents”, in Proceedings of Autonomous Agents and Multi-Agent Systems (AAMAS), 2008.

Maria Vlachou
I come to the Social AI CDT from a diverse background. I hold an MSc in Data Analytics (University of Glasgow) funded by the Data Lab Scotland, a Research Master's degree (KU Leuven, Belgium), and a BSc in Psychology (Panteion University, Greece). I have worked as an Applied Behavior Analysis Trainer (Greece & Denmark) and as a Research Intern at the Department of Quantitative Psychology (KU Leuven), where we focused on statistical methods for psychology and the reproducibility of science. I have worked for the last two years as a Business Intelligence Developer in the pharmaceutical industry. As an MSc student in Glasgow, I was exposed to more advanced statistical methods and my thesis focused on auto-encoder models for dimensionality reduction.
I consider my future work on the project “Conversational Venue Recommendation” (supervised by Dr. Craig MacDonald and Dr. Philip McAleer) as a natural evolution of all the above. Therefore, I look forward to working on deep learning methods for building conversational AI chatbots and discovering new complex methods for recommendation in social networks by incorporating users’ characteristics. Overall, I am excited to be part of an interdisciplinary CDT and to have the opportunity to work with people from different research backgrounds.
Conversational Venue Recommendation
Increasingly, location-based social networks, such as Foursquare, Facebook or Yelp, are replacing traditional static travel guidebooks. Indeed, personalized venue recommendation is an important task for location-based social networks. This task aims to suggest interesting venues that a user may visit, personalized to their tastes and current context, as might be detected from their current location, recent venue visits and historical venue visits. Recent developments in models for venue recommendation have encompassed deep learning techniques, able to make effective personalized recommendations.
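For orientation, the sketch below shows the simplest embedding-based venue recommender: user and venue vectors are fitted to a synthetic check-in matrix and venues are ranked by dot-product score. This is only an illustration of the shape of the problem on invented data; the recurrent, context-aware architectures this project builds on ([Man17], [Man18]) are considerably richer.

```python
import numpy as np

rng = np.random.default_rng(6)
n_users, n_venues, dim = 50, 200, 8

# Synthetic check-in matrix (1 = user visited venue), assumed for illustration.
visits = (rng.random((n_users, n_venues)) < 0.05).astype(float)

U = 0.1 * rng.normal(size=(n_users, dim))      # user embeddings
V = 0.1 * rng.normal(size=(n_venues, dim))     # venue embeddings
lr, reg = 0.01, 0.01

for _ in range(500):                           # gradient steps on regularised squared error
    err = visits - U @ V.T
    U += lr * (err @ V - reg * U)
    V += lr * (err.T @ U - reg * V)

def recommend(user, k=5):
    """Rank unvisited venues for a user by dot-product score."""
    scores = U[user] @ V.T
    scores[visits[user] > 0] = -np.inf         # do not re-recommend visited venues
    return np.argsort(-scores)[:k]

print("top venues for user 0:", recommend(0))
```

A conversational agent would sit on top of such a scorer, re-ranking candidates as the user answers preference and clarification questions.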
Venue recommendation is typically deployed such that the user interacts with a mobile phone application. To the best of our knowledge, voice-based venue recommendation has seen considerably less research, but it is a rich area for potential improvement. In particular, a venue recommendation agent may be able to elicit further preferences, ask whether the user prefers one venue or another, or ask for clarification about the type of venue or the distance to be travelled to the next venue.
This project aims to:
- Develop and evaluate models for making venue recommendations using chatbot interfaces that can be adapted to voice through the integration of text-to-speech technology, building upon recent neural network architectures for venue recommendation.
- Integrate additional factors about personality of the user, or other voice-based context signals (stress, urgency, group interactions) that can inform the venue recommendation agent.
Venue recommendation is an information access scenario for citizens within a “smart city” – indeed, smart city sensors can be used to augment venue recommendation with information about which areas of the city are busy.
References
[Man18] Contextual Attention Recurrent Architecture for Context-aware Venue Recommendation. Jarana Manotumruksa, Craig Macdonald and Iadh Ounis. In Proceedings of SIGIR 2018.
[Man17] A Deep Recurrent Collaborative Filtering Framework for Venue Recommendation. Jarana Manotumruksa, Craig Macdonald and Iadh Ounis. In Proceedings of CIKM 2017.
[Dev15] Experiments with a Venue-Centric Model for Personalised and Time-Aware Venue Suggestion. Romain Deveaud, Dyaa Albakour, Craig Macdonald, Iadh Ounis. In Proceedings of CIKM 2015.

Sean Westwood
I completed my undergraduate and postgraduate degrees in the School of Psychology here at the University of Glasgow. For my PhD I will be continuing to work under the supervision of Dr Marios Philiastides, who specialises in the neuroscience of decision making and has guided me through my MSc. I will also be working under Professor Alessandro Vinciarelli from the School of Computing Science, who specialises in developing computational models involved in human-AI interactions.
My main research interests are reinforcement learning and decision making in humans, as well as the neurological basis for individual differences between people. These interests stem from my background in childcare and sports coaching. For my undergraduate dissertation I studied how gender and masculinity impact risk-taking under stress to investigate why people may act differently in response to physiological arousal. My postgraduate research has focused on links between noradrenaline and aspects of reinforcement learning, using pupil dilation as a measure of noradrenergic activity.
I am looking forward to continuing with this line of research, with the aim of building computational models that reflect individual patterns of learning based on pupil data. It is my hope that this will open up exciting possibilities for AI programmes that are able to dynamically respond to individual needs in an educational context.
Neurobiologically-informed optimization of gamified learning environments
Value-based decisions are often required in everyday life, where we must incorporate situational evidence with past experiences to work out which option will lead to the best outcome. However, the mechanisms that govern how these two factors are weighted are not yet fully understood. Gaining insight into these processes could greatly help towards the optimisation of feedback in gamified learning environments. This project aims to develop a closed-loop biofeedback system that leverages unique ways of fusing electroencephalographic (EEG) and pupillometry measurements to investigate the utility of the noradrenergic arousal system in value judgements and learning.
In recent years, it has become well established that pupil diameter consistently varies with certain decision-making variables such as uncertainty, prediction errors and environmental volatility (Larsen & Waters, 2018). The noradrenergic (NA) arousal system in the brainstem is thought to be driving the neural networks involved in controlling these variables. Despite the increasing popularity of pupillometry in decision-making research, there are still many aspects that remain unexplored, such as the role of the NA arousal system in regulating learning rate, which is the rate at which new evidence outweighs past experiences in value-based decisions (Nassar et al., 2012).
Developing a neurobiological framework of how NA influences feedback processing and the effect it has on learning rates can potentially enable the dynamic manipulation of learning. Recent studies have used real-time EEG analysis to manipulate arousal levels in a challenging perceptual task, showing that it is possible to improve task performance by manipulating feedback (Faller et al., 2019).
A promising area of application of such real-time EEG analysis is the gamification of learning, particularly in digital learning environments. Gamification in a pedagogical context is the idea of using game features (Landers, 2014) to enable a high level of control over stimuli and feedback. This project aims to dynamically alter learning rates via manipulation of the NA arousal system, using known neural correlates associated with learning and decision making such as attentional conflict and levels of uncertainty (Sara & Bouret, 2012). Specifically, the main aims of the project are:
- To model the relationship between EEG, pupil diameter and dynamic learning rate during reinforcement learning (Fouragnan et al., 2015); a minimal illustrative sketch of such a pupil-modulated learning rate is given after this list.
- To model the effect of manipulating arousal, uncertainty and attentional conflict on dynamic learning rate during reinforcement learning.
- To develop a digital learning environment that allows for these principles to be applied in a pedagogical context.
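The sketch below, a minimal illustration of the first aim, implements a delta-rule learner whose learning rate is scaled, trial by trial, by a simulated pupil-linked arousal signal. The coupling between the pupil response and the unsigned prediction error, and the form of the modulation, are assumptions made for illustration; in the project these relationships would be estimated from EEG and pupillometry data rather than fixed in advance.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simple probabilistic reward task with a reversal halfway through.
n_trials = 300
p_reward = np.r_[np.full(150, 0.8), np.full(150, 0.2)]

value = 0.5
base_alpha, coupling = 0.10, 0.30
values = np.empty(n_trials)

for t in range(n_trials):
    reward = float(rng.random() < p_reward[t])
    prediction_error = reward - value
    # Assumed pupil response: larger dilation for larger unsigned prediction
    # errors, plus measurement noise, normalised to [0, 1]. In the project this
    # would be measured, time-locked to feedback, alongside EEG.
    pupil = np.clip(abs(prediction_error) + rng.normal(0, 0.1), 0, 1)
    alpha = base_alpha * (1 + coupling * pupil)          # dynamic learning rate
    value += alpha * prediction_error
    values[t] = value

print(f"learned value before reversal: {values[149]:.2f}, after: {values[-1]:.2f}")
```

Fitting the coupling parameter to behavioural, pupil and EEG data, and then manipulating arousal to shift it, corresponds to the second aim; embedding the task in a gamified environment corresponds to the third.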
Understanding the potential role of the NA arousal system in the way we learn, update beliefs and explore new options could have significant implications in the realm of education and performance. This project will facilitate the creation of an online learning environment which will provide an opportunity to benchmark the utility of neurobiological markers in an educational setting. Success in this endeavour would pave the way for a wide variety of adaptations to learning protocols that could in turn empower a level of learning optimisation and individualisation, as feedback is dynamically and continuously adapted to the needs of the learner.
References
[FAL19] Faller, J., Cummings, J., Saproo, S., & Sajda, P. (2019). Regulation of arousal via online neurofeedback improves human performance in a demanding sensory-motor task. Proceedings of the National Academy of Sciences, 116(13), 6482-6490.
[FOU15] Fouragnan, E., Retzler, C., Mullinger, K., & Philiastides, M. G. (2015). Two spatiotemporally distinct value systems shape reward-based learning in the human brain. Nature communications, 6, 8107.
[LAN14] Landers, R. N. (2014). Developing a theory of gamified learning: Linking serious games and gamification of learning. Simulation & Gaming, 45(6), 752-768.
[LAR18] Larsen, R. S., & Waters, J. (2018). Neuromodulatory correlates of pupil dilation. Frontiers in neural circuits, 12, 21.
[NAS12] Nassar, M. R., Rumsey, K. M., Wilson, R. C., Parikh, K., Heasly, B., & Gold, J. I. (2012). Rational regulation of learning dynamics by pupil-linked arousal systems. Nature neuroscience, 15(7), 1040.
[SAR12] Sara, S. J., & Bouret, S. (2012). Orienting and reorienting: the locus coeruleus mediates cognition through arousal. Neuron, 76(1), 130-141.