Cohort 2 (2020-2024)

Morgan Bailey (CDT Candidate)


I am a current PhD student within the SOCIAL CDT at the University of Glasgow. My research focuses on social intelligence and how we can draw on it when building human-AI teams, and will be carried out in collaboration with Qumodo, whose main goal is to advance human-AI teaming. Qumodo have previously developed software for the Home Office, which drew me to this project, as I have a passion for conducting research with real-world applications that ultimately improve quality of life.

Prior to joining the SOCIAL CDT, I completed a BSc in Psychology and an MSc in Psychological Research Methods at the University of Plymouth. My undergraduate dissertation looked at Psycholinguistics, with a focus on the relationship between phonemic awareness and reading ability in the general population; having been diagnosed with dyslexia at the age of 8, language acquisition has always fascinated me. My love of Psycholinguistics continued into my MSc, but as I wanted to expand my knowledge and challenge myself with different areas of research, I chose to focus my MSc dissertation on the effects of pitch on social cooperation between humans and robots. During my literature review, I discovered how complex and intriguing the relationship between humans and AI is, which motivated me to look for future opportunities to conduct more research into improving the relationships between humans and AI.

In my opinion, the most appealing aspect of the SOCIAL CDT is the unique opportunity to work with a group of academics who have such a diverse set of skills and interests; it is inspiring to be working as part of a team who can all help train and support each other.

Social Intelligence towards Human-AI Teambuilding

Supervisors: Frank Pollick (School of Psychology) and Reuben Moreton (Qumodo).

Visions of the workplace-of-the-future include applications of machine learning and artificial intelligence embedded in nearly every aspect (Brynjolfsson & Mitchell, 2017). This “digital transformation” holds promise to broadly increase effectiveness and efficiency. A challenge to realising this transformation is that the workplace is substantially a human social environment and machines are not intrinsically social. Imbuing machines with social intelligence holds promise to help build human-AI teams, and current approaches to teaming one human with one machine appear reasonably straightforward to design. However, when more than one human and more than one system work together, the complexity of social interactions increases, and we need to understand the society of human-AI teams. This research proposes to take a first step in this direction by considering the interaction of triads containing humans and machines.

Our proposed testbed will be concerned with automatic image classification; we chose this domain because identity and location recognition is a primary work context of our industrial partner Qumodo. Moreover, many image classification systems have recently shown the ability to approach or exceed human performance. There are two scenarios we would like to examine involving human-AI triads, which we term the sharing problem and the consensus problem:

In the sharing problem we examine two humans teamed with the same AI and examine how the human-AI team is influenced by the learning style of the AI, which after initial training can either learn from a single trainer or from multiple trainers. We will examine how trust in the classifier evolves depending upon the presence/absence of another trainer and the accuracy of the other trainer(s). To obtain precise control the “other” trainer(s) could either be actual operators or simulations obtained by parametrically modifying accuracy based on ground truth. Of interest are the questions of when human-AI teams benefit from pooling of human judgment and if pooling can lead to reduced trust.

In the consensus problem we use the scenario of a human manager who must reach a consensus view based on input from a pair of judgments (human-human, human-AI). This consensus will be reached either with or without “explanation” from the two judgments. To make the experiment tractable we will consider the case of a binary decision (e.g. two facial images are of the same person or a different person). Aspects of the design will be taken from a recent paper examining recognition of identity from facial images (Phillips, et al., 2018).

In addition to these experimental studies we also wish to conduct qualitative studies involving surveys or structured interviews in the workplace, to ascertain whether the experimental results are consistent with people’s attitudes towards the scenarios depicted in the experiments.

As industry moves further towards AI automation, this research will have substantial impact on future practices within the workplace. Even as AI performance increases, in most scenarios a human is still required to be in the loop, yet there has been very little research into what such human-AI integration and interaction should look like. This research is therefore of pressing importance across the many sectors moving towards automation.

[BRY17] Brynjolfsson, E., & Mitchell, T. (2017). What can machine learning do? Workforce implications. Science, 358(6370), 1530-1534.

[PHI18] Phillips, P. J., Yates, A. N., Hu, Y., Hahn, C. A., Noyes, E., Jackson, K., … & Chen, J. C. (2018). Face recognition accuracy of forensic examiners, superrecognizers, and face recognition algorithms. Proceedings of the National Academy of Sciences, 115(24), 6171-6176.

Jacqueline Borgstedt (CDT Candidate)

I am a current SOCIAL PhD Student interested in how socially intelligent artificial systems can be shaped and adopted in order to improve the mental health and well-being of humans. My doctoral research explores how multimodal interaction between robots and humans can facilitate the reduction of stress or anxiety and may foster emotional support. Specifically, I am investigating the role of different aspects of touch during robot-human interaction and its potential for improving psychological well-being.

Prior to my PhD studies, I completed an MA in Psychology at the University of Glasgow. During my undergraduate studies, I was particularly interested in the neural circuitry underlying emotion recognition and regulation. As part of my undergraduate dissertation, I thus investigated emotion recognition abilities and sensitivity to affective expressions in epilepsy patients. Further research interests include the evaluation and development of interventions for OCD, anxiety disorders and emotion regulation difficulties.

During my PhD I am looking forward to integrating my knowledge of psychological theories within the development and evaluation of socially assistive robots. Furthermore, I hope to contribute to novel solutions for the application of robots in mental health interventions, as well as the enhancement of human-robot interaction.

The Project: Multimodal Interaction and Huggable Robots.

Supervisors: Stephen Brewster (School of Computing Science) and Frank Pollick (School of Psychology).

The aim of the project is to investigate the combination of Human Computer Interaction and social/huggable robots for care, the reduction of stress and anxiety, and emotional support. Existing projects, such as Paro and the Huggable, focus on very simple interactions. The goal of this PhD project will be to create more complex feedback and sensing to enable a richer interaction between the human and the robot.

The plan would be to study two different aspects of touch: thermal feedback and squeeze input/output. These are key aspects of human-human interaction but have not been studied in human-robot settings where robots and humans come into physical contact.

Thermal feedback has strong associations with emotion and social cues [Wil17]. We use terms like ‘warm and loving’ or ‘cold and distant’ in everyday language. By investigating different uses of warm and cool feedback we can facilitate different emotional relationships with a robot. (This could be used alongside more familiar vibration feedback, such as purring). A series of studies will be undertaken looking at how we can use warming/cooling, rate of change and amount of change in temperature to change responses to robots. We will study responses in terms of, for example, valence and arousal.

We will also look at squeeze interaction from the device. Squeezing in real life offers comfort and support. One half of this task will look at squeeze input, with the human squeezing the robot. This can be done with simple pressure sensors on the robot. The second half will investigate the robot squeezing the arm of the human. For this we will need to build some simple hardware. The studies will look at human responses to squeezing, the social acceptability of these more intimate interactions, and emotional responses to them.

The output of this work will be a series of design prototypes and UI guidelines to help robot designers use new interaction modalities in their robots. The impact of this work will be to enable robots to have a richer and more natural interaction with the humans they touch. This has many practical applications for the acceptability of robots for care and emotional support.

[Wil17] Wilson, G., and Brewster, S.: Multi-moji: Combining Thermal, Vibrotactile & Visual Stimuli to Expand the Affective Range of Feedback. In Proceedings of the 35th Conference on Human Factors in Computing Systems – CHI ’17, ACM Press, 2017.

Robin Bretin (CDT Candidate)


From the deep forests of Sologne in France, I grew to become an extremely curious person, sensitive to my environment and the beings living in it. This curiosity and sensitivity led me, one way or another, to the National Graduate School of Cognitive Engineering (ENSC) in Bordeaux, France. Cognitics aims to understand and improve human-machine symbiosis, in terms of performance, substitution, safety, ease and comfort, and to augment humans through technology.

I am passionate about our future and the infinite possibilities that are presented to us. Being part of the SOCIAL CDT programme as a PhD student is the first step in what I hope will be a great journey toward the integration of new technologies in our society, designed around and for humanity.

Flying Robot Friends: Studying Social Drones in Virtual Reality

Supervisors: Mohamed Khamis (School of Computing Science) and Emily Cross (School of Psychology)

Designing and implementing autonomous drones to assist and interact with people in social contexts, so-called “social drones”, is an emerging area of robotics. Human-drone interaction (HDI) applications range from supporting users in exercising and recreation [1], to providing navigation cues [2] and serving as flying interfaces [3]. To truly make drones “social”, we must understand how humans perceive them and behave around them. Thus, researchers have traditionally run experiments that allow observing users in direct contact with drones. However, such studies can be difficult, expensive and inflexible. For example, it can be difficult, infeasible, or even dangerous to conduct an experiment in the real world to study the impact of different drone sizes or flying altitudes on the user’s behavior. On the other hand, if valid experiments can be conducted in immersive virtual reality (VR), researchers can reach a larger and potentially more diverse set of participants, and control environmental variables to a greater degree. For example, changing the size of a drone in VR involves merely changing a variable’s value. Similarly, drones in VR are not bound by the physical limitations of the real world. But if VR-based studies of human-drone interaction are to be used as a springboard for informing our understanding of HDI in the real world, it is imperative to understand the extent to which findings generalize to in situ HDI.

This project aims to explore the use of immersive virtual reality (VR) as a test bed for studying human behavior around social drones. The main objectives are to understand whether results from studies conducted in VR would match results from corresponding real-world settings, and to use VR to inform embodied/in-person HDI studies. As prior work suggests [5], it is expected that some behaviors will be similar across VR and the real world, thereby allowing researchers to use VR as an alternative to real-world studies in some contexts. However, understanding the limitations of VR for developing social drones will be vital as well.

To this end, the project will involve studying and comparing human proxemic behavior around drones both in VR and in the real world. While proxemic behavior has been investigated for human-robot interaction [4], it has never been studied when interacting with drones. It is expected that attributes of drones, such as their flying altitude or their size, impact how people distance themselves from drones. As has also been studied in HRI proxemics, the extent to which people’s preferred distance to social drones differs based on first and third person viewpoints is also of interest. The results from the real-world and VR studies will be compared to allow an assessment of the opportunities, challenges, and limitations of using virtual reality to conduct experiments on introducing drones in close quarters to people to serve social purposes, and to provide guidelines that can help researchers decide whether to employ VR in their experiments and what factors to account for if doing so.

[1] Florian “Floyd” Mueller and Matthew Muirhead. 2015. Jogging with a Quadcopter. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15). Association for Computing Machinery, New York, NY, USA, 2023–2032.

[2] Pascal Knierim, Steffen Maurer, Katrin Wolf, and Markus Funk. 2018. Quadcopter-Projected In-Situ Navigation Cues for Improved Location Awareness. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). Association for Computing Machinery, New York, NY, USA, Paper 433, 1–6.

[3] Markus Funk. 2018. Human-drone interaction: let’s get ready for flying user interfaces! interactions 25, 3 (May-June 2018), 78–81.

[4] Jonathan Mumm and Bilge Mutlu, “Human-robot proxemics: Physical and psychological distancing in human-robot interaction,” 2011 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Lausanne, 2011, pp. 331-338.

[5] Ville Mäkelä, Rivu Radiah, Saleh Alsherif, Mohamed Khamis, Chong Xiao, Lisa Borchert, Albrecht Schmidt, and Florian Alt. 2020. Virtual Field Studies: Conducting Studies on Public Displays in Virtual Reality. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20). Association for Computing Machinery.

Christopher Chandler (CDT Candidate)


I am a PhD student on the SOCIAL CDT interested in societies of humans and machines.  The focus of my research is on developing formal models of trust between humans and autonomous robots, using data harvested from a mobile game and techniques in formal verification to test model assumptions.

As a philosophy undergraduate at the University of Aberdeen, I was interested in modal metaphysics, gravitating to logic and issues in the foundations of mathematics in the later stages of my degree.  My dissertation investigated the status of axiomatic set theory as fundamental ontology for mathematics, given controversies in the search for large cardinal axioms.  However, I started out studying for a joint honours in philosophy and psychology, and so gained fair experience in the latter.  I subsequently completed a Master's in software development at the University of Glasgow, with a dissertation applying model checking to gain insight into user engagement for a mobile game, and spent a couple of years thereafter developing analytics for a company in the commercial HVAC and refrigeration space.

I look forward to working with a diverse team of talented people, challenging discussions and the inevitable maturation of thought that follows.  It is my hope to make a novel contribution to the development of practical and reliable autonomous systems capable of integrating into human society.

Game-based techniques for the investigation of trust for autonomous robots

Supervisors: Frank Pollick (School of Psychology) and Alice Miller (School of Computing Science)

Trustworthiness is a property of an agent or organisation that engenders trust in others. Humans rely on trust in their day-to-day social interactions, be they in the context of personal relationships, commercial negotiation, or organisational consultation (with healthcare providers or employers for example). Social success therefore relies on the evaluation of the trustworthiness of others, and our own ability to present ourselves as trustworthy. If autonomous agents are to be used in a social environment, it is vital that we understand the concept of trustworthiness in this context [DEV18].

Some formal models of trust for autonomous systems have been proposed (e.g. [BAS16]), but these models are geared specifically towards autonomous vehicles. Any proposed model must be evaluated by testing. In many cases this would involve deploying complex hardware in sufficiently realistic scenarios in which trust would be a consideration. However, it is also possible to investigate trust in other scenarios. For example, it has been shown that different interfaces to an automatic image classifier change the calibration of human trust towards the classifier [ING20]. Relevant to social processing, in [GAL19] trust was examined via the use of videos. Here, the responses of human participants to videos involving an autonomous robot in a range of scenarios were used to investigate different aspects of trust.

Another way to generate user data to test formal models is to use mobile games.  In a recent paper [KAV19], a model of the way that users play games was used to investigate a concept known as game balance. A software tool known as a probabilistic model checker [KWI17] was used to predict user behaviour under the assumptions of the model. Subsequently, the game was released in order to generate user data with which to evaluate the credibility of the model.

In this PhD project you will use a similar technique to evaluate trust for autonomous systems. The crucial aspects are the formal models of trust and the question of how to design a suitable game so that the way users respond to different scenarios reflects how much they trust the (autonomous robot or animated) characters in the game. You will:

  1. Develop and evaluate models of trust for autonomous robots
  2. Devise a mobile game for which players will respond according to their trust in autonomous robot or animated characters
  3. Use an automatic technique such as model checking or simulation to determine player behaviour under the assumptions of your trust models
  4. Analyse how well player behaviour matches that predicted using model checking
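To make step 3 concrete, here is a minimal sketch of the kind of simulation involved, assuming a toy trust model in which a player's willingness to delegate to a robot character rises and falls with the robot's observed success (the function name, update rule and probabilities below are illustrative assumptions, not part of the project):

```python
import random

# Purely illustrative: simulate players whose probability of delegating a
# task to an autonomous robot character depends on the robot's observed
# success rate. This is a toy trust-update rule, not the project's model.

def simulate_player(robot_success_rate, rounds=100, seed=None):
    """Return the fraction of rounds in which the player delegated."""
    rng = random.Random(seed)
    trust = 0.5                    # initial trust in the robot character
    delegations = 0
    for _ in range(rounds):
        if rng.random() < trust:   # player chooses to delegate
            delegations += 1
            success = rng.random() < robot_success_rate
            # trust rises slowly after a success, drops faster after a failure
            trust = min(1.0, trust + 0.05) if success else max(0.0, trust - 0.1)
    return delegations / rounds

# A more reliable robot should, on average, be delegated to more often.
print(simulate_player(0.9, seed=1), simulate_player(0.4, seed=1))
```

Under a formal treatment, the same update rule would become transition probabilities in a model amenable to probabilistic model checking, and the simulated (or real) player data would be compared against the checker's predictions.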

[DEV18] Trustworthiness of autonomous systems – K. Devitt, Foundations of Trusted Autonomy, Studies in Systems, Decision and Control, 2018.

[BAS16] Trust dynamics in human autonomous vehicle interaction: a review of trust models – C. Basu et al. AAAI 2016.

[ING20] Calibrating trust towards an autonomous image classifier: a comparison of four interfaces – M. Ingram et al., submitted.

[GAL19] Trusting Robocop: Gender-Based Effects on Trust of an Autonomous Robot – D. Gallimore et al. Frontiers in Psychology 2019

[KAV19] Balancing turn-based games with chained strategy generation – W. Kavanagh, A. Miller et al., IEEE Transactions on Games 2019.

[KWI17] Probabilistic model checking: advances and applications – M. Kwiatkowska et al. Formal System Verification 2017.

Radu Chirila (CDT Candidate)


My research focuses on designing an artificial smart e-Skin with embedded microactuators, able to convey expressions and emotions biomimetically.  To achieve this, we first need to understand the stimuli behind human emotions and how these link to the various kinematic aspects of the human body.

I come to the SOCIAL CDT with a diverse technical background. I have worked as a software engineer in the United States of America (focusing on developing low-level neural network nodes) and as an electronic systems consulting intern in the UK. My academic journey started with a Bachelor of Engineering (BEng) degree in Electronics and Electrical Engineering at the University of Glasgow. Joining the SOCIAL CDT was a natural step in my technical evolution, given the increasing capability of machine intelligence algorithms and the essential role social robotics plays in today’s world.

Soft eSkin with Embedded Microactuators

Supervisors: Philippe Schyns (School of Psychology) and Ravinder Dahiya (School of Engineering).

Research on tactile skin, or e-skin, has attracted significant interest recently as it is the key underpinning technology for safe physical interaction between humans and machines such as robots. Thus far, eSkin research has focussed on imitating some of the features of human touch sensing. However, skin is not just for feeling the real world; it is also a medium for expressing feeling through gestures. For example, the skin on the face, which can fold and wrinkle into specific patterns, allows us to express emotions such as varying degrees of happiness, sadness or anger. Yet this important role of skin has not received any attention so far. Here, for the first time, this project will explore the emotion signal generation capacities of skin by developing programmable soft e-skin patches with embedded micro actuators that will emulate real skin movements. Building on the flexible and soft electronics research in the James Watt School of Engineering and the social robotics research in the Institute of Neuroscience & Psychology, this project aims to achieve the following scientific and technological goals:

  • Identify suitable actuation methods to generate simple emotive features such as wrinkles on the forehead
  • Develop a soft eSkin patch with embedded microactuators
  • Use dynamic facial expression models for specific movement patterns in the soft eSkin patch
  • Develop an AI approach to program and control the actuators

Industrial Partners: Project briefly discussed with BMW. They have shown interest, but details could not be discussed as currently we are in the process of filing a patent application.

Serena Dimitri (CDT Candidate)

I started my journey with a BSc in Psychology at the University of Pavia, where I learned to appreciate individual differences. Following this, I undertook a joint Master’s degree at the University of Pavia and the University School of Advanced Studies, where I specialized in Neuroscience with a growing will to understand more about the brain. During my BSc and MSc, I took part in exchange programmes at the Complutense University of Madrid, Trinity College Dublin and the University of Plymouth, where I encountered different ways of doing research. I worked as a researcher on a three-year project, which culminated in my Master’s dissertation: “Neuroscience, Psychology and Computer Science: An Innovative Software on Aggressive Behaviour”. My research interests follow my academic path and my personality: I am mainly captivated by the exploration of how the brain and individuals react to technology, and how to shape technology to interact effectively with individuals. I am now a PhD student at the University of Glasgow, where I have found the perfect harmony between my psychology background, my neuroscience studies and the world of AI and computer science. These three are, for me, my knowledge, my specialization and my greatest interest.

The Project: Testing social predictive processing in virtual reality.

Supervisors: Lars Muckli (School of Psychology) and Alice Miller (School of Computing Science).

Virtual reality (VR) is a powerful entertainment tool allowing highly immersive and richly contextual experiences. At the same time, it can be used to flexibly manipulate the 3D (virtual) environment, allowing behavioural experiments to be tailored systematically. VR is particularly useful for social interaction research, because the experimenter can manipulate rich and realistic social environments and have participants behave naturally within them [RB18].

While immersed in VR, a participant builds an inner map of the virtual space and stores multiple expectations about the mechanics of the environment, i.e. where objects or rooms are and how to interact with them, but also about the physical and emotional properties of virtual agents (e.g. theory of mind). Using this innovative and powerful technology, it is possible to manipulate both the virtual space and the virtual agents within it, to test participants’ internal expectations and register their reactions to predictable and unpredictable scenarios.

The phenomenon of “change blindness” demonstrates the surprising difficulty observers have in noticing unpredictable changes to visual scenes [SR05]. When presented with two almost identical images, people can fail to notice small changes (e.g. in object colour) and even large changes (e.g. object disappearance). This arises because the brain cannot attend to the entire wealth of environmental signals presented to our visual systems at any given moment, and instead uses attentional networks to selectively process the most relevant features whilst ignoring others. Testing which environmental attributes drive the detection of changes can give useful insights into how humans use predictive processing in social contexts.

In this PhD the student will run behavioural and brain imaging experiments in which they will use VR to investigate how contextual information drives predictive expectations in relation to changes to the environment and the agents within it. They will investigate whether change detection is due to visual attention or to a social cognitive mechanism such as empathy. This will involve testing word recognition whilst taking the visuospatial perspective of agents previously seen in the VR (e.g. [FKS18]). The student will examine whether social contextual information originating in higher brain areas modulates the processing of visual information.  In the brain imaging literature, an effective method to study contextual feedback information is the occlusion paradigm [MPM19]. Cortical layer specific fMRI is possible with 7T brain imaging; the student will test how top-down signals during social cognition activate specific layers of cortex. These data would contribute to redefining current theories explaining the predictive nature of the human brain.

The student will also develop quantitative models in order to assess developed theories. In recent work [PMT19], model checking was proposed as a simple technology to test and develop brain models. Model checking [CHVB18] involves building a simple, finite state model, and defining temporal properties which specify behaviour of interest.  These properties can then be automatically checked using exhaustive search. Model checking can replace the need to perform thousands of simulations to measure the effect of an intervention, or of a modification to the model.
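As a toy illustration of the idea (a hypothetical sketch, not the actual tooling used in [PMT19] or [CHVB18]): a finite state model can be explored exhaustively, checking every reachable state against a property and returning a counterexample trace when the property fails.

```python
from collections import deque

def model_check(initial, successors, invariant):
    """Exhaustive breadth-first search over a finite state space.
    Returns a counterexample path to the first state violating the
    invariant, or None if the invariant holds in all reachable states."""
    frontier = deque([(initial, [initial])])
    visited = {initial}
    while frontier:
        state, path = frontier.popleft()
        if not invariant(state):
            return path  # counterexample trace
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None  # property verified for every reachable state

# Example: a counter that either increments (wrapping at 8) or resets;
# check the safety property "the counter stays below 5".
succ = lambda n: [(n + 1) % 8, 0]
print(model_check(0, succ, lambda n: n < 5))  # [0, 1, 2, 3, 4, 5]
```

Real model checkers additionally handle probabilistic transitions and rich temporal logics, but this exhaustive-search core is what replaces the need for thousands of individual simulations, as noted above.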

[MPM19] Morgan, A. T., Petro, L. S., & Muckli, L. (2019). Scene representations conveyed by cortical feedback to early visual cortex can be described by line drawings. Journal of Neuroscience, 39(47), 9410-9423.

[SR05] Simons, D. J., & Rensink, R. A. (2005). Change blindness: Past, present, and future. Trends in cognitive sciences, 9(1), 16-20.

[RB18] de la Rosa, S., & Breidt, M. (2018). Virtual reality: A new track in psychological research. British Journal of Psychology, 109(3), 427-430.

[FKS18] Freundlieb, M., Kovács, Á. M., & Sebanz, N. (2018). Reading Your Mind While You Are Reading—Evidence for Spontaneous Visuospatial Perspective Taking During a Semantic Categorization Task. Psychological Science, 29(4), 614-622.

[PMT19] Porr, B., Miller, A., & Trew, A. (2019). An investigation into serotonergic and environmental interventions against depression in a simulated delayed reward paradigm. Adaptive behaviour, (online version available).

[CHVB18] Clarke, E. M., Henzinger, T. A., Veith, H., & Bloem, R. (2018). Handbook of model checking. Springer.

Andreas Drakopoulos (CDT Candidate)

It is a pleasure to be joining the SOCIAL CDT at the University of Glasgow as a PhD student. My research is concerned with how humans perceive virtual and physical space, the simultaneous modelling of the two, and determining whether they are represented by different areas in the brain.

I come to the centre from a mathematical background: I completed a BSc and MSc in Mathematics at the Universities of Glasgow and Leeds respectively, gravitating towards pure mathematics. My undergraduate dissertation was on Stone duality, a connection between algebra and geometry expressed in the language of category theory; my master’s dissertation focused on the Curry-Howard correspondence, which is the observation that aspects of constructive logic harmonise with aspects of computation (e.g. proofs can be viewed as programs).

I developed my academic skills by studying abstract mathematics, and I am excited to now have the opportunity to use them in an applied setting. I am also particularly looking forward to being part of a group with diverse backgrounds and interests, something that drew me to the CDT in the first place.

The Project: Optimising Interactions with Virtual Environments

Supervisors: Michele Sevegnani (School of Computing Science) and Monika Harvey (School of Psychology).

Virtual and Mixed Reality systems are socio-technical applications in which users experience different configurations of digital media and computation that give different senses of how a “virtual environment” relates to their local physical environment. In Human-Computer Interaction (HCI), we recently developed computational models capable of representing physical and virtual space, solving the problem of how to recognise virtual spatial regions starting from the detected physical position of the users (Benford et al., 2016). The models are bigraphs [MIL09], derived from the universal computational model introduced by Turing Award Laureate Robin Milner. Bigraphs encapsulate both the dynamic and spatial behaviour of agents that interact and move among each other, or within each other. We used the models to investigate cognitive dissonance, namely the inability or difficulty to interact with the virtual environment.

How the brain represents physical versus virtual environments is also an issue very much debated within Psychology and Neuroscience, with some researchers arguing that the brain makes little distinction between the two [BOZ12]. Yet, more in line with Sevegnani’s work, Harvey and colleagues have shown that different brain areas represent these different environments and that they are further processed on different time scales [HAR12; ROS09]. Moreover, special populations struggle more with virtual than with real environments [ROS11].

The overarching goal of this PhD project is, therefore, to adapt the computational models developed in HCI and apply them to psychological scenarios, to test whether the environmental processing within the brain is different as proposed. This information will then refine the HCI model and ideally allow a refined application to special populations.
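As a purely illustrative sketch of the spatial half of such a model (the nesting of agents within regions), assuming a toy dictionary representation rather than real bigraph tooling such as BigraphER:

```python
# Toy sketch of the *place graph* half of a bigraph: agents nested inside
# regions, plus one reaction rule that relocates an agent between regions.
# Purely illustrative; actual bigraph models [MIL09] are far richer.

place_graph = {
    "physical": {"room_a": ["user"], "room_b": []},
    "virtual":  {"zone_1": ["avatar"], "zone_2": []},
}

def move(graph, space, src, dst, agent):
    """Reaction rule: remove the agent from one region, nest it in another."""
    graph[space][src].remove(agent)
    graph[space][dst].append(agent)

# The user walks into room_b; the mapped avatar follows in virtual space,
# keeping the physical and virtual regions aligned.
move(place_graph, "physical", "room_a", "room_b", "user")
move(place_graph, "virtual", "zone_1", "zone_2", "avatar")
print(place_graph["physical"])  # {'room_a': [], 'room_b': ['user']}
```

A mismatch between where the tracked user is and which virtual region the system believes they occupy is exactly the kind of dissonance the HCI models above are designed to detect.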

[BEN16] Benford, S., Calder, M., Rodden, T., & Sevegnani, M., On lions, impala, and bigraphs: Modelling interactions in physical/virtual spaces. ACM Transactions on Computer-Human Interaction (TOCHI), 23(2), 9, 2016.

[BOZ12] Bozzacchi., C., Giusti, M.A., Pitzalis, S., Spinelli, D., & Di Russo, F., Similar Cerebral Motor Plans for Real and Virtual Actions. PLOS One (7910), e47783, 2012.

[HAR12] Harvey, M. and Rossit, S., Visuospatial neglect in action. Neuropsychologia, 50, 1018-1028, 2012.

[MIL09] Milner, R., The space and motion of communicating agents. Cambridge University Press, 2009.

[ROS11] Rossit, S., Malhotra, P., Muir, K., Reeves, I., Duncan G. and Harvey, M., The role of right temporal lobe structures in off-line action: evidence from lesion-behaviour mapping in stroke patients. Cerebral Cortex, 21 (12), 2751-2761, 2011.

[ROS09] Rossit, S., Malhotra, P., Muir, K., Reeves, I., Duncan, G., Livingstone, K., Jackson, H., Hogg, C., Castle, P., Learmonth, G. and Harvey, M., No neglect-specific deficits in reaching tasks. Cerebral Cortex, 19, 2616-2624, 2009.

Thomas Goodge (CDT Candidate)


My PhD research will be looking at Human-Car interactions in the context of autonomous vehicles, with a focus on the point of handover of control between the driver and the car.

I started studying Psychology at the University of Nottingham, with an interest in visual perception, decision making and interaction with technology. I then worked as a Research Assistant with the Transport Research in Psychology group at Nottingham Trent University. Here, I was involved in various projects looking at hazard perception across different presentation formats and in different vehicle types. Our focus was on understanding the strategies drivers use to decide what a hazard is and the risk associated with it, and then developing training interventions to try to impart these skills to new drivers. During this time, I also studied for an MSc in Information Security at Royal Holloway, University of London, looking at the decisions and attitudes people form about their personal data depending on their environment.

I am excited to be joining the SOCIAL CDT cohort and to be working with a diverse group of academics across Computer Science and Psychology. I am particularly looking forward to developing the work I have been conducting over previous years as well as learning how incorporating artificial agents for drivers to engage with can further assist and improve driver safety.

Human-car interaction

Supervisors: Steve Brewster (School of Computing Science) and Frank Pollick (School of Psychology).

The aim of this project is to investigate, in the context of social interactions, the interaction between a driver and an autonomous vehicle. Autonomous cars are sophisticated agents that can handle many driving tasks. However, they may have to hand control to the human driver in different circumstances, for example if sensors fail or weather conditions are bad [MCA16, BUR19]. This is potentially difficult for the driver, as they may not have been driving the car for a long period and have to take control quickly [POL15]. This is an important issue for car companies, as they want to add more automation to vehicles in a safe manner. Key to this problem is whether the interface would benefit from conceptualising the exchange between human and car as a social interaction.

This project will study how best to handle handovers, from the car indicating to the driver that it is time to take over, through the takeover event, to the return to automated driving. The key factors to investigate are: situational awareness (the driver needs to know what the problem is and what must be done when they take over), responsibility (whose task is it to drive at which point), the in-car context (what is the driver doing: are they asleep, talking to another passenger?), and driver skills (is the driver competent to drive, or are they under the influence?).

We will conduct a range of experiments in our driving simulator to test different types of handover situations and different types of multimodal interactions involving social cues to support the four factors outlined above.

The output will be experimental results and guidelines that can help automotive designers know how best to communicate and deal with handover situations between car and driver. We currently work with companies such as Jaguar Land Rover and Bosch, and our results will have direct application in their products.

[MCA16] McCall, R., McGee, F., Meschtscherjakov, A. and Engel, T., Towards A Taxonomy of Autonomous Vehicle Handover Situations. In Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications, pp. 193–200, 2016.

[BUR19] Burnett, G., Large, D. R. & Salanitri, D., How will drivers interact with vehicles of the future? Royal Automobile Club Foundation for Motoring Report, 2019.

[POL15] Politis, I., Pollick, F. and Brewster, S., Language-based multimodal displays for the handover of control in autonomous cars. In Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications, pp. 3–10, 2015.

Casper Hyllested (CDT Candidate)


I am a PhD student in the SOCIAL CDT and my research focuses primarily on understanding, deconstructing, and measuring the individual differences in people and data. The primary current aim is to implement this understanding to create both generalizable and individualized user models, allowing virtual agents to adapt to the dynamic state and trait profiles that can influence a person’s behaviors and responses.

During my psychology undergraduate at the University of Glasgow I was particularly interested in the patterns which could be attributed to the plethora of individual differences that underlie most research in psychology. I specialized largely in reliability in research and generalizability theory, and my dissertation was predominantly focused on how responses could vary dynamically over time. Since then I have begun to explore the versatility of situated measures, having participants respond to a set of situations rather than generalized items on a questionnaire, once more with the same focus of uncovering the variance in data that forms the composition of any individual or group-wide responses.

Exploring both situational and individual differences, not to mention the unlimited pool of potential facets and confounds that amalgamate to generate just a single response or behavior, is a daunting task. Implementing programming and virtual agents can simultaneously allow for easier collection and analysis of the required data. In turn, categorizing underlying personal mechanics may allow virtual agents to better tailor themselves to, and understand, the individuals they encounter. It will still require copious amounts of time and expertise from a multitude of different fields, which is why I am particularly looking forward to collaborating in a cohort of people with very different backgrounds in a highly interdisciplinary setting.

A framework for establishing situated and generalizable models of users in intelligent virtual agents

Supervisors: Christoph Scheepers (School of Psychology) and Stacy Marsella (School of Psychology)

Aims: Increasing research suggests that intelligent virtual agents are most effective and accepted when they adapt themselves to individual users. One way virtual agents can adapt to different individuals is by developing an effective model of a user’s traits and using it to anticipate dynamically varying states of these traits as situational conditions vary. The primary aims of the current work are to develop: (1) empirical methods for collecting data to build user models, (2) computational procedures for building models from these data, (3) computational procedures for adapting these models to current situations. Although the project’s primary goal is to develop a general framework for building user models, we will also explore preliminary implementations in digital interfaces.

Novel Elements: One standard approach to building a model of a user’s traits—Classical Test Theory—uses a coherent inventory of measurement items to assess a specific trait of interest (e.g., stress, conscientiousness, neuroticism). Typically, these items measure a trait explicitly via a self-report instrument or passively via a digital device. Well-known limitations of this approach include its inability to assess the generalizability of a model across situations and occasions, and its failure to incorporate specific situations into model development. In this project, we expand upon the classic approach by incorporating two new perspectives: (1) Generalizability Theory and (2) the Situated Assessment Method. Generalizability Theory will establish a general user model that varies across multiple facets, including individuals, measurement items, situations, and occasions. The Situated Assessment Method replaces standard unsituated assessment items with situations, fundamentally changing the character of assessment.

Approach: We will develop a general framework for collecting empirical data that enables building user models across many potential domains, including stress, personality, social connectedness, wellbeing, mindfulness, eating, and daily habits. The data collected—both explicit self-report and passive digital—will assess traits (and states) relevant to a domain across facets for individuals, measurement items, situations, and occasions. These data will be submitted to Generalizability Theory and the Situated Assessment Method to build user models and establish their variance profiles. Of particular interest will be how well user models generalize across facets, the magnitude of individual differences, and clusters of individuals sharing similar models. Situated and unsituated models will both be assessed to establish their relative strengths, weaknesses, and external validity. Once models are built, their ability to predict a user’s states on particular occasions will be assessed, using procedures from Generalizability Theory, the Situated Assessment Method, and autoregression. Prediction error will be assessed to establish optimal model-building methods. App prototypes will be developed and explored.
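The Generalizability Theory decomposition at the heart of this approach can be sketched for the simplest case: a fully crossed persons × situations design (one score per cell), in the spirit of the Situated Assessment Method, where rows are individuals and columns are situations. Variance components follow from the classical random-effects expected-mean-square equations; the data below are invented for illustration.

```python
import numpy as np

# Toy G-theory variance decomposition for a persons x situations design.
# Scores are invented: 3 persons rate themselves in 4 situations.
scores = np.array([[4., 5., 3., 4.],
                   [2., 3., 2., 2.],
                   [5., 5., 4., 5.]])
n_p, n_s = scores.shape

grand = scores.mean()
person_means = scores.mean(axis=1)
situation_means = scores.mean(axis=0)

# Sums of squares for the two main effects; the remainder is the
# person x situation interaction confounded with error.
ss_p = n_s * ((person_means - grand) ** 2).sum()
ss_s = n_p * ((situation_means - grand) ** 2).sum()
ss_res = ((scores - grand) ** 2).sum() - ss_p - ss_s

ms_p = ss_p / (n_p - 1)
ms_s = ss_s / (n_s - 1)
ms_res = ss_res / ((n_p - 1) * (n_s - 1))

# Variance components from the expected mean squares (floored at zero).
var_p = max((ms_p - ms_res) / n_s, 0.0)    # person (trait) variance
var_s = max((ms_s - ms_res) / n_p, 0.0)    # situation variance
var_res = ms_res                           # interaction/error variance

# Relative generalizability coefficient: how well person-level scores
# generalize over a set of n_s situations.
g_coef = var_p / (var_p + var_res / n_s)
print(round(var_p, 3), round(var_s, 3), round(g_coef, 3))  # → 1.611 0.25 0.979
```

In the project's full designs, additional facets (items, occasions) enter the decomposition the same way, and a low generalizability coefficient would flag a model that does not transfer across situations.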

Outputs and Impact: Generally, this work will increase our ability to construct and understand user models that virtual agents can employ. Specifically, we will develop novel methods that: (1) collect data for building user models; (2) assess the generalizability of models; (3) generate state-level inferences in specific situations. Besides being relevant for the development of intelligent social agents, this work will contribute to a deeper understanding of classic assessment instruments and to alternative situated measurement approaches across multiple scientific domains. More practically, the framework, methods, and app prototypes we develop are of potential use to clinicians and individuals interested in understanding both functional and dysfunctional health behaviours.

[BLO12] Bloch, R., & Norman, G. (2012). Generalizability theory for the perplexed: A practical introduction and guide: AMEE Guide No. 68. Medical Teacher, 34, 960–992.

[PED20] Pedersen, C.H., & Scheepers, C. (2020). An exploratory meta-analysis of the state-trait anxiety inventory through use of generalizability theory. Manuscript in preparation.

[DUT19] Dutriaux, L., Clark, N., Papies, E. K., Scheepers, C., & Barsalou, L. W. (2019). Using the Situated Assessment Method (SAM2) to assess individual differences in common habits. Manuscript under review.

[LEB16] Lebois, L. A. M., Hertzog, C., Slavich, G. M., Barrett, L. F., & Barsalou, L. W. (2016). Establishing the situated features associated with perceived stress. Acta Psychologica, 169,119–132.

[MAR14] Stacy Marsella and Jonathan Gratch. Computationally Modeling Human Emotion. Communications of the ACM, December, 2014.

[MIL11] Lynn C. Miller, Stacy Marsella, Teresa Dey, Paul Robert Appleby, John L. Christensen, Jennifer Klatt and Stephen J. Read. Socially Optimized Learning in Virtual Environments (SOLVE). The Fourth International Conference on Interactive Digital Storytelling (ICIDS), Vancouver, Canada, Nov. 2011.

Gordon Rennie (CDT Candidate)


Before starting at the University of Glasgow I completed an undergraduate degree in Psychology and then an MSc in Human Robot Interaction at Heriot-Watt University. Initially I was drawn to psychology because of the sheer number of unknowns in the science and the possibility of discovering new knowledge, while also being aware that it could have a real impact on improving people’s lives. My MSc continued this by taking psychological knowledge and attempting to apply it to real-world computing technologies targeted at improving people’s lives. There I began working with Conversational Agents, computer programs which attempt to speak with users using natural language.

My MSc project took one such agent, Alana, created by Heriot-Watt’s Interaction Lab, and attempted to enable it to speak with multiple users at once. This improvement built on the brilliant work done on the agent previously and gave me insight into how difficult it is to improve such systems.

The PhD I’m now studying for at SOCIAL offered me the chance to continue this vein of research by working on other areas where current conversational agents fail, specifically understanding conversational occurrences such as ‘uh’, ‘ah’, and laughter. I find conversational agents a particularly exciting area of research because of the future they promise. Imagine: a computer stops working for some unknowable reason – a common occurrence for even the most technically literate. Imagine also that you could ask it why, and how to fix the issue, in plain English, without navigating a myriad of menus. That’s the dream of voice interaction: of every user being able to interact with computers in the most natural way possible.

Language Independent Conversation Modelling

Supervisors: Alessandro Vinciarelli (School of Computing Science) and Olga Perepelkina (Neurodata Lab).

According to Emanuel Schegloff, one of the most important linguists of the 20th Century, conversation is the “primordial site of human sociality”, the setting that has shaped human communicative skills from neural processes to expressive abilities [TUR16]. This project focuses on the latter and, in particular, on the use of nonverbal behavioural cues such as laughter, pauses, fillers and interruptions during dyadic interactions. Specifically, the project targets the following main goals:

  • To develop approaches for the automatic detection of laughter, pauses, fillers, overlapping speech and back-channel events in speech signals;
  • To analyse the interplay between the cues above and social-psychological phenomena such as emotions, agreement/disagreement, negotiation, personality, etc.

The experiments will be performed over two existing corpora. One includes roughly 12 hours of spontaneous conversations involving 120 persons [VIN15], fully annotated in terms of the cues and phenomena above. The other is the Russian Acted Multimodal Affective Set (RAMAS) − the first multimodal corpus in the Russian language, including approximately 7 hours of high-quality close-up video recordings of faces, speech, motion-capture data, and physiological signals such as electro-dermal activity and photoplethysmogram [PER18].

The main motivation behind the focus on nonverbal behavioural cues is that, while they tend to be used differently in different cultural contexts, they can still be detected independently of the language being used. In this respect, an approach based on nonverbal communication promises to be more robust when applied to data collected in different countries and linguistic areas. In addition, while the importance of nonverbal communication is widely recognised in social psychology, the way certain cues interplay with social and psychological phenomena still requires full investigation [VIN19].

From a methodological point of view, the project involves the following main aspects:

  • Development of corpus analysis methodologies (observational statistics) for the investigation of the relationships between nonverbal behaviour and social phenomena;
  • Development of signal processing methodologies for the conversion of speech signals into measurements suitable for computer processing;
  • Development of Artificial Intelligence techniques (mainly based on deep networks) for the inference of information from raw speech signals.
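The second aspect, converting speech signals into machine-readable measurements, can be illustrated with the simplest possible cue: pause detection via short-time frame energy. The sketch below runs on a synthetic signal (a tone with a one-second near-silent gap); a real pipeline would use learned models such as the deep networks mentioned above, and the frame sizes and threshold here are illustrative choices, not the project's actual parameters.

```python
import numpy as np

np.random.seed(0)

# Synthetic "speech": a 220 Hz tone with a near-silent pause from 1 s to 2 s.
sr = 16000                                       # sample rate (Hz)
t = np.arange(0, 3.0, 1 / sr)
speech = 0.5 * np.sin(2 * np.pi * 220 * t)
speech[sr:2 * sr] = 0.001 * np.random.randn(sr)  # 1 s of low-level noise

# Short-time analysis: 25 ms frames with a 10 ms hop.
frame_len = int(0.025 * sr)
hop = int(0.010 * sr)
frames = [speech[i:i + frame_len]
          for i in range(0, len(speech) - frame_len, hop)]
energy = np.array([np.mean(f ** 2) for f in frames])

# A frame is a "pause" if its energy falls below 10% of the peak frame energy.
is_pause = energy < 0.1 * energy.max()

# Collapse consecutive pause frames into (start_s, end_s) segments.
segments, start = [], None
for i, p in enumerate(is_pause):
    if p and start is None:
        start = i
    elif not p and start is not None:
        segments.append((start * hop / sr, i * hop / sr))
        start = None
if start is not None:
    segments.append((start * hop / sr, len(is_pause) * hop / sr))

print(segments)   # roughly one segment near (1.0, 2.0)
```

Fillers, laughter and overlapping speech require richer features and classifiers, but they enter the corpus analysis in the same form: time-stamped segments that can then be correlated with the social-psychological annotations.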

From a scientific point of view, the impact of the project will be mainly in Affective Computing and Social Signal Processing [VIN09] while, from an industrial point of view, the impact will be mainly in the areas of Conversational Interfaces (e.g., Alexa and Siri), multimedia content analysis and, in more general terms, Social AI, the application domain encompassing all attempts to make machines capable of interacting with people the way people do with one another. For this reason, the project is based on a collaboration between the University of Glasgow and Neurodata Lab, one of the top companies in Social and Emotion AI.

[PER18] Perepelkina O., Kazimirova E., Konstantinova M. RAMAS: Russian Multimodal Corpus of Dyadic Interaction for Affective Computing. In: Karpov A., Jokisch O., Potapova R. (eds) Speech and Computer. Lecture Notes in Computer Science, vol 11096. Springer, 2018.

[TUR16] S.Turkle, “Reclaiming conversation: The power of talk in a digital age”, Penguin, 2016.

[VIN19] M.Tayarani, A.Esposito and A.Vinciarelli, “What an `Ehm’ Leaks About You: Mapping Fillers into Personality Traits with Quantum Evolutionary Feature Selection Algorithms“, accepted for publication by IEEE Transactions on Affective Computing, to appear, 2019.

[VIN15] A.Vinciarelli, E.Chatziioannou and A.Esposito, “When the Words are not Everything: The Use of Laughter, Fillers, Back-Channel, Silence and Overlapping Speech in Phone Calls“, Frontiers in Information and Communication Technology, 2:4, 2015.

[VIN09] A.Vinciarelli, M.Pantic, and H.Bourlard, “Social Signal Processing: Survey of an Emerging Domain“, Image and Vision Computing Journal, Vol. 27, no. 12, pp. 1743-1759, 2009.

Tobias Thejll-Madsen (CDT Candidate)

I am a PhD student with the SOCIAL CDT at the University of Glasgow.  My research focuses on facial expressions in social signalling and on using this knowledge to autogenerate effective humanlike facial expressions on virtual agents.  To do this, we need to understand how expressions link to underlying emotional states and social judgements, and translate this into models that the computer can use.  I’m excited to work with a range of people in both psychology and computer science.

Previously, I have completed an MA in Psychology and an MSc in Human Cognitive Neuropsychology with the University of Edinburgh.  There I focused on cognitive psychology, most recently looking at active learning in a social setting, and I am very curious about social inference and cognition in general.  However, as many others, I find it hard to just have one interest so in no particular order, I enjoy: moral philosophy/psychology, statistics and methodology, education/pedagogy (prior to my MSc, I worked developing educational resources and research), reinforcement learning, epistemology, philosophy of science, outreach and science communication, cooking, stand-up comedy, roleplaying games, staying hydrated, and basically anything outdoors.

The Project: Effective Facial Actions for Artificial Agents

Supervisors: Rachael Jack (School of Psychology) and Stacy Marsella (School of Psychology).

Face signals play a critical role in social interactions because humans make a wide range of inferences about others from their facial appearance, including emotional, mental and physiological states, culture, ethnicity, age, sex, social class, and personality traits (e.g., see Jack & Schyns, 2017). These judgments in turn impact how people interact with others, oftentimes with significant consequences such as who is hated or loved, hired or fired (e.g., Eberhardt et al., 2006). However, identifying what face features drive these social judgments is challenging because the human face is highly complex, comprising a high number of different facial expressions, textures, complexions, and 3D shapes. Consequently, no formal model of social face signalling currently exists, which in turn has limited the design of artificial agents’ faces to primarily ad hoc approaches that neglect the importance of facial dynamics (e.g., Chen et al., 2019). This project aims to address this knowledge gap by delivering a formal model of face signalling for use in socially interactive artificial agents.

Specifically, this project will a) model the space of 3D dynamic face signals that drive social judgments during social interactions, b) incorporate this model into artificial agents and c) evaluate the model in different human-artificial agent interactions. The result promises to provide a powerful improvement in the design of artificial agents’ face signalling and social interaction capabilities with broad potential for applications in wider society (e.g., social skills training; challenging stereotyping/prejudice).

The model of face signals will be derived using methods from human psychophysical perception studies (e.g., see Jack & Schyns, 2017), extending the work of Dr Jack to include a wider range of social signals used in social interactions (e.g., empathy, agreeableness, skepticism). Face signals that go beyond natural boundaries, such as hyper-realistic or super stimuli, will also be explored. The resulting model will be incorporated into artificial agents using the public-domain SmartBody animation platform (Thiebaux et al., 2008), with possible extension to other platforms. Finally, the model will be evaluated in human-agent interaction using the SmartBody platform, possibly in combination with other modalities including head and eye movements, hand/arm gestures, and transient facial changes such as blushing, pallor, or sweating (e.g., Marsella et al., 2013).
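The logic of such psychophysical modelling can be sketched in miniature: present random combinations of facial action unit (AU) activations, record a perceiver's yes/no judgment, and recover which AUs drive the judgment by contrasting accepted and rejected trials (a classification-image-style analysis). In this toy, the perceiver is simulated and the AU names and decision rule are invented; real studies use dynamic 3D face stimuli and human observers.

```python
import numpy as np

rng = np.random.default_rng(42)

# Four illustrative action units; each trial activates them at random levels.
aus = ["AU6_cheek_raiser", "AU12_lip_corner_puller",
       "AU4_brow_lowerer", "AU9_nose_wrinkler"]
n_trials = 2000
stimuli = rng.random((n_trials, len(aus)))

# Simulated perceiver: judges the face "happy" when smile-related AUs are strong.
judgment = (stimuli[:, 0] + stimuli[:, 1] > 1.2)

# Recover the perceptual model: mean AU activation on accepted minus
# rejected trials. Diagnostic AUs get large positive weights.
kernel = stimuli[judgment].mean(axis=0) - stimuli[~judgment].mean(axis=0)
for name, w in zip(aus, kernel):
    print(f"{name}: {w:+.2f}")
```

With human observers in place of the simulated rule, the recovered kernel is exactly the kind of formal, data-driven face-signal model that can then be handed to an animation platform such as SmartBody.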

Although there is no current industrial partner, we expect the work to be very relevant to companies interested in the use of virtual agents for social skills training, such as Medical CyberWorld, and companies working on realistic humanoid robots, such as Furhat and Hanson Robotics. Jack and Marsella have pre-existing relationships with these companies.

  1. Jack, R. E., & Schyns, P. G. (2017). Toward a social psychophysics of face communication. Annual review of psychology, 68, 269-297.
  2. Eberhardt, J. L., Davies, P. G., Purdie-Vaughns, V. J., & Johnson, S. L. (2006). Looking deathworthy: Perceived stereotypicality of Black defendants predicts capital-sentencing outcomes. Psychological science, 17(5), 383-386.
  3. Chen, C., Hensel, L. B., Duan, Y., Ince, R., Garrod, O. G., Beskow, J., Jack, R. E. & Schyns, P. G. (2019). Equipping Social Robots with Culturally-Sensitive Facial Expressions of Emotion Using Data-Driven Methods. In: 14th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2019), Lille, France, 14-18 May 2019, (Accepted for Publication).
  4. Stacy Marsella, Yuyu Xu, Margaux Lhommet, Andrew Feng, Stefan Scherer, and Ari Shapiro, “Virtual Character Performance From Speech”, in Symposium on Computer Animation , July 2013.
  5. Stacy Marsella and Jonathan Gratch, “Computationally modeling human emotion”, Communications of the ACM , vol. 57, Dec. 2014, pp. 56-67
  6. Marcus Thiebaux, Andrew Marshall, Stacy Marsella, and Marcelo Kallmann, “SmartBody Behavior Realization for Embodied Conversational Agents”, in Proceedings of Autonomous Agents and Multi-Agent Systems (AAMAS), 2008.

Maria Vlachou (CDT Candidate)

I come to the SOCIAL CDT from a diverse background. I hold an MSc in Data Analytics (University of Glasgow) funded by the Data Lab Scotland, a Research Master's degree (KU Leuven, Belgium), and a BSc in Psychology (Panteion University, Greece). I have worked as an Applied Behavior Analysis Trainer (Greece & Denmark) and as a Research Intern at the Department of Quantitative Psychology (KU Leuven), where we focused on statistical methods for psychology and the reproducibility of science. For the last two years I worked as a Business Intelligence Developer in the pharmaceutical industry. As an MSc student in Glasgow, I was exposed to more advanced statistical methods, and my thesis focused on auto-encoder models for dimensionality reduction.

I consider my future work on the project “Conversational Venue Recommendation” (supervised by Dr. Craig Macdonald and Dr. Philip McAleer) a natural evolution of all the above. I look forward to working on deep learning methods for building conversational AI chatbots and discovering new methods for recommendation in social networks that incorporate users’ characteristics. Overall, I am excited to be part of an interdisciplinary CDT and to have the opportunity to work with people from different research backgrounds.

The Project: Conversational Venue Recommendation

Supervisors: Craig Macdonald (School of Computing Science) and Phil McAleer (School of Psychology)

Increasingly, location-based social networks such as Foursquare, Facebook or Yelp are replacing traditional static travel guidebooks. Personalised venue recommendation is an important task for location-based social networks: it aims to suggest interesting venues that a user may visit, personalised to their tastes and current context, as might be detected from their current location, recent venue visits and historical venue visits. Recent models for venue recommendation have encompassed deep learning techniques able to make effective personalised recommendations.

Venue recommendation is typically deployed such that the user interacts with a mobile phone application. To the best of our knowledge, voice-based venue recommendation has seen considerably less research, but is a rich area for potential improvement. In particular, a venue recommendation agent may be able to elicit further preferences from the user, ask whether they prefer one venue or another, or ask for clarification about the type of venue or the distance to be travelled to the next venue.

This proposal aims to:

  • Develop and evaluate models for making venue recommendations using chatbot interfaces that can be adapted to voice through integration of text-to-speech technology, building upon recent neural network architectures for venue recommendation.
  • Integrate additional factors about the personality of the user, or other voice-based context signals (stress, urgency, group interactions), that can inform the venue recommendation agent.

Venue recommendation is an information access scenario for citizens within a “smart city” – indeed, smart city sensors can be used to augment venue recommendation with information about which areas of the city are busy.
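The core contextual scoring idea can be sketched as follows: a user vector (in practice learned from historical check-ins by a neural architecture such as those in [Man17, Man18]) is compared against venue vectors, with a context term down-weighting venues far from the user's current position. All embeddings, positions and the distance weight here are invented toy values, not the project's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 5 venues with 8-dimensional embeddings and city coordinates (km).
n_venues, dim = 5, 8
venue_emb = rng.normal(size=(n_venues, dim))
venue_pos = rng.uniform(0, 10, size=(n_venues, 2))

# A user whose tastes resemble venue 0, currently standing at venue 2.
user_emb = venue_emb[0] + 0.1 * rng.normal(size=dim)
user_pos = venue_pos[2]

# Score = taste match minus a travel-distance penalty.
taste = venue_emb @ user_emb
dist = np.linalg.norm(venue_pos - user_pos, axis=1)
score = taste - 0.5 * dist

ranking = np.argsort(-score)        # venues ordered best-first
print(ranking)
```

In the conversational setting, the agent's clarifying questions (venue type, acceptable distance) effectively update `user_emb` and the distance weight between turns, re-ranking the candidate list after each exchange.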

[Man18] Contextual Attention Recurrent Architecture for Context-aware Venue Recommendation. Jarana Manotumruksa, Craig Macdonald and Iadh Ounis. In Proceedings of SIGIR 2018.

[Man17] A Deep Recurrent Collaborative Filtering Framework for Venue Recommendation. Jarana Manotumruksa, Craig Macdonald and Iadh Ounis. In Proceedings of CIKM 2017.

[Dev15] Experiments with a Venue-Centric Model for Personalised and Time-Aware Venue Suggestion. Romain Deveaud, Dyaa Albakour, Craig Macdonald, Iadh Ounis. In Proceedings of CIKM 2015.

Sean Westwood (CDT Candidate)

I completed my undergraduate and postgraduate degrees in the School of Psychology here at the University of Glasgow. For my PhD I will be continuing to work under the supervision of Dr Marios Philiastides, who specialises in the neuroscience of decision making and has guided me through my MSc. I will also be working under Professor Alessandro Vinciarelli from the School of Computing Science, who specialises in developing computational models involved in human-AI interactions.

My main research interests are reinforcement learning and decision making in humans, as well as the neurological basis for individual differences between people. These interests stem from my background in childcare and sports coaching. For my undergraduate dissertation I studied how gender and masculinity impact risk-taking under stress to investigate why people may act differently in response to physiological arousal. My postgraduate research has focused on links between noradrenaline and aspects of reinforcement learning, using pupil dilation as a measure of noradrenergic activity.

I am looking forward to continuing with this line of research, with the aim of building computational models that reflect individual patterns of learning based on pupil data. It is my hope that this will open up exciting possibilities for AI programmes that are able to dynamically respond to individual needs in an educational context.

The Project: Neurobiologically-informed optimization of gamified learning environments

Supervisors: Marios Philiastides (School of Psychology) and Alessandro Vinciarelli (School of Computing Science)

Value-based decisions are often required in everyday life, where we must incorporate situational evidence with past experiences to work out which option will lead to the best outcome. However, the mechanisms that govern how these two factors are weighted are not yet fully understood. Gaining insight into these processes could greatly help towards the optimisation of feedback in gamified learning environments. This project aims to develop a closed-loop biofeedback system that leverages unique ways of fusing electroencephalographic (EEG) and pupillometry measurements to investigate the utility of the noradrenergic arousal system in value judgements and learning.

In recent years, it has become well established that pupil diameter consistently varies with certain decision-making variables such as uncertainty, prediction errors and environmental volatility (Larsen & Waters, 2018). The noradrenergic (NA) arousal system in the brainstem is thought to drive the neural networks involved in controlling these variables. Despite the increasing popularity of pupillometry in decision-making research, many aspects remain unexplored, such as the role of the NA arousal system in regulating learning rate, the rate at which new evidence outweighs past experiences in value-based decisions (Nassar et al., 2012).

Developing a neurobiological framework of how NA influences feedback processing, and of the effect it has on learning rates, can potentially enable the dynamic manipulation of learning. Recent studies have used real-time EEG analysis to manipulate arousal levels in a challenging perceptual task, showing that it is possible to improve task performance by manipulating feedback (Faller et al., 2019).

A promising area of application for such real-time EEG analysis is the gamification of learning, particularly in digital learning environments. Gamification in a pedagogical context is the idea of using game features (Landers, 2014) to enable a high level of control over stimuli and feedback. This project aims to dynamically alter learning rates via manipulation of the NA arousal system, using known neural correlates associated with learning and decision making such as attentional conflict and levels of uncertainty (Sara & Bouret, 2012). Specifically, the main aims of the project are:

  1. To model the relationship between EEG, pupil diameter and dynamic learning rate during reinforcement learning (Fouragnan et al., 2015).
  2. To model the effect of manipulating arousal, uncertainty and attentional conflict on dynamic learning rate during reinforcement learning.
  3. To develop a digital learning environment that allows for these principles to be applied in a pedagogical context.
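The core idea behind aims 1 and 2, a learning rate that is driven by an arousal signal rather than fixed, can be sketched with a delta-rule learner. Here the "arousal" proxy is simply a running trace of recent prediction-error magnitudes, standing in for a pupil-linked measurement; this is a toy in the spirit of Nassar et al. (2012), not their actual model, and all constants are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

true_value = 1.0      # reward the environment actually pays, on average
estimate = 0.0        # learner's current value estimate
arousal = 0.0         # slow trace of recent |prediction error| (pupil proxy)
history = []

for trial in range(200):
    if trial == 100:
        true_value = -1.0                    # change point (environmental volatility)
    reward = true_value + 0.2 * rng.normal()
    pe = reward - estimate                   # prediction error
    arousal = 0.9 * arousal + 0.1 * abs(pe)  # arousal tracks surprise
    lr = min(1.0, 0.05 + 0.5 * arousal)      # arousal boosts the learning rate
    estimate += lr * pe                      # delta-rule update
    history.append(estimate)

# Before the change point the estimate has settled near +1; the surprise-driven
# learning-rate boost then pulls it to the new value quickly.
print(round(history[99], 2), round(history[-1], 2))
```

A closed-loop version would replace the `arousal` trace with real-time pupil/EEG measurements and, conversely, manipulate feedback in the learning environment to push arousal, and hence the learning rate, in the desired direction.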

Understanding the potential role of the NA arousal system in the way we learn, update beliefs and explore new options could have significant implications in the realm of education and performance. This project will facilitate the creation of an online learning environment which will provide an opportunity to benchmark the utility of neurobiological markers in an educational setting. Success in this endeavour would pave the way for a wide variety of adaptations to learning protocols that could in turn enable a level of learning optimisation and individualisation in which feedback is dynamically and continuously adapted to the needs of the learner.

[FAL19] Faller, J., Cummings, J., Saproo, S., & Sajda, P. (2019). Regulation of arousal via online neurofeedback improves human performance in a demanding sensory-motor task. Proceedings of the National Academy of Sciences, 116(13), 6482-6490.

[FOU15] Fouragnan, E., Retzler, C., Mullinger, K., & Philiastides, M. G. (2015). Two spatiotemporally distinct value systems shape reward-based learning in the human brain. Nature communications, 6, 8107.

[LAN14] Landers, R. N. (2014). Developing a theory of gamified learning: Linking serious games and gamification of learning. Simulation & Gaming, 45(6), 752-768.

[LAR18] Larsen, R. S., & Waters, J. (2018). Neuromodulatory correlates of pupil dilation. Frontiers in neural circuits, 12, 21.

[NAS12] Nassar, M. R., Rumsey, K. M., Wilson, R. C., Parikh, K., Heasly, B., & Gold, J. I. (2012). Rational regulation of learning dynamics by pupil-linked arousal systems. Nature neuroscience, 15(7), 1040.

[SAR12] Sara, S. J., & Bouret, S. (2012). Orienting and reorienting: the locus coeruleus mediates cognition through arousal. Neuron, 76(1), 130-141.

COHORT 1 (2019-2023)

Andrei Birladeanu (CDT Candidate)


I am part of the first cohort of SOCIAL CDT students, working on a project at the intersection of psychiatry and social signal processing. I did my undergraduate degree in Psychology at the University of Aberdeen, finishing with a thesis examining the physical symptoms of social anxiety. My academic interests are broad, but I have been particularly drawn to the fields of theoretical cognitive science and cognitive neuroscience in both its basic and translational forms. The latter is what has motivated me to pursue research in the field of computational psychiatry, a novel approach aiming to detect and define mental disorders with the help of data-driven techniques. For my PhD, I am using methods from social signal processing to help psychiatrists identify children who display signs of Reactive Attachment Disorder, a severe cluster of psychological and behavioural issues affecting abused and neglected children.

The Project: Multimodal Deep Learning for Detection and Analysis of Reactive Attachment Disorder in Abused and Neglected Children.

Supervisors: Helen Minnis (Institute of Health and Well Being) and Alessandro Vinciarelli (School of Computing Science).

The goal of this project is to develop AI-driven methodologies for detection and analysis of Reactive Attachment Disorder (RAD), a psychiatric disorder affecting abused and neglected children. The main effect of RAD is “failure to seek and accept comfort”, i.e., the shut-down of a set of psychological processes, known as the Attachment System and essential for normal development, that allow children to establish and maintain beneficial relationships with their caregivers [YAR16]. While having serious implications for the child’s future (e.g., RAD is common in children with complex psychiatric disorders and criminal behavior [MOR17]), RAD is highly amenable to treatment if recognised in infancy [YAR16]. However, the disorder is hard for clinicians to detect because its symptoms are not easily visible to the naked eye.

Encouraging progress in RAD diagnosis has been achieved by manually analyzing videos of children involved in therapeutic sessions with their caregivers, but such an approach is too expensive and time-consuming to be applied in a standard clinical setting. For this reason, this project proposes the use of AI-driven technologies for the analysis of human behavior [VIN09]. These have been successfully applied to other attachment-related issues [ROF19] and can help not only to automate the observation of the interactions, thus reducing the amount of time needed for possible diagnosis, but also to identify behavioural markers that might escape clinical observation. The emphasis will be on approaches that jointly model multiple behavioural modalities through the use of appropriate deep network architectures [BAL18].

The experimental activities will revolve around an existing corpus of over 300 real-world videos collected in a clinical setting and they will include three main steps:

  1. Identification of the behavioural cues (the RAD markers) most likely to account for RAD through manual observation of a representative sample of the corpus;
  2. Development of AI-driven methodologies, mostly based on signal processing and deep networks, for the detection of the RAD markers in the videos of the corpus;
  3. Development of AI-driven methodologies, mostly based on deep networks, for the automatic identification of children affected by RAD based on presence and intensity of the cues detected at point 2.
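As a schematic illustration of step 3, the sketch below (an assumption, not the project's architecture) fuses per-modality RAD-marker feature vectors by concatenation and applies a single linear scorer with a sigmoid, standing in for a joint multimodal deep network in the sense of [BAL18]:

```python
import math

def fuse_and_score(face_feats, voice_feats, posture_feats, weights, bias=0.0):
    """Early-fusion toy: concatenate modality features, apply a linear scorer.

    All names and the choice of modalities are hypothetical; a real system
    would replace the linear scorer with a trained multimodal network.
    """
    joint = list(face_feats) + list(voice_feats) + list(posture_feats)
    z = sum(w * x for w, x in zip(weights, joint)) + bias
    return 1.0 / (1.0 + math.exp(-z))      # probability-like RAD score in (0, 1)
```

With all weights at zero the score is 0.5, i.e. the detector is uninformative until trained.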

The likely outcomes of the system include a scientific analysis of RAD related behaviours as well as AI-driven methodologies capable of supporting the activity of clinicians. In this respect, the project aligns with needs and interests of private and public bodies dealing with child and adolescent mental health (e.g., the UK National Health Service and National Society for the Prevention of Cruelty to Children).

[BAL18] Baltrušaitis, T., Ahuja, C. and Morency, L.P. (2018). Multimodal Machine Learning: A Survey and Taxonomy, IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(2), 421-433.

[HUM17] Humphreys, K. L., Nelson, C. A., Fox, N. A., & Zeanah, C. H. (2017). Signs of reactive attachment disorder and disinhibited social engagement disorder at age 12 years: Effects of institutional care history and high-quality foster care. Development and Psychopathology, 29(2), 675-684.

[MOR17] Moran, K., McDonald, J., Jackson, A., Turnbull, S., & Minnis, H. (2017). A study of Attachment Disorders in young offenders attending specialist services. Child Abuse & Neglect, 65, 77-87.

[ROF19] Roffo G, Vo DB, Tayarani M, Rooksby M, Sorrentino A, Di Folco S, Minnis H, Brewster S, Vinciarelli A. (2019). Automating the Administration and Analysis of Psychiatric Tests: The Case of Attachment in School Age Children. Proceedings of the CHI, Paper No.: 595 Pages 1–12.

[VIN09] Vinciarelli, A., Pantic, M. and Bourlard, H. (2009), Social Signal Processing: Survey of an Emerging Domain, Image and Vision Computing Journal, 27(12), 1743-1759.

[YAR16] Yarger, H. A., Hoye, J. R., & Dozier, M. (2016). Trajectories of change in attachment and biobehavioral catch-up among high risk mothers: a randomised clinical trial. Infant Mental Health Journal, 37(5), 525-536.

Rhiannon Fyfe (CDT Candidate)


I am a PhD student with the SOCIAL CDT. My MA is in English Language and Linguistics from the University of Glasgow. My current area of research is the further development of socially intelligent robots, with the hope of improving Human-Robot Interaction through the use of theory and methods from socially informed linguistics, and through the deployment in a real-world context of MuMMER (a humanoid robot based on SoftBank Robotics’ Pepper robot). During my undergraduate degree, my research interests included how speech is produced and understood in practice, which social factors affect speech, which conversational rules apply in different social situations, and what causes breakdowns in communication and how they can be avoided. My dissertation was titled “Are There New Emerging Basic Colour Terms in British English? A Statistical Analysis”, a study of how the semantic space of colour is divided linguistically by speakers of different social backgrounds. The prospect of developing helpful and entertaining robots that could be used to aid child language development, the elderly, and the general public drew me to the SOCIAL CDT. I am excited to move forward in this research.

The Project: Evaluating and Enhancing Human-Robot Interaction for Multiple Diverse Users in a Real-World Context.

Supervisors: Mary Ellen Foster (School of Computing Science) and Jane Stuart-Smith (School of Critical Studies).

The increasing availability of socially-intelligent robots with functionality for a range of purposes, from guidance in museums [Geh15], to companionship for the elderly [Heb16], has motivated a growing number of studies attempting to evaluate and enhance Human-Robot Interaction (HRI). But, as Honig and Oron-Gilad’s review of recent work on understanding and resolving failures in HRI observes [Hon18], most research has focussed on technical ways of improving robot reliability. They argue that progress requires a “holistic approach” in which “[t]he technical knowledge of hardware and software must be integrated with cognitive aspects of information processing, psychological knowledge of interaction dynamics, and domain-specific knowledge of the user, the robot, the target application, and the environment” (p.16). Honig and Oron-Gilad point to a particular need to improve the ecological validity of evaluating user communication in HRI, by moving away from experimental, single-person environments, with low-relevance tasks, mainly with younger adult users, to more natural settings, with users of different social profiles and communication strategies, where the outcome of successful HRI matters.

The main contribution of this PhD project is to develop an interdisciplinary approach to evaluating and enhancing communication efficacy of HRI, by combining state-of-the-art social robotics with theory and methods from socially-informed linguistics [Cou14] and conversation analysis [Cli16]. Specifically, the project aims to improve HRI with the newly-developed MultiModal Mall Entertainment Robot (MuMMER). MuMMER is a humanoid robot, based on the SoftBank Robotics’ Pepper robot, which has been designed to interact naturally and autonomously in the communicatively-challenging space of a public shopping centre/mall with unlimited possible users of differing social backgrounds and communication styles [Fos16]. MuMMER’s role is to entertain and engage visitors to the shopping mall, thereby enhancing their overall experience in the mall. This in turn requires ensuring successful HRI which is socially acceptable, helpful and entertaining for multiple, diverse users in a real-world context. As of June 2019, the technical development of the MuMMER system has been nearly completed, and the final robot system will be located for 3 months in a shopping mall in Finland during the autumn of 2019.

The PhD project will evaluate HRI with MuMMER in a new, English-speaking context: a large shopping mall in Glasgow, Scotland’s largest and most socially and ethnically diverse city. Project objectives are to:

  • Design a set of sociolinguistically-informed observational studies of HRI with MuMMER in situ with users from a range of social, ethnic, and language backgrounds, using direct and indirect methods
  • Identify the minimal technical modifications (dialogue, non-verbal, other) to optimise HRI, and thereby user experience and engagement, also considering indices such as consumer footfall to the mall
  • Implement technical alterations, and re-evaluate with new users.

[Cli16] Clift, R. (2016). Conversation Analysis. Cambridge: Cambridge University Press.

[Cou14] Coupland, N., Sarangi, S., & Candlin, C. N. (2014). Sociolinguistics and social theory. Routledge.

[Fos16] Foster M.E., Alami, R., Gestranius, O., Lemon, O., Niemela, M., Odobez, J-M., Pandey, A.M. (2016) The MuMMER Project: Engaging Human-Robot Interaction in Real-World Public Spaces. In: Agah A., Cabibihan J., Howard A., Salichs M., He H. (eds) Social Robotics. ICSR 2016. Lecture Notes in Computer Science, vol 9979. Springer, Cham

[Geh15] Gehle R., Pitsch K., Dankert T., Wrede S. (2015). Trouble-based group dynamics in real-world HRI – Reactions on unexpected next moves of a museum guide robot., in 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2015 (Kobe), 407–412.

[Heb16] Hebesberger, D., Dondrup, C., Koertner, T., Gisinger, C., Pripfl, J. (2016). Lessons learned from the deployment of a long-term autonomous robot as companion in physical therapy for older adults with dementia: A mixed methods study. In: The Eleventh ACM/IEEE International Conference on Human Robot Interaction, 27–34

[Hon18] Honig, S., & Oron-Gilad, T. (2018). Understanding and Resolving Failures in Human-Robot Interaction: Literature Review and Model Development. Frontiers in Psychology, 9, 861.

Salman Mohammadi (CDT Candidate)


I’m a PhD student in the SOCIAL CDT working on Deep Reinforcement Learning and its application to Brain Computer Interfaces. This kind of work revolves around augmenting human decision making processes using AI, by exposing latent neural states correlated with decision making processes to humans in real-time.

Prior to this, I completed my BSc in Computing Science at the University of Glasgow. My honours dissertation was on deep learning methods for learning different compositional styles in classical piano music, and I conducted a user-study which evaluated AI-generated piano music in different styles. As part of a summer scholarship with the School of Computing Science, I’ve been extending this work and researching the wider field of deep variational inference and representation learning for variational auto-encoder models, which focuses on automatically discovering latent and semantically meaningful low dimensional representations of high dimensional data.

In my PhD I’m looking forward to progressing state-of-the-art reinforcement learning and working in the intersection between artificial intelligence and neuroscience. I hope to contribute to research that augments human intelligence with artificial intelligence to create entirely new modes of thought and expression for humans.

The Project: Enhancing Social Interactions via Physiologically-Informed AI.

Supervisors: Marios Philiastides (School of  Psychology) and Alessandro Vinciarelli (School of Computing Science).

Over the past few years, major developments in machine learning (ML) have enabled important advancements in artificial intelligence (AI). First, the field of deep learning (DL), which enables models to learn complex input-output functions (e.g. pixels in an image mapped onto object categories), has emerged as a major player in this area. DL builds upon neural network theory and design architectures, expanding these in ways that enable more complex function approximations.

The second major advance in ML has combined advances in DL with reinforcement learning (RL) to enable new AI systems for learning state-action policies – in what is often referred to as deep reinforcement learning (DRL) – to enhance human performance in complex tasks. Despite these advancements, however, critical challenges still exist in incorporating AI into a team with human(s).

One of the most important challenges is the need to understand how humans value intermediate decisions (i.e. before they generate a behaviour) through internal models of their confidence, expected reward, risk, etc. Critically, such information about human decision-making is not only expressed through overt behaviour, such as speech or action, but more subtly through physiological changes, small changes in facial expression and posture, etc. Socially and emotionally intelligent people are excellent at picking up on this information to infer the current disposition of one another and to guide their decisions and social interactions.

In this project, we propose to develop a physiologically-informed AI platform, utilizing neural and systemic physiological information (e.g. arousal, stress) ([Fou15][Pis17][Ghe18]) together with affective cues from facial features ([Vin09][Bal16]) to infer latent cognitive and emotional states from humans interacting in a series of social decision-making tasks (e.g. trust game, prisoner’s dilemma etc). Specifically, we will use these latent states to generate rich reinforcement signals to train AI agents (specifically DRL) and allow them to develop a “theory of mind” ([Pre78][Fri05]) in order to make predictions about upcoming human behaviour. The ultimate goal of this project is to deliver advancements towards “closing-the-loop”, whereby the AI agent feeds-back its own predictions to the human players in order to optimise behaviour and social interactions.
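One way to picture how latent physiological states could enrich a reinforcement signal is the following toy tabular Q-learning step (a minimal sketch under stated assumptions, not the project's DRL system): a hypothetical `confidence` value in [0, 1], standing in for a state decoded from neural and facial cues, is folded into the reward.

```python
from collections import defaultdict

def q_update(q, state, action, next_state, task_reward, confidence,
             lr=0.1, gamma=0.95, shaping=0.5):
    """One tabular Q-learning step with a physiologically shaped reward.

    `q` maps state -> action -> value; `confidence` in [0, 1] is an
    assumed human-derived signal, mapped to [-shaping, +shaping].
    """
    r = task_reward + shaping * (2.0 * confidence - 1.0)   # shaped reward
    best_next = max(q[next_state].values(), default=0.0)   # greedy bootstrap
    q[state][action] += lr * (r + gamma * best_next - q[state][action])
```

Here high human confidence adds a bonus and low confidence a penalty, so the agent is reinforced not only by task outcomes but also by the inferred disposition of its human partners.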

[Ghe18] S Gherman, MG Philiastides, “Human VMPFC encodes early signatures of confidence in perceptual decisions”, eLife, 7: e38293, 2018.

[Pis17] MA Pisauro, E Fouragnan, C Retzler, MG Philiastides, “Neural correlates of evidence accumulation during value-based decisions revealed via simultaneous EEG-fMRI”, Nature Communications, 8: 15808, 2017.

[Fou15] E Fouragnan, C Retzler, KJ Mullinger, MG Philiastides, “Two spatiotemporally distinct value systems shape reward-based learning in the human brain”, Nature Communications, 6: 8107, 2015.

[Vin09] A.Vinciarelli, M.Pantic, and H.Bourlard, “Social Signal Processing: Survey of an Emerging Domain“, Image and Vision Computing Journal, Vol. 27, no. 12, pp. 1743-1759, 2009.

[Bal16] T.Baltrušaitis, P.Robinson, and L.-P. Morency. “Openface: an open source facial behavior analysis toolkit.” Proceedings of the IEEE Winter Conference on Applications of Computer Vision, 2016.

[Pre78] D. Premack, G. Woodruff, “Does the chimpanzee have a theory of mind?”, Behavioral and brain sciences Vol. 1, no. 4, pp. 515-526, 1978.

[Fri05] C. Frith, U. Frith, “Theory of Mind”, Current Biology Vol. 15, no. 17, R644-646, 2005.

Emily O’Hara (CDT Candidate)


My name is Emily O’Hara and I am a current PhD student in SOCIAL, the CDT program for Socially Intelligent Artificial Agents at the University of Glasgow. My doctoral research focuses on the social perception of speech, paying particular attention to how the use of fillers affects perceptions of speaker personality. In the context of artificial intelligence, the project aims to improve the functionality and naturalness of artificial voices. My research interests during my undergraduate degree in English Language and Linguistics included sociolinguistics, natural language processing, and psycholinguistics. My dissertation was entitled “Masked Degrees of Facilitation: Can They be Found for Phonological Features in Visual Word Recognition?” and was a psycholinguistic study of how the phonological elements of words are stored in the brain and accessed during reading. The opportunity to integrate my knowledge of linguistic methods and theory with computer science was what attracted me to the CDT, and I look forward to undertaking research that can aid in the creation of more seamless user-AI communication.

The Project: Social Perception of Speech.

Supervisors: Philip McAleer (School of Psychology) and Alessandro Vinciarelli (School of Computing Science).

Short vocalizations like “ehm” and “uhm”, known as fillers in linguistics terminology, are common in everyday conversations (up to one every 10.9 seconds according to the analysis presented in [Vin15]). For this reason, it is important to understand whether the fillers uttered by a person convey personality impressions, i.e., whether people develop a different opinion about a given individual depending on how she/he utters the fillers. This project will use an existing corpus of 2988 fillers (uttered by 120 persons interacting with one another) to achieve the following scientific and technological goals:

  • To establish the vocal parameters that lead to consistent percepts of speaker personality both within and across listeners and the neural areas involved in these attributions from brief fillers.
  • To develop an AI approach aimed at predicting the trait people attribute to an individual when they hear her/his fillers.

The first goal will be achieved through behavioural [Mah18] and neuroimaging experiments [Tod08] that pinpoint how and where in the brain stable personality percepts are processed. From there, acoustical analysis and data-driven approaches using cutting-edge acoustical morphing techniques will allow for generation of hypotheses feeding subsequent AI networks [McA14]. This section will allow the development of the skills necessary to design, implement, and analyse behavioural and neural experiments for establishing social percepts from speech and voice.

The final goal will be achieved through the development of an end-to-end automatic approach that can map the speech signal underlying a filler into the traits that listeners attribute to a speaker. This will allow the development of the skills necessary to design and implement deep neural networks capable of modelling sequences of physical measurements (with an emphasis on speech signals).
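The shape of such a mapping can be sketched with a deliberately simple linear stand-in (an assumption for illustration, not the project's end-to-end network): a filler's acoustic feature vector (e.g. pitch, duration, intensity) is mapped to a vector of attributed trait scores, one per personality trait.

```python
def predict_traits(acoustic_feats, trait_weights, trait_biases):
    """Linear toy mapping from filler acoustics to attributed traits.

    `trait_weights` holds one weight row per trait; all parameter names
    are hypothetical. A trained deep network would replace this layer.
    """
    return [sum(w * x for w, x in zip(row, acoustic_feats)) + b
            for row, b in zip(trait_weights, trait_biases)]
```

In the real system these parameters would be learned from listener ratings of the 2988-filler corpus rather than set by hand.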

The project is relevant to the emerging domain called personality computing [Vin14] and the main application related to this project is the synthesis of “personality colored” speech, i.e., artificial voices that can give the impression of a personality and sound not only more realistic, but also better at performing the task they are developed for [Nas05].

[Mah18]. G. Mahrholz, P. Belin and P. McAleer, “Judgements of a speaker’s personality are correlated across differing content and stimulus type”, PLOS ONE, 13(10): e0204991. 2018

[McA14]. P. McAleer, A. Todorov and P. Belin, “How Do You Say ‘Hello’? Personality Impressions from Brief Novel Voices”, PLOS ONE, 9(3): e90779. 2014

[Tod08]. A. Todorov, S. G. Baron and N. N. Oosterhof, “Evaluating face trustworthiness: a model based approach”, Social Cognitive and Affective Neuroscience, 3(2), pp. 119-127. 2008

[Vin15] A.Vinciarelli, E.Chatziioannou and A.Esposito, “When the Words are not Everything: The Use of Laughter, Fillers, Back-Channel, Silence and Overlapping Speech in Phone Calls“, Frontiers in Information and Communication Technology, 2:4, 2015.

[Vin14] A.Vinciarelli and G.Mohammadi, “A Survey of Personality Computing“, IEEE Transactions on Affective Computing, Vol. 5, no. 3, pp. 273-291, 2014.

[Nas05] C.Nass, S.Brave, “Wired for speech: How voice activates and advances the human-computer relationship”, MIT Press, 2005.

Mary Roth (CDT Candidate)


I am a recent Psychology graduate from the University of Strathclyde, Glasgow. To me, conducting research has always been the most interesting part of my degree. I find that people and minds are the most complex and fascinating phenomena one could study, and throughout completing my degree I have been very passionate about learning more about the mechanisms underlying our cognition, emotion, and behaviour.

Grounded in the work on my dissertation, my current research interests include the psychology of biases, heuristics, and automatic processing. In this PhD programme I will work on the project “Robust, Efficient, Dynamic Theory of Mind” with Stacy Marsella and Lawrence Barsalou.

Being part of the SOCIAL CDT programme, I look forward to contributing to the emerging interdisciplinary junction between psychology and computer science. Coming from a psychological background, I am excited to apply psychological research to the development of more efficient and dynamic models of social situations.

The Projects: Robust, Efficient, Dynamic Theory of Mind.

Supervisors: Stacy Marsella (School of Psychology) and Larry Barsalou (School of Psychology).

Background: The ability to socially function effectively is a critical human skill and providing such skills to artificial agents is a core challenge faced by these technologies. The aim of this work is to improve the social skills of artificial agents, making them more robust, by giving them a skill that is fundamental to effective human social interaction, the ability to possess and use beliefs about the mental processes and states of others, commonly called Theory of Mind (ToM) [Whiten, 1991]. Theory of Mind skills are predictive of social cooperation and collective intelligence, as well as key to cognitive empathy, emotional intelligence, and the use of shared mental models in teamwork [many references ablated]. Although people typically develop ToM at an early age, research has shown that even adults with a fully formed capability for ToM are limited in their capacity to employ it (Keysar, Lin, & Barr, 2003; Lin, Keysar, & Epley, 2010).

From a computational perspective, there are sound explanations as to why this may be the case. As critical as they are, forming, maintaining and using models of others in decision making can be computationally intractable. Pynadath & Marsella [2007] presented an approach, called minimal mental models, that sought to reduce these costs by exploiting criteria such as prediction accuracy and utility costs associated with prediction errors as a way to limit model complexity. There is a clear relation of that work to the work in psychology on ad hoc categories formed in order to achieve goals [Barsalou, 1983], as well as ideas on motivated inference [Kunda, 1990].
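One illustrative formulation of the minimal-mental-models idea (an assumed sketch, not Pynadath & Marsella's actual algorithm) is model selection as a trade-off between expected prediction error and the cost of model complexity:

```python
def select_minimal_model(candidates, error_cost=1.0, complexity_cost=0.1):
    """Pick the mental model of another agent that minimises expected loss.

    Each candidate is a hypothetical tuple
    (name, expected_prediction_error, complexity): richer models predict
    better but cost more to maintain and reason with.
    """
    return min(candidates,
               key=lambda c: error_cost * c[1] + complexity_cost * c[2])[0]
```

Under these weights a coarse model wins whenever its extra prediction error is cheaper than the rich model's added complexity, which is one way to read why adults often fall back on simplified theory-of-mind reasoning.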

Approach: This effort seeks to develop more robust artificial agents with ToM using an approach that collects data on human ToM performance, analyzes the data and then constructs a computational model based on the analyses. The resulting model will provide artificial agents with a robust, efficient capacity to reason about others.

a) Study the nature of mental model formation and adaptation in people during social interaction – specifically, how one’s own goals, as well as the other’s goals, influence and make tractable the model formation and use process.

b) Develop a tractable computational model of this process that takes into account the artificial agent’s and the human’s goals, as well as their models of each other, in an interaction. Tractability is of course fundamental in face-to-face social interactions, where agents must respond rapidly.

c) Evaluate the model in artificial agent – human interactions.

We see this work as fundamental to taking embodied social agents beyond their limited, inflexible approaches to interacting socially with us to a significantly more robust capacity. Key to that will be making theory of mind reasoning in artificial agents more tractable via taking into account both the agent’s goals and the human’s goals in the interaction.

[Kun90] Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480-498.

[Bar83] Barsalou, L. W. (1983). Ad hoc categories. Memory & Cognition, 11(3), 211-227.

[Key03] Keysar, B., Lin, S., & Barr, D. (2003). Limits on theory of mind use in adults. Cognition, 89, 25–41.

[Lin10] Lin, S., Keysar, B., & Epley, N. (2010). Reflexively mindblind: Using theory of mind to interpret behavior requires effortful attention. Journal of Experimental Social Psychology, 46, 551–556.

[Pyn07] Pynadath, D. V., & Marsella, S. C. (2007). Minimal Mental Models. In: AAAI, pp. 1038-1046.

[Whi91] Whiten, Andrew (ed). Natural Theories of Mind. Oxford: Basil Blackwell, 1991.