Students

Cohort 2 (2020-2024)

Jacqueline Borgstedt (CDT Candidate)

I am a current SOCIAL PhD student interested in how socially intelligent artificial systems can be shaped and adopted in order to improve the mental health and well-being of humans. My doctoral research explores how multimodal interaction between robots and humans can facilitate the reduction of stress or anxiety and may foster emotional support. Specifically, I am investigating the role of different aspects of touch during robot-human interaction and their potential for improving psychological well-being.

Prior to my PhD studies, I completed an MA in Psychology at the University of Glasgow. During my undergraduate studies, I was particularly interested in the neural circuitry underlying emotion recognition and regulation. As part of my undergraduate dissertation, I thus investigated emotion recognition abilities and sensitivity to affective expressions in epilepsy patients. Further research interests include the evaluation and development of interventions for OCD, anxiety disorders and emotion regulation difficulties.

During my PhD I am looking forward to integrating my knowledge of psychological theories within the development and evaluation of socially assistive robots. Furthermore, I hope to contribute to novel solutions for the application of robots in mental health interventions as well the enhancement of human-robot interaction.

The Project: Multimodal Interaction and Huggable Robot.

Supervisors: Stephen Brewster (School of Computing Science) and Frank Pollick (School of Psychology).

The aim of the project is to investigate the combination of Human Computer Interaction and social/huggable robots for care, the reduction of stress and anxiety, and emotional support. Existing projects, such as Paro (www.parorobots.com) and the Huggable (www.media.mit.edu/projects/huggable-a-social-robot-for-pediatric-care), focus on very simple interactions. The goal of this PhD project will be to create more complex feedback and sensing to enable a richer interaction between the human and the robot.

The plan would be to study two different aspects of touch: thermal feedback and squeeze input/output. These are key aspects of human-human interaction but have not been studied in human-robot settings where robots and humans come into physical contact.

Thermal feedback has strong associations with emotion and social cues [Wil17]. We use terms like ‘warm and loving’ or ‘cold and distant’ in everyday language. By investigating different uses of warm and cool feedback we can facilitate different emotional relationships with a robot. (This could be used alongside more familiar vibration feedback, such as purring). A series of studies will be undertaken looking at how we can use warming/cooling, rate of change and amount of change in temperature to change responses to robots. We will study responses in terms of, for example, valence and arousal.
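
As a hypothetical sketch of how such a factorial study might be organised (not the project's actual protocol), the snippet below crosses direction, rate and magnitude of temperature change into a randomised trial list, with valence and arousal ratings to be collected on each trial; all constants are assumptions:

```python
import itertools
import random

# Hypothetical factorial design for the thermal-feedback studies: direction,
# rate and magnitude of temperature change are crossed; each trial would
# collect valence/arousal ratings (e.g., on 1-9 SAM scales).
DIRECTIONS = ["warming", "cooling"]
RATES_C_PER_S = [1.0, 3.0]          # assumed rates of temperature change
MAGNITUDES_C = [1.0, 3.0, 6.0]      # assumed deltas from neutral skin temperature

def build_trials(repetitions=3, seed=42):
    """Return a randomised trial list covering the full factorial design."""
    conditions = list(itertools.product(DIRECTIONS, RATES_C_PER_S, MAGNITUDES_C))
    trials = conditions * repetitions
    random.Random(seed).shuffle(trials)
    return [{"direction": d, "rate_c_per_s": r, "magnitude_c": m}
            for d, r, m in trials]

if __name__ == "__main__":
    for trial in build_trials()[:5]:
        print(trial)
```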

We will also look at squeeze interaction from the device. Squeezing in real life offers comfort and support. One half of this task will look at squeeze input, with the human squeezing the robot. This can be done with simple pressure sensors on the robot. The second half will investigate the robot squeezing the arm of the human. For this we will need to build some simple hardware. The studies will look at human responses to squeezing, the social acceptability of these more intimate interactions, and emotional responses to them.
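
For illustration, squeeze input from simple pressure sensors might be segmented into events with a threshold-and-duration rule like the one below; the threshold, minimum duration and sampling rate are assumed values for the sketch:

```python
SQUEEZE_THRESHOLD = 0.6   # assumed normalised pressure threshold
MIN_DURATION_S = 0.3      # assumed minimum hold time to count as a squeeze

def detect_squeezes(samples, sample_rate_hz=50):
    """Detect squeeze events in a stream of normalised pressure samples (0-1).

    A squeeze is a run of samples above threshold lasting at least
    MIN_DURATION_S; returns (start_time_s, duration_s, peak_pressure) tuples.
    """
    min_samples = int(MIN_DURATION_S * sample_rate_hz)
    events, run = [], []
    for i, p in enumerate(samples):
        if p >= SQUEEZE_THRESHOLD:
            run.append((i, p))
        else:
            if len(run) >= min_samples:
                events.append((run[0][0] / sample_rate_hz,
                               len(run) / sample_rate_hz,
                               max(pressure for _, pressure in run)))
            run = []
    if len(run) >= min_samples:  # a squeeze still in progress at stream end
        events.append((run[0][0] / sample_rate_hz,
                       len(run) / sample_rate_hz,
                       max(pressure for _, pressure in run)))
    return events

# Example: a brief press followed by a sustained squeeze.
print(detect_squeezes([0.1] * 10 + [0.8] * 5 + [0.1] * 10 + [0.9] * 30))
```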

The output of this work will be a series of design prototypes and UI guidelines to help robot designers use new interaction modalities in their robots. The impact of this work will be to enable robots to have a richer and more natural interaction with the humans they touch. This has many practical applications for the acceptability of robots for care and emotional support.

[Wil17] Wilson, G., and Brewster, S.: Multi-moji: Combining Thermal, Vibrotactile & Visual Stimuli to Expand the Affective Range of Feedback. In Proceedings of the 35th Conference on Human Factors in Computing Systems – CHI ’17, ACM Press, 2017.


Serena Dimitri (CDT Candidate)

I started my journey with a BSc in Psychology at the University of Pavia, where I learned to appreciate individual differences. I then undertook a joint Master’s degree at the University of Pavia and the University School of Advanced Studies, where I specialised in Neuroscience with a growing will to understand more about the brain. During my BSc and MSc, I took part in exchange programmes at the Complutense University of Madrid, Trinity College Dublin and the University of Plymouth, where I encountered different ways of doing research. I worked as a researcher on a three-year project, which culminated in my Master’s dissertation: “Neuroscience, Psychology and Computer Science: An Innovative Software on Aggressive Behaviour”. My research interests follow my academic path and my personality: I am mainly captivated by exploring how the brain and individuals work in reaction to technology, and how to shape technology to interact effectively with individuals. I am now a PhD student at the University of Glasgow, where I have found the perfect harmony between my psychology background, my neuroscience studies and the world of AI and computer science. These three are, for me, my knowledge, my specialisation and my greatest interest.

The Project: Testing social predictive processing in virtual reality.

Supervisors: Lars Muckli (School of Psychology) and Alice Miller (School of Computing Science).

Virtual reality (VR) is a powerful entertainment tool allowing highly immersive and richly contextual experiences. At the same time, it can be used to flexibly manipulate the 3D (virtual) environment, allowing experimenters to tailor behavioural experiments systematically. VR is particularly useful for social interaction research, because the experimenter can manipulate rich and realistic social environments, and have participants behave naturally within them [RB18].

While immersed in VR, a participant builds an inner map of the virtual space and stores multiple expectations about the mechanics of the environment (i.e., where objects and rooms are and how to interact with them), but also about the physical and emotional properties of virtual agents (e.g., theory of mind). Using this innovative and powerful technology, it is possible to manipulate both the virtual space and the virtual agents within it, to test participants’ internal expectations and register their reactions to predictable and unpredictable scenarios.

The phenomenon of “change blindness” demonstrates the surprising difficulty observers have in noticing unpredictable changes to visual scenes [SR05]. When presented with two almost identical images, people can fail to notice small changes (e.g. in object colour) and even large changes (e.g. object disappearance). This arises because the brain cannot attend to the entire wealth of environmental signals presented to our visual systems at any given moment, and instead uses attentional networks to selectively process the most relevant features whilst ignoring others. Testing which environmental attributes drive the detection of changes can give useful insights into how humans use predictive processing in social contexts.

In this PhD the student will run behavioural and brain imaging experiments in which they will use VR to investigate how contextual information drives predictive expectations in relation to changes to the environment and agents within it. They will investigate whether change detection is due to visual attention or to a social cognitive mechanism such as empathy. This will involve testing word recognition whilst taking the visuospatial perspective of the agents previously seen in the VR (e.g. [FKS18]). The student will examine whether social contextual information originating in higher brain areas modulates the processing of visual information. In the brain imaging literature, an effective method to study contextual feedback information is the occlusion paradigm [MPM19]. Cortical layer specific fMRI is possible with 7T brain imaging; the student will test how top-down signals during social cognition activate specific layers of cortex. This data would contribute to redefining current theories explaining the predictive nature of the human brain.

The student will also develop quantitative models in order to assess developed theories. In recent work [PMT19], model checking was proposed as a simple technology to test and develop brain models. Model checking [CHVB18] involves building a simple, finite state model, and defining temporal properties which specify behaviour of interest.  These properties can then be automatically checked using exhaustive search. Model checking can replace the need to perform thousands of simulations to measure the effect of an intervention, or of a modification to the model.
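
To give a flavour of what model checking involves, the sketch below performs a minimal explicit-state safety check in Python: it exhaustively searches a toy state space and returns a counterexample path if any reachable state violates the property. Real model checkers [CHVB18] work with temporal logics and far more sophisticated algorithms; this is only conceptual:

```python
from collections import deque

def check_safety(initial, successors, bad):
    """Breadth-first search over the state space; returns a counterexample
    path to a bad state, or None if the safety property holds everywhere."""
    frontier = deque([(initial, [initial])])
    visited = {initial}
    while frontier:
        state, path = frontier.popleft()
        if bad(state):
            return path                      # property violated
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None                              # property holds in all reachable states

# Toy model: a counter that must never exceed 3.
trace = check_safety(
    initial=0,
    successors=lambda s: [s + 1] if s < 5 else [],
    bad=lambda s: s > 3,
)
print(trace)  # [0, 1, 2, 3, 4] -- a counterexample path
```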

[MPM19] Morgan, A. T., Petro, L. S., & Muckli, L. (2019). Scene representations conveyed by cortical feedback to early visual cortex can be described by line drawings. Journal of Neuroscience, 39(47), 9410-9423.

[SR05] Simons, D. J., & Rensink, R. A. (2005). Change blindness: Past, present, and future. Trends in cognitive sciences, 9(1), 16-20.

[RB18] de la Rosa, S., & Breidt, M. (2018). Virtual reality: A new track in psychological research. British Journal of Psychology, 109(3), 427-430.

[FKS18] Freundlieb, M., Kovács, Á. M., & Sebanz, N. (2018). Reading Your Mind While You Are Reading—Evidence for Spontaneous Visuospatial Perspective Taking During a Semantic Categorization Task. Psychological Science, 29(4), 614-622.

[PMT19] Porr, B., Miller, A., & Trew, A. (2019). An investigation into serotonergic and environmental interventions against depression in a simulated delayed reward paradigm. Adaptive Behavior (online version available).

[CHVB18] Clarke, E. M., Henzinger, T. A., Veith, H., & Bloem, R. (2018). Handbook of model checking. Springer.


Andreas Drakopoulos (CDT Candidate)

It is a pleasure to be joining the SOCIAL CDT at the University of Glasgow as a PhD student. My research is concerned with how humans perceive virtual and physical space, the simultaneous modelling of the two, and determining whether they are represented by different areas in the brain.

I come to the centre from a mathematical background: I completed a BSc and MSc in Mathematics at the Universities of Glasgow and Leeds respectively, gravitating towards pure mathematics. My undergraduate dissertation was on Stone duality, a connection between algebra and geometry expressed in the language of category theory; my master’s dissertation focused on the Curry-Howard correspondence, which is the observation that aspects of constructive logic harmonise with aspects of computation (e.g. proofs can be viewed as programs).

I developed my academic skills by studying abstract mathematics, and I am excited to now have the opportunity to use them in an applied setting. I am also particularly looking forward to being part of a group with diverse backgrounds and interests, something that drew me to the CDT in the first place. 

The Project: Optimising Interactions with Virtual Environments

Supervisors: Michele Sevegnani (School of Computing Science) and Monika Harvey (School of Psychology). 

Virtual and Mixed Reality systems are socio-technical applications in which users experience different configurations of digital media and computation that give different senses of how a “virtual environment” relates to their local physical environment. In Human-Computer Interaction (HCI), we recently developed computational models capable of representing physical and virtual space, solving the problem of how to recognise virtual spatial regions starting from the detected physical position of the users [BEN16]. The models are bigraphs [MIL09] derived from the universal computational model introduced by Turing Award Laureate Robin Milner. Bigraphs encapsulate both the dynamic and spatial behaviour of agents that interact and move among each other, or within each other. We used the models to investigate cognitive dissonance, namely the inability or difficulty to interact with the virtual environment.
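
As a toy illustration of the physical-to-virtual mapping problem these models address (and only of the spatial nesting; real bigraphs also carry a link graph and reaction rules), one might represent regions and containment as follows; all names are illustrative:

```python
# A toy stand-in for the *place graph* of a bigraph: regions nest inside one
# another, and a virtual region is anchored in a physical zone. This only
# sketches the kind of physical-to-virtual region query described above.
parents = {
    "zone_A": "mall",            # physical containment
    "zone_B": "mall",
    "virtual_room_1": "zone_A",  # virtual region anchored in physical zone A
}

def ancestors(region):
    """All regions containing `region`, innermost first."""
    chain = []
    while region in parents:
        region = parents[region]
        chain.append(region)
    return chain

def in_virtual_region(detected_physical_zone, virtual_region):
    """Does the user's detected physical zone lie on the virtual region's anchor chain?"""
    return detected_physical_zone in ancestors(virtual_region)

print(in_virtual_region("zone_A", "virtual_room_1"))  # True
print(in_virtual_region("zone_B", "virtual_room_1"))  # False
```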

How the brain represents physical versus virtual environments is also an issue much debated within Psychology and Neuroscience, with some researchers arguing that the brain makes little distinction between the two [BOZ12]. Yet, more in line with Sevegnani’s work, Harvey and colleagues have shown that different brain areas represent these different environments and that they are further processed on different time scales [HAR12; ROS09]. Moreover, special populations struggle more with virtual than with real environments [ROS11].

The overarching goal of this PhD project is, therefore, to adapt the computational models developed in HCI and apply them to psychological scenarios, to test whether the environmental processing within the brain is different as proposed. This information will then refine the HCI model and ideally allow a refined application to special populations. 

[BEN16] Benford, S., Calder, M., Rodden, T., & Sevegnani, M., On lions, impala, and bigraphs: Modelling interactions in physical/virtual spaces. ACM Transactions on Computer-Human Interaction (TOCHI), 23(2), 9, 2016. 

[BOZ12] Bozzacchi, C., Giusti, M.A., Pitzalis, S., Spinelli, D., & Di Russo, F., Similar Cerebral Motor Plans for Real and Virtual Actions. PLoS ONE, 7(10), e47783, 2012.

[HAR12] Harvey, M. and Rossit, S., Visuospatial neglect in action. Neuropsychologia, 50, 1018-1028, 2012. 

[MIL09] Milner, R.,  The space and motion of communicating agents. Cambridge University Press, 2009. 

[ROS11] Rossit, S., Malhotra, P., Muir, K., Reeves, I., Duncan G. and Harvey, M., The role of right temporal lobe structures in off-line action: evidence from lesion-behaviour mapping in stroke patients. Cerebral Cortex, 21 (12), 2751-2761, 2011. 

[ROS09] Rossit, S., Malhotra, P., Muir, K., Reeves, I., Duncan, G., Livingstone, K., Jackson, H., Hogg, C., Castle, P., Learmonth, G. and Harvey, M., No neglect-specific deficits in reaching tasks. Cerebral Cortex, 19, 2616-2624, 2009.


Tobias Thejll-Madsen (CDT Candidate)

I am a PhD student with the SOCIAL CDT at the University of Glasgow. My research focuses on facial expressions in social signaling and on using this knowledge to autogenerate effective humanlike facial expressions on virtual agents. To do this, we need to understand how expressions link to underlying emotional states and social judgements and translate this to models that the computer can use. I’m excited to work with a range of people in both psychology and computer science.

Previously, I completed an MA in Psychology and an MSc in Human Cognitive Neuropsychology at the University of Edinburgh. There I focused on cognitive psychology, most recently looking at active learning in a social setting, and I am very curious about social inference and cognition in general. However, like many others, I find it hard to have just one interest, so, in no particular order, I enjoy: moral philosophy/psychology, statistics and methodology, education/pedagogy (prior to my MSc, I worked developing educational resources and research), reinforcement learning, epistemology, philosophy of science, outreach and science communication, cooking, stand-up comedy, roleplaying games, staying hydrated, and basically anything outdoors.

The Project: Effective Facial Actions for Artificial Agents.

Supervisors: Rachael Jack (School of Psychology) and Stacy Marsella (School of Psychology).

Face signals play a critical role in social interactions because humans make a wide range of inferences about others from their facial appearance, including emotional, mental and physiological states, culture, ethnicity, age, sex, social class, and personality traits (e.g., see Jack & Schyns, 2017). These judgments in turn impact how people interact with others, oftentimes with significant consequences such as who is hated or loved, hired or fired (e.g., Eberhardt et al., 2006). However, identifying what face features drive these social judgments is challenging because the human face is highly complex, comprising a high number of different facial expressions, textures, complexions, and 3D shapes. Consequently, no formal model of social face signalling currently exists, which in turn has limited the design of artificial agents’ faces to primarily ad hoc approaches that neglect the importance of facial dynamics (e.g., Chen et al., 2019). This project aims to address this knowledge gap by delivering a formal model of face signalling for use in socially interactive artificial agents.

Specifically, this project will a) model the space of 3D dynamic face signals that drive social judgments during social interactions, b) incorporate this model into artificial agents and c) evaluate the model in different human-artificial agent interactions. The result promises to provide a powerful improvement in the design of artificial agents’ face signalling and social interaction capabilities with broad potential for applications in wider society (e.g., social skills training; challenging stereotyping/prejudice).

The face signals will be modelled using methods from human psychophysical perception studies (e.g., see Jack & Schyns, 2017), extending the work of Dr Jack to include a wider range of social signals used in social interactions (e.g., empathy, agreeableness, skepticism). Face signals that go beyond natural boundaries, such as hyper-realistic or super stimuli, will also be explored. The resulting model will be incorporated into artificial agents using the public domain SmartBody (Thiebaux et al., 2008) animation platform, with possible extension to other platforms. Finally, the model will be evaluated in human-agent interaction using the SmartBody platform, possibly in combination with other modalities including head and eye movements, hand/arm gestures, and transient facial changes such as blushing, pallor, or sweating (e.g., Marsella et al., 2013).
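
To illustrate the psychophysical logic (a simulated toy, not Dr Jack’s actual pipeline), the sketch below runs a reverse-correlation analysis: random facial action unit (AU) activations are paired with binary social judgments from a simulated observer, and the resulting “classification image” recovers which AUs drive the percept:

```python
import numpy as np

# Simulated reverse correlation: random AU activations are "judged" by a
# simulated observer whose percept truly depends on AUs 1 and 4; correlating
# stimuli with responses recovers those AUs. All parameters are illustrative.
rng = np.random.default_rng(0)
n_trials, n_aus = 2000, 10
stimuli = rng.random((n_trials, n_aus))          # random AU activations in [0, 1]

true_weights = np.zeros(n_aus)
true_weights[[1, 4]] = [1.0, 0.7]                # AUs that "really" drive the percept
p_yes = 1 / (1 + np.exp(-(stimuli @ true_weights - 0.85)))
responses = rng.random(n_trials) < p_yes         # simulated binary judgments

# Classification-image estimate: mean AU pattern for "yes" minus "no" trials.
kernel = stimuli[responses].mean(axis=0) - stimuli[~responses].mean(axis=0)
print(np.round(kernel, 3))                       # peaks at AUs 1 and 4
```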

Although there is not a current industrial partner, we expect the work to be very relevant to companies interested in the use of virtual agents for social skills training, such as Medical CyberWorld, and companies working on realistic humanoid robots, such as Furhat and Hanson Robotics. Jack and Marsella have pre-existing relations with these companies.

  1. Jack, R. E., & Schyns, P. G. (2017). Toward a social psychophysics of face communication. Annual review of psychology, 68, 269-297.
  2. Eberhardt, J. L., Davies, P. G., Purdie-Vaughns, V. J., & Johnson, S. L. (2006). Looking deathworthy: Perceived stereotypicality of Black defendants predicts capital-sentencing outcomes. Psychological science, 17(5), 383-386.
  3. Chen, C., Hensel, L. B., Duan, Y., Ince, R., Garrod, O. G., Beskow, J., Jack, R. E. & Schyns, P. G. (2019). Equipping Social Robots with Culturally-Sensitive Facial Expressions of Emotion Using Data-Driven Methods. In: 14th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2019), Lille, France, 14-18 May 2019, (Accepted for Publication).
  4. Marsella, S., Xu, Y., Lhommet, M., Feng, A., Scherer, S., & Shapiro, A. (2013). Virtual character performance from speech. In Symposium on Computer Animation, July 2013.
  5. Marsella, S., & Gratch, J. (2014). Computationally modeling human emotion. Communications of the ACM, 57(12), 56-67.
  6. Thiebaux, M., Marshall, A., Marsella, S., & Kallmann, M. (2008). SmartBody: Behavior realization for embodied conversational agents. In Proceedings of Autonomous Agents and Multi-Agent Systems (AAMAS), 2008.

Maria Vlachou (CDT Candidate)

I come to the SOCIAL CDT from a diverse background. I hold an MSc in Data Analytics (University of Glasgow) funded by the Data Lab Scotland, a Research Master’s (KU Leuven, Belgium), and a BSc in Psychology (Panteion University, Greece). I have worked as an Applied Behavior Analysis Trainer (Greece & Denmark) and as a Research Intern at the Department of Quantitative Psychology (KU Leuven), where we focused on statistical methods for psychology and the reproducibility of science. For the last two years I have worked as a Business Intelligence Developer in the pharmaceutical industry. As an MSc student in Glasgow, I was exposed to more advanced statistical methods, and my thesis focused on auto-encoder models for dimensionality reduction.

I consider my future work on the project “Conversational Venue Recommendation” (supervised by Dr Craig Macdonald and Dr Philip McAleer) a natural evolution of all the above. I therefore look forward to working on deep learning methods for building conversational AI chatbots and discovering new complex methods for recommendations in social networks by incorporating users’ characteristics. Overall, I am excited to be part of an interdisciplinary CDT and to have the opportunity to work with people from different research backgrounds.

The Project: Conversational Venue Recommendation

Supervisors: Craig Macdonald (School of Computing Science) and Phil McAleer (School of Psychology)

Increasingly, location-based social networks such as Foursquare, Facebook or Yelp are replacing traditional static travel guidebooks. Indeed, personalised venue recommendation is an important task for location-based social networks. This task aims to suggest interesting venues that a user may visit, personalised to their tastes and current context, as might be detected from their current location, recent venue visits and historical venue visits. Recent models for venue recommendation have encompassed deep learning techniques that are able to make effective personalised recommendations.

Venue recommendation is typically deployed such that the user interacts with a mobile phone application. To the best of our knowledge, voice-based venue recommendation has seen considerably less research, but is a rich area for potential improvement. In particular, a venue recommendation agent may be able to elicit further preferences, ask whether the user prefers one venue or another, or ask for clarification about the type of venue or the distance to be travelled to the next venue.

This proposal aims to:

  • Develop and evaluate models for making venue recommendations using chatbot interfaces that can be adapted to voice through the integration of text-to-speech technology, building upon recent neural network architectures for venue recommendation.
  • Integrate additional factors about the personality of the user, or other voice-based context signals (stress, urgency, group interactions), that can inform the venue recommendation agent; a minimal interaction loop is sketched below.
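
A minimal sketch of such a preference-eliciting dialogue loop follows; the venue data, scoring and dialogue flow are illustrative placeholders, and a deployed system would sit on top of a trained neural recommender rather than a static list:

```python
# Toy preference-elicitation loop: the "chatbot" asks clarifying questions
# (venue type, acceptable distance) and recommends the best-scoring match.
# Venue names and scores are placeholders, not real recommender output.
VENUES = [
    {"name": "Cafe One",     "type": "cafe",       "distance_km": 0.3, "score": 0.81},
    {"name": "Curry House",  "type": "restaurant", "distance_km": 1.2, "score": 0.92},
    {"name": "City Museum",  "type": "museum",     "distance_km": 2.0, "score": 0.88},
]

def recommend(venue_type, max_distance_km):
    candidates = [v for v in VENUES
                  if v["type"] == venue_type and v["distance_km"] <= max_distance_km]
    return max(candidates, key=lambda v: v["score"], default=None)

def dialogue():
    vtype = input("What kind of venue are you after (cafe/restaurant/museum)? ").strip()
    dist = float(input("How far are you willing to travel, in km? "))
    choice = recommend(vtype, dist)
    if choice is None:
        print("Nothing matches; shall we widen the search radius?")
    else:
        print(f"How about {choice['name']}? It's {choice['distance_km']} km away.")

if __name__ == "__main__":
    dialogue()
```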

Venue recommendation is an information access scenario for citizens within a “smart city” – indeed, smart city sensors can be used to augment venue recommendation with information about which areas of the city are busy.

[Man18] Contextual Attention Recurrent Architecture for Context-aware Venue Recommendation. Jarana Manotumruksa, Craig Macdonald and Iadh Ounis. In Proceedings of SIGIR 2018.

[Man17] A Deep Recurrent Collaborative Filtering Framework for Venue Recommendation. Jarana Manotumruksa, Craig Macdonald and Iadh Ounis. In Proceedings of CIKM 2017.

[Dev15] Experiments with a Venue-Centric Model for Personalised and Time-Aware Venue Suggestion. Romain Deveaud, Dyaa Albakour, Craig Macdonald, Iadh Ounis. In Proceedings of CIKM 2015.


Sean Westwood (CDT Candidate)

SeanWestwoodI completed my undergraduate and postgraduate degrees in the School of Psychology here at the University of Glasgow. For my PhD I will be continuing to work under the supervision of Dr Marios Philiastides, who specialises in the neuroscience of decision making and has guided me through my MSc. I will also be working under Professor Alessandro Vinciarelli from the School of Computing Science, who specialises in developing computational models involved in human-AI interactions.

My main research interests are reinforcement learning and decision making in humans, as well as the neurological basis for individual differences between people. These interests stem from my background in childcare and sports coaching. For my undergraduate dissertation I studied how gender and masculinity impact risk-taking under stress to investigate why people may act differently in response to physiological arousal. My postgraduate research has focused on links between noradrenaline and aspects of reinforcement learning, using pupil dilation as a measure of noradrenergic activity.

I am looking forward to continuing with this line of research, with the aim of building computational models that reflect individual patterns of learning based on pupil data. It is my hope that this will open up exciting possibilities for AI programmes that are able to dynamically respond to individual needs in an educational context.

The Project: Neurobiologically-informed optimization of gamified learning environments

Supervisors: Marios Philiastides (School of Psychology) and Alessandro Vinciarelli (School of Computing Science)

Value-based decisions are often required in everyday life, where we must incorporate situational evidence with past experiences to work out which option will lead to the best outcome. However, the mechanisms that govern how these two factors are weighted are not yet fully understood. Gaining insight into these processes could greatly help towards the optimisation of feedback in gamified learning environments. This project aims to develop a closed-loop biofeedback system that leverages unique ways of fusing electroencephalographic (EEG) and pupillometry measurements to investigate the utility of the noradrenergic arousal system in value judgements and learning.

In recent years, it has become well established that pupil diameter consistently varies with certain decision-making variables such as uncertainty, prediction errors and environmental volatility (Larsen & Waters, 2018). The noradrenergic (NA) arousal system in the brainstem is thought to drive the neural networks involved in controlling these variables. Despite the increasing popularity of pupillometry in decision-making research, there are still many aspects that remain unexplored, such as the role of the NA arousal system in regulating learning rate, which is the rate at which new evidence outweighs past experiences in value-based decisions (Nassar et al., 2012).
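
To make the idea of a dynamically regulated learning rate concrete, here is a reduced sketch of a delta-rule learner whose update speed is modulated trial-by-trial by an arousal proxy standing in for pupil-linked NA tone, in the spirit of Nassar et al. (2012); the modulation rule and constants are assumptions for illustration, not the project’s fitted model:

```python
# Toy delta-rule learner: higher arousal -> faster updating from new evidence.
# `arousal` stands in for a normalised pupil-linked measure in [0, 1].
def update_value(value, outcome, arousal, base_lr=0.1, gain=0.5):
    """One value update with an arousal-modulated learning rate."""
    lr = min(1.0, base_lr + gain * arousal)   # assumed modulation rule
    prediction_error = outcome - value
    return value + lr * prediction_error, lr

value = 0.0
for outcome, arousal in [(1.0, 0.1), (1.0, 0.1), (0.0, 0.9), (0.0, 0.9)]:
    value, lr = update_value(value, outcome, arousal)
    print(f"lr={lr:.2f}  value={value:.3f}")
```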

Developing a neurobiological framework of how NA influences feedback processing, and of the effect it has on learning rates, can potentially enable the dynamic manipulation of learning. Recent studies have used real-time EEG analysis to manipulate arousal levels in a challenging perceptual task, showing that it is possible to improve task performance by manipulating feedback (Faller et al., 2019).

A promising area of application of such real-time EEG analysis is the gamification of learning, particularly in digital learning environments. Gamification in a pedagogical context is the idea of using game features (Landers, 2014) to enable a high level of control over stimuli and feedback. This project aims to dynamically alter learning rates via manipulation of the NA arousal system, using known neural correlates associated with learning and decision making such as attentional conflict and levels of uncertainty (Sara & Bouret, 2012). Specifically, the main aims of the project are:

  1. To model the relationship between EEG, pupil diameter and dynamic learning rate during reinforcement learning (Fouragnan et al., 2015).
  2. To model the effect of manipulating arousal, uncertainty and attentional conflict on dynamic learning rate during reinforcement learning.
  3. To develop a digital learning environment that allows for these principles to be applied in a pedagogical context.

Understanding the potential role of the NA arousal system in the way we learn, update beliefs and explore new options could have significant implications in the realm of education and performance. This project will facilitate the creation of an online learning environment which will provide an opportunity to benchmark the utility of neurobiological markers in an educational setting. Success in this endeavour would pave the way for a wide variety of adaptations to learning protocols that could in turn empower a level of learning optimisation and individualisation, as feedback is dynamically and continuously adapted to the needs of the learner.

[FAL19] Faller, J., Cummings, J., Saproo, S., & Sajda, P. (2019). Regulation of arousal via online neurofeedback improves human performance in a demanding sensory-motor task. Proceedings of the National Academy of Sciences, 116(13), 6482-6490.

[FOU15] Fouragnan, E., Retzler, C., Mullinger, K., & Philiastides, M. G. (2015). Two spatiotemporally distinct value systems shape reward-based learning in the human brain. Nature communications, 6, 8107.

[LAN14] Landers, R. N. (2014). Developing a theory of gamified learning: Linking serious games and gamification of learning. Simulation & Gaming, 45(6), 752-768.

[LAR18] Larsen, R. S., & Waters, J. (2018). Neuromodulatory correlates of pupil dilation. Frontiers in neural circuits, 12, 21.

[NAS12] Nassar, M. R., Rumsey, K. M., Wilson, R. C., Parikh, K., Heasly, B., & Gold, J. I. (2012). Rational regulation of learning dynamics by pupil-linked arousal systems. Nature neuroscience, 15(7), 1040.

[SAR12] Sara, S. J., & Bouret, S. (2012). Orienting and reorienting: the locus coeruleus mediates cognition through arousal. Neuron, 76(1), 130-141.


Cohort 1 (2019-2023)

Andrei Birladeanu (CDT Candidate)


I am part of the first cohort of SOCIAL CDT students, working on a project at the intersection of psychiatry and social signal processing. I completed my undergraduate degree in Psychology at the University of Aberdeen, finishing with a thesis examining the physical symptoms of social anxiety. My academic interests are broad, but I have been particularly drawn to the fields of theoretical cognitive science and cognitive neuroscience, in both its basic and translational forms. The latter is what has motivated me to pursue research in the field of computational psychiatry, a novel approach aiming to detect and define mental disorders with the help of data-driven techniques. For my PhD, I am using methods from social signal processing to help psychiatrists identify children who display signs of Reactive Attachment Disorder, a severe cluster of psychological and behavioural issues affecting abused and neglected children.

The Project: Multimodal Deep Learning for Detection and Analysis of Reactive Attachment Disorder in Abused and Neglected Children.

Supervisors: Helen Minnis (Institute of Health and Well Being) and Alessandro Vinciarelli (School of Computing Science).

The goal of this project is to develop AI-driven methodologies for detection and analysis of Reactive Attachment Disorder (RAD), a psychiatric disorder affecting abused and neglected children. The main effect of RAD is “failure to seek and accept comfort”, i.e., the shut-down of a set of psychological processes, known as the Attachment System and essential for normal development, that allow children to establish and maintain beneficial relationships with their caregivers [YAR16]. While having serious implications for the child’s future (e.g., RAD is common in children with complex psychiatric disorder and criminal behavior [MOR17]), RAD is highly amenable to treatment if recognised in infancy [YAR16]. However, the disorder is hard for clinicians to detect because its symptoms are not easily visible to the naked eye.

Encouraging progress in RAD diagnosis has been achieved by manually analyzing videos of children involved in therapeutic sessions with their caregivers, but such an approach is too expensive and time consuming to be applied in a standard clinical setting. For this reason, this project proposes the use of AI-driven technologies for the analysis of human behavior [VIN09]. These have been successfully applied to other attachment-related issues [ROF19] and can help not only to automate the observation of the interactions, thus reducing the amount of time needed for possible diagnosis, but also to identify behavioural markers that might escape clinical observation. The emphasis will be on approaches that jointly model multiple behavioural modalities through the use of appropriate deep network architectures [BAL18]; a minimal sketch of such a fusion model follows the list below.

The experimental activities will revolve around an existing corpus of over 300 real-world videos collected in a clinical setting and they will include three main steps:

  1. Identification of the behavioural cues (the RAD markers) most likely to account for RAD through manual observation of a representative sample of the corpus;
  2. Development of AI-driven methodologies, mostly based on signal processing and deep networks, for the detection of the RAD markers in the videos of the corpus;
  3. Development of AI-driven methodologies, mostly based on deep networks, for the automatic identification of children affected by RAD based on presence and intensity of the cues detected at point 2.
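
As a purely illustrative sketch of the kind of late-fusion architecture steps 2 and 3 point towards (the modalities, feature dimensions and binary head are assumptions, not the project’s actual design), in PyTorch:

```python
import torch
import torch.nn as nn

# Hypothetical late fusion over precomputed per-clip feature vectors for
# face, body and voice cues, producing a logit for a binary RAD-marker label.
class LateFusionClassifier(nn.Module):
    def __init__(self, face_dim=128, body_dim=64, voice_dim=40, hidden=64):
        super().__init__()
        self.face = nn.Sequential(nn.Linear(face_dim, hidden), nn.ReLU())
        self.body = nn.Sequential(nn.Linear(body_dim, hidden), nn.ReLU())
        self.voice = nn.Sequential(nn.Linear(voice_dim, hidden), nn.ReLU())
        self.head = nn.Linear(3 * hidden, 1)   # logit for marker presence

    def forward(self, face, body, voice):
        fused = torch.cat([self.face(face), self.body(body), self.voice(voice)], dim=-1)
        return self.head(fused).squeeze(-1)

model = LateFusionClassifier()
logits = model(torch.randn(8, 128), torch.randn(8, 64), torch.randn(8, 40))
loss = nn.functional.binary_cross_entropy_with_logits(logits, torch.ones(8))
print(logits.shape, float(loss))
```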

The likely outcomes of the project include a scientific analysis of RAD-related behaviours as well as AI-driven methodologies capable of supporting the activity of clinicians. In this respect, the project aligns with the needs and interests of private and public bodies dealing with child and adolescent mental health (e.g., the UK National Health Service and the National Society for the Prevention of Cruelty to Children).

[BAL18] Baltrušaitis, T., Ahuja, C. and Morency, L.P. (2018). Multimodal Machine Learning: A Survey and Taxonomy, IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(2), 421-433.

[HUM17] Humphreys, K. L., Nelson, C. A., Fox, N. A., & Zeanah, C. H. (2017). Signs of reactive attachment disorder and disinhibited social engagement disorder at age 12 years: Effects of institutional care history and high-quality foster care. Development and Psychopathology, 29(2), 675-684.

[MOR17] Moran, K., McDonald, J., Jackson, A., Turnbull, S., & Minnis, H. (2017). A study of Attachment Disorders in young offenders attending specialist services. Child Abuse & Neglect, 65, 77-87.

[ROF19] Roffo G, Vo DB, Tayarani M, Rooksby M, Sorrentino A, Di Folco S, Minnis H, Brewster S, Vinciarelli A. (2019). Automating the Administration and Analysis of Psychiatric Tests: The Case of Attachment in School Age Children. Proceedings of the CHI, Paper No.: 595 Pages 1–12.

[VIN09] Vinciarelli, A., Pantic, M. and Bourlard, H. (2009), Social Signal Processing: Survey of an Emerging Domain, Image and Vision Computing Journal, 27(12), 1743-1759.

[YAR16] Yarger, H. A., Hoye, J. R., & Dozier, M. (2016). Trajectories of change in attachment and biobehavioral catch-up among high risk mothers: a randomised clinical trial. Infant Mental Health Journal, 37(5), 525-536.


Rhiannon Fyfe (CDT Candidate)


I am a PhD student with the SOCIAL CDT. My MA is in English Language and Linguistics from the University of Glasgow. My current area of research is the further development of socially intelligent robots, with the hope of improving Human-Robot Interaction through the use of theory and methods from socially informed linguistics, and through the deployment in a real-world context of MuMMER (a humanoid robot based on SoftBank Robotics’ Pepper robot). During my undergraduate degree, my research interests included the ways in which speech is practically produced and understood, which social factors have an effect on speech, which conversational rules are applied in different social situations, and what causes breakdowns in communication and how they can be avoided. My dissertation was titled “Are There New Emerging Basic Colour Terms in British English? A Statistical Analysis”, a study into how the semantic space of colour is divided linguistically by speakers of different social backgrounds. The prospect of developing helpful and entertaining robots that could be used to aid child language development, the elderly and the general public drew me to the SOCIAL CDT. I am excited to move forward in this research.

The Project: Evaluating and Enhancing Human-Robot Interaction for Multiple Diverse Users in a Real-World Context.

Supervisors: Mary Ellen Foster (School of Computing Science) and Jane Stuart-Smith (School of Critical Studies).

The increasing availability of socially-intelligent robots with functionality for a range of purposes, from guidance in museums [Geh15] to companionship for the elderly [Heb16], has motivated a growing number of studies attempting to evaluate and enhance Human-Robot Interaction (HRI). But, as Honig and Oron-Gilad’s review of recent work on understanding and resolving failures in HRI observes [Hon18], most research has focussed on technical ways of improving robot reliability. They argue that progress requires a “holistic approach” in which “[t]he technical knowledge of hardware and software must be integrated with cognitive aspects of information processing, psychological knowledge of interaction dynamics, and domain-specific knowledge of the user, the robot, the target application, and the environment” (p.16). Honig and Oron-Gilad point to a particular need to improve the ecological validity of evaluating user communication in HRI, by moving away from experimental, single-person environments, with low-relevance tasks, mainly with younger adult users, to more natural settings, with users of different social profiles and communication strategies, where the outcome of successful HRI matters.

The main contribution of this PhD project is to develop an interdisciplinary approach to evaluating and enhancing communication efficacy of HRI, by combining state-of-the-art social robotics with theory and methods from socially-informed linguistics [Cou14] and conversation analysis [Cli16]. Specifically, the project aims to improve HRI with the newly-developed MultiModal Mall Entertainment Robot (MuMMER). MuMMER is a humanoid robot, based on the SoftBank Robotics’ Pepper robot, which has been designed to interact naturally and autonomously in the communicatively-challenging space of a public shopping centre/mall with unlimited possible users of differing social backgrounds and communication styles [Fos16]. MuMMER’s role is to entertain and engage visitors to the shopping mall, thereby enhancing their overall experience in the mall. This in turn requires ensuring successful HRI which is socially acceptable, helpful and entertaining for multiple, diverse users in a real-world context. As of June 2019, the technical development of the MuMMER system has been nearly completed, and the final robot system will be located for 3 months in a shopping mall in Finland during the autumn of 2019.

The PhD project will evaluate HRI with MuMMER in a new setting: a large shopping mall in an English-speaking context, in Scotland’s largest and most socially and ethnically diverse city, Glasgow. Project objectives are to:

  • Design a set of sociolinguistically-informed observational studies of HRI with MuMMER in situ with users from a range of social, ethnic, and language backgrounds, using direct and indirect methods
  • Identify the minimal technical modifications (dialogue, non-verbal, other) to optimise HRI, and thereby user experience and engagement, also considering indices such as consumer footfall to the mall
  • Implement technical alterations, and re-evaluate with new users.

[Cli16] Clift, R. (2016). Conversation Analysis. Cambridge: Cambridge University Press.

[Cou14] Coupland, N., Sarangi, S., & Candlin, C. N. (2014). Sociolinguistics and social theory. Routledge.

[Fos16] Foster M.E., Alami, R., Gestranius, O., Lemon, O., Niemela, M., Odobez, J-M., Pandey, A.M. (2016) The MuMMER Project: Engaging Human-Robot Interaction in Real-World Public Spaces. In: Agah A., Cabibihan J., Howard A., Salichs M., He H. (eds) Social Robotics. ICSR 2016. Lecture Notes in Computer Science, vol 9979. Springer, Cham

[Geh15] Gehle, R., Pitsch, K., Dankert, T., & Wrede, S. (2015). Trouble-based group dynamics in real-world HRI – Reactions on unexpected next moves of a museum guide robot. In: 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2015 (Kobe), 407–412.

[Heb16] Hebesberger, D., Dondrup, C., Koertner, T., Gisinger, C., & Pripfl, J. (2016). Lessons learned from the deployment of a long-term autonomous robot as companion in physical therapy for older adults with dementia: A mixed methods study. In: The Eleventh ACM/IEEE International Conference on Human-Robot Interaction, 27–34.

[Hon18] Honig, S., & Oron-Gilad, T. (2018). Understanding and Resolving Failures in Human-Robot Interaction: Literature Review and Model Development. Frontiers in Psychology, 9, 861.


Salman Mohammadi (CDT Candidate)


I’m a PhD student in the SOCIAL CDT working on Deep Reinforcement Learning and its application to Brain-Computer Interfaces. This work revolves around augmenting human decision-making using AI, by exposing latent neural states correlated with decision-making processes to humans in real time.

Prior to this, I completed my BSc in Computing Science at the University of Glasgow. My honours dissertation was on deep learning methods for learning different compositional styles in classical piano music, and I conducted a user-study which evaluated AI-generated piano music in different styles. As part of a summer scholarship with the School of Computing Science, I’ve been extending this work and researching the wider field of deep variational inference and representation learning for variational auto-encoder models, which focuses on automatically discovering latent and semantically meaningful low dimensional representations of high dimensional data.

In my PhD I’m looking forward to progressing state-of-the-art reinforcement learning and working in the intersection between artificial intelligence and neuroscience. I hope to contribute to research that augments human intelligence with artificial intelligence to create entirely new modes of thought and expression for humans.

The Project: Enhancing Social Interactions via Physiologically-Informed AI.

Supervisors: Marios Philiastides (School of  Psychology) and Alessandro Vinciarelli (School of Computing Science).

Over the past few years, major developments in machine learning (ML) have enabled important advancements in artificial intelligence (AI). Firstly, the field of deep learning (DL), which has enabled models to learn complex input-output functions (e.g. pixels in an image mapped onto object categories), has emerged as a major player in this area. DL builds upon neural network theory and design architectures, expanding these in ways that enable more complex function approximations.

The second major advance in ML has combined advances in DL with reinforcement learning (RL) to enable new AI systems for learning state-action policies – in what is often referred to as deep reinforcement learning (DRL) – to enhance human performance in complex tasks. Despite these advancements, however, critical challenges still exist in incorporating AI into a team with human(s).

One of the most important challenges is the need to understand how humans value intermediate decisions (i.e. before they generate a behaviour) through internal models of their confidence, expected reward, risk etc. Critically, such information about human decision-making is not only expressed through overt behaviour, such as speech or action, but more subtly through physiological changes, small changes in facial expression and posture etc. Socially and emotionally intelligent people are excellent at picking up on this information to infer the current disposition of one another and to guide their decisions and social interactions.

In this project, we propose to develop a physiologically-informed AI platform, utilizing neural and systemic physiological information (e.g. arousal, stress) ([Fou15][Pis17][Ghe18]) together with affective cues from facial features ([Vin09][Bal16]) to infer latent cognitive and emotional states from humans interacting in a series of social decision-making tasks (e.g. trust game, prisoner’s dilemma etc). Specifically, we will use these latent states to generate rich reinforcement signals to train AI agents (specifically DRL) and allow them to develop a “theory of mind” ([Pre78][Fri05]) in order to make predictions about upcoming human behaviour. The ultimate goal of this project is to deliver advancements towards “closing-the-loop”, whereby the AI agent feeds-back its own predictions to the human players in order to optimise behaviour and social interactions.
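
One simple way to picture the “closing-the-loop” idea is as reward shaping, where the agent’s training signal mixes the overt task outcome with scalars decoded from the human’s latent states. The sketch below is conceptual only; the decoded quantities and weights are placeholders for what would, in the project, come from EEG and peripheral physiology:

```python
# Conceptual reward shaping for a physiologically-informed DRL agent:
# combine the overt task reward with decoded human states (all assumed
# normalised to [0, 1]); weights are illustrative placeholders.
def shaped_reward(task_reward, decoded_confidence, decoded_stress,
                  w_conf=0.5, w_stress=0.3):
    """Reward the agent for outcomes the human is confident about and
    penalise states associated with human stress."""
    return task_reward + w_conf * decoded_confidence - w_stress * decoded_stress

# Example: cooperation succeeded (reward 1), human confident, mildly stressed.
print(shaped_reward(1.0, decoded_confidence=0.8, decoded_stress=0.2))  # 1.34
```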

[Ghe18] S Gherman, MG Philiastides, “Human VMPFC encodes early signatures of confidence in perceptual decisions”, eLife, 7: e38293, 2018.

[Pis17] MA Pisauro, E Fouragnan, C Retzler, MG Philiastides, “Neural correlates of evidence accumulation during value-based decisions revealed via simultaneous EEG-fMRI”, Nature Communications, 8: 15808, 2017.

[Fou15] E Fouragnan, C Retzler, KJ Mullinger, MG Philiastides, “Two spatiotemporally distinct value systems shape reward-based learning in the human brain”, Nature Communications, 6: 8107, 2015.

[Vin09] A.Vinciarelli, M.Pantic, and H.Bourlard, “Social Signal Processing: Survey of an Emerging Domain“, Image and Vision Computing Journal, Vol. 27, no. 12, pp. 1743-1759, 2009.

[Bal16] T.Baltrušaitis, P.Robinson, and L.-P. Morency. “Openface: an open source facial behavior analysis toolkit.” Proceedings of the IEEE Winter Conference on Applications of Computer Vision, 2016.

[Pre78] D. Premack, G. Woodruff, “Does the chimpanzee have a theory of mind?”, Behavioral and brain sciences Vol. 1, no. 4, pp. 515-526, 1978.

[Fri05] C. Frith, U. Frith, “Theory of Mind”, Current Biology Vol. 15, no. 17, R644-646, 2005.


Emily O’Hara (CDT Candidate)

Emily O'Hara

My name is Emily O’Hara and I am a current PhD student in SOCIAL, the CDT program for Socially Intelligent Artificial Agents at the University of Glasgow. My doctoral research focuses on the social perception of speech, paying particular attention to how the usage of fillers affects percepts of speaker personality. Within the framework of artificial intelligence, the project aims to improve the functionality and naturalness of artificial voices. My research interests during my undergraduate degree in English Language and Linguistics included sociolinguistics, natural language processing, and psycholinguistics. My dissertation was entitled “Masked Degrees of Facilitation: Can They be Found for Phonological Features in Visual Word Recognition?” and was a psycholinguistic study of how the phonological elements of words are stored in the brain and accessed during reading. The opportunity to integrate my knowledge of linguistic methods and theory with computer science was what attracted me to the CDT, and I look forward to undertaking research that can aid in the creation of more seamless user-AI communication.

The Project: Social Perception of Speech.

Supervisors: Philip McAleer (School of Psychology) and Alessandro Vinciarelli (School of Computing Science).

Short vocalizations like “ehm” and “uhm”, known as fillers in linguistic terminology, are common in everyday conversations (up to one every 10.9 seconds according to the analysis presented in [Vin15]). For this reason, it is important to understand whether the fillers uttered by a person convey personality impressions, i.e., whether people develop a different opinion about a given individual depending on how she/he utters the fillers. This project will use an existing corpus of 2988 fillers (uttered by 120 persons interacting with one another) to achieve the following scientific and technological goals:

  • To establish the vocal parameters that lead to consistent percepts of speaker personality both within and across listeners and the neural areas involved in these attributions from brief fillers.
  • To develop an AI approach aimed at predicting the trait people attribute to an individual when they hear her/his fillers.

The first goal will be achieved through behavioural [Mah18] and neuroimaging experiments [Tod08] that pinpoint how and where in the brain stable personality percepts are processed. From there, acoustical analysis and data-driven approaches using cutting-edge acoustical morphing techniques will allow for generation of hypotheses feeding subsequent AI networks [McA14]. This section will allow the development of the skills necessary to design, implement, and analyse behavioural and neural experiments for establishing social percepts from speech and voice.

The final goal will be achieved through the development of an end-to-end automatic approach that can map the speech signal underlying a filler into the traits that listeners attribute to a speaker. This will allow the development of the skills necessary to design and implement deep neural networks capable of modelling sequences of physical measurements (with an emphasis on speech signals).
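
A minimal sketch of such an end-to-end sequence model, assuming MFCC-like acoustic frames as input and five trait attributions as output (dimensions and architecture are illustrative assumptions, not the project’s final design):

```python
import torch
import torch.nn as nn

# Hypothetical filler-to-traits model: a recurrent encoder summarises the
# acoustic frame sequence of one filler; a linear head scores five traits.
class FillerTraitModel(nn.Module):
    def __init__(self, n_features=13, hidden=64, n_traits=5):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_traits)    # one score per trait

    def forward(self, frames):                     # frames: (batch, time, n_features)
        _, last_hidden = self.encoder(frames)      # (1, batch, hidden)
        return self.head(last_hidden.squeeze(0))   # (batch, n_traits)

model = FillerTraitModel()
batch = torch.randn(4, 120, 13)                    # 4 fillers, 120 frames each
trait_scores = model(batch)                        # predicted attributions
print(trait_scores.shape)                          # torch.Size([4, 5])
```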

The project is relevant to the emerging domain called personality computing [Vin14] and the main application related to this project is the synthesis of “personality colored” speech, i.e., artificial voices that can give the impression of a personality and sound not only more realistic, but also better at performing the task they are developed for [Nas05].

[Mah18]. G. Mahrholz, P. Belin and P. McAleer, “Judgements of a speaker’s personality are correlated across differing content and stimulus type”, PLOS ONE, 13(10): e0204991. 2018

[McA14]. P. McAleer, A. Todorov and P. Belin, “How Do You Say ‘Hello’? Personality Impressions from Brief Novel Voices”, PLoS ONE, 9(3): e90779. 2014

[Tod08]. A. Todorov, S. G. Baron and N. N. Oosterhof, “Evaluating face trustworthiness: a model based approach”, Social Cognitive and Affective Neuroscience, 3(2), pp. 119-127. 2008

[Vin15] A.Vinciarelli, E.Chatziioannou and A.Esposito, “When the Words are not Everything: The Use of Laughter, Fillers, Back-Channel, Silence and Overlapping Speech in Phone Calls“, Frontiers in Information and Communication Technology, 2:4, 2015.

[Vin14] A.Vinciarelli and G.Mohammadi, “A Survey of Personality Computing“, IEEE Transactions on Affective Computing, Vol. 5, no. 3, pp. 273-291, 2014.

[Nas05] C.Nass, S.Brave, “Wired for speech: How voice activates and advances the human-computer relationship”, MIT Press, 2005.


Mary Roth (CDT Candidate)


I am a recent Psychology graduate from the University of Strathclyde, Glasgow. To me, conducting research has always been the most interesting part of my degree. I find that people and minds are the most complex and fascinating phenomena one could study, and throughout completing my degree I have been very passionate about learning more about the mechanisms underlying our cognition, emotion, and behaviour.

Grounded in the work on my dissertation, my current research interests include the psychology of biases, heuristics, and automatic processing. In this PhD programme I will work on the project “Robust, Efficient, Dynamic Theory of Mind” with Stacy Marsella and Lawrence Barsalou.

Being part of the SOCIAL CDT programme, I look forward to contributing to the emerging interdisciplinary junction between psychology and computer science. Coming from a psychological background, I am excited to apply psychological research to the development of more efficient and dynamic models of social situations.

The Project: Robust, Efficient, Dynamic Theory of Mind.

Supervisors: Stacy Marsella (School of Psychology) and Larry Barsalou (School of Psychology).

Background: The ability to function effectively in social situations is a critical human skill, and providing such skills to artificial agents is a core challenge faced by these technologies. The aim of this work is to improve the social skills of artificial agents, making them more robust, by giving them a skill that is fundamental to effective human social interaction: the ability to possess and use beliefs about the mental processes and states of others, commonly called Theory of Mind (ToM) [Whi91]. Theory of Mind skills are predictive of social cooperation and collective intelligence, as well as key to cognitive empathy, emotional intelligence, and the use of shared mental models in teamwork [many references ablated]. Although people typically develop ToM at an early age, research has shown that even adults with a fully formed capability for ToM are limited in their capacity to employ it [Key03; Lin10].

From a computational perspective, there are sound explanations as to why this may be the case. As critical as they are, forming, maintaining and using models of others in decision making can be computationally intractable. Pynadath and Marsella [Pyn07] presented an approach, called minimal mental models, that sought to reduce these costs by exploiting criteria such as prediction accuracy and the utility costs associated with prediction errors as a way to limit model complexity. There is a clear relation between that work and the work in psychology on ad hoc categories formed in order to achieve goals [Bar83], as well as ideas on motivated inference [Kun90].
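
The trade-off at the heart of minimal mental models can be caricatured in a few lines of Python: choose the cheapest candidate model of the other agent once the expected cost of its prediction errors is taken into account; all numbers are illustrative, not values from [Pyn07]:

```python
# Toy rendering of the minimal-mental-models trade-off: richer models of the
# other agent predict better but cost more to run; pick the one minimising
# expected error cost plus computational cost.
candidate_models = [
    {"name": "stereotype", "accuracy": 0.70, "compute_cost": 0.01},
    {"name": "goal-based", "accuracy": 0.85, "compute_cost": 0.10},
    {"name": "full ToM",   "accuracy": 0.95, "compute_cost": 0.60},
]

def choose_model(error_penalty):
    """Minimise expected total cost = (1 - accuracy) * penalty + compute cost."""
    def total_cost(m):
        return (1 - m["accuracy"]) * error_penalty + m["compute_cost"]
    return min(candidate_models, key=total_cost)

print(choose_model(error_penalty=0.2)["name"])   # low stakes  -> "stereotype"
print(choose_model(error_penalty=10.0)["name"])  # high stakes -> "full ToM"
```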

Approach: This effort seeks to develop more robust artificial agents with ToM using an approach that collects data on human ToM performance, analyzes the data and then constructs a computational model based on the analyses. The resulting model will provide artificial agents with a robust, efficient capacity to reason about others.

a) Study the nature of mental model formation and adaptation in people during social interaction: specifically, how one’s own goals, as well as the other’s goals, influence and make tractable the model formation and use process.

b) Develop a tractable computational model of this process that takes into account the artificial agent’s and the human’s goals, as well as their models of each other, in an interaction. Tractability, of course, is fundamental in face-to-face social interactions, where agents must respond rapidly.

c) Evaluate the model in artificial agent – human interactions.

We see this work as fundamental to taking embodied social agents beyond their limited, inflexible approaches to interacting socially with us to a significantly more robust capacity. Key to that will be making theory of mind reasoning in artificial agents more tractable via taking into account both the agent’s goals and the human’s goals in the interaction.

[Kun90] Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480-498.

[Bar83] Barsalou, L. W. (1983). Ad hoc categories. Memory & Cognition, 11(3), 211-227.

[Key03] Keysar, B., Lin, S., & Barr, D. (2003). Limits on theory of mind use in adults. Cognition, 89, 25–41.

[Lin10] Lin, S., Keysar, B., & Epley, N. (2010). Reflexively mindblind: Using theory of mind to interpret behavior requires effortful attention. Journal of Experimental Social Psychology, 46, 551–556.

[Pyn07] Pynadath, D. V., & Marsella, S. C. (2007). Minimal mental models. In: AAAI, pp. 1038-1046.

[Whi91] Whiten, Andrew (ed). Natural Theories of Mind. Oxford: Basil Blackwell, 1991.