Students

Cohort 1 (2019-2023)

Andrei Birladeanu (CDT Candidate)

Andrei Birladeanu

I am part of the first cohort of SOCIAL CDT students and am working with Professor Emily Cross and Dr Mary Ellen Foster. I did my undergraduate degree in Psychology at the University of Aberdeen, finishing with a thesis examining interoceptive abnormalities underlying social anxiety. The project integrated findings from social dynamics, neuroscience, and philosophy, allowing me to see the benefits of interdisciplinary work firsthand. My research interests revolve around the mechanisms behind human-human and human-machine interaction, and the project I am working on aims to understand and improve the interaction between humans and artificial agents. I also have a keen interest in theoretical Cognitive Science, especially the sub-field of philosophy of mind and issues around representation, dynamical systems theory, and computational models of the mind.

The Project: You Never Get a Second Chance to Make a First Impression – Establishing how best to align human expectations about a robot’s performance based on the robot’s appearance and behaviour.

Supervisors: Mary Ellen Foster (School of Computing Science) and Emily Cross (School of Psychology).

Main aims and objectives:

  • A major aim of social robotics is to create embodied agents that humans can instantly and automatically understand and interact with, using the same mechanisms that they use when interacting with each other. While considerable research attention has been invested in this endeavour, it is still the case that when humans encounter robots, they need time to understand how the robot works; in other words, people need time to learn to read the signals the robot generates. People consistently have expectations that are far too high for the artificial agents they encounter, which often leads to confusion and disappointment.
  • If we can better understand human expectations about robot capabilities based on the robot’s appearance (and/or initial behaviours) and ensure that those are aligned with the actual robot abilities, this should accelerate progress in human-robot interaction, specifically in the domains of human acceptance of robots in social settings and cooperative task performance between humans and robots. This project will combine expertise in robotic design and the social neuroscience of how we perceive and interact with artificial agents to develop a socially interactive robot designed for use in public spaces that requires (little or) no learning or effort for humans to interact with while carrying out tasks such as guidance, cooperative navigation, and interactive problem-solving tasks.

Proposed methods:

  • Computing Science: System development and integration (developing operational models of interactive behaviour and implementing them on robot platforms); deployment of robot systems in lab-based settings and in real-world public spaces.
  • Psychology/Brain Science: Behavioural tasks (questionnaires and measures of social perception, such as the Social Stroop task), non-invasive mobile brain imaging (functional near infrared spectroscopy) to record human brain activity when encountering the artificial agent in question.

Likely outputs:

  • Empirically-based principles for social robot design to optimise alignment between the robot’s appearance, user expectations, and robot performance, based on brain and behavioural data
  • A publicly available, implemented, and validated robot system embodying these principles
  • Empirical research papers detailing findings for a computing science audience (e.g., ACM Transactions on Human-Robot Interaction), a psychology/neuroscience audience (e.g., Psychological Science, Cognition), and a general audience that draws on the multidisciplinary aspects of the work (e.g., PNAS, Current Biology), as well as papers at appropriate conferences and workshops such as Human-Robot Interaction, Intelligent Virtual Agents, CHI, and similar.

[Fos17] Foster, M. E.; Gaschler, A.; and Giuliani, M. Automatically Classifying User Engagement for Dynamic Multi-party Human–Robot Interaction. International Journal of Social Robotics. July 2017.

[Fos16] Foster, M. E.; Alami, R.; Gestranius, O.; Lemon, O.; Niemelä, M.; Odobez, J.; and Pandey, A. K. The MuMMER project: Engaging human-robot interaction in real-world public spaces. In Proceedings of the Eighth International Conference on Social Robotics, 2016.

[Cro19] Cross, E. S., Riddoch, K. A., Pratts, J., Titone, S., Chaudhury, B. & Hortensius, R. (2019). A neurocognitive investigation of the impact of socialising with a robot on empathy for pain. Philosophical Transactions of the Royal Society B.

[Hor18] Hortensius, R. & Cross, E.S. (2018). From automata to animate beings: The scope and limits of attributing socialness to artificial agents. Annals of the New York Academy of Science: The Year in Cognitive Neuroscience.


Rhiannon Fyfe (CDT Candidate)

Rhiannon Fyfe

I am a PhD student with the SOCIAL CDT. My MA is in English Language and Linguistics from the University of Glasgow. My current area of research is the further development of socially intelligent robots, with the hope of improving Human-Robot Interaction through theory and methods from socially informed linguistics, and through the real-world deployment of MuMMER (a humanoid robot based on SoftBank Robotics’ Pepper robot). During my undergraduate degree, my research interests included how speech is produced and understood in practice, which social factors affect speech, which conversational rules apply in different social situations, and what causes breakdowns in communication and how they can be avoided. My dissertation, titled “Are There New Emerging Basic Colour Terms in British English? A Statistical Analysis”, was a study of how the semantic space of colour is divided linguistically by speakers of different social backgrounds. The prospect of developing helpful and entertaining robots that could aid child language development, the elderly, and the general public drew me to the SOCIAL CDT. I am excited to move forward in this research.

The Project: Evaluating and Enhancing Human-Robot Interaction for Multiple Diverse Users in a Real-World Context.

Supervisors: Mary Ellen Foster (School of Computing Science) and Jane Stuart-Smith (School of Critical Studies).

The increasing availability of socially-intelligent robots with functionality for a range of purposes, from guidance in museums [Geh15], to companionship for the elderly [Heb16], has motivated a growing number of studies attempting to evaluate and enhance Human-Robot Interaction (HRI). But, as Honig and Oron-Gilad’s review of recent work on understanding and resolving failures in HRI observes [Hon18], most research has focussed on technical ways of improving robot reliability. They argue that progress requires a “holistic approach” in which “[t]he technical knowledge of hardware and software must be integrated with cognitive aspects of information processing, psychological knowledge of interaction dynamics, and domain-specific knowledge of the user, the robot, the target application, and the environment” (p.16). Honig and Oron-Gilad point to a particular need to improve the ecological validity of evaluating user communication in HRI, by moving away from experimental, single-person environments, with low-relevance tasks, mainly with younger adult users, to more natural settings, with users of different social profiles and communication strategies, where the outcome of successful HRI matters.

The main contribution of this PhD project is to develop an interdisciplinary approach to evaluating and enhancing the communication efficacy of HRI, by combining state-of-the-art social robotics with theory and methods from socially-informed linguistics [Cou14] and conversation analysis [Cli16]. Specifically, the project aims to improve HRI with the newly-developed MultiModal Mall Entertainment Robot (MuMMER). MuMMER is a humanoid robot, based on SoftBank Robotics’ Pepper robot, designed to interact naturally and autonomously in the communicatively-challenging space of a public shopping centre/mall with unlimited possible users of differing social backgrounds and communication styles [Fos16]. MuMMER’s role is to entertain and engage visitors to the shopping mall, thereby enhancing their overall experience in the mall. This in turn requires ensuring successful HRI which is socially acceptable, helpful and entertaining for multiple, diverse users in a real-world context. As of June 2019, the technical development of the MuMMER system has been nearly completed, and the final robot system will be located for 3 months in a shopping mall in Finland during the autumn of 2019.

The PhD project will evaluate HRI with MuMMER in a new setting: a large shopping mall in an English-speaking context, in Scotland’s largest and most socially and ethnically diverse city, Glasgow. Project objectives are to:

  • Design a set of sociolinguistically-informed observational studies of HRI with MuMMER in situ with users from a range of social, ethnic, and language backgrounds, using direct and indirect methods
  • Identify the minimal technical modifications (dialogue, non-verbal, or other) to optimise HRI, and thereby user experience and engagement, also considering indices such as consumer footfall in the mall
  • Implement technical alterations, and re-evaluate with new users.

[Cli16] Clift, R. (2016). Conversation Analysis. Cambridge: Cambridge University Press.

[Cou14] Coupland, N., Sarangi, S., & Candlin, C. N. (2014). Sociolinguistics and social theory. Routledge.

[Fos16] Foster M.E., Alami, R., Gestranius, O., Lemon, O., Niemela, M., Odobez, J-M., Pandey, A.M. (2016) The MuMMER Project: Engaging Human-Robot Interaction in Real-World Public Spaces. In: Agah A., Cabibihan J., Howard A., Salichs M., He H. (eds) Social Robotics. ICSR 2016. Lecture Notes in Computer Science, vol 9979. Springer, Cham

[Geh15] Gehle R., Pitsch K., Dankert T., Wrede S. (2015). Trouble-based group dynamics in real-world HRI – Reactions on unexpected next moves of a museum guide robot., in 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2015 (Kobe), 407–412.

[Heb16] Hebesberger, D., Dondrup, C., Koertner, T., Gisinger, C., Pripfl, J. (2016). Lessons learned from the deployment of a long-term autonomous robot as companion in physical therapy for older adults with dementia: A mixed methods study. In: The Eleventh ACM/IEEE International Conference on Human Robot Interaction, 27–34.

[Hon18] Honig, S., & Oron-Gilad, T. (2018). Understanding and Resolving Failures in Human-Robot Interaction: Literature Review and Model Development. Frontiers in Psychology, 9, 861.


Salman Mohammadi (CDT Candidate)

Salman Mohammadi

I’m a PhD student in the SOCIAL CDT working on Deep Reinforcement Learning and its application to Brain Computer Interfaces. This kind of work revolves around augmenting human decision making processes using AI, by exposing latent neural states correlated with decision making processes to humans in real-time.

Prior to this, I completed my BSc in Computing Science at the University of Glasgow. My honours dissertation was on deep learning methods for learning different compositional styles in classical piano music, and I conducted a user-study which evaluated AI-generated piano music in different styles. As part of a summer scholarship with the School of Computing Science, I’ve been extending this work and researching the wider field of deep variational inference and representation learning for variational auto-encoder models, which focuses on automatically discovering latent and semantically meaningful low dimensional representations of high dimensional data.

In my PhD I’m looking forward to progressing state-of-the-art reinforcement learning and working in the intersection between artificial intelligence and neuroscience. I hope to contribute to research that augments human intelligence with artificial intelligence to create entirely new modes of thought and expression for humans.

The Project: Enhancing Social Interactions via Physiologically-Informed AI.

Supervisors: Marios Philiastides (School of Psychology) and Alessandro Vinciarelli (School of Computing Science).

Over the past few years, major developments in machine learning (ML) have enabled important advancements in artificial intelligence (AI). First, the field of deep learning (DL), which enables models to learn complex input-output functions (e.g., pixels in an image mapped onto object categories), has emerged as a major player in this area. DL builds upon neural network theory and design architectures, expanding these in ways that enable more complex function approximations.

The second major advance in ML has combined advances in DL with reinforcement learning (RL) to enable new AI systems for learning state-action policies – in what is often referred to as deep reinforcement learning (DRL) – to enhance human performance in complex tasks. Despite these advancements, however, critical challenges still exist in incorporating AI into a team with human(s).

One of the most important challenges is the need to understand how humans value intermediate decisions (i.e. before they generate a behaviour) through internal models of their confidence, expected reward, risk, etc. Critically, such information about human decision-making is not only expressed through overt behaviour, such as speech or action, but more subtly through physiological changes, small changes in facial expression and posture, etc. Socially and emotionally intelligent people are excellent at picking up on this information to infer the current disposition of one another and to guide their decisions and social interactions.

In this project, we propose to develop a physiologically-informed AI platform, utilizing neural and systemic physiological information (e.g. arousal, stress) ([Fou15][Pis17][Ghe18]) together with affective cues from facial features ([Vin09][Bal16]) to infer latent cognitive and emotional states from humans interacting in a series of social decision-making tasks (e.g. the trust game, the prisoner’s dilemma). Specifically, we will use these latent states to generate rich reinforcement signals to train AI agents (specifically DRL) and allow them to develop a “theory of mind” ([Pre78][Fri05]) in order to make predictions about upcoming human behaviour. The ultimate goal of this project is to deliver advancements towards “closing the loop”, whereby the AI agent feeds back its own predictions to the human players in order to optimise behaviour and social interactions.
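To make the idea of physiologically-derived reinforcement signals concrete, here is a minimal sketch of reward shaping in tabular Q-learning. Everything here is an illustrative assumption, not the project's actual design: `infer_confidence` stands in for a real physiological decoder (e.g. EEG- or arousal-based), and the toy task is arbitrary.

```python
import random

def infer_confidence(state):
    """Stand-in for a physiological decoder mapping measurements to a
    confidence estimate in [0, 1]; here a toy rule (odd states = confident)."""
    return 0.5 + 0.5 * (state % 2)

def shaped_reward(task_reward, confidence, weight=0.3):
    """Augment the task reward with the decoded latent state."""
    return task_reward + weight * confidence

def q_learning(n_states=4, n_actions=2, episodes=500,
               alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a toy cyclic task, trained on shaped rewards."""
    q = [[0.0] * n_actions for _ in range(n_states)]
    rng = random.Random(0)
    for _ in range(episodes):
        s = 0
        for _ in range(10):
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: q[s][x])
            s2 = (s + a + 1) % n_states
            r = shaped_reward(1.0 if s2 == n_states - 1 else 0.0,
                              infer_confidence(s2))
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()
```

The design choice being illustrated is that the decoded human state enters the agent's learning signal directly, rather than only its observations, so the agent's policy is pulled towards behaviour associated with favourable human states.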

[Ghe18] S Gherman, MG Philiastides, “Human VMPFC encodes early signatures of confidence in perceptual decisions”, eLife, 7: e38293, 2018.

[Pis17] MA Pisauro, E Fouragnan, C Retzler, MG Philiastides, “Neural correlates of evidence accumulation during value-based decisions revealed via simultaneous EEG-fMRI”, Nature Communications, 8: 15808, 2017.

[Fou15] E Fouragnan, C Retzler, KJ Mullinger, MG Philiastides, “Two spatiotemporally distinct value systems shape reward-based learning in the human brain”, Nature Communications, 6: 8107, 2015.

[Vin09] A.Vinciarelli, M.Pantic, and H.Bourlard, “Social Signal Processing: Survey of an Emerging Domain“, Image and Vision Computing Journal, Vol. 27, no. 12, pp. 1743-1759, 2009.

[Bal16] T.Baltrušaitis, P.Robinson, and L.-P. Morency. “Openface: an open source facial behavior analysis toolkit.” Proceedings of the IEEE Winter Conference on Applications of Computer Vision, 2016.

[Pre78] D. Premack, G. Woodruff, “Does the chimpanzee have a theory of mind?”, Behavioral and brain sciences Vol. 1, no. 4, pp. 515-526, 1978.

[Fri05] C. Frith, U. Frith, “Theory of Mind”, Current Biology Vol. 15, no. 17, R644-646, 2005.


Emily O’Hara (CDT Candidate)

Emily O'Hara

My name is Emily O’Hara and I am a current PhD student in SOCIAL, the CDT programme for Socially Intelligent Artificial Agents at the University of Glasgow. My doctoral research focuses on the social perception of speech, paying particular attention to how the use of fillers affects percepts of speaker personality. On the artificial intelligence side, the project aims to improve the functionality and naturalness of artificial voices. My research interests during my undergraduate degree in English Language and Linguistics included sociolinguistics, natural language processing, and psycholinguistics. My dissertation was entitled “Masked Degrees of Facilitation: Can They be Found for Phonological Features in Visual Word Recognition?” and was a psycholinguistic study of how the phonological elements of words are stored in the brain and accessed during reading. The opportunity to integrate my knowledge of linguistic methods and theory with computer science was what attracted me to the CDT, and I look forward to undertaking research that can aid in the creation of more seamless user-AI communication.

The Project: Social Perception of Speech.

Supervisors: Philip McAleer (School of Psychology) and Alessandro Vinciarelli (School of Computing Science).

Short vocalizations like “ehm” and “uhm”, known as fillers in linguistic terminology, are common in everyday conversations (up to one every 10.9 seconds, according to the analysis presented in [Vin15]). For this reason, it is important to understand whether the fillers uttered by a person convey personality impressions, i.e., whether people develop a different opinion about a given individual depending on how she/he utters fillers. This project will use an existing corpus of 2988 fillers (uttered by 120 persons interacting with one another) to achieve the following scientific and technological goals:

  • To establish the vocal parameters that lead to consistent percepts of speaker personality both within and across listeners and the neural areas involved in these attributions from brief fillers.
  • To develop an AI approach aimed at predicting the trait people attribute to an individual when they hear her/his fillers.

The first goal will be achieved through behavioural [Mah18] and neuroimaging experiments [Tod08] that pinpoint how and where in the brain stable personality percepts are processed. From there, acoustical analysis and data-driven approaches using cutting-edge acoustical morphing techniques will allow for generation of hypotheses feeding subsequent AI networks [McA14]. This section will allow the development of the skills necessary to design, implement, and analyse behavioural and neural experiments for establishing social percepts from speech and voice.

The final goal will be achieved through the development of an end-to-end automatic approach that can map the speech signal underlying a filler into the traits that listeners attribute to a speaker. This will allow the development of the skills necessary to design and implement deep neural networks capable of modelling sequences of physical measurements (with an emphasis on speech signals).
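The shape of such a sequence-to-traits mapping can be sketched as follows. This is a deliberately tiny, untrained stand-in, not the project's actual model: the feature dimension, hidden size, trait count, and the Elman-style recurrence are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 13 MFCC-like features per frame, 16 hidden units,
# 5 Big-Five-style trait outputs.
N_FEATS, N_HIDDEN, N_TRAITS = 13, 16, 5

# Random (untrained) weights; in practice these would be learned from
# listener trait ratings of corpus fillers.
W_in = rng.normal(0, 0.1, (N_HIDDEN, N_FEATS))
W_rec = rng.normal(0, 0.1, (N_HIDDEN, N_HIDDEN))
W_out = rng.normal(0, 0.1, (N_TRAITS, N_HIDDEN))

def predict_traits(frames):
    """Map a (T, N_FEATS) sequence of acoustic frames for one filler to
    trait scores in (0, 1) via a simple Elman recurrence."""
    h = np.zeros(N_HIDDEN)
    for x in frames:
        h = np.tanh(W_in @ x + W_rec @ h)
    return 1 / (1 + np.exp(-(W_out @ h)))  # sigmoid output per trait

# A synthetic 40-frame "filler" in place of real acoustic features.
filler = rng.normal(size=(40, N_FEATS))
scores = predict_traits(filler)
```

The key property the sketch shows is that the final hidden state summarises an arbitrary-length filler into a fixed-size vector, from which per-trait attributions are read out.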

The project is relevant to the emerging domain called personality computing [Vin14] and the main application related to this project is the synthesis of “personality colored” speech, i.e., artificial voices that can give the impression of a personality and sound not only more realistic, but also better at performing the task they are developed for [Nas05].

[Mah18]. G. Mahrholz, P. Belin and P. McAleer, “Judgements of a speaker’s personality are correlated across differing content and stimulus type”, PLOS ONE, 13(10): e0204991. 2018

[McA14]. P. McAleer, A. Todorov and P. Belin, “How Do You Say ‘Hello’? Personality Impressions from Brief Novel Voices”, PLOS ONE, 9(3): e90779. 2014

[Tod08]. A. Todorov, S. G. Baron and N. N. Oosterhof, “Evaluating face trustworthiness: a model based approach”, Social Cognitive and Affective Neuroscience, 3(2), pp. 119-127. 2008

[Vin15] A.Vinciarelli, E.Chatziioannou and A.Esposito, “When the Words are not Everything: The Use of Laughter, Fillers, Back-Channel, Silence and Overlapping Speech in Phone Calls“, Frontiers in Information and Communication Technology, 2:4, 2015.

[Vin14] A.Vinciarelli and G.Mohammadi, “A Survey of Personality Computing“, IEEE Transactions on Affective Computing, Vol. 5, no. 3, pp. 273-291, 2014.

[Nas05] C.Nass, S.Brave, “Wired for speech: How voice activates and advances the human-computer relationship”, MIT Press, 2005.


Mary Roth (CDT Candidate)

Mary Roth

I am a recent Psychology graduate from the University of Strathclyde, Glasgow. To me, conducting research has always been the most interesting part of my degree. I find that people and minds are the most complex and fascinating phenomena one could study, and throughout completing my degree I have been very passionate about learning more about the mechanisms underlying our cognition, emotion, and behaviour.

Grounded in the work on my dissertation, my current research interests include the psychology of biases, heuristics, and automatic processing. In this PhD programme I will work on the project “Robust, Efficient, Dynamic Theory of Mind” with Stacy Marsella and Lawrence Barsalou.

Being part of the SOCIAL CDT programme, I look forward to contributing to the emerging interdisciplinary junction between psychology and computer science. Coming from a psychological background, I am excited to apply psychological research to the development of more efficient and dynamic models of social situations.

The Project: Robust, Efficient, Dynamic Theory of Mind.

Supervisors: Stacy Marsella (School of Psychology) and Larry Barsalou (School of Psychology).

Background: The ability to function effectively in social settings is a critical human skill, and providing such skills to artificial agents is a core challenge for these technologies. The aim of this work is to improve the social skills of artificial agents, making them more robust, by giving them a skill that is fundamental to effective human social interaction: the ability to possess and use beliefs about the mental processes and states of others, commonly called Theory of Mind (ToM) [Whi91]. Theory of Mind skills are predictive of social cooperation and collective intelligence, as well as key to cognitive empathy, emotional intelligence, and the use of shared mental models in teamwork [many references ablated]. Although people typically develop ToM at an early age, research has shown that even adults with a fully formed capability for ToM are limited in their capacity to employ it [Key03][Lin10].

From a computational perspective, there are sound explanations as to why this may be the case. As critical as they are, forming, maintaining and using models of others in decision making can be computationally intractable. Pynadath & Marsella [Pin07] presented an approach, called minimal mental models, that sought to reduce these costs by exploiting criteria such as prediction accuracy and the utility costs associated with prediction errors as a way to limit model complexity. That work is clearly related to the work in psychology on ad hoc categories formed in order to achieve goals [Bar83], as well as to ideas on motivated inference [Kin90].

Approach: This effort seeks to develop more robust artificial agents with ToM using an approach that collects data on human ToM performance, analyzes the data and then constructs a computational model based on the analyses. The resulting model will provide artificial agents with a robust, efficient capacity to reason about others.

a) Study the nature of mental model formation and adaptation in people during social interaction: specifically, how one’s own goals, as well as the other’s goals, influence and make tractable the model formation and use process.

b) Develop a tractable computational model of this process that takes into account the artificial agent’s and the human’s goals, as well as their models of each other, in an interaction. Tractability is of course fundamental in face-to-face social interaction, where agents must respond rapidly.

c) Evaluate the model in artificial agent-human interactions.

We see this work as fundamental to taking embodied social agents beyond their limited, inflexible approaches to interacting socially with us to a significantly more robust capacity. Key to that will be making theory of mind reasoning in artificial agents more tractable via taking into account both the agent’s goals and the human’s goals in the interaction.
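The cost-sensitive trade-off behind minimal mental models [Pin07] can be sketched in a few lines. This is a hypothetical illustration, not the cited algorithm: the candidate models, the complexity values, and the scoring rule are all invented for the example.

```python
def prediction_error(model, history):
    """Fraction of past interactions the model mispredicts.
    history: list of (observed_action, context) pairs."""
    misses = sum(1 for obs, ctx in history if model["predict"](ctx) != obs)
    return misses / len(history)

def select_model(candidates, history, error_cost=1.0):
    """Pick the model minimising (cost of prediction errors) + (complexity).
    A richer model is only chosen when its accuracy gain pays for its cost."""
    return min(candidates,
               key=lambda m: error_cost * prediction_error(m, history)
                             + m["complexity"])

# Two hypothetical models of the other agent: a cheap fixed stereotype
# versus a costlier model sensitive to whether goals are aligned.
candidates = [
    {"name": "stereotype", "complexity": 0.05,
     "predict": lambda ctx: "cooperate"},
    {"name": "goal-based", "complexity": 0.40,
     "predict": lambda ctx: "cooperate" if ctx == "aligned" else "defect"},
]

history = [("cooperate", "aligned"),
           ("defect", "opposed"),
           ("cooperate", "aligned")]

best = select_model(candidates, history)
```

With the default error cost the occasional misprediction of the cheap stereotype is tolerable, but raising `error_cost` (i.e. when prediction errors carry high utility costs) tips the selection towards the richer goal-based model, which is the trade-off the minimal-mental-models idea exploits.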

[Kin90] Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480-498.

[Bar83] Barsalou, L. W. (1983). Ad hoc categories. Memory & Cognition, 11, 211–227.

[Key03] Keysar, B., Lin, S., & Barr, D. (2003). Limits on theory of mind use in adults. Cognition, 89, 25–41.

[Lin10] Lin, S., Keysar, B., & Epley, N. (2010). Reflexively mindblind: Using theory of mind to interpret behavior requires effortful attention. Journal of Experimental Social Psychology, 46, 551–556.

[Pin07] David V. Pynadath & Stacy C. Marsella (2007). Minimal Mental Models. In: AAAI, pp. 1038-1046.

[Whi91] Whiten, Andrew (ed). Natural Theories of Mind. Oxford: Basil Blackwell, 1991.