Projects

Enhancing Social Interactions via Physiologically-Informed AI.

Supervisors: Marios Philiastides (School of Psychology) and Alessandro Vinciarelli (School of Computing Science).

Over the past few years, major developments in machine learning (ML) have enabled important advances in artificial intelligence (AI). First, the field of deep learning (DL), which enables models to learn complex input-output functions (e.g. pixels in an image mapped onto object categories), has emerged as a major player in this area. DL builds upon neural network theory and design architectures, expanding these in ways that enable more complex function approximations.

The second major advance in ML has combined advances in DL with reinforcement learning (RL) to enable new AI systems for learning state-action policies – in what is often referred to as deep reinforcement learning (DRL) – to enhance human performance in complex tasks. Despite these advancements, however, critical challenges still exist in incorporating AI into a team with human(s).

One of the most important challenges is the need to understand how humans value intermediate decisions (i.e. before they generate a behaviour) through internal models of their confidence, expected reward, risk, etc. Critically, such information about human decision-making is expressed not only through overt behaviour, such as speech or action, but also more subtly through physiological changes, small shifts in facial expression, posture, and so on. Socially and emotionally intelligent people are excellent at picking up on this information to infer one another's current disposition and to guide their decisions and social interactions.

In this project, we propose to develop a physiologically-informed AI platform, utilizing neural and systemic physiological information (e.g. arousal, stress) ([Fou15][Pis17][Ghe18]) together with affective cues from facial features ([Vin09][Bal16]) to infer latent cognitive and emotional states from humans interacting in a series of social decision-making tasks (e.g. the trust game, the prisoner's dilemma). Specifically, we will use these latent states to generate rich reinforcement signals to train AI agents (specifically DRL agents) and allow them to develop a "theory of mind" ([Pre78][Fri05]) in order to make predictions about upcoming human behaviour. The ultimate goal of this project is to deliver advances towards "closing the loop", whereby the AI agent feeds back its own predictions to the human players in order to optimise behaviour and social interactions.
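As a rough illustration of how inferred latent states could serve as reinforcement signals, the sketch below augments the task reward of a toy tabular Q-learning agent with a confidence value decoded from (here simulated) physiological features. The environment, the decoder, and the weighting term are illustrative assumptions rather than the project's actual design.

```python
import numpy as np

# Hypothetical sketch: shaping a tabular Q-learning agent's reward with a latent
# "confidence" signal inferred from physiology. The toy 2-state / 2-action task,
# the decoder, and the weighting beta are assumptions for illustration only.

rng = np.random.default_rng(0)
n_states, n_actions = 2, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, beta = 0.1, 0.95, 0.5   # learning rate, discount, physiology weight

def infer_confidence(eeg_features):
    """Placeholder decoder mapping physiological features to confidence in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-eeg_features.mean()))

def step(state, action):
    """Toy environment: action 1 in state 1 pays off; otherwise a small penalty."""
    reward = 1.0 if (state == 1 and action == 1) else -0.1
    next_state = rng.integers(n_states)
    return reward, next_state

state = 0
for t in range(1000):
    action = Q[state].argmax() if rng.random() > 0.1 else rng.integers(n_actions)
    task_reward, next_state = step(state, action)
    eeg_features = rng.normal(size=8)                  # stand-in for recorded signals
    shaped_reward = task_reward + beta * infer_confidence(eeg_features)
    Q[state, action] += alpha * (shaped_reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(Q)
```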

[Ghe18] S Gherman, MG Philiastides, “Human VMPFC encodes early signatures of confidence in perceptual decisions”, eLife, 7: e38293, 2018.

[Pis17] MA Pisauro, E Fouragnan, C Retzler, MG Philiastides, “Neural correlates of evidence accumulation during value-based decisions revealed via simultaneous EEG-fMRI”, Nature Communications, 8: 15808, 2017.

[Fou15] E Fouragnan, C Retzler, KJ Mullinger, MG Philiastides, “Two spatiotemporally distinct value systems shape reward-based learning in the human brain”, Nature Communications, 6: 8107, 2015.

[Vin09] A.Vinciarelli, M.Pantic, and H.Bourlard, “Social Signal Processing: Survey of an Emerging Domain“, Image and Vision Computing Journal, Vol. 27, no. 12, pp. 1743-1759, 2009.

[Bal16] T.Baltrušaitis, P.Robinson, and L.-P. Morency. “Openface: an open source facial behavior analysis toolkit.” Proceedings of the IEEE Winter Conference on Applications of Computer Vision, 2016.

[Pre78] D. Premack, G. Woodruff, “Does the chimpanzee have a theory of mind?”, Behavioral and brain sciences Vol. 1, no. 4, pp. 515-526, 1978.

[Fri05] C. Frith, U. Frith, “Theory of Mind”, Current Biology Vol. 15, no. 17, R644-646, 2005.


Into the thick of it: Situating digital health behaviour interventions

Supervisors: Esther Papies (School of Psychology) and Stacy Marsella (School of Psychology).

Aims and Objectives.  This project will examine how best to integrate a digital health intervention into a user's daily life. Are digital interventions more effective if they are situated, i.e., adapted to the specific users and situations in which behaviour change should happen? If so, which features of situations should a health (phone) app use to remind a user to perform a healthy behaviour (e.g., time of day, location, mood, activity pattern)? From a Social AI perspective, how do we make inferences about those situations from sensing data and prior models of users' situated behaviour, how and when should the app socially interact with the user to improve the situated behaviour, and how do we adapt the user model over time to improve the app's tailored interaction with a specific user? We will test this in the domain of hydration, with an intervention to increase the consumption of water.

Background and Novelty.  Digital interventions are a powerful new tool in the domain of individual health behaviour change. Health apps can reach large numbers of users at relatively low cost and can be tailored to an individual's health goals. So far, however, digital health interventions have not exploited a key strength over traditional interventions delivered by human health practitioners: the ability to situate interventions in critical situations in a user's daily life. Rather than being presented statically at pre-set times, situated interventions respond and adapt to the key contextual features that affect a user's health behaviour. Previous work has shown that context features have a powerful influence on health behaviour, for example by triggering habits, impulses, and social norms. It is therefore vital for effective behaviour change interventions to take the specific context of a user's health behaviours into account. The current proposal will test whether situating a mobile health intervention, i.e., designing it to respond adaptively to contextual features, increases its effectiveness compared to unsituated interventions. We will do this in the domain of hydration, because research suggests that many adults may be chronically dehydrated, with implications for cognitive functioning, mood, and physical health (e.g., risk of diabetes, overweight, kidney damage).

Methods.  We will build an app to increase water intake and compare a static version of this app with a dynamic version that responds to time, a user’s activity level, location, social context, mood, and other possible features that may be linked to hydration (Paper 1).  We will assess whether an app that responds actively to such features leads over time to more engagement and behaviour change than a static app (Paper 2), and which contextual inferences work best to situate an app for effective behaviour change (Paper 3).
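As a minimal sketch of what "situating" the intervention could mean computationally, the snippet below gates a reminder on a handful of sensed context features; the feature names and thresholds are assumptions for illustration only, and in the project such hand-written rules would be replaced by learned, user-specific models.

```python
from dataclasses import dataclass
import datetime

# Illustrative sketch only: a rule-based trigger deciding whether the "situated"
# version of a hydration app prompts the user, given sensed context features.

@dataclass
class Context:
    time: datetime.time
    activity_level: float          # e.g. normalised step rate over the last 30 minutes
    at_usual_drinking_spot: bool   # e.g. near the kitchen or office water cooler
    hours_since_last_drink: float

def should_prompt(ctx: Context) -> bool:
    """Prompt only when context suggests the reminder will land in a relevant situation."""
    waking_hours = datetime.time(8) <= ctx.time <= datetime.time(22)
    recently_active = ctx.activity_level > 0.5
    overdue = ctx.hours_since_last_drink >= 2.0
    return waking_hours and overdue and (recently_active or ctx.at_usual_drinking_spot)

print(should_prompt(Context(datetime.time(14, 30), 0.7, False, 2.5)))  # True
```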

Outputs.  This project will lead to presentations and papers at both AI and Psychology conferences outlining the principles and results of situating health behaviour interventions, using the tested healthy hydration app.

Impact.  Results from this work will have implications for the design of health behaviour interventions across domains, as well as for our understanding of the processes underlying behaviour change. It will explore how sensing and adaptive user modelling can situate both user and AI system in a common contextual frame, and whether this facilitates engagement and behaviour change.

Alignment with Industrial Interests.  This work will be of direct interest to industrial collaborators working on personalised health behaviour, such as Danone.

[MUN15] Muñoz, C. X., Johnson, E. C., McKenzie, A. L., Guelinckx, I., Graverholt, G., Casa, D. J., … Armstrong, L. E. (2015). Habitual total water intake and dimensions of mood in healthy young women. Appetite, 92, 81–86.

[PAP17] Papies, E. K. (2017). Situating interventions to bridge the intention–behaviour gap: A framework for recruiting nonconscious processes for behaviour change. Social and Personality Psychology Compass, 11(7), n/a-n/a.

[RIE13] Riebl, S. K., & Davy, B. M. (2013). The Hydration Equation: Update on Water Balance and Cognitive Performance. ACSM’s health & fitness journal, 17(6), 21–28.

[WAN17] Wang and S. Marsella, “Assessing personality through objective behavioral sensing,” in Proceedings of the 7th international conference on affective computing and intelligent interaction, 2017.

[LYN11] Lynn C. Miller, Stacy Marsella, Teresa Dey, Paul Robert Appleby, John L. Christensen, Jennifer Klatt and Stephen J. Read. Socially Optimized Learning in Virtual Environments (SOLVE). The Fourth International Conference on Interactive Digital Storytelling (ICIDS), Vancouver, Canada, Nov. 2011.

[PYN07] Pynadath, David V.; Marsella, Stacy C.  Minimal mental models.  In Proceedings of the 22ndNational Conference on Artificial Intelligence (AAAI), pp. 1038-1044, 2007.


Sharing the road: Cyclists and automated vehicles

Supervisors: Steve Brewster (School of Computing Science) and Frank Pollick (School of Psychology).

Automated vehicles must share the road with pedestrians and cyclists, and drive safely around them. Autonomous cars, therefore, must have some form of social intelligence if they are to function correctly around other road users. There has been work looking at how pedestrians may interact with future autonomous vehicles [ROT15], and potential solutions have been proposed (e.g. displays on the outside of cars to indicate that the car has seen the pedestrian). However, there has been little work on automated cars and cyclists.

When there is no driver in the car, social cues such as eye contact, waving, etc., are lost [ROT15]. This changes the social interaction between the car and the cyclist, and may cause accidents if it is no longer clear, for example, who should proceed. Automated cars also behave differently to cars driven by humans, e.g. they may appear more cautious in their driving, which the cyclist may misinterpret. The aim of this project is to study the social cues used by drivers and cyclists, and create multimodal solutions that can enable safe cycling around autonomous vehicles.

The first stage of the work will be observation of the communication between human drivers and cyclists through literature review and fieldwork. The second stage will be to build a bike into our driving simulator [MAT19] so that we can test interactions between cyclists and drivers safely in a simulation.

We will then start to look at how we can facilitate the social interaction between autonomous cars and cyclists. This will potentially involve visual displays on cars, or audio feedback from them, to indicate state information to cyclists nearby (e.g. whether they have been detected, or whether the car is letting the cyclist go ahead). We will also investigate interactions and displays for cyclists, for example multimodal displays in cycling helmets [MAT19] to give them information about car state (which could be collected by V2X software on the cyclist's phone, for example), or ways of communicating directly with the car through input on the handlebars or via gestures. These will be experimentally tested in the simulator and, if time allows, in highly controlled real driving scenarios.

The output of this work will be a set of new techniques to support the social interaction between autonomous vehicles and cyclists. We currently work with companies such as Jaguar Land Rover and Bosch, and our results will have direct application in their products.

[ROT15] Rothenbucher, D., Li, J., Sirkin, D. and Ju, W., Ghost driver: a platform for investigating interactions between pedestrians and driverless vehicles, Adjunct Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications, pp. 44–49, 2015.

[MAT19] Matviienko, A., Brewster, S., Heuten, W. and Boll, S., Comparing unimodal lane keeping cues for child cyclists (https://doi.org/10.1145/3365610.3365632), Proceedings of the 18th International Conference on Mobile and Ubiquitous Multimedia, 2019.


Human-car interaction

Supervisors: Steve Brewster (School of Computing Science) and Frank Pollick (School of Psychology).

The aim of this project is to investigate, in the context of social interactions, the interaction between a driver and an autonomous vehicle. Autonomous cars are sophisticated agents that can handle many driving tasks. However, they may have to hand control back to the human driver in different circumstances, for example if sensors fail or weather conditions are bad [MCA16, BUR19]. This is potentially difficult for the driver, as they may not have been driving the car for a long period and have to take control quickly [POL15]. This is an important issue for car companies, as they want to add more automation to vehicles in a safe manner. Key to this problem is whether the interface would benefit from conceptualising the exchange between human and car as a social interaction.

This project will study how best to handle handovers, from the car indicating to the driver that it is time to take over, through the takeover event itself, to the return to automated driving. The key factors to investigate are: situational awareness (the driver needs to know what the problem is and what must be done when they take over), responsibility (whose task it is to drive at which point), the in-car context (what is the driver doing: are they asleep, or talking to another passenger?), and driver skills (is the driver competent to drive, or are they under the influence?).
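A minimal sketch of the handover cycle described above (automated driving, takeover request, manual driving, return to automation), useful mainly as a shared vocabulary for the experiments; the states and event names are illustrative assumptions, not a product design.

```python
from enum import Enum, auto

# Hypothetical sketch of the handover cycle: automated driving -> takeover
# request -> manual driving -> return to automation.

class DrivingMode(Enum):
    AUTOMATED = auto()
    TAKEOVER_REQUESTED = auto()
    MANUAL = auto()

def next_mode(mode: DrivingMode, event: str) -> DrivingMode:
    """Advance the handover state machine; event names are illustrative."""
    transitions = {
        (DrivingMode.AUTOMATED, "takeover_request"): DrivingMode.TAKEOVER_REQUESTED,
        (DrivingMode.TAKEOVER_REQUESTED, "driver_confirms_control"): DrivingMode.MANUAL,
        (DrivingMode.MANUAL, "automation_available"): DrivingMode.AUTOMATED,
    }
    return transitions.get((mode, event), mode)  # ignore events that do not apply

mode = DrivingMode.AUTOMATED
for event in ["takeover_request", "driver_confirms_control", "automation_available"]:
    mode = next_mode(mode, event)
    print(event, "->", mode.name)
```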

We will conduct a range of experiments in our driving simulator to test different types of handover situations and different types of multimodal interactions involving social cues to support the four factors outlined above.

The output will be experimental results and guidelines that can help automotive designers know how best to communicate and deal with handover situations between car and driver. We currently work with companies such as Jaguar Land Rover and Bosch, and our results will have direct application in their products.

[MCA16] McCall, R., McGee, F., Meschtscherjakov, A. and Engel, T., Towards a Taxonomy of Autonomous Vehicle Handover Situations, Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications, pp. 193–200, 2016.

[BUR19] Burnett, G., Large, D. R. & Salanitri, D., How will drivers interact with vehicles of the future? Royal Automobile Club Foundation for Motoring Report, 2019.

[POL15] Politis, I., Pollick, F. and Brewster, S., Language-based multimodal displays for the handover of control in autonomous cars, Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications, pp. 3–10, 2015.


Soft eSkin with Embedded Microactuators

Supervisors: Ravinder Dahiya (School of Engineering) and Philippe Schyns (School of Psychology).

Research on tactile skin, or e-skin, has attracted significant interest recently as it is the key underpinning technology for safe physical interaction between humans and machines such as robots. Thus far, eSkin research has focussed on imitating some of the features of human touch sensing. However, skin is not just designed for feeling the real world; it is also a medium for expressing feeling through gestures. For example, the skin on the face, which can fold and wrinkle into specific patterns, allows us to express emotions such as varying degrees of happiness, sadness or anger. Yet this important role of skin has not received any attention so far. Here, for the first time, this project will explore the emotion signal generation capacities of skin by developing programmable soft e-skin patches with embedded micro actuators that will emulate real skin movements. Building on the flexible and soft electronics research in the James Watt School of Engineering and the social robotics research in the Institute of Neuroscience & Psychology, this project aims to achieve the following scientific and technological goals:

  • Identify suitable actuation methods to generate simple emotive features such as wrinkles on the forehead
  • Develop a soft eSkin patch with embedded microactuators
  • Use dynamic facial expression models to drive specific movement patterns in the soft eSkin patch
  • Develop an AI approach to program and control the actuators

Industrial Partners: The project has been briefly discussed with BMW. They have shown interest, but details could not be discussed as we are currently in the process of filing a patent application.


Language Independent Conversation Modelling

Supervisors: Olga Perepelkina (Neurodata Lab) and Alessandro Vinciarelli (School of Computing Science).

According to Emmanuel Schegloff, one of the most important linguists of the 20th century, conversation is the "primordial site of human sociality", the setting that has shaped human communicative skills from neural processes to expressive abilities [TUR16]. This project focuses on the latter and, in particular, on the use of nonverbal behavioural cues such as laughter, pauses, fillers and interruptions during dyadic interactions. Specifically, the project targets the following main goals:

  • To develop approaches for the automatic detection of laughter, pauses, fillers, overlapping speech and back-channel events in speech signals;
  • To analyse the interplay between the cues above and social-psychological phenomena such as emotions, agreement/disagreement, negotiation, personality, etc.

The experiments will be performed over two existing corpora. One includes roughly 12 hours of spontaneous conversations involving 120 persons [VIN15], fully annotated in terms of the cues and the phenomena above. The other is the Russian Acted Multimodal Affective Set (RAMAS) − the first multimodal corpus in the Russian language − including approximately 7 hours of high-quality close-up video recordings of faces, speech, motion-capture data and physiological signals such as electro-dermal activity and photoplethysmogram [PER18].

The main motivation behind the focus on nonverbal behavioural cues is that these tend to be used differently in different cultural contexts, but they can still be detected independently of the language being used. In this respect, an approach based on nonverbal communication promises to be more robust when applied to data collected in different countries and linguistic areas. In addition, while the importance of nonverbal communication is widely recognised in social psychology, the way certain cues interplay with social and psychological phenomena still requires full investigation [VIN19].

From a methodological point of view, the project involves the following main aspects:

  • Development of corpus analysis methodologies (observational statistics) for the investigation of the relationships between nonverbal behaviour and social phenomena;
  • Development of signal processing methodologies for the conversion of speech signals into measurements suitable for computer processing;
  • Development of Artificial Intelligence techniques (mainly based on deep networks) for the inference of information from raw speech signals (a simplified detection sketch follows this list).
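As a deliberately simplified stand-in for the approaches above, the sketch below treats cue detection as frame-level supervised classification; the acoustic features and labels are synthetic placeholders, and the real work would use features (or raw waveforms) extracted from the two corpora and deep networks rather than a linear classifier.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Minimal illustration of frame-level cue detection as supervised classification.
# The feature matrix and labels below are synthetic placeholders (an assumption
# of this sketch), standing in for e.g. MFCC/prosody features with annotations.

rng = np.random.default_rng(0)
n_frames, n_features = 2000, 13
X = rng.normal(size=(n_frames, n_features))           # stand-in acoustic features
y = rng.choice(["speech", "laughter", "filler", "pause"], size=n_frames)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```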

From a scientific point of view, the impact of the project will be mainly in Affective Computing and Social Signal Processing [VIN09], while, from an industrial point of view, the impact will be mainly in the areas of conversational interfaces (e.g., Alexa and Siri), multimedia content analysis and, more generally, Social AI, the application domain encompassing all attempts to make machines capable of interacting with people in the way people do with one another. For this reason, the project is based on a collaboration between the University of Glasgow and Neurodata Lab (http://www.neurodatalab.com), one of the top companies in Social and Emotion AI.

[PER18] Perepelkina O., Kazimirova E., Konstantinova M. RAMAS: Russian Multimodal Corpus of Dyadic Interaction for Affective Computing. In: Karpov A., Jokisch O., Potapova R. (eds) Speech and Computer. Lecture Notes in Computer Science, vol 11096. Springer, 2018.

[TUR16] S.Turkle, “Reclaiming conversation: The power of talk in a digital age”, Penguin, 2016.

[VIN19] M.Tayarani, A.Esposito and A.Vinciarelli, “What an `Ehm’ Leaks About You: Mapping Fillers into Personality Traits with Quantum Evolutionary Feature Selection Algorithms“, accepted for publication by IEEE Transactions on Affective Computing, to appear, 2019.

[VIN15] A.Vinciarelli, E.Chatziioannou and A.Esposito, “When the Words are not Everything: The Use of Laughter, Fillers, Back-Channel, Silence and Overlapping Speech in Phone Calls“, Frontiers in Information and Communication Technology, 2:4, 2015.

[VIN09] A.Vinciarelli, M.Pantic, and H.Bourlard, “Social Signal Processing: Survey of an Emerging Domain“, Image and Vision Computing Journal, Vol. 27, no. 12, pp. 1743-1759, 2009.


The coordination of gesture and voice in autism as a window into audiovisual perception of emotion

Supervisors: Frank Pollick (School of Psychology) and Stacy Marsella (School of Psychology).

When we speak, we typically combine our speech with gesture, and these gestures are often referred to as a back channel of face-to-face communication. Notably, a lack of coordination between gesture and speech is thought to be a diagnostic property of autism. While there have been many studies of the production of gesture and speech in autism (de Marchena & Eigsti, 2010), as well as of differences in the perception of human movement (Todorova, Hatton & Pollick, 2019), there has been less investigation of which spatiotemporal properties people are sensitive to in the coordination of gesture and speech. Addressing this issue is important for the development of artificial systems that combine gesture and speech. If we want these systems to appear natural, as well as effective for special populations such as people with autism, then we need to find the spatiotemporal parameters of speech-gesture coordination that impact the perception of fluency. This is particularly important for robotic systems, as it is known that the physical limits of robots can constrain whether their movement is perceived as natural (Pollick, Hale & Tzoneva-Hadjigeorgieva, 2005).

Experiment 1: One window into the perceived coordination of speech and gesture is provided by a fundamental aspect of multisensory perception: sight and sound do not need to be precisely synchronous to be bound together into a unified percept. The amount of asynchrony tolerated can be up to 300 ms and has been found to vary between tasks and observers. We will take a published set of brief (2.5 to 3.5 s) audiovisual stimuli depicting emotional exchanges between two point-light display actors (Piwek, Pollick & Petrini, 2015) and parametrically vary the asynchrony to determine how it impacts emotion perception. In a second part of this experiment we will use the motion capture facilities in the School of Psychology to record typical individuals telling stories, and examine how varying audiovisual asynchrony impacts understanding and enjoyment of watching these stories being told. These experiments will be performed with typically developed adults and adults with autism, and their results compared.

Experiment 2: Here, we wish to model the combination of gesture and speech using Bayesian causal modelling (Körding, Beierholm, Ma, Quartz, Tenenbaum & Shams, 2007), which allows us to model how both the high-level semantic and low-level physical matches between sight and sound determine whether the audio and visual signals are likely to come from the same source. Using the data from Experiment 1, along with data from new experimental stimuli in which the audio and visual signals are incongruent (e.g. a different speaker telling a different story), we will investigate how the fits of the model reflect different physical and semantic matching of the audiovisual pairings. We will also investigate whether the model fits are sensitive to differences between the typically developed adults and the adults with autism.
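For reference, a minimal sketch of the causal inference model of Körding et al. (2007), which computes the posterior probability that the auditory and visual cues share a common cause from their discrepancy and reliabilities; the parameter values below are arbitrary, and mapping stimulus asynchrony and semantic congruence onto the model's cue variables would be part of the modelling work in this project.

```python
import numpy as np

# Bayesian causal inference (Körding et al., 2007): given a visual cue x_v and an
# auditory cue x_a (abstract internal measurements), compute P(common cause | x_v, x_a).
# Parameter values are arbitrary illustrative choices.

def posterior_common_cause(x_v, x_a, sigma_v, sigma_a, sigma_p, mu_p=0.0, p_common=0.5):
    # Likelihood of both cues under a single shared source (C = 1)
    var1 = sigma_v**2 * sigma_a**2 + sigma_v**2 * sigma_p**2 + sigma_a**2 * sigma_p**2
    like_c1 = np.exp(-0.5 * ((x_v - x_a)**2 * sigma_p**2
                             + (x_v - mu_p)**2 * sigma_a**2
                             + (x_a - mu_p)**2 * sigma_v**2) / var1) / (2 * np.pi * np.sqrt(var1))
    # Likelihood under two independent sources (C = 2)
    var_v, var_a = sigma_v**2 + sigma_p**2, sigma_a**2 + sigma_p**2
    like_c2 = np.exp(-0.5 * ((x_v - mu_p)**2 / var_v + (x_a - mu_p)**2 / var_a)) \
              / (2 * np.pi * np.sqrt(var_v * var_a))
    return like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

# A large audiovisual discrepancy lowers the probability of a common cause.
print(posterior_common_cause(x_v=0.1, x_a=0.3, sigma_v=0.5, sigma_a=1.0, sigma_p=2.0))
print(posterior_common_cause(x_v=0.1, x_a=4.0, sigma_v=0.5, sigma_a=1.0, sigma_p=2.0))
```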

Experiment 3: From Experiments 1 and 2 we hope to understand the properties that drive the perception of coordinated speech and gesture. This will allow us to use these parameters to drive audiovisual speech on a robot platform and investigate which audiovisual parameter combinations are better received by typical and autistic observers. This final study will be done in collaboration with Autism Foundation Finland.

We hope for these experimental and theoretical analyses to inform our understanding of how best to design coordinated gesture and speech on robots and how autism might influence preferences for different designs.

[DEM10] de Marchena, A., & Eigsti, I. M. (2010). Conversational gestures in autism spectrum disorders: Asynchrony but not decreased frequency. Autism Research, 3(6), 311-322.

[KOR07] Körding, K. P., Beierholm, U., Ma, W. J., Quartz, S., Tenenbaum, J. B., & Shams, L. (2007). Causal inference in multisensory perception. PLoS ONE, 2(9), e943.

[PIW15] Piwek, L., Pollick, F., & Petrini, K. (2015). Audiovisual integration of emotional signals from others' social interactions. Frontiers in Psychology, 6, 611.

[POL05] Pollick, F. E., Hale, J. G., & Tzoneva-Hadjigeorgieva, M. (2005). Perception of humanoid movement. International Journal of Humanoid Robotics, 2(03), 277-300.

[TOD19] Todorova, G. K., Hatton, R. E. M., & Pollick, F. E. (2019). Biological motion perception in autism spectrum disorder: a meta-analysis. Molecular Autism, 10(1), 49.


Social Intelligence towards Human-AI Teambuilding

Supervisors: Frank Pollick (School of Psychology) and Reuben Moreton (Qumodo).

Visions of the workplace of the future include applications of machine learning and artificial intelligence embedded in nearly every aspect of work (Brynjolfsson & Mitchell, 2017). This "digital transformation" promises to broadly increase effectiveness and efficiency. A challenge to realising this transformation is that the workplace is substantially a human social environment, and machines are not intrinsically social. Imbuing machines with social intelligence holds promise for building human-AI teams, and current approaches to teaming one human with one machine appear reasonably straightforward to design. However, when more than one human and more than one system work together, the complexity of the social interactions increases, and we need to understand the society of human-AI teams. This research proposes to take a first step in this direction by considering the interaction of triads containing humans and machines.

Our proposed testbed is automatic image classification; we choose this because identity and location recognition is a primary work context of our industrial partner Qumodo. Moreover, many image classification systems have recently shown the ability to approach or exceed human performance. There are two scenarios we would like to examine involving human-AI triads, which we term the sharing problem and the consensus problem:

In the sharing problem, we examine two humans teamed with the same AI and ask how the human-AI team is influenced by the learning style of the AI, which after initial training can learn either from a single trainer or from multiple trainers. We will examine how trust in the classifier evolves depending on the presence or absence of another trainer and on the accuracy of the other trainer(s). To obtain precise control, the "other" trainer(s) could either be actual operators or simulations obtained by parametrically modifying accuracy based on ground truth. Of interest are the questions of when human-AI teams benefit from pooling human judgment and whether pooling can lead to reduced trust.
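A minimal sketch of the simulated-trainer idea, in which the "other" trainer's labels agree with ground truth with a controllable probability; the binary labels and accuracy values are placeholders for illustration.

```python
import numpy as np

# Illustrative sketch only: simulating an "other" trainer whose labelling accuracy
# is controlled parametrically against ground truth.

rng = np.random.default_rng(0)

def simulated_trainer(ground_truth, accuracy):
    """Return labels that agree with ground truth with probability `accuracy`."""
    ground_truth = np.asarray(ground_truth)
    flip = rng.random(ground_truth.shape) > accuracy
    return np.where(flip, 1 - ground_truth, ground_truth)

truth = rng.integers(0, 2, size=20)            # ground-truth image labels (0/1)
print(simulated_trainer(truth, accuracy=0.95))
print(simulated_trainer(truth, accuracy=0.60))
```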

In the consensus problem, we use the scenario of a human manager who must reach a consensus view based on input from a pair of judgments (human-human or human-AI). This consensus will be reached either with or without "explanation" accompanying the two judgments. To make the experiment tractable, we will consider the case of a binary decision (e.g. whether two facial images are of the same person or of different people). Aspects of the design will be taken from a recent paper examining recognition of identity from facial images (Phillips et al., 2018).

In addition to these experimental studies, we also wish to conduct qualitative studies involving surveys or structured interviews in the workplace to ascertain whether or not the experimental results are consistent with people's attitudes towards the scenarios depicted in the experiments.

As industry moves further towards AI automation, this research will have substantial impact on future practices within the workplace. Even as AI performance increases, in most scenarios a human is still required to be in the loop. There has been very little research into what such human-AI integration and interaction should look like. This research is therefore of pressing importance across a myriad of sectors moving towards automation.

[BRY17] Brynjolfsson, E., & Mitchell, T. (2017). What can machine learning do? Workforce implications. Science, 358(6370), 1530-1534.

[PHI18] Phillips, P. J., Yates, A. N., Hu, Y., Hahn, C. A., Noyes, E., Jackson, K., … & Chen, J. C. (2018). Face recognition accuracy of forensic examiners, superrecognizers, and face recognition algorithms. Proceedings of the National Academy of Sciences, 115(24), 6171-6176.


Game-based techniques for the investigation of trust for Autonomous robots

Supervisors: Alice Miller (School of Computing Science) and Frank Pollick (School of Psychology).

Trustworthiness is a property of an agent or organisation that engenders trust in others. Humans rely on trust in their day-to-day social interactions, be they in the context of personal relationships, commercial negotiation, or organisational consultation (with healthcare providers or employers, for example). Social success therefore relies on the evaluation of the trustworthiness of others, and on our own ability to present ourselves as trustworthy. If autonomous agents are to be used in a social environment, it is vital that we understand the concept of trustworthiness in this context [DEV18].

Some formal models of trust for autonomous systems have been proposed (e.g. [BAS16]), but these models are geared specifically towards autonomous vehicles. Any proposed model must be evaluated by testing. In many cases this would involve deploying complex hardware in sufficiently realistic scenarios in which trust would be a consideration. However, it is also possible to investigate trust in other scenarios. For example, it has been shown that different interfaces to an automatic image classifier change the calibration of human trust towards the classifier [ING20]. Relevant to social processing, in [GAL19] trust was examined via the use of videos. Here, the responses of human participants to videos involving an autonomous robot in a range of scenarios were used to investigate different aspects of trust.

Another way to generate user data to test formal models is via mobile games. In a recent paper [KAV19], a model of the way that users play games was used to investigate a concept known as game balance. A software tool known as a probabilistic model checker [KWI17] was used to predict user behaviour under the assumptions of the model. The game has subsequently been released to generate user data with which to evaluate the credibility of the model.
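To make this concrete, the toy simulation below captures the kind of player model that a probabilistic model checker could analyse exactly: trust in a robot character rises after reliable advice, falls after failures, and determines the probability of following the next piece of advice. The dynamics and parameter values are assumptions for illustration only, not a proposed trust model.

```python
import numpy as np

# Monte-Carlo sketch of a simple player/trust model; a tool such as a
# probabilistic model checker would compute these quantities exactly.

rng = np.random.default_rng(0)

def simulate_player(robot_reliability=0.8, n_rounds=50, gain=0.1, loss=0.2):
    trust, followed = 0.5, 0
    for _ in range(n_rounds):
        if rng.random() < trust:                      # player follows the robot's advice
            followed += 1
            if rng.random() < robot_reliability:      # advice turns out to be good
                trust = min(1.0, trust + gain)
            else:
                trust = max(0.0, trust - loss)
    return followed / n_rounds

for reliability in (0.6, 0.8, 0.95):
    rate = np.mean([simulate_player(reliability) for _ in range(500)])
    print(f"reliability={reliability:.2f} -> advice followed {rate:.2%} of rounds")
```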

In this PhD project you will use a similar technique to evaluate trust for autonomous systems. The crucial aspects are the formal models of trust and the question of how to design a suitable game so that the way users respond to different scenarios reflects how much they trust the (autonomous robot or animated) characters in the game. You will:

  1. Develop and evaluate models of trust for autonomous robots
  2. Devise a mobile game for which players will respond according to their trust in autonomous robot or animated characters
  3. Use an automatic technique such as model checking or simulation to determine player behaviour under the assumptions of your trust models
  4. Analyse how well player behaviour matches that predicted using model checking

[DEV18] Trustworthiness of autonomous systems – K. Devitt, Foundations of Trusted Autonomy, Studies in Systems, Decision and Control, 2018.

[BAS16] Trust dynamics in human autonomous vehicle interaction: a review of trust models – C. Basu et al. AAAI 2016.

[ING20] Calibrating trust towards an autonomous image classifier: a comparison of four interfaces – M. Ingram et al., submitted.

[GAL19] Trusting Robocop: Gender-Based Effects on Trust of an Autonomous Robot – D. Gallimore et al. Frontiers in Psychology 2019

[KAV19] Balancing turn-based games with chained strategy generation – W. Kavanagh, A. Miller et al., IEEE Transactions on Games, 2019.

[KWI17] Probabilistic model checking: advances and applications – M. Kwiatkowska et al. Formal System Verification 2017.


Cross-cultural detection of and adaptation to different user types for a public-space robot

Supervisors: Monika Harvey (School of Psychology), Mary Ellen Foster (School of Computing Science) and Olga Perepelkina (Neurodata Lab).

It is well known that people from different demographic groups – differing in age, gender, socio-economic status, or culture, to name a few – have different preferred interaction styles. However, when a robot is placed in a public space, it typically has a single, standard interaction style that it uses in all situations, across all the populations engaging with it. If a robot were able to detect the type of person it was interacting with and adapt its behaviour accordingly on the fly, this would support longer, higher-quality interactions, which in turn would increase its utility and acceptance.

The overarching goal of this PhD project is to create such a robot, and our collaboration with Neurodata Lab in Russia will allow us to investigate cultural as well as other, more common demographic markers. We will also make use of the audiovisual sensing software developed by Neurodata Lab, which will be implemented on the robot.

As a result, the proposed project will consist of several distinct phases. First, a simple robot system will be built and deployed in various locations across Scotland and Russia, and the audiovisual data of all people interacting with it will be recorded. Second, these data will be processed and classified with the aim of identifying characteristic behaviours of different user types. Next, the robot's behaviour will be modified so that it is able to adapt to the different users, and, in a final step, the modified robot will be evaluated in the original deployment locations.
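As an illustration of the second phase, the sketch below clusters per-interaction behavioural features into candidate user types; the feature names, the synthetic data, and the number of clusters are assumptions for illustration only.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Sketch of the classification step: clustering per-interaction behavioural
# features (as might be extracted from the recorded audiovisual data) into
# candidate user types.

rng = np.random.default_rng(0)
# columns: approach distance (m), speaking rate (words/s), gaze-at-robot ratio, interaction length (s)
features = rng.normal(loc=[1.2, 2.0, 0.5, 45.0], scale=[0.4, 0.6, 0.2, 20.0], size=(300, 4))

X = StandardScaler().fit_transform(features)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(np.bincount(kmeans.labels_))   # interactions per candidate user type
```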

The results of the project will be of great relevance to our industrial partner, allowing them to further develop and market their audiovisual sensing software. The student will benefit greatly from the industrial as well as the cross-cultural work experience. More generally, the results will be of significant interest in areas including social robotics, affective computing, and intelligent user interfaces.

[FOS16] Foster, M. E., Alami, R., Gestranius, O., Lemon, O., Niemelä, M., Odobez, J.-M., & Pandey, A. K. (2016). The MuMMER Project: Engaging Human-Robot Interaction in Real-World Public Spaces. In Social Robotics (pp. 753–763).

[LEA18] Learmonth, G., Maerker, G., McBride, N., Pellinen, P. & Harvey, M. (2018). Right-lateralised lane keeping in young and older British drivers. PLoS One, 13(9).

[MAE19] Maerker, G., Learmonth, G., Thut. G. & Harvey, M. (2019). Intra- and inter-task reliability of spatial attention measures in healthy older adults. PLoS One, 14(2), 1-21.

[PER19] Perepelkina, O., & Vinciarelli, A. (2019). Social and Emotion AI: The Potential for Industry Impact. 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW).


Social Interaction via Touch-Interactive Volumetric 3D Virtual Agents

Supervisors: Ravinder Dahiya (School of Engineering) and Philippe Schyns (School of Psychology)

Vision- and touch-based interactions are fundamental modes of interaction between humans, and between humans and the real world. Several portable devices use these modes to display gestures that communicate social messages such as emotions. Recently, non-volumetric 3D displays have attracted considerable interest because they give users a 3D visual experience – for example, 3D movies provide viewers with a perceptual sensation of depth via a pair of glasses. Using a newly developed haptics-based holographic 3D volumetric display, this project will develop new forms of social interaction with virtual agents. Unlike various VR tools that require headsets (which can lead to motion sickness), here the interaction with 3D virtual objects will be less restricted, closer to its natural form, and, critically, will give the user the illusion that the virtual agent is physically present. The experiments will involve interactions with holographically displayed virtual human faces and bodies engaging in various social gestures. To this end, the simulated 2D images showing these gestures will be displayed mid-air in 3D. For enriched interaction and enhanced realism, this project will also involve hand gesture recognition and the control of haptic feedback (i.e. air patterns) to simulate the surfaces of several classes of virtual objects. This fundamental study is transformative for sectors where physical interaction with virtual objects is critical, including medical, mental health, sports, education, heritage, security, and entertainment.


Evaluating and Shaping Cognitive Training with Artificial Intelligence Agents

Supervisors: Fani Deligianni (School of Computing Science) and Monika Harvey (School of Psychology)

Virtual reality (VR) has emerged as a promising tool for cognitive training in several neurological conditions (e.g. mild cognitive impairment, acquired brain injury) as well as for enhancing healthy ageing and reducing the impact of mental health conditions (e.g. anxiety and fear). Cognitive training refers to behavioural training that results in the enhancement of specific cognitive abilities such as visuospatial attention and working memory. Using VR for such training offers several advantages, including its high level of versatility and its ability to adjust difficulty dynamically in real time. Furthermore, it is an immersive technology and thus has great potential to increase motivation and compliance in participants. Currently, VR and serious video games come in a wide variety of shapes and forms, and the emerging data are difficult to quantify and compare in a meaningful way (Sokolov 2020).

This project aims to exploit machine learning to develop intuitive measures of cognitive training in a platform-independent way. The project is challenging, as there is great variability in cognitive measures even in well controlled and well designed lab experiments (Learmonth et al., 2017; Benwell et al., 2014). The objectives of the project are:

  1. Predict psychological dimensions (e.g. enjoyment, anxiety, valence and arousal) based on performance and neurophysiological data.
  2. Relate performance improvements (e.g. learning rate) to psychological dimensions and physiological data (e.g. EEG and eye-tracking).
  3. Develop artificial intelligence approaches that are able to modulate the VR world to control learning rate and participant satisfaction.

VR is a promising new technology that provides new means of building frameworks to improve socio-cognitive processes. Machine learning methods that dynamically control aspects of the VR games are critical to enhancing engagement and learning rates (Darzi et al. 2019, Freer et al. 2020). Developing continuous measures of spatial attention, cognitive workload and overall satisfaction would provide intuitive ways for users to interact with VR technology and allow the development of a personalised experience. Furthermore, these measures will play a significant role in objectively evaluating and shaping newly emerging VR platforms, and this approach will therefore generate significant industrial interest.
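As a sketch of objective 1 above, the snippet below predicts a self-reported psychological dimension (e.g. enjoyment) from in-game performance and EEG band-power features; the synthetic data and feature layout are illustrative assumptions, not the project's measurement protocol.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import RidgeCV

# Sketch: predicting a rating-scale psychological dimension from performance and
# neurophysiological features. All data below are synthetic placeholders.

rng = np.random.default_rng(0)
n_sessions = 200
performance = rng.normal(size=(n_sessions, 3))      # e.g. accuracy, reaction time, score
eeg_bandpower = rng.normal(size=(n_sessions, 8))    # e.g. alpha/theta power at a few channels
X = np.hstack([performance, eeg_bandpower])
enjoyment = rng.normal(size=n_sessions)             # placeholder self-report ratings

model = make_pipeline(StandardScaler(), RidgeCV(alphas=np.logspace(-3, 3, 13)))
scores = cross_val_score(model, X, enjoyment, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```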

[BEN14] Benwell, C.S.Y, Thut, G., Grant, A. and Harvey, M. (2014). A rightward shift in the visuospatial attention vector with healthy aging. Frontiers in Aging Neuroscience, 6, article 113, 1-11.

[DAR19] A. Darzi, T. Wondra, S. McCrea and D. Novak (2019). Classification of Multiple Psychological Dimensions in Computer Game Players Using Physiology, Performance, and Personality Characteristics. Frontiers in Neuroscience, 2019.

[FRE20] D. Freer, Y. Guo, F. Deligianni and G-Z. Yang (2020). On-Orbit Operations Simulator for Workload Measurement during Telerobotic Training. IEEE RA-L, https://arxiv.org/abs/2002.10594.

[LEA17] Learmonth, G., Benwell, C. S.Y., Thut, G. and Harvey, M. (2017). Age-related reduction of hemispheric lateralization for spatial attention: an EEG study. Neuro-Image, 153, 139-151.

[SOK20] A. Sokolov, A. Collignon and M. Bieler-Aeschlimann (2020). Serious video games and virtual reality for prevention and neurorehabilitation of cognitive decline because of aging and neurodegeneration. Current Opinion in Neurology, 33(2), 239-248.


Modulating Cognitive Models of Emotional Intelligence

Supervisors: Fani Deligianni (School of Computing Science) and Frank Pollick (School of Psychology)

State-of-the-art artificial intelligence (AI) systems mimic how the brain processes information to achieve unprecedented accuracy and performance in tasks such as object/face recognition and text/speech translation. However, one key characteristic that defines human success is emotional intelligence. Empathy, the ability to understand other people's feelings and emotionally reflect upon them, shapes social interaction and is important for both personal and professional success. Although some progress has been made in developing systems that detect emotions based on facial expressions and physiological data, building systems that can relate to and reflect upon these emotions is far more challenging. Therefore, understanding how empathic and emotional responses emerge via complex information processing between key brain regions is of paramount importance for developing emotionally-aware AI agents.

In this project, we will exploit real-time functional Magnetic Resonance Imaging (fMRI) neurofeedback techniques to build cognitive models that explain the modulation of brain activity in key regions related to empathy and emotion. For example, the anterior insula is a brain region located in deep gray matter that has been consistently implicated in empathic/emotional responses and in the abnormal emotional processing observed in several disorders, such as Autism Spectrum Disorder and misophonia (Kumar et al. 2017). Neurofeedback has shown promising results in regulating the activity of the anterior insula and could enable therapeutic training techniques (Kanel et al. 2019).

This approach will extract how brain regions interact during neuromodulation and allow cognitive models to emerge in real time. Subsequently, to allow training in more naturalistic environments, we propose cross-domain learning between fMRI and EEG. The motivation is that, whereas fMRI is the gold-standard imaging technique for deep gray matter structures, it is limited by its lack of portability, discomfort in use and low temporal resolution (Deligianni et al. 2014). On the other hand, advances in wearable EEG technology show promising results for use of the device beyond well-controlled lab experiments. To this end, advanced machine learning algorithms based on representation learning and domain generalisation will be developed. Domain/model generalisation in deep learning aims to learn generalised features and extract representations in an 'unseen' target domain by eliminating the bias observed across multiple source domains (Volpi et al. 2018).
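A highly simplified sketch of the cross-domain idea: learning, from simultaneous EEG-fMRI sessions, a mapping from EEG features to the fMRI-derived regulation target so that later EEG-only sessions can approximate the neurofeedback signal. The linear model and synthetic data stand in for the representation-learning and domain-generalisation methods discussed above.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import RidgeCV

# Sketch only: regress a toy fMRI-derived target (e.g. anterior insula activity)
# on EEG features per time window. All data are synthetic placeholders.

rng = np.random.default_rng(0)
n_windows, n_eeg_features = 500, 64
eeg = rng.normal(size=(n_windows, n_eeg_features))
insula_activity = eeg[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=n_windows)  # toy target

X_train, X_test, y_train, y_test = train_test_split(eeg, insula_activity, random_state=0)
model = make_pipeline(StandardScaler(), RidgeCV(alphas=np.logspace(-3, 3, 13)))
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))
```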

Summarising, the overall aims of the project are:

  1. To build data-driven cognitive models of real-time brain network interaction during emotional modulation via neurofeedback techniques.
  2. To develop advanced machine learning algorithms to perform cross-domain learning between fMRI and EEG.
  3. To develop intelligent artificial agents based on portable EEG systems to successfully regulate emotional responses, taking into account cognitive models derived in the fMRI scanner.

[DEL14] Deligianni et al. ‘Relating resting-state fMRI and EEG whole-brain connectomes across frequency bands’, Frontiers in Neuroscience, 8(258), 2014.

[KAN19] Kanel et al. ‘Empathy to emotional voices and the use of real-time fMRI to enhance activation of the anterior insula’, NeuroImage, 198, 2019.

[KUM17] Kumar et al. ‘The Brain Basis for Misophonia’, Current Biology, 27(4), 2017.

[VOL18] Volpi et al. ‘Generalizing to Unseen Domains via Adversarial Data Augmentation’, Neural Information Processing Systems, 2018.


Detecting Affective States based on Human Motion Analysis

Supervisors: Fani Deligianni (School of Computing Science) and Marios Philiastides (School of Psychology)

Human motion analysis is a powerful tool for extracting biomarkers of disease progression in neurological conditions such as Parkinson's disease and Alzheimer's disease. Gait analysis has also revealed several indices that relate to emotional well-being. For example, increased gait speed, step length and arm swing have been related to positive emotions, whereas a low gait initiation reaction time and flexion of posture have been related to negative feelings (Deligianni et al. 2019). Strong neuroscientific evidence shows that these relationships arise from an interaction between the brain networks involved in gait and emotion. It is therefore not surprising that gait has also been related to mood disorders such as depression and anxiety.
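For illustration, the sketch below extracts the gait indices mentioned above (gait speed, step length, arm swing amplitude) from 3D marker trajectories; the synthetic walking data, the marker choice, and the placeholder step-frequency estimate are assumptions for illustration only.

```python
import numpy as np

# Sketch of extracting gait speed, step length and arm swing amplitude from 3D
# marker trajectories; real data would come from a motion capture system.

fs = 100.0                                # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
pelvis = np.stack([1.2 * t, np.zeros_like(t), np.full_like(t, 1.0)], axis=1)  # forward walk at ~1.2 m/s
wrist = pelvis + np.stack([0.25 * np.sin(2 * np.pi * 0.9 * t),
                           np.zeros_like(t), np.zeros_like(t)], axis=1)       # simulated arm swing

gait_speed = np.linalg.norm(pelvis[-1] - pelvis[0]) / t[-1]
step_frequency = 0.9 * 2                  # placeholder: would come from heel-strike detection
step_length = gait_speed / step_frequency
swing = wrist[:, 0] - pelvis[:, 0]
arm_swing_amplitude = swing.max() - swing.min()

print(f"speed={gait_speed:.2f} m/s, step length={step_length:.2f} m, arm swing={arm_swing_amplitude:.2f} m")
```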

In this project, we aim to investigate the relationship between affective mental states and psychomotor abilities, in relation to gait, balance and posture, while emotions are modulated via augmented reality displays. The goal is to develop a comprehensive, continuous map of these interrelationships in both healthy subjects and subjects affected by a mood disorder. In this way, we will derive objective measures that would allow us to detect early signs of abnormalities and intervene via intelligent social agents. This is a multi-disciplinary project with several challenges to address:

  1. Build a robust experimental setup with intuitive, naturalistic paradigms.
  2. Develop AI algorithms to relate neurophysiological data to gait characteristics based on state-of-the-art motion capture systems (taking into account motion artefacts during gait).
  3. Develop AI algorithms to improve the detection of gait characteristics via RGB-D cameras (Gu et al. 2020) and possibly new assistive living technologies based on pulsed laser beams.

The proposed AI technology for social agents has several advantages. It can enable the development of intelligent social agents that track mental well-being based on objective measures and provide personalised feedback and suggestions. In many cases, assessment is currently based on self-reports via mobile apps; these measures of disease progression are subjective and, for major disorders, have been found not to correlate well with objective evaluations. Furthermore, measurements of gait characteristics are continuous, and they can reveal episodes of mood disorders that are not present when the subject visits a health practitioner. This approach might shed light on subject variability in relation to behavioural therapy and provide more opportunities for earlier intervention (Queirazza et al. 2019). Finally, compared to other state-of-the-art affect recognition approaches, human motion analysis may pose fewer privacy issues and enhance users' trust in and comfort with the technology. In situations where facial expressions are not easy to track, human motion analysis is far more accurate in classifying subjects with mental disorders.

[DEL19] F Deligianni, Y Guo, GZ Yang, ‘From Emotions to Mood Disorders: A Survey on Gait Analysis Methodology’, IEEE journal of biomedical and health informatics, 2019.

[GUO19] Y Guo, F Deligianni, X Gu, GZ Yang, ‘3-D Canonical pose estimation and abnormal gait recognition with a single RGB-D camera’, IEEE Robotics and Automation Letters, 2019.

[XGU20] X Gu, Y Guo, F Deligianni, GZ Yang, ‘Coupled Real-Synthetic Domain Adaptation for Real-World Deep Depth Enhancement.’, IEEE Transactions on Image Processing, 2020.

[QUE19] F Queirazza, E Fouragnan, JD Steele, J Cavanagh and MG Philiastides, Neural correlates of weighted reward prediction error during reinforcement learning classify response to Cognitive Behavioural Therapy in depression, Science Advances, 5 (7), 2019.


Improving engagement with mobile health apps by understanding (mis)alignment between design elements and personal characteristics

Supervisors: Lawrence Barsalou (School of Psychology) and Aleksandar Matic (Telefonica)

Background

Mobile health apps have brought growing enthusiasm about delivering behavioural and health interventions at low cost and in a scalable fashion. Unfortunately, the potential impact of mobile health applications has been seriously limited by typically low user engagement and high drop-out rates. A number of studies have unpacked potential reasons for the high drop-out rates, including the fit to users' problems, ease of use, privacy concerns, and trustworthiness [TOR18]. Though best practices for developing engaging apps have been established, there is a consensus that further engagement improvements require personalisation at the individual level. Yet the factors that influence engagement at the personal level are very complex, and in practice individually personalised mobile health apps remain rare.

Psychological literature provides numerous clues on how user interaction can be designed in a more engaging way based on personal characteristics. For instance, it is recommended to highlight rewards and social factors for extraverts, safety and certainty for neurotic individuals, and achievements and structure for conscientious people [HIR12], or to use external vs internal factors to motivate individuals with a high vs low locus of control [CC14]. Developing and testing personalised mobile health apps for each personal characteristic would require a long process, many A/B trials and significant effort and cost. Perhaps this explains why personalisation has been limited in practice and why most mobile health apps have been designed in a one-size-fits-all manner. Instead of designing and testing each personalised element, this work will take a different approach, namely a retrospective exploration of a) the personal characteristics of individuals who have already used mobile health apps, b) the corresponding service design elements, and c) the outcome (drop-out or engagement), together with the links between a) and b) that drive the outcome.

Aims and objectives

This project will deepen understanding of how to personalise mobile health apps to users' personal characteristics, aiming to improve engagement and, ultimately, intervention effectiveness. The main objectives are the following:

  • Identify specific links between personal characteristics and service design elements that predict engagement and/or drop-out
  • Explore whether engagement with mobile health apps can be improved by avoiding design elements that are misaligned with (and reinforcing those aligned with) the personal characteristics that predominantly drive drop-out (or engagement)
  • Deliver a set of takeaways for designing socially intelligent interfaces that are aware of personal characteristics

Methods

An extensive literature review will first be conducted to characterise design elements and the links to personal characteristics that can influence engagement. This will result in a set of hypotheses on the relationship between different personal characteristics and engagement mechanisms. Subsequently, one or more studies will be conducted to capture the personal traits of users who have already used a selected set of relevant mobile health apps. By applying standard statistical methods as well as machine learning (to unpack the more complex interplay between personal characteristics and design elements), these data will be used to identify engagement/drop-out predictors. The findings will then be used to design and test personalisation in a real-world scenario.
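A minimal sketch of the retrospective analysis: predicting drop-out from personal characteristics, exposure to design elements, and their interactions (the (mis)alignments of interest). The data frame, column names, and outcome below are synthetic placeholders, not the apps' actual variables.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LogisticRegression

# Sketch: trait x design-element interaction terms let the model capture
# (mis)alignment effects on drop-out. All data are synthetic placeholders.

rng = np.random.default_rng(0)
n_users = 500
df = pd.DataFrame({
    "extraversion": rng.normal(size=n_users),
    "neuroticism": rng.normal(size=n_users),
    "uses_social_features": rng.integers(0, 2, size=n_users),
    "uses_reminders": rng.integers(0, 2, size=n_users),
})
dropout = rng.integers(0, 2, size=n_users)           # placeholder outcome

model = make_pipeline(PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
                      LogisticRegression(max_iter=1000))
print("cross-validated accuracy:", cross_val_score(model, df.values, dropout, cv=5).mean())
```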

Alignment with industrial interests

This work will be of direct interest to Telefonica Alpha, which is creating mobile-phone-based wellbeing services as well as digital therapeutics.

[TOR18] Torous, John, Jennifer Nicholas, Mark E. Larsen, Joseph Firth, and Helen Christensen. “Clinical review of user engagement with mental health smartphone apps: evidence, theory and improvements.” Evidence-based mental health 21, no. 3 (2018): 116-119.

[HIR12] Hirsh, J. B., Kang, S. K., & Bodenhausen, G. V. (2012). Personalized persuasion: Tailoring persuasive appeals to recipients’ personality traits. Psychological science, 23(6), 578-581.

[CC14] Cobb-Clark, D. A., Kassenboehmer, S. C., & Schurer, S. (2014). Healthy habits: The connection between diet, exercise, and locus of control. Journal of Economic Behavior & Organization, 98, 1-28.


A framework for establishing situated and generalizable models of users in intelligent virtual agents

Supervisors: Christoph Scheepers (School of Psychology) and Stacy Marsella (School of Psychology)

Aims: A growing body of research suggests that intelligent virtual agents are most effective and accepted when they adapt themselves to individual users. One way virtual agents can adapt to different individuals is by developing an effective model of a user's traits and using it to anticipate dynamically varying states of these traits as situational conditions vary. The primary aims of the current work are to develop: (1) empirical methods for collecting data to build user models, (2) computational procedures for building models from these data, and (3) computational procedures for adapting these models to current situations. Although the project's primary goal is to develop a general framework for building user models, we will also explore preliminary implementations in digital interfaces.

Novel Elements: One standard approach to building a model of a user's traits—Classical Test Theory—uses a coherent inventory of measurement items to assess a specific trait of interest (e.g., stress, conscientiousness, neuroticism). Typically, these items measure a trait explicitly via a self-report instrument or passively via a digital device. Well-known limitations of this approach include its inability to assess the generalizability of a model across situations and occasions, and its failure to incorporate specific situations into model development. In this project, we expand upon the classic approach by incorporating two new perspectives: (1) Generalizability Theory and (2) the Situated Assessment Method. Generalizability Theory will establish a general user model that varies across multiple facets, including individuals, measurement items, situations, and occasions. The Situated Assessment Method replaces standard unsituated assessment items with situations, fundamentally changing the character of assessment.

Approach: We will develop a general framework for collecting empirical data that enables building user models across many potential domains, including stress, personality, social connectedness, wellbeing, mindfulness, eating, and daily habits. The data collected—both explicit self-report and passive digital—will assess traits (and states) relevant to a domain across facets for individuals, measurement items, situations, and occasions. These data will be analysed using Generalizability Theory and the Situated Assessment Method to build user models and establish their variance profiles. Of particular interest will be how well user models generalize across facets, the magnitude of individual differences, and clusters of individuals sharing similar models. Situated and unsituated models will both be assessed to establish their relative strengths, weaknesses, and external validity. Once models are built, their ability to predict a user's states on particular occasions will be assessed, using procedures from Generalizability Theory, the Situated Assessment Method, and autoregression. Prediction error will be assessed to establish optimal model-building methods. App prototypes will be developed and explored.
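To make the Generalizability Theory step concrete, the sketch below runs a one-facet (persons x situations) analysis using the standard ANOVA estimators of the variance components and the resulting generalizability coefficient; the synthetic ratings stand in for situated assessment data, and a full analysis would include additional facets (items, occasions).

```python
import numpy as np

# One-facet G-study sketch: persons x situations, one observation per cell,
# with ANOVA-based variance component estimates. Data are synthetic placeholders.

rng = np.random.default_rng(0)
n_p, n_s = 40, 12                                     # persons, situations
scores = (3.0
          + rng.normal(scale=1.0, size=(n_p, 1))      # person (trait) effects
          + rng.normal(scale=0.5, size=(1, n_s))      # situation effects
          + rng.normal(scale=0.8, size=(n_p, n_s)))   # person x situation + error

grand = scores.mean()
ss_p = n_s * ((scores.mean(axis=1) - grand) ** 2).sum()
ss_s = n_p * ((scores.mean(axis=0) - grand) ** 2).sum()
ss_res = ((scores - grand) ** 2).sum() - ss_p - ss_s

ms_p, ms_s, ms_res = ss_p / (n_p - 1), ss_s / (n_s - 1), ss_res / ((n_p - 1) * (n_s - 1))
var_person = max((ms_p - ms_res) / n_s, 0.0)
var_situation = max((ms_s - ms_res) / n_p, 0.0)
var_residual = ms_res

g_coefficient = var_person / (var_person + var_residual / n_s)   # relative G coefficient
print(f"person={var_person:.2f}, situation={var_situation:.2f}, "
      f"residual={var_residual:.2f}, G={g_coefficient:.2f}")
```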

Outputs and Impact: Generally, this work will increase our ability to construct and understand user models that virtual agents can employ. Specifically, we will develop novel methods that: (1) collect data for building user models, (2) assess the generalizability of models, and (3) generate state-level inferences in specific situations. Besides being relevant for the development of intelligent social agents, this work will contribute to a deeper understanding of classic assessment instruments and to alternative situated measurement approaches across multiple scientific domains. More practically, the framework, methods, and app prototypes we develop are of potential use to clinicians and individuals interested in understanding both functional and dysfunctional health behaviours.

[BLO12] Bloch, R., & Norman, G. (2012). Generalizability theory for the perplexed: A practical introduction and guide: AMEE Guide No. 68. Medical Teacher, 34, 960–992.

[PED20] Pedersen, C.H., & Scheepers, C. (2020). An exploratory meta-analysis of the state-trait anxiety inventory through use of generalizability theory. Manuscript in preparation.

[DUT19] Dutriaux, L., Clark, N., Papies, E. K., Scheepers, C., & Barsalou, L. W. (2019). Using the Situated Assessment Method (SAM2) to assess individual differences in common habits. Manuscript under review.

[LEB16] Lebois, L. A. M., Hertzog, C., Slavich, G. M., Barrett, L. F., & Barsalou, L. W. (2016). Establishing the situated features associated with perceived stress. Acta Psychologica, 169,119–132.

[MAR14] Stacy Marsella and Jonathan Gratch. Computationally Modeling Human Emotion. Communications of the ACM, December, 2014.

[MIL11] Lynn C. Miller, Stacy Marsella, Teresa Dey, Paul Robert Appleby, John L. Christensen, Jennifer Klatt and Stephen J. Read. Socially Optimized Learning in Virtual Environments (SOLVE). The Fourth International Conference on Interactive Digital Storytelling (ICIDS), Vancouver, Canada, Nov. 2011.


Flying Robot Friends: Studying Social Drones in Virtual Reality

Supervisors: Mohamed Khamis (School of Computing Science) and Emily Cross (School of Psychology)

Designing and implementing autonomous drones to assist and interact with people in social contexts, so-called “social drones”, is an emerging area of robotics. Human-Drone Interaction (HDI) applications range from supporting users in exercising and recreation [1], to providing navigation cues [2] and serving as flying interfaces [3]. To truly make drones “social”, we must understand how humans perceive them and behave around them. Researchers have therefore traditionally run experiments in which users are observed in direct contact with drones. However, such studies can be difficult, expensive and inflexible. For example, it can be difficult, infeasible, or even dangerous to conduct a real-world experiment to study the impact of different drone sizes or flying altitudes on the user’s behavior. On the other hand, if valid experiments can be conducted in immersive virtual reality (VR), researchers can reach a larger and potentially more diverse pool of participants and control environmental variables to a greater degree. For example, changing the size of a drone in VR involves merely changing a variable’s value. Similarly, drones in VR are not bound by the physical limitations of the real world. But if VR-based studies of human-drone interaction are to be used as a springboard for informing our understanding of HDI in the real world, it is imperative to understand the extent to which findings generalize to in situ HDI.

This project aims to explore the use of immersive virtual reality (VR) as a test bed for studying human behavior around social drones. The main objectives are to understand whether results from studies conducted in VR match results from corresponding real-world settings, and to use VR to inform embodied, in-person HDI studies. As prior work suggests [5], it is expected that some behaviors will be similar across VR and the real world, thereby allowing researchers to use VR as an alternative to real-world studies in some contexts. However, understanding the limitations of VR for developing social drones will be equally vital.

To this end, the project will involve studying and comparing human proxemic behavior around drones both in VR and in the real world. While proxemic behavior has been investigated for human-robot interaction [4], it has not yet been studied for interaction with drones. It is expected that attributes of drones, such as their flying altitude or their size, affect how people distance themselves from them. Following prior work on HRI proxemics, we will also examine the extent to which people’s preferred distance to social drones differs between first- and third-person viewpoints. The results from the real-world and VR studies will be compared to assess the opportunities, challenges, and limitations of using virtual reality to conduct experiments on introducing drones in close quarters to people for social purposes, and to provide guidelines that can help researchers decide whether to employ VR in their experiments and what factors to account for if doing so.
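
As one illustration of how the VR and real-world proxemics data might be compared, the sketch below pairs each participant’s average stopping distance in the two settings and tests the difference; the file name, column names and condition labels are hypothetical placeholders rather than the project’s actual design.

```python
# Minimal sketch: paired comparison of preferred (stopping) distance to a drone
# in VR versus the real world. The CSV file, its columns (participant, setting,
# altitude, distance_m) and the condition labels are hypothetical placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("proxemics_trials.csv")

# Average each participant's stopping distance per setting ("vr" vs. "real")
per_person = (df.groupby(["participant", "setting"])["distance_m"]
                .mean()
                .unstack("setting"))

# Paired test of the VR vs. real-world difference across participants
res = stats.ttest_rel(per_person["vr"], per_person["real"])
diff = (per_person["vr"] - per_person["real"]).mean()
print(f"mean VR - real difference: {diff:.2f} m "
      f"(t={res.statistic:.2f}, p={res.pvalue:.3f})")

# Does flying altitude shift preferred distance similarly in both settings?
print(df.groupby(["setting", "altitude"])["distance_m"].mean().round(2))
```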

[1] Florian “Floyd” Mueller and Matthew Muirhead. 2015. Jogging with a Quadcopter. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15). Association for Computing Machinery, New York, NY, USA, 2023–2032.

[2] Pascal Knierim, Steffen Maurer, Katrin Wolf, and Markus Funk. 2018. Quadcopter-Projected In-Situ Navigation Cues for Improved Location Awareness. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). Association for Computing Machinery, New York, NY, USA, Paper 433, 1–6.

[3] Markus Funk. 2018. Human-drone interaction: let’s get ready for flying user interfaces! interactions 25, 3 (May-June 2018), 78–81.

[4] Jonathan Mumm and Bilge Mutlu, “Human-robot proxemics: Physical and psychological distancing in human-robot interaction,” 2011 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Lausanne, 2011, pp. 331-338.

[5] Ville Mäkelä, Rivu Radiah, Saleh Alsherif, Mohamed Khamis, Chong Xiao, Lisa Borchert, Albrecht Schmidt, and Florian Alt. 2020. Virtual Field Studies: Conducting Studies on Public Displays in Virtual Reality. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20). Association for Computing Machinery.


Multimodal Deep Learning for Detection and Analysis of Reactive Attachment Disorder in Abused and Neglected Children

Supervisors: Helen Minnis (Institute of Health and Well Being) and Alessandro Vinciarelli (School of Computing Science)

The goal of this project is to develop AI-driven methodologies for the detection and analysis of Reactive Attachment Disorder (RAD), a psychiatric disorder affecting abused and neglected children. The main effect of RAD is a “failure to seek and accept comfort”, i.e., the shut-down of a set of psychological processes known as the Attachment System, which is essential for normal development and allows children to establish and maintain beneficial relationships with their caregivers [YAR16]. While having serious implications for the child’s future (e.g., RAD is common in children with complex psychiatric disorders and criminal behavior [MOR17]), RAD is highly amenable to treatment if recognised in infancy [YAR16]. However, the disorder is hard for clinicians to detect because its symptoms are not easily visible to the naked eye.

Encouraging progress in RAD diagnosis has been achieved by manually analyzing videos of children involved in therapeutic sessions with their caregivers, but such an approach is too expensive and time-consuming to be applied in a standard clinical setting. For this reason, this project proposes the use of AI-driven technologies for the analysis of human behavior [VIN09]. These have been successfully applied to other attachment-related issues [ROF19] and can help not only to automate the observation of the interactions, thus reducing the amount of time needed to reach a possible diagnosis, but also to identify behavioural markers that might escape clinical observation. The emphasis will be on approaches that jointly model multiple behavioural modalities through the use of appropriate deep network architectures [BAL18].
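
As an illustration of what such joint modelling could look like, the sketch below shows an intermediate-fusion classifier in PyTorch that encodes facial and vocal feature sequences separately and combines them for a binary marker decision. The modalities, feature dimensions, architecture and target are illustrative assumptions only and do not prescribe the project’s eventual design.

```python
# Minimal sketch of intermediate (feature-level) multimodal fusion in PyTorch:
# per-frame facial features and vocal features are encoded separately, their
# final states concatenated and classified. Dimensions, modalities and the
# binary "marker present" target are illustrative assumptions only.
import torch
import torch.nn as nn

class MultimodalFusionClassifier(nn.Module):
    def __init__(self, face_dim=136, voice_dim=40, hidden=64):
        super().__init__()
        self.face_enc = nn.GRU(face_dim, hidden, batch_first=True)
        self.voice_enc = nn.GRU(voice_dim, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))            # logit for "marker present"

    def forward(self, face_seq, voice_seq):
        # face_seq: (batch, T_face, face_dim); voice_seq: (batch, T_voice, voice_dim)
        _, h_face = self.face_enc(face_seq)  # final hidden state per sequence
        _, h_voice = self.voice_enc(voice_seq)
        fused = torch.cat([h_face[-1], h_voice[-1]], dim=-1)
        return self.head(fused).squeeze(-1)

# Toy forward pass with random tensors standing in for extracted features
model = MultimodalFusionClassifier()
logits = model(torch.randn(4, 100, 136), torch.randn(4, 200, 40))
loss = nn.BCEWithLogitsLoss()(logits, torch.randint(0, 2, (4,)).float())
```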

The experimental activities will revolve around an existing corpus of over 300 real-world videos collected in a clinical setting, and will include three main steps:

  1. Identification of the behavioural cues (the RAD markers) most likely to account for RAD through manual observation of a representative sample of the corpus;
  2. Development of AI-driven methodologies, mostly based on signal processing and deep networks, for the detection of the RAD markers in the videos of the corpus;
  3. Development of AI-driven methodologies, mostly based on deep networks, for the automatic identification of children affected by RAD based on the presence and intensity of the cues detected at step 2.

The likely outcomes of the project include a scientific analysis of RAD-related behaviours as well as AI-driven methodologies capable of supporting the activity of clinicians. In this respect, the project aligns with the needs and interests of private and public bodies dealing with child and adolescent mental health (e.g., the UK National Health Service and the National Society for the Prevention of Cruelty to Children).

[BAL18] Baltrušaitis, T., Ahuja, C. and Morency, L.P. (2018). Multimodal Machine Learning: A Survey and Taxonomy, IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(2), 421-433.

[HUM17] Humphreys, K. L., Nelson, C. A., Fox, N. A., & Zeanah, C. H. (2017). Signs of reactive attachment disorder and disinhibited social engagement disorder at age 12 years: Effects of institutional care history and high-quality foster care. Development and Psychopathology, 29(2), 675-684.

[MOR17] Moran, K., McDonald, J., Jackson, A., Turnbull, S., & Minnis, H. (2017). A study of Attachment Disorders in young offenders attending specialist services. Child Abuse & Neglect, 65, 77-87.

[ROF19] Roffo G, Vo DB, Tayarani M, Rooksby M, Sorrentino A, Di Folco S, Minnis H, Brewster S, Vinciarelli A. (2019). Automating the Administration and Analysis of Psychiatric Tests: The Case of Attachment in School Age Children. Proceedings of the CHI, Paper No.: 595 Pages 1–12.

[VIN09] Vinciarelli, A., Pantic, M. and Bourlard, H. (2009), Social Signal Processing: Survey of an Emerging Domain, Image and Vision Computing Journal, 27(12), 1743-1759.

[YAR16] Yarger, H. A., Hoye, J. R., & Dozier, M. (2016). Trajectories of change in attachment and biobehavioral catch-up among high risk mothers: a randomised clinical trial. Infant Mental Health Journal, 37(5), 525-536.


Developing a digital avatar that invites user engagement

Supervisors: Philippe Schyns (School of Psychology) and Mary Ellen Foster (School of Computing Science)

Digital avatars can engage with humans to interact socially. Before they do so, however, they are typically in a resting, default state. The question that arises is how we should design such digital avatars in a resting state so that they have a realistic appearance that promotes engagement with a human. We will combine methods from human psychophysics, computer graphics, machine vision and social robotics to design a digital avatar (presented in VR or on a computer screen) that looks to a human participant like a sentient being (e.g. with a realistic appearance and spontaneous dynamic movements of the face and eyes) and that can engage with humans before starting an interaction (i.e. track their presence, establish realistic eye contact and so forth). Building on the strengths of digital avatar design in the Institute of Neuroscience and Psychology and social robotics research in the School of Computing Science, this project will attempt to achieve the following scientific and technological goals:

  • Identify the default face movements (including eye movements) that produce a realistic sentient appearance.
  • Implement those movements on a digital avatar which can be displayed on a computer screen or in VR.
  • Use tracking software to detect human beings in the environment, follow their movements, and engage with realistic eye contact (a minimal tracking sketch follows this list).
  • Develop models to link human behaviour with avatar movements to encourage engagement.
  • Evaluate the performance of the implemented models through deployment in labs and in public spaces.
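
As a minimal illustration of the tracking goal above, the sketch below uses OpenCV’s stock Haar cascade face detector to locate the nearest face in a webcam frame and convert its horizontal offset into a gaze (yaw) angle for the avatar; the assumed camera field of view and the avatar call are hypothetical placeholders for whatever rendering pipeline is used.

```python
# Minimal sketch: detect a nearby person with OpenCV's stock Haar cascade and
# turn the face's horizontal offset into an avatar gaze (yaw) angle. The assumed
# ~60 degree camera field of view and `avatar.set_gaze()` are hypothetical
# placeholders; stop the loop with Ctrl+C or by unplugging the camera.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)
H_FOV_DEG = 60.0                      # assumed horizontal camera field of view

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        # Track the largest (presumably nearest) face
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        offset = (x + w / 2) / frame.shape[1] - 0.5   # -0.5 .. 0.5 across image
        yaw_deg = offset * H_FOV_DEG
        # avatar.set_gaze(yaw=yaw_deg)                # hypothetical avatar call
        print(f"gaze target yaw: {yaw_deg:+.1f} deg")
cap.release()
```
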

You Never Get a Second Chance to Make a First Impression – Establishing how best to align human expectations about a robot’s performance based on the robot’s appearance and behaviour.

Supervisors: Mary Ellen Foster (School of Computing Science) and Emily Cross (School of Psychology).

Main aims and objectives:

  • A major aim of social robotics is to create embodied agents that humans can instantly and automatically understand and interact with, using the same mechanisms that they use when interacting with each other. While considerable research attention has been invested in this endeavour, it is still the case that when humans encounter robots, they need time to understand how the robot works; in other words, people need time to learn to read the signals the robot generates. People also consistently hold expectations of artificial agents that far exceed the agents’ actual capabilities, which often leads to confusion and disappointment.
  • If we can better understand human expectations about robot capabilities based on the robot’s appearance (and/or initial behaviours), and ensure that those expectations are aligned with the robot’s actual abilities, this should accelerate progress in human-robot interaction, specifically in the domains of human acceptance of robots in social settings and cooperative task performance between humans and robots. This project will combine expertise in robotic design and the social neuroscience of how we perceive and interact with artificial agents to develop a socially interactive robot, designed for use in public spaces, that requires little or no learning or effort for humans to interact with while carrying out tasks such as guidance, cooperative navigation, and interactive problem solving.

Proposed methods:

  • Computing Science: System development and integration (developing operational models of interactive behaviour and implementing them on robot platforms; a minimal sketch of such a model follows this list); deployment of robot systems in lab-based settings and in real-world public spaces.
  • Psychology/Brain Science: Behavioural tasks (questionnaires and measures of social perception, such as the Social Stroop task); non-invasive mobile brain imaging (functional near-infrared spectroscopy) to record human brain activity when people encounter the artificial agent in question.
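
As a minimal illustration of an operational model of interactive behaviour, the sketch below implements a small state machine in which the robot greets a detected person and only begins a task once mutual attention has been sustained; the states, thresholds and sensing inputs are illustrative assumptions, not the project’s design.

```python
# Minimal sketch of an operational model of interactive behaviour as a small
# state machine: the robot greets a detected person and starts interacting only
# once mutual attention is sustained. States, thresholds and the sensing inputs
# are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Perception:
    person_present: bool   # e.g. from a people detector
    facing_robot: bool     # e.g. from head-pose estimation
    seconds_facing: float  # how long mutual attention has been held

class EngagementModel:
    def __init__(self, attention_threshold_s=2.0):
        self.state = "IDLE"
        self.attention_threshold_s = attention_threshold_s

    def step(self, p: Perception) -> str:
        if not p.person_present:
            self.state = "IDLE"
        elif self.state == "IDLE":
            self.state = "GREET"        # first impression: signal capabilities
        elif self.state == "GREET" and p.facing_robot \
                and p.seconds_facing >= self.attention_threshold_s:
            self.state = "INTERACT"     # sustained attention -> start task
        return self.state

model = EngagementModel()
print(model.step(Perception(True, False, 0.0)))   # GREET
print(model.step(Perception(True, True, 2.5)))    # INTERACT
print(model.step(Perception(False, False, 0.0)))  # IDLE
```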

Likely outputs:

  • Empirically based principles for social robot design that optimize the alignment between a robot’s appearance, user expectations, and robot performance, based on brain and behavioural data
  • A publicly available, implemented, and validated robot system embodying these principles
  • Empirical research papers detailing findings for a computing science audience (e.g., ACM Transactions on Human-Robot Interaction), a psychology/neuroscience audience (e.g., Psychological Science, Cognition), and a general audience, drawing on the multidisciplinary aspects of the work (e.g., PNAS, Current Biology), as well as papers at appropriate conferences and workshops such as Human-Robot Interaction, Intelligent Virtual Agents, CHI, and similar.

[Fos17] Foster, M. E.; Gaschler, A.; and Giuliani, M. Automatically Classifying User Engagement for Dynamic Multi-party Human–Robot Interaction. International Journal of Social Robotics. July 2017.

[Fos16] Foster, M. E.; Alami, R.; Gestranius, O.; Lemon, O.; Niemelä, M.; Odobez, J.; and Pandey, A. K. The MuMMER project: Engaging human-robot interaction in real-world public spaces. In Proceedings of the Eighth International Conference on Social Robotics, 2016.

[Cro19] Cross, E. S., Riddoch, K. A., Pratts, J., Titone, S., Chaudhury, B. & Hortensius, R. (2019). A neurocognitive investigation of the impact of socialising with a robot on empathy for pain. Philosophical Transactions of the Royal Society B.

[Hor18] Hortensius, R. & Cross, E.S. (2018). From automata to animate beings: The scope and limits of attributing socialness to artificial agents. Annals of the New York Academy of Science: The Year in Cognitive Neuroscience.