Projects

A FULL LIST OF PROJECTS FOR THE ACADEMIC YEAR 2021-2022 WILL BE MADE AVAILABLE ON JANUARY 31st, 2021. PLEASE WAIT UNTIL THEN BEFORE SUBMITTING YOUR APPLICATION.

Enhancing Social Interactions via Physiologically-Informed AI

Supervisors:
Marios Philiastides (School of Psychology) and Alessandro Vinciarelli (School of Computing Science).

Over the past few years, major developments in machine learning (ML) have enabled important advances in artificial intelligence (AI). Firstly, the field of deep learning (DL) – which enables models to learn complex input-output functions (e.g. pixels in an image mapped onto object categories) – has emerged as a major player in this area. DL builds upon neural network theory and design architectures, expanding these in ways that enable more complex function approximations.

The second major advance has combined DL with reinforcement learning (RL) to enable new AI systems that learn state-action policies – in what is often referred to as deep reinforcement learning (DRL) – and thereby enhance human performance in complex tasks. Despite these advancements, however, critical challenges remain in incorporating AI into a team with humans.

One of the most important challenges is the need to understand how humans value intermediate decisions (i.e. before they generate a behaviour) through internal models of their confidence, expected reward, risk, etc. Critically, such information about human decision-making is not only expressed through overt behaviour, such as speech or action, but more subtly through physiological changes, small changes in facial expression and posture, and so on. Socially and emotionally intelligent people are excellent at picking up on this information to infer the current disposition of one another and to guide their decisions and social interactions.

In this project, we propose to develop a physiologically-informed AI platform, utilizing neural and systemic physiological information (e.g. arousal, stress) ([Fou15][Pis17][Ghe18]) together with affective cues from facial features ([Vin09][Bal16]) to infer latent cognitive and emotional states from humans interacting in a series of social decision-making tasks (e.g. trust game, prisoner’s dilemma, etc.). Specifically, we will use these latent states to generate rich reinforcement signals to train AI agents (specifically DRL agents) and allow them to develop a “theory of mind” ([Pre78][Fri05]) in order to make predictions about upcoming human behaviour. The ultimate goal of this project is to deliver advances towards “closing the loop”, whereby the AI agent feeds back its own predictions to the human players in order to optimise behaviour and social interactions.
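As a rough illustration of how such physiologically informed reinforcement signals might enter a DRL loop, the sketch below shapes the reward of a toy Q-learning agent with a confidence value decoded from (here simulated) physiological features. The environment, the decoder and the reward weighting are hypothetical placeholders, not the project's actual design.

```python
# Minimal sketch (not the project's pipeline): a tabular Q-learning agent whose
# reward is shaped by a latent "confidence" signal decoded from physiological
# features. Environment, decoder and weighting are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 4, 2          # toy repeated-game state/action space
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1

def decode_confidence(physio_features):
    """Placeholder for a decoder (e.g. EEG/pupil-based) returning a value in [0, 1]."""
    return float(np.clip(physio_features.mean(), 0.0, 1.0))

def env_step(state, action):
    """Toy stand-in for one round of a social decision-making task."""
    task_reward = rng.normal(loc=0.5 * action, scale=0.1)
    next_state = rng.integers(n_states)
    physio_features = rng.random(8)          # would come from sensors in practice
    return next_state, task_reward, physio_features

state = 0
for t in range(5000):
    action = rng.integers(n_actions) if rng.random() < eps else int(Q[state].argmax())
    next_state, task_reward, physio = env_step(state, action)
    confidence = decode_confidence(physio)
    # Shaped reward: task outcome plus a bonus for acting when the human is confident.
    reward = task_reward + 0.5 * confidence
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(Q)
```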

[Ghe18] S Gherman, MG Philiastides, “Human VMPFC encodes early signatures of confidence in perceptual decisions”, eLife, 7: e38293, 2018.

[Pis17] MA Pisauro, E Fouragnan, C Retzler, MG Philiastides, “Neural correlates of evidence accumulation during value-based decisions revealed via simultaneous EEG-fMRI”, Nature Communications, 8: 15808, 2017.

[Fou15] E Fouragnan, C Retzler, KJ Mullinger, MG Philiastides, “Two spatiotemporally distinct value systems shape reward-based learning in the human brain”, Nature Communications, 6: 8107, 2015.

[Vin09] A. Vinciarelli, M. Pantic, and H. Bourlard, “Social Signal Processing: Survey of an Emerging Domain”, Image and Vision Computing Journal, Vol. 27, no. 12, pp. 1743-1759, 2009.

[Bal16] T.Baltrušaitis, P.Robinson, and L.-P. Morency. “Openface: an open source facial behavior analysis toolkit.” Proceedings of the IEEE Winter Conference on Applications of Computer Vision, 2016.

[Pre78] D. Premack, G. Woodruff, “Does the chimpanzee have a theory of mind?”, Behavioral and brain sciences Vol. 1, no. 4, pp. 515-526, 1978.

[Fri05] C. Frith, U. Frith, “Theory of Mind”, Current Biology Vol. 15, no. 17, R644-646, 2005.

Into the thick of it: Situating digital health behaviour interventions

Supervisors:
Esther Papies (School of Psychology) and Stacy Marsella (School of Psychology)

Aims and Objectives.  This project will examine how to best integrate a digital health intervention into a user’s daily life.  Are digital interventions more effective if they are situated, i.e., adapted to the specific users and situations where behaviour change should happen?  If so, which features of situations should a health (phone) app use to remind a user to perform a healthy behaviour (e.g., time of day, location, mood, activity pattern)? From a Social AI perspective: how do we make inferences about those situations from sensing data and prior models of users’ situated behaviours? How and when should the app interact socially with the user to improve the situated behaviour? And how do we adapt the user model over time to improve the app’s tailored interaction with a specific user? We will test this in the domain of hydration, with an intervention to increase the consumption of water.

Background and Novelty.  Digital interventions are a powerful new tool in the domain of individual health behaviour change.  Health apps can reach large numbers of users at relatively low cost, and can be tailored to an individual’s health goals.  So far, however, digital health interventions have not exploited a key strength compared to traditional interventions delivered by human health practitioners, namely the ability to situate interventions in critical situations in a user’s daily life.  Rather than being presented statically at pre-set times, situated interventions respond and adapt to the key contextual features that affect a user’s health behaviour.  Previous work has shown that context features have a powerful influence on health behaviour, for example by triggering habits, impulses, and social norms. Therefore, it is vital for effective behaviour change interventions to take the specific context of a user’s health behaviours into account.  The current proposal will test whether situating a mobile health intervention, i.e., designing it to respond adaptively to contextual features, increases its effectiveness compared to un-situated interventions.  We will do this in the domain of hydration, because research suggests that many adults may be chronically dehydrated, with implications for cognitive functioning, mood, and physical health (e.g., risk of diabetes, overweight, kidney damage).

Methods.  We will build an app to increase water intake and compare a static version of this app with a dynamic version that responds to time, a user’s activity level, location, social context, mood, and other possible features that may be linked to hydration (Paper 1).  We will assess whether an app that responds actively to such features leads over time to more engagement and behaviour change than a static app (Paper 2), and which contextual inferences work best to situate an app for effective behaviour change (Paper 3).
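To make "situating" a prompt concrete at the code level, the sketch below uses a hand-written rule over hypothetical context features (activity level, location, mood, time since last prompt) to decide when the dynamic app version might prompt the user to drink. In the project itself such triggers would be informed by the user model and evaluated empirically rather than hard-coded.

```python
# Illustrative sketch only: a rule-based trigger for a situated hydration prompt.
# Feature names and thresholds are hypothetical; a deployed app might learn them
# per user (e.g. with a contextual bandit) rather than fixing them in advance.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Context:
    time: datetime
    activity_level: float      # e.g. steps in the last hour, normalised to [0, 1]
    at_home: bool
    mood_valence: float        # e.g. from a brief self-report, in [-1, 1]
    last_prompt: datetime

def should_prompt(ctx: Context) -> bool:
    """Decide whether the dynamic app version should prompt the user to drink."""
    if ctx.time - ctx.last_prompt < timedelta(hours=2):
        return False                       # avoid over-prompting
    after_activity = ctx.activity_level > 0.7
    low_mood = ctx.mood_valence < -0.3     # low hydration is associated with negative affect
    routine_moment = ctx.at_home and 7 <= ctx.time.hour <= 9
    return after_activity or low_mood or routine_moment

ctx = Context(datetime(2021, 5, 1, 8, 15), 0.2, True, 0.1,
              datetime(2021, 5, 1, 5, 0))
print(should_prompt(ctx))   # True: a routine morning moment at home
```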

Outputs.  This project will lead to presentations and papers at both AI and Psychology conferences outlining the principles and results of situating  health behaviour interventions, using the tested healthy hydration app.

Impact.  Results from this work will have implications for the design of health behaviour interventions across domains, as well as for our understanding of the processes underlying behaviour change. It will explore how sensing and adaptive user modelling can situate both user and AI system in a common contextual frame, and whether this facilitates engagement and behaviour change.

Alignment with Industrial Interests.  This work will be of interest to industry collaborators interested in personalised health behaviour, such as Danone.

[MUN15] Muñoz, C. X., Johnson, E. C., McKenzie, A. L., Guelinckx, I., Graverholt, G., Casa, D. J., … Armstrong, L. E. (2015). Habitual total water intake and dimensions of mood in healthy young women. Appetite, 92, 81–86.

[PAP17] Papies, E. K. (2017). Situating interventions to bridge the intention–behaviour gap: A framework for recruiting nonconscious processes for behaviour change. Social and Personality Psychology Compass, 11(7).

[RIE13] Riebl, S. K., & Davy, B. M. (2013). The Hydration Equation: Update on Water Balance and Cognitive Performance. ACSM’s Health & Fitness Journal, 17(6), 21–28.

[WAN17] Wang and S. Marsella, “Assessing personality through objective behavioral sensing,” in Proceedings of the 7th international conference on affective computing and intelligent interaction, 2017.

[LYN11] Lynn C. Miller, Stacy Marsella, Teresa Dey, Paul Robert Appleby, John L. Christensen, Jennifer Klatt and Stephen J. Read. Socially Optimized Learning in Virtual Environments (SOLVE). The Fourth International Conference on Interactive Digital Storytelling (ICIDS), Vancouver, Canada, Nov. 2011.

[PYN07] Pynadath, David V.; Marsella, Stacy C. Minimal mental models. In Proceedings of the 22nd National Conference on Artificial Intelligence (AAAI), pp. 1038-1044, 2007.

Sharing the road: Cyclists and automated vehicles

Supervisors:
Steve Brewster (School of Computing Science) and Frank Pollick (School of Psychology).

Automated vehicles must share the road with pedestrians and cyclists, and drive safely around them. Autonomous cars, therefore, must have some form of social intelligence if they are to function correctly around other road users. There has been work looking at how pedestrians may interact with future autonomous vehicles [ROT15] and potential solutions have been proposed (e.g. displays on the outside of cars to indicate that the car has seen the pedestrian). However, there has been little work on automated cars and cyclists.

When there is no driver in the car, social cues such as eye contact, waving, etc., are lost [ROT15]. This changes the social interaction between the car and the cyclist, and may cause accidents if it is no longer clear, for example, who should proceed. Automated cars also behave differently to cars driven by humans, e.g. they may appear more cautious in their driving, which the cyclist may misinterpret. The aim of this project is to study the social cues used by drivers and cyclists, and create multimodal solutions that can enable safe cycling around autonomous vehicles.

The first stage of the work will be observation of the communication between human drivers and cyclists through literature review and fieldwork. The second stage will be to build a bike into our driving simulator [MAT19] so that we can test interactions between cyclists and drivers safely in a simulation.

We will then start to look at how we can facilitate the social interaction between autonomous cars and cyclists. This will potentially involve visual displays on cars, or audio feedback from them, to indicate state information to cyclists nearby (e.g. whether they have been detected, or whether the car is letting the cyclist go ahead). We will also investigate interactions and displays for cyclists, for example multimodal displays in cycling helmets [MAT19] to give them information about car state (which could be collected by V2X software on the cyclist’s phone, for example), or ways of communicating directly with the car via input on the handlebars or via gestures. These will be experimentally tested in the simulator and, if we have time, in highly controlled real driving scenarios.
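One possible shape for the cyclist-facing side of such a system is sketched below: a hypothetical V2X-style message about the car's state is mapped to a small set of multimodal helmet cues. The message fields, cue vocabulary and distance threshold are illustrative assumptions only, not a proposed design.

```python
# Hypothetical sketch of mapping vehicle state (e.g. received over V2X on the
# cyclist's phone) to simple multimodal helmet cues. Fields, cue names and
# thresholds are illustrative only.
from enum import Enum

class Cue(Enum):
    GREEN_LED = "cyclist detected"
    AMBER_LED_TONE = "car yielding, proceed with care"
    RED_LED_VIBRATE = "car not yielding / cyclist not detected"

def helmet_cue(car_detected_cyclist: bool, car_yielding: bool, distance_m: float) -> Cue:
    if not car_detected_cyclist and distance_m < 30.0:
        return Cue.RED_LED_VIBRATE          # nearby car has not seen the cyclist
    if car_yielding:
        return Cue.AMBER_LED_TONE
    return Cue.GREEN_LED if car_detected_cyclist else Cue.RED_LED_VIBRATE

print(helmet_cue(True, True, 12.0))         # Cue.AMBER_LED_TONE
```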

The output of this work will be a set of new techniques to support the social interaction between autonomous vehicles and cyclists. We currently work with companies such as Jaguar Land Rover and Bosch, and our results will have direct application in their products.

[ROT15] Rothenbucher, D., Li, J., Sirkin, D. and Ju, W., Ghost driver: a platform for investigating interactions between pedestrians and driverless vehicles, Adjunct Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications, pp. 44–49, 2015.

[MAT19] Matviienko, A., Brewster, S., Heuten, W. and Boll, S. Comparing unimodal lane keeping cues for child cyclists (https://doi.org/10.1145/3365610.3365632), Proceedings of the 18th International Conference on Mobile and Ubiquitous Multimedia, 2019.

Cross-cultural detection of and adaptation to different user types for a public-space robot

Supervisors:
Monika Harvey (School of Psychology), Mary Ellen Foster (School of Computing Science) and Olga Perepelkina (Neurodata Lab).

It is well known that people from different demographic groups – defined by age, gender, socio-economic status, or culture, to name a few – have different preferred interaction styles. However, when a robot is placed in a public space, it often has a single, standard interaction style that it uses in all situations, across all of the populations engaging with it. If a robot were able to detect the type of person it was interacting with and adapt its behaviour accordingly on the fly, this would support longer, higher-quality interactions, which in turn would increase its utility and acceptance.

The overarching goal of this PhD project is to create such a robot, and our collaboration with Neurodata Lab in Russia will allow us to investigate cultural as well as other, more common demographic markers. We will also make use of the audiovisual sensing software developed by Neurodata Lab, which will be implemented on the robot.

As a result, the proposed project will consist of several distinct phases. Firstly, a simple robot system will be built and deployed in various locations across Scotland and Russia, and the audiovisual data of all people interacting with it will be recorded. As a second step, these data will be processed and classified with the aim of identifying characteristic behaviours of different user types. In a further step, the robot behaviour will be modified so that it is able to adapt to the different users, and, in a final step, the modified robot will be evaluated in the original deployment locations.
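A minimal sketch of the "identify user types, then adapt" step is given below: audiovisual interaction features (here random stand-ins) are clustered into a few user types, and each type is mapped to an interaction style. The feature set, the number of clusters and the styles are all hypothetical.

```python
# Illustrative sketch: cluster audiovisual interaction features into user types
# and map each type to an interaction style. Features and styles are hypothetical;
# real data would come from the audiovisual sensing software.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Rows: one interaction; columns: e.g. speech rate, mean pitch, gaze-at-robot
# ratio, interpersonal distance.
X = rng.random((200, 4))

scaler = StandardScaler().fit(X)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaler.transform(X))

interaction_styles = {0: "slow, formal, extra confirmation prompts",
                      1: "default style",
                      2: "fast, informal, more proactive suggestions"}

def choose_style(features: np.ndarray) -> str:
    cluster = int(kmeans.predict(scaler.transform(features.reshape(1, -1)))[0])
    return interaction_styles[cluster]

print(choose_style(rng.random(4)))
```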

The results of the project will be of great relevance to our industrial partner, allowing them to further develop and market their audiovisual sensing software. The student will benefit greatly from the industrial as well as the cross-cultural work experience. More generally, the results will be of significant interest in areas including social robotics, affective computing, and intelligent user interfaces.

[FOS16] Foster, M. E., Alami, R., Gestranius, O., Lemon, O., Niemelä, M., Odobez, J.-M., & Pandey, A. K. (2016). The MuMMER Project: Engaging Human-Robot Interaction in Real-World Public Spaces. In Social Robotics (pp. 753–763).

[LEA18] Learmonth, G., Maerker, G., McBride, N., Pellinen, P. & Harvey, M. (2018). Right-lateralised lane keeping in young and older British drivers. PLoS One, 13(9).

[MAE19] Maerker, G., Learmonth, G., Thut. G. & Harvey, M. (2019). Intra- and inter-task reliability of spatial attention measures in healthy older adults. PLoS One, 14(2), 1-21.

[PER19] Perepelkina, O., & Vinciarelli, A. (2019). Social and Emotion AI: The Potential for Industry Impact. In Proceedings of the 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), 2019.

Social Interaction via Touch Interactive Volumetric 3D Virtual Agents

Supervisors:
Ravinder Dahiya (School of Engineering) and Philippe Schyns (School of Psychology)

Vision- and touch-based interactions are fundamental modes of interaction between humans, and between humans and the real world. Several portable devices use these modes to display gestures that communicate social messages such as emotions. Recently, non-volumetric 3D displays have attracted considerable interest because they give users a 3D visual experience – for example, 3D movies provide viewers with a perceptual sensation of depth via a pair of glasses. Using a newly developed haptics-based holographic 3D volumetric display, this project will develop new forms of social interaction with virtual agents. Unlike various VR tools that require headsets (which can lead to motion sickness), here the interaction with 3D virtual objects will be less restricted, closer to its natural form, and, critically, will give the user the illusion that the virtual agent is physically present. The experiments will involve interactions with holographically displayed virtual human faces and bodies engaging in various social gestures. To this end, simulated 2D images showing these various gestures will be displayed mid-air in 3D. For enriched interaction and enhanced realism, this project will also involve hand gesture recognition and the control of haptic feedback (i.e. air patterns) to simulate the surface of several classes of virtual objects. This fundamental study could be transformative for sectors where physical interaction with virtual objects is critical, including medical, mental health, sports, education, heritage, security, and entertainment.

Evaluating and Shaping Cognitive Training with Artificial Intelligence Agents

Supervisors:
Fani Deligianni (School of Computing Science) and Monika Harvey (School of Psychology)

Virtual reality (VR) has emerged as a promising tool for cognitive training in several neurological conditions (e.g. mild cognitive impairment, acquired brain injury), as well as for enhancing healthy ageing and reducing the impact of mental health conditions (e.g. anxiety and fear). Cognitive training refers to behavioural training that results in the enhancement of specific cognitive abilities such as visuospatial attention and working memory. Using VR for such training offers several advantages towards achieving such improvements, including its high level of versatility and its ability to dynamically adjust difficulty in real-time. Furthermore, it is an immersive technology and thus has great potential to increase motivation and compliance in subjects. Currently, VR and serious video games come in a wide variety of shapes and forms, and the emerging data are difficult to quantify and compare in a meaningful way (Sokolov et al., 2020).

This project aims to exploit machine learning to develop intuitive measures of cognitive training in a platform-independent way. The project is challenging, as there is great variability in cognitive measures even in well-controlled, well-designed lab experiments (Learmonth et al., 2017; Benwell et al., 2014). The objectives of the project are:

  1. Predict psychological dimensions (e.g. enjoyment, anxiety, valence and arousal) based on performance and neurophysiological data.
  2. Relate performance improvements (i.e. learning rate) to psychological dimensions and physiological data (e.g. EEG and eye-tracking).
  3. Develop artificial intelligence approaches that are able to modulate the VR world to control learning rate and participant satisfaction.

VR is a promising new technology that provides new means of building frameworks that will help to improve socio-cognitive processes. Machine learning methods that dynamically control aspects of the VR games are critical to enhanced engagement and learning rates (Darzi et al. 2019, Freer et al. 2020). Developing continuous measures of spatial attention, cognitive workload and overall satisfaction would provide intuitive ways for users to interact with the VR technology and allow the development of a personalised experience. Furthermore, these measures will play a significant role in objectively evaluating and shaping new emerging VR platforms and this approach will thus generate significant industrial interest.
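As one simple illustration of objective 3 above, the sketch below adjusts VR task difficulty to hold recent performance near a target level, while backing off whenever an (assumed) physiological workload index is high. The thresholds and the workload signal are placeholders, not a proposed design.

```python
# Minimal sketch, not the project's method: nudge VR task difficulty towards a
# target accuracy while easing off when an assumed workload index (e.g. derived
# from EEG/eye-tracking features) is high. All signals and thresholds are toy values.
def update_difficulty(difficulty: float,
                      recent_accuracy: float,
                      workload_index: float,
                      target_accuracy: float = 0.75,
                      step: float = 0.05) -> float:
    """Return the next difficulty level in [0, 1]."""
    if workload_index > 0.8:
        difficulty -= step                # high workload: ease off regardless of performance
    elif recent_accuracy > target_accuracy + 0.05:
        difficulty += step                # too easy: make it harder
    elif recent_accuracy < target_accuracy - 0.05:
        difficulty -= step                # too hard: make it easier
    return min(1.0, max(0.0, difficulty))

d = 0.5
for acc, wl in [(0.9, 0.3), (0.85, 0.4), (0.6, 0.9), (0.7, 0.5)]:
    d = update_difficulty(d, acc, wl)
    print(round(d, 2))
```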

[BEN14] Benwell, C.S.Y, Thut, G., Grant, A. and Harvey, M. (2014). A rightward shift in the visuospatial attention vector with healthy aging. Frontiers in Aging Neuroscience, 6, article 113, 1-11.

[DAR19] A. Darzi, T. Wondra, S. McCrea and D. Novak (2019). Classification of Multiple Psychological Dimensions in Computer Game Players Using Physiology, Performance, and Personality Characteristics. Frontiers in Neuroscience, 2019.

[FRE20] D. Freer, Y. Guo, F. Deligianni and G-Z. Yang (2020). On-Orbit Operations Simulator for Workload Measurement during Telerobotic Training. IEEE RA-L, https://arxiv.org/abs/2002.10594.

[LEA17] Learmonth, G., Benwell, C. S.Y., Thut, G. and Harvey, M. (2017). Age-related reduction of hemispheric lateralization for spatial attention: an EEG study. NeuroImage, 153, 139-151.

[SOK20] A. Sokolov, A. Collignon and M. Bieler-Aeschlimann (2020). Serious video games and virtual reality for prevention and neurorehabilitation of cognitive decline because of aging and neurodegeneration. Current Opinion in Neurology, 33(2), 239-248.

Modulating Cognitive Models of Emotional Intelligence

Supervisors:
Fani Deligianni (School of Computing Science) and Frank Pollick (School of Psychology)

State-of-the-art artificial intelligence (AI) systems mimic how the brain processes information to achieve unprecedented accuracy and performance in tasks such as object/face recognition and text/speech translation. However, one key characteristic that defines human success is emotional intelligence. Empathy, the ability to understand other people’s feelings and to reflect upon them emotionally, shapes social interaction and is important for both personal and professional success. Although some progress has been made in developing systems that detect emotions based on facial expressions and physiological data, building systems that can relate to and reflect upon those emotions is far more challenging. Therefore, understanding how empathic/emotional responses emerge via complex information processing between key brain regions is of paramount importance for developing emotionally aware AI agents.

In this project, we will exploit real-time functional Magnetic Resonance Imaging (fMRI) neurofeedback techniques to build cognitive models that explain the modulation of brain activity in key regions related to empathy and emotion. For example, the anterior insula is a brain region located in deep grey matter that has been consistently implicated in empathic/emotional responses and in the abnormal emotional processing observed in several disorders, such as Autism Spectrum Disorder and misophonia (Kumar et al. 2017). Neurofeedback has shown promising results in regulating the activity of the anterior insula, and it could enable therapeutic training techniques (Kanel et al. 2019).

This approach would reveal how brain regions interact during neuromodulation and allow cognitive models to emerge in real-time. Subsequently, to allow training in more naturalistic environments, we propose cross-domain learning between fMRI and EEG. The motivation is that, whereas fMRI is the gold-standard imaging technique for deep grey matter structures, it is limited by its lack of portability, its discomfort in use and its low temporal resolution (Deligianni et al. 2014). On the other hand, advances in wearable EEG technology show promising results for use beyond well-controlled lab experiments. Toward this end, advanced machine learning algorithms based on representation learning and domain generalisation will be developed. Domain/model generalisation in deep learning aims to learn generalised features and extract representations in an ‘unseen’ target domain by eliminating bias observed via multiple source domain data (Volpi et al. 2018).
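A deliberately simplified sketch of the cross-domain idea, under the assumption that simultaneously recorded EEG-fMRI data are available for training: learn a mapping from EEG features to an fMRI-derived regional signal so that portable EEG alone can later approximate it. The project envisages much richer representation-learning and domain-generalisation models; this linear stand-in on synthetic data only illustrates the train-in-the-scanner, deploy-with-EEG split.

```python
# Highly simplified stand-in for cross-domain fMRI-EEG learning: map EEG features
# to an fMRI-derived target (e.g. anterior insula activity) with ridge regression
# on synthetic data. Real models would use representation learning instead.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_eeg_features = 300, 64
eeg = rng.standard_normal((n_trials, n_eeg_features))
true_weights = rng.standard_normal(n_eeg_features) * 0.2
fmri_roi = eeg @ true_weights + 0.5 * rng.standard_normal(n_trials)  # synthetic target

eeg_tr, eeg_te, roi_tr, roi_te = train_test_split(eeg, fmri_roi, random_state=0)
model = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(eeg_tr, roi_tr)
print("held-out R^2:", round(model.score(eeg_te, roi_te), 3))
```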

Summarising, the overall aims of the project are:

  1. To build data-driven cognitive models of real-time brain network interaction during emotional modulation via neurofeedback techniques.
  2. To develop advanced machine learning algorithms to perform cross-domain learning between fMRI and EEG.
  3. To develop intelligent artificial agents based on portable EEG systems to successfully regulate emotional responses, taking into account cognitive models derived in the fMRI scanner.

[DEL14] Deligianni et al. ‘Relating resting-state fMRI and EEG whole-brain connectomes across frequency bands’, Frontiers in Neuroscience, 8(258), 2014.

[KAN19] Kanel et al. ‘Empathy to emotional voices and the use of real-time fMRI to enhance activation of the anterior insula’, NeuroImage, 198, 2019.

[KUM17] Kumar et al. ‘The Brain Basis for Misophonia’, Current Biology, 27(4), 2017.

[VOL18] Volpi et al. ‘Generalizing to Unseen Domains via Adversarial Data Augmentation’, Neural Information Processing Systems, 2018.

Detecting Affective States based on Human Motion Analysis

Supervisors:
Fani Deligianni (School of Computing Science) and Marios Philiastides (School of Psychology)

Human motion analysis is a powerful tool for extracting biomarkers of disease progression in neurological conditions such as Parkinson’s disease and Alzheimer’s disease. Gait analysis has also revealed several indices that relate to emotional well-being. For example, increased gait speed, step length and arm swing have been related to positive emotions, whereas a low gait initiation reaction time and a flexed posture have been related to negative feelings (Deligianni et al. 2019). Strong neuroscientific evidence shows that these relationships arise from an interaction between the brain networks involved in gait and emotion. It is therefore not surprising that gait has also been related to mood disorders, such as depression and anxiety.
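To make the gait-to-affect link concrete, the sketch below computes a few of the features mentioned above (speed, a step-length proxy, arm-swing amplitude) from a 3D pose time series and feeds them to a standard classifier. The joint layout, labels and data are synthetic; a real pipeline would use validated gait features and motion-capture or RGB-D input.

```python
# Illustrative sketch only: simple gait features from a (T, n_joints, 3) pose
# sequence, classified with a standard model. Joint indices, labels and data are
# synthetic assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def gait_features(poses: np.ndarray, fps: float = 30.0) -> np.ndarray:
    """poses: (T, n_joints, 3); joint 0 = pelvis, joint 1 = left wrist (assumed)."""
    pelvis = poses[:, 0, :]
    speed = np.linalg.norm(np.diff(pelvis, axis=0), axis=1).mean() * fps
    step_proxy = pelvis[:, 2].std()                      # vertical oscillation of the pelvis
    arm_swing = poses[:, 1, 0].max() - poses[:, 1, 0].min()
    return np.array([speed, step_proxy, arm_swing])

rng = np.random.default_rng(0)
X = np.stack([gait_features(rng.random((120, 15, 3))) for _ in range(100)])
y = rng.integers(0, 2, size=100)                         # 0 = neutral, 1 = low mood (toy labels)
clf = RandomForestClassifier(random_state=0).fit(X[:80], y[:80])
print("toy accuracy:", clf.score(X[80:], y[80:]))
```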

In this project, we aim to investigate the relationship between affective mental states and psychomotor abilities related to gait, balance and posture, while emotions are modulated via augmented reality displays. The goal is to develop a comprehensive, continuous map of these interrelationships in both normal subjects and subjects affected by a mood disorder. In this way, we will derive objective measures that would allow us to detect early signs of abnormality and to intervene via intelligent social agents. This is a multi-disciplinary project with several challenges to address:

  1. Build a robust experimental setup based on intuitive, naturalistic paradigms.
  2. Develop AI algorithms to relate neurophysiological data to gait characteristics based on state-of-the-art motion capture systems (taking into account motion artefacts during gait).
  3. Develop AI algorithms to improve the detection of gait characteristics via RGB-D cameras (Gu et al. 2020) and possibly new assisted living technologies based on pulsed laser beams.

The proposed AI technology for social agents has several advantages. It can enable the development of intelligent social agents that track mental well-being based on objective measures and provide personalised feedback and suggestions. In many cases, assessment is currently based on self-reports via mobile apps. Such measures of disease progression are subjective, and it has been found that in major disorders they do not correlate well with objective evaluations. Furthermore, measurements of gait characteristics are continuous, and they can reveal episodes of mood disorders that are not apparent when the subject visits a health practitioner. This approach might shed light on subject variability in response to behavioural therapy and provide more opportunities for earlier intervention (Queirazza et al. 2019). Finally, compared to other state-of-the-art affect recognition approaches, human motion analysis might pose fewer privacy issues and enhance users’ trust in and comfort with the technology. In situations where facial expressions are not easy to track, human motion analysis can be considerably more accurate in classifying subjects with mental disorders.

[DEL19] F Deligianni, Y Guo, GZ Yang, ‘From Emotions to Mood Disorders: A Survey on Gait Analysis Methodology’, IEEE journal of biomedical and health informatics, 2019.

[GUO19] Y Guo, F Deligianni, X Gu, GZ Yang, ‘3-D Canonical pose estimation and abnormal gait recognition with a single RGB-D camera’, IEEE Robotics and Automation Letters, 2019.

[XGU20] X Gu, Y Guo, F Deligianni, GZ Yang, ‘Coupled Real-Synthetic Domain Adaptation for Real-World Deep Depth Enhancement.’, IEEE Transactions on Image Processing, 2020.

[QUE19] F Queirazza, E Fouragnan, JD Steele, J Cavanagh and MG Philiastides, Neural correlates of weighted reward prediction error during reinforcement learning classify response to Cognitive Behavioural Therapy in depression, Science Advances, 5 (7), 2019.

Improving engagement with mobile health apps by understanding (mis)alignment between design elements and personal characteristics

Supervisors:
Lawrence Barsalou (School of Psychology) and Aleksandar Matic (Koa Health)

Background

Mobile health apps have brought growing enthusiasm for delivering behavioural and health interventions at low cost and in a scalable fashion. Unfortunately, the potential impact of mobile health applications has typically been seriously limited by low user engagement and high drop-out rates. A number of studies have unpacked potential reasons for these problems, including a non-optimal fit to a user’s problems, difficulty of use, privacy concerns, and low trustworthiness [TOR18]. Although best practices for developing engaging apps have been established, a growing consensus holds that improving engagement further requires personalisation at an individual level. However, because the factors that influence individual engagement are complex, individually personalised mobile health apps have rarely been developed.

The psychological literature provides numerous clues about how user interactions can be designed in more engaging ways based on personal characteristics. For instance, it is recommended to highlight rewards and social factors for extraverts, safety and certainty for neurotic individuals, achievements and structure for conscientious individuals [HIR12], and external vs internal factors for individuals with high vs low locus of control [CC14]. Developing and testing personalised mobile health apps based on each personal characteristic would require a long process and many A/B trials, together with significant effort and cost. Perhaps this explains why personalisation has been limited in practice and why most mobile health apps have been designed in a one-size-fits-all manner.

Project approach, objectives, and outcomes

Instead of designing and testing each personalised app element, the project here will pursue two novel approaches. First, we will conduct a retrospective exploration of previous app use as documented in the literature. Specifically, we will assess a) personal characteristics of individuals who have previously used mobile health apps, b) design elements (including intervention mechanisms) of these apps, and c) outcomes related to app engagement (e.g., drop-out rates, frequency of use). Of focal interest will be how personal characteristics and app design interact to produce different levels of app engagement.  We aim to publish a major review of the literature based on this work.

Second, in a well-established stress app that we continue to develop, we will allow users to configure its design features in various ways.  We will also collect data about users’ personal characteristics.  From these data, we hope to develop design principles for tailoring future apps and intervention mechanisms to specific individuals.  A series of studies will be performed in this line of work, together with related publications.
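The person-by-design interplay described above can be probed with a simple statistical model. The sketch below (synthetic data, hypothetical variable names) fits an engagement model with an interaction term between a personality trait and a configurable design element; in practice the project would work with the measured characteristics and the real app's configuration options.

```python
# Minimal sketch: does a design element's effect on engagement depend on a
# personal characteristic? Tested via an interaction term on synthetic data.
# Variable names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "extraversion": rng.normal(size=n),                  # standardised trait score
    "social_features_on": rng.integers(0, 2, size=n),    # app design element (0/1)
})
# Simulated ground truth: social features help mainly for extraverted users.
df["engagement"] = (0.2 * df["social_features_on"]
                    + 0.4 * df["extraversion"] * df["social_features_on"]
                    + rng.normal(scale=1.0, size=n))

model = smf.ols("engagement ~ extraversion * social_features_on", data=df).fit()
print(model.params)          # the interaction coefficient should recover ~0.4
```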

This project is likely to focus on stress as the primary use-case.  In a related project, we are developing and evaluating stress apps that measure and predict stress in specific situations, linking psychological assessment to physiological data harvested implicitly from wearables.  In a third project, we are implementing behaviour change interventions in digital health apps to reduce distress and increase eustress.  Work from all three projects will be integrated to develop maximally effective stress apps, tailored to individuals, that effectively measure, predict, and alter stress experience.

Alignment with industrial interests

This work will be of direct interest to the collaboration between Koa Health and the University of Glasgow to develop wellbeing services via digital health apps, including digital therapeutics.  Not only does this work attempt to better understand and design health apps, it has the central aim of implementing actual apps for use by clinicians, health professionals, and the general population.

[TOR18] Torous, John, Jennifer Nicholas, Mark E. Larsen, Joseph Firth, and Helen Christensen. “Clinical review of user engagement with mental health smartphone apps: evidence, theory and improvements.” Evidence-based mental health 21, no. 3 (2018): 116-119.

[HIR12] Hirsh, J. B., Kang, S. K., & Bodenhausen, G. V. (2012). Personalized persuasion: Tailoring persuasive appeals to recipients’ personality traits. Psychological science, 23(6), 578-581.

[CC14] Cobb-Clark, D. A., Kassenboehmer, S. C., & Schurer, S. (2014). Healthy habits: The connection between diet, exercise, and locus of control. Journal of Economic Behavior & Organization, 98, 1-28.

Developing a digital avatar that invites user engagement

Supervisors:
Philippe Schyns (School of Psychology) and Mary Ellen Foster (School of Computing Science)

Digital avatars can engage with humans to interact socially. However, before they do so, they are typically in a resting, default state. The question that arises is how we should design such digital avatars in their resting state so that they have a realistic appearance that promotes engagement with a human. We will combine methods from human psychophysics, computer graphics, machine vision and social robotics to design a digital avatar (presented in VR or on a computer screen) that looks to a human participant like a sentient being (e.g. with realistic appearance and spontaneous dynamic movements of the face and the eyes), and that can engage with humans before starting an interaction (i.e. tracking their presence, engaging with realistic eye contact, and so forth). Building on the strengths of digital avatar design in the Institute of Neuroscience and Psychology and of social robotics research in the School of Computing Science, this project will attempt to achieve the following scientific and technological goals:

  • Identify the default face movements (including eye movements) that produce a realistic sentient appearance.
  • Implement those movements on a digital avatar which can be displayed on a computer screen or in VR.
  • Use tracking software to detect human beings in the environment, follow their movements and engage with realistic eye contact (a minimal tracking sketch follows this list).
  • Develop models to link human behaviour with avatar movements to encourage engagement.
  • Evaluate the performance of the implemented models through deployment in labs and in public spaces.
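As a rough sketch of the tracking and eye-contact goal, the code below detects a face with OpenCV and converts its position in the camera frame into yaw/pitch targets that an avatar's eye controller could consume; the avatar-side interface and the angular ranges are assumptions, not part of the project specification.

```python
# Rough sketch: face detection with OpenCV, converted into gaze targets for an
# avatar's eyes. The angular ranges and the avatar-side API are hypothetical.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def gaze_target_from_frame(frame):
    """Return (yaw, pitch) in degrees for the avatar's eyes, or None if no face is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # track the largest face
    cx, cy = x + w / 2, y + h / 2
    frame_h, frame_w = gray.shape
    yaw = (cx / frame_w - 0.5) * 60.0     # assume roughly +/-30 degrees of eye yaw
    pitch = (0.5 - cy / frame_h) * 40.0   # and +/-20 degrees of pitch
    return yaw, pitch

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(gaze_target_from_frame(frame))  # would be sent to the avatar's eye controller
cap.release()
```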

You Never get a Second Chance to Make a First Impression – Establishing how best to align human expectations about a robot’s performance based on the robot’s appearance and behaviour

Supervisors:
Mary Ellen Foster (School of Computing Science) and Emily Cross (School of Psychology)

Aims and objectives:

  • A major aim of social robotics is to create embodied agents that humans can instantly and automatically understand and interact with, using the same mechanisms that they use when interacting with each other. While considerable research attention has been invested in this endeavour, it is still the case that when humans encounter robots, they need time to understand how the robot works; in other words, people need time to learn to read the signals the robot generates. People consistently have expectations that are far too high for the artificial agents they encounter, which often leads to confusion and disappointment.
  • If we can better understand human expectations about robot capabilities based on the robot’s appearance (and/or initial behaviours) and ensure that those are aligned with the actual robot abilities, this should accelerate progress in human-robot interaction, specifically in the domains of human acceptance of robots in social settings and cooperative task performance between humans and robots. This project will combine expertise in robotic design and the social neuroscience of how we perceive and interact with artificial agents to develop a socially interactive robot designed for use in public spaces that requires little or no learning or effort for humans to interact with, while carrying out tasks such as guidance, cooperative navigation, and interactive problem-solving.

Proposed methods:

  • Computing Science: System development and integration (Developing operational models of interactive behaviour and implementing them on robot platforms); deployment of robot systems in lab-based settings and in real-world public spaces
  • Psychology/Brain Science: Behavioural tasks (questionnaires and measures of social perception, such as the Social Stroop task), non-invasive mobile brain imaging (functional near infrared spectroscopy) to record human brain activity when encountering the artificial agent in question.

Likely outputs:

  • empirically-based principles for social robot design to optimize alignment between robot’s appearance, user expectations, and robot performance, based on brain and behavioural data
  • A publicly available, implemented, and validated robot system embodying these principles
  • Empirical research papers detailing findings for a computing science audience (e.g., ACM Transactions on Human-Robot Interaction), a psychology/neuroscience audience (e.g., Psychological Science, Cognition) and a general audience, drawing on the multidisciplinary aspects of the work (e.g., PNAS, Current Biology), as well as papers at appropriate conferences and workshops such as Human-Robot Interaction, Intelligent Virtual Agents, CHI, and similar.

[Fos17] Foster, M. E.; Gaschler, A.; and Giuliani, M. Automatically Classifying User Engagement for Dynamic Multi-party Human–Robot Interaction. International Journal of Social Robotics. July 2017.

[Fos16] Foster, M. E.; Alami, R.; Gestranius, O.; Lemon, O.; Niemelä, M.; Odobez, J.; and Pandey, A. K. The MuMMER project: Engaging human-robot interaction in real-world public spaces. In Proceedings of the Eighth International Conference on Social Robotics, 2016.

[Cro19] Cross, E. S., Riddoch, K. A., Pratts, J., Titone, S., Chaudhury, B. & Hortensius, R. (2019). A neurocognitive investigation of the impact of socialising with a robot on empathy for pain. Philosophical Transactions of the Royal Society B.

[Hor18] Hortensius, R. & Cross, E.S. (2018). From automata to animate beings: The scope and limits of attributing socialness to artificial agents. Annals of the New York Academy of Science: The Year in Cognitive Neuroscience.

Brain Based Inclusive Design

Supervisors:
Monika Harvey (School of Psychology) and Alessandro Vinciarelli (School of Computing Science)

It is clear to everybody that people differ widely, but the underlying assumption of current technology design is that all users are equal. The large cost of this is the exclusion of users who fall far from the average that technology designers use as their ideal abstraction (Holmes, 2019). In some cases, the mismatch is evident (e.g., a mouse typically designed for right-handed people is more difficult to use for left-handers) and attempts have been made to accommodate the differences. In other cases, the differences are more subtle and difficult to observe and, to the best of our knowledge, no attempt has yet been made to take them into account. This is the case, in particular, for change blindness (Rensink, 2004) and inhibition of return (Posner & Cohen, 1984), two brain phenomena that limit our ability to process stimuli presented too closely in space and time.

The overarching goal of the project is thus to design Human-Computer Interfaces capable of adapting to the limits of every user, in view of a fully inclusive design capable of putting every user at ease, i.e., enabling them to interact with technology according to their own processing speed and not according to the speed imposed by technology designers.

The proposed approach includes four steps:

  1. Development of methodologies for the automatic measurement of the phenomena described above through their effect on EEG signals (e.g., changes in the P1 and N1 components; McDonald et al., 1999) and on behavioural performance (e.g., increased/decreased accuracy, increased/decreased reaction times);
  2. Identification of the relationship between the phenomena above and observable factors such as the age, education level and computer familiarity of the user;
  3. Adaptation of the technology design to the factors above (see the sketch after this list);
  4. Analysis of the improvement in the users’ experience.
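As a toy illustration of step 3, the sketch below adapts the interval between successive interface changes to a user's measured reaction times, so that updates are never presented faster than that user can process them. The baseline, margin and bounds are hypothetical parameters, not values proposed by the project.

```python
# Sketch for step 3 (hypothetical parameters): adapt the inter-stimulus interval
# of an interface to the user's measured reaction times.
from statistics import median

def adapted_interval(reaction_times_ms, baseline_ms=250.0, margin=1.5,
                     min_ms=300.0, max_ms=2000.0):
    """Return a per-user delay (ms) between successive interface changes."""
    if not reaction_times_ms:
        return min_ms
    user_rt = median(reaction_times_ms)
    interval = margin * max(user_rt, baseline_ms)
    return min(max_ms, max(min_ms, interval))

print(adapted_interval([320, 410, 380]))   # slower user -> longer interval (570.0)
print(adapted_interval([210, 190, 230]))   # faster user -> floor applies (375.0)
```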

The main expected outcome is that technology will become more inclusive and capable of accommodating the individual needs of its users in terms of processing speed and ease of use. This will be particularly beneficial for those groups of users who, for different reasons, tend to be penalised in terms of processing speed, in particular older adults and special populations (e.g., children with developmental issues, stroke survivors, and related cohorts).

The project is of great industrial interest because, ultimately, improving the inclusiveness of technology design greatly increases user satisfaction, a crucial requirement for every company that aims to commercialise technology.

[HOL19] Holmes, K. (2019). Mismatch, MIT Press. 

[MCD99] McDonald, J., Ward, L.M. & Kiehl, A.H. (1999). An event-related brain potential study of inhibition of return. Perception and Psychophysics, 61, 1411–1423.

[POS84] Posner, M.I. & Cohen, Y. (1984). “Components of visual orienting”. In Bouma, H.; Bouwhuis, D. (eds.). Attention and performance X: Control of language processes. Hillsdale, NJ: Erlbaum. pp. 531–56. 

[RES04] Rensink, R.A. (2004). Visual Sensing without Seeing. Psychological Science, 15, 27-32. 

Modelling Conversational Facial Signals for Culturally Sensitive Artificial Agents

Supervisors:
Rachael Jack (Institute of Neuroscience and Psychology) and Gabriel Skantze (Furhat Robotics)

For spoken interactions, face-to-face meetings are often preferred, because the human face is highly expressive and can facilitate coordinated interactions. Embodied conversational agents with expressive faces therefore have the potential for smoother interactions than voice assistants. However, knowledge of how the face expresses these social signals – the “language” of facial expressions – is limited, with no coherent modelling framework (e.g., see Jack & Schyns, 2017). For example, current models focus primarily on basic emotions such as fear, anger and happiness, which are not suitable for everyday conversations or recognized cross-culturally (e.g., Jack, 2013). Instead, signals of affirmation, uncertainty, interest, and turn-taking in different cultures (e.g., Chen et al., 2015) are more relevant (e.g., Skantze, 2016). Conversational digital agents typically employ these signals in an ad hoc manner, with smiles or frowns manually inserted at speech-coordinated time points. However, this is costly, time consuming, and provides only a limited repertoire of, often Western-centric, face signals, which in turn restricts the utility of conversational agents.

To address this knowledge gap, this project will (a) Develop a modelling framework for conversationally relevant facial expressions in distinct cultures – East Asian and Western, (b) Develop methods to  automatically generate these facial expressions in conversational systems, and (c) Evaluate these models in different human-robot cultural interaction settings. This automatic modelling will coordinate with the agent’s speech (e.g. auto-inserting smiles at appropriate times), the user’s behaviour (e.g. directing gaze and raising eyebrows when the user starts speaking), and the agent’s level of understanding (e.g. frowning during low comprehension).
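A minimal sketch of how model-derived facial signals might be hooked into a conversational system is shown below: conversational events are mapped to gesture labels, with a culture parameter selecting between variants. The event names and gesture vocabulary are placeholders and do not reflect FurhatOS's actual API; in the project these mappings would come from the data-driven facial expression models rather than hand-written rules.

```python
# Illustrative rule sketch (not Furhat's API): map conversational events to facial
# gesture labels. Events, gestures and the culture switch are placeholders for
# model-driven, culture-specific outputs.
def facial_gesture(event: str, asr_confidence: float = 1.0, culture: str = "western"):
    """Return a gesture label to render on the agent's face, or None."""
    if event == "agent_greeting":
        return "broad_smile" if culture == "western" else "polite_smile"
    if event == "user_started_speaking":
        return "gaze_to_user_brow_raise"          # signal attention to the speaker
    if event == "agent_finished_utterance":
        return "soft_smile"
    if event == "asr_result" and asr_confidence < 0.4:
        return "frown_head_tilt"                  # signal low comprehension
    return None

for ev in ["agent_greeting", "user_started_speaking", "asr_result"]:
    print(ev, "->", facial_gesture(ev, asr_confidence=0.3, culture="east_asian"))
```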

We will employ state-of-the-art 3D capture of human-human interactions and psychological data-driven methods to model dynamic facial expressions (see Jack & Schyns, 2017). We will deploy these models using FurhatOS – a software platform for human-robot interactions – and the Furhat robot head, which has a highly expressive animated face with superior social signalling capacity compared to other platforms (Al Moubayed et al., 2013). The flexibility of Furhat’s display system, combined with state-of-the-art psychological-derived 3D face models will also enable exploration of other socially relevant facial characteristics, such as ethnicity, gender, and age (e.g., see Zhan et al., 2019).

The results will be highly relevant to companies developing virtual agents/social robots, such as Furhat Robotics. Skantze, Furhat Robotics co-founder/chief scientist, will facilitate impact of the results. The project will also inform fundamental knowledge of human-human and human-robot interactions by precisely characterizing how facial signals facilitate spoken interactions. We anticipate outputs in international psychology and computer science conferences (e.g., Society for Personality and Social Psychology; IEEE Automatic Face & Gesture Recognition) and high-profile scientific outlets (e.g., Nature Human Behaviour).  Jack is PI of a large-scale funded laboratory specializing in modelling facial expressions across cultures.

Year 1 (Master’s): Training in (a) programming human-robot interactions; (b) data-driven modelling of dynamic facial expressions.

Year 2 – 3: Data-driven modelling of dynamic conversational facial expressions in each culture.

Year 3 – 4: Application and evaluation of facial expression models in human-robot interaction scenarios.

References:

  1. Jack, R. E. & Schyns, P. G. (2017). Toward a social psychophysics of face communication. Annual review of  psychology, 68, 269-297.
  2. Jack, R. E. (2013). Culture and facial expressions of emotion. Visual Cognition, 21(9-10), 1248-1286.
  3. Chen, C., Garrod, O., Schyns, P., Jack, R. (2015). The face is the mirror of the cultural mind. Journal of Vision, 15(12), 928-928.
  4. Skantze, G. (2016). Real-time coordination in human-robot interaction using face and voice. Ai Magazine, 37(4), 19-31.
  5. Al Moubayed, S., Skantze, G., & Beskow, J. (2013). The Furhat back-projected humanoid head – lip reading, gaze and multi-party interaction. International Journal of Humanoid Robotics, 10(01), 1350005.
  6. Zhan, J., Liu, M, Garrod, O.G., Jack, R. E., & Schyns, P. G. (2020, October). A Generative Model of Cultural Face Attractiveness. In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents (pp. 1-3).

Watch out, autonomous vehicle about! Partnering older drivers with new driving technology

Supervisors:
Stephen Brewster (School of Computing Science) and Monika Harvey (School of Psychology)

• Main aims and objectives:
At present there are over 12 million people aged 65+ in the UK, but as yet there has been little investigation of how older drivers cope with advanced driving technologies such as head-up displays, augmented reality windscreens, or semi-autonomous vehicles. As drivers age, their cognitive and, in particular, spatial abilities change (Maerker et al. 2019; Learmonth et al., 2018), and the increased prevalence of diseases such as stroke further compounds these difficulties (Rossit et al., 2012), increasing accident risk.

In addition, if care is not taken, the increasing amount of information presented by advanced driving technologies can overload older drivers (Learmonth et al., 2017) and make driving even less safe. The aim of this project is instead to ‘partner’ the car with the older driver: we will investigate how the car can work with the older driver, augmenting and enhancing their spatial and cognitive abilities, and thus providing support rather than increased distraction and complication.

Using a simulator, the first stage of the work will look at measuring basic driving performance such as steering, gaze and responses to distractors, to allow us to build a good model for older drivers’ behaviour under different driving scenarios.

Two potential example areas will then be considered to support older drivers:
Augmented reality windscreens: these could adaptively provide an enhancement of the environment, other vehicles or obstacles on the road, thus making them easier to recognise for those with poorer vision. One example would be automatic obstacle detection with the car further enhancing edges to make them stand out more against the background. Additional modalities such as 3D audio and haptic cues could be further areas of investigation.

Semi-autonomous driving: A semi-autonomous NHTSA Level 3+ car could offer to take over the driving if the driver appears to need support, an option most likely to be taken up by older drivers, especially those with age-related diseases such as stroke. However, once in autonomous mode, vigilance is required so that the driver can take back control if needed. Maintaining vigilance is a known issue for older populations, so retaining enough situational awareness to take over the driving may well prove very difficult. We will study this problem and look at how the car can hand back control to the driver in a way that does not cause more problems.

At the same time we will investigate driver acceptance of these new technological aids to ensure that the tools we design are acceptable to older drivers.

• Proposed methods:
We will work with Brewster’s driving simulator which allows us to run controlled studies with inbuilt tools such as eye tracking, gesture tracking and driving performance measurements, along with in-car displays of different types. These can be run in either VR or on a large projection screen. For example, with eye-tracking we can monitor where drivers look in the scene and if they are neglecting particular areas, we can encourage them to move their gaze.

REFERENCES:
1. Learmonth, G., Maerker, G., McBride, N., Pellinen, P. and Harvey, M. (2018). Right-lateralised lane keeping in young and older British drivers. PLoS One, 13(9). doi:10.1371/journal.pone.0203549.
2. Learmonth, G., Benwell, C. S.Y., Thut, G. and Harvey, M. (2017). Age-related reduction of hemispheric lateralization for spatial attention: an EEG study. NeuroImage, 153, 139-151.
3. Maerker, G., Learmonth, G., Thut. G. and Harvey, M. (2019). Intra- and inter-task reliability of spatial attention measures in healthy older adults. PLoS One, 14(2), 1-21. doi:10.1371/journal.pone.0226424.
4. Rossit, S., McIntosh, R.D., Malhotra, P., Butler S.H., Muir, K. and Harvey, M. (2012) Attention in action: evidence from on-line corrections in left visual neglect. Neuropsychologia, 50, 1124-1135.

Social and Behavioural Markers of Hydration States

Supervisors:
Esther K. Papies (School of Psychology) and Matthew Chalmers (School of Computing Science)

Aims and Objectives.  This project will explore whether data derived from a person’s smartphone can be used to establish that person’s hydration status so that, in a well-guided and responsive way, a system can prompt the person to drink water.  Many people are frequently underhydrated, which has negative physical and mental health consequences.  Low hydration states can manifest in impaired cognitive and physical performance, experiences of fatigue or lethargy, and negative affect (e.g., Muñoz et al., 2015; Perrier et al., 2020).  Here, we will establish whether such social and behavioural markers of dehydration can be inferred from a user’s smartphone, and which of these markers, or their combination, are the best predictors of hydration state (Aim 1).  Sophisticated user models of hydration states could also be adapted over time, and help to predict possible instances of dehydration in advance (Aim 2).  This would be useful because many individuals find it difficult to identify when they need to drink, and could benefit from clear, personalized indicators of dehydration.  In addition, smartphones could then be used to prompt users to drink water, once a state of dehydration has been detected, or when dehydration is likely to occur.  Thus, we will also test how hydration information should be communicated to users to prompt attitude and behaviour change and, ultimately, improve hydration behaviour (Aim 3).  Throughout, we will implement data collection, modelling, and feedback on smartphones in a secure way that respects and protects a user’s privacy.

Background and Novelty.  The data that can be derived from smartphones (and related digital services) range from low-level sensor data (e.g. accelerometers) to patterns of app usage and social interaction. As such, ‘digital phenotyping’ is a rich source of information on an individual’s social and physical behaviours, and affective states. Some recent survey papers in this burgeoning field include Thieme et al. on machine learning in mental health (2020), Chancellor and de Choudhury on using social media data to predict mental health status (2020), Melcher et al. on digital phenotyping of college students (2020), and Kumar et al. on toolkits and frameworks for data collection (2020).

Here, we propose that these types of data may also reflect a person’s hydration state. Part of the project’s novelty lies in exploring a wider range of phone-derived data as a resource for system agency than prior work in this general area, as well as in pioneering work specifically on hydration.  We will relate cognitive and physical performance, fatigue, lethargy and affect to patterns in phone-derived data.  We will test whether such data can be harnessed to provide people with personalized, external, actionable indicators of their physiological state, i.e. to facilitate useful behaviour change. This would have clear advantages over existing indicators of dehydration, such as thirst cues or urine colour, which are easy to ignore or override, and/or difficult for individuals to interpret (Rodger et al., 2020).

Methods.  We will build on an existing mobile computing framework (e.g. AWARE-Light) to collect reports of a participant’s fluid intake, and to integrate them with phone-derived data.  We will attempt to model users’ hydration states, and validate this against self-reported thirst and urine frequency, and self-reported and photographed urine colour (Paper 1).  We will then examine in prospective studies if these models can be used to predict future dehydration states (Paper 2).  Finally, we will examine effective ways to provide feedback and prompt water drinking, based on individual user models (Paper 3).
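As a minimal modelling sketch for the first step (with synthetic data and hypothetical features), the code below predicts a self-reported low-hydration label from daily phone-derived features using cross-validated logistic regression; richer, adaptive user models would build on such a baseline.

```python
# Minimal modelling sketch for Paper 1 (synthetic data, hypothetical features):
# predict a self-reported low-hydration label from daily phone-derived features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_days = 400
features = np.column_stack([
    rng.normal(size=n_days),        # typing speed relative to personal baseline
    rng.normal(size=n_days),        # accelerometer-derived activity level
    rng.normal(size=n_days),        # negative-affect score from app interactions
])
# Synthetic labels loosely tied to the features, standing in for the self-reports
# (thirst, urine colour) collected via the AWARE-Light-based app.
logits = -0.8 * features[:, 0] + 0.6 * features[:, 2] + rng.normal(scale=1.0, size=n_days)
low_hydration = (logits > 0).astype(int)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, features, low_hydration, cv=5)
print("cross-validated accuracy:", scores.mean().round(3))
```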

Outputs.  This project will lead to presentations and papers at both Computer Science and Psychology conferences outlining the principles of using sensing data to understand physiological states, and to facilitate health behaviour change.

Impact.  Results from this work will have implications for the use of a broad range of data in health behaviour interventions across domains, as well as for our understanding of the processes underlying behaviour change. This project would also outline new research directions for studying the effects of hydration in daily life.

References

Chancellor, S., & De Choudhury, M. (2020). Methods in predictive techniques for mental health status on social media: a critical review. Npj Digital Medicine, 3(1), 1–11. http://doi.org/10.1038/s41746-020-0233-7

Melcher, J., Hays, R., & Torous, J. (2020). Digital phenotyping for mental health of college students: a clinical review. Evidence Based Mental Health, 4, ebmental–2020–300180–6. http://doi.org/10.1136/ebmental-2020-300180

Muñoz, C. X., Johnson, E. C., McKenzie, A. L., Guelinckx, I., Graverholt, G., Casa, D. J., … Armstrong, L. E. (2015). Habitual total water intake and dimensions of mood in healthy young women. Appetite, 92, 81–86. https://doi.org/10.1016/j.appet.2015.05.002

Rodger, A., Wehbe, L., & Papies, E. K. (2020). “I know it’s just pouring it from the tap, but it’s not easy”: Motivational processes that underlie water drinking. Under Review. https://psyarxiv.com/grndz

Perrier, E. T., Armstrong, L. E., Bottin, J. H., Clark, W. F., Dolci, A., Guelinckx, I., Iroz, A., Kavouras, S. A., Lang, F., Lieberman, H. R., Melander, O., Morin, C., Seksek, I., Stookey, J. D., Tack, I., Vanhaecke, T., Vecchio, M., & Péronnet, F. (2020). Hydration for health hypothesis: A narrative review of supporting evidence. European Journal of Nutrition. https://doi.org/10.1007/s00394-020-02296-z

Thieme, A., Belgrave, D., & Doherty, G. (2020). Machine Learning in Mental Health. ACM Transactions on Computer-Human Interaction (TOCHI), 27(5), 1–53. http://doi.org/10.1145/3398069