Projects

The application process for entry in 2022 is now closed (the deadline was 25 February 2022). Please see the following page for instructions on how to apply: https://socialcdt.org/how-to-apply/
THE PROJECTS AVAILABLE FOR ENTRY IN ACADEMIC YEAR 2022 CAN BE VIEWED BELOW.

Should you have any enquiries regarding project applications please contact us at social-cdt@glasgow.ac.uk.

AI assistive tools to predict mental wellbeing within care homes

Supervisors:
Marwa Mahmoud (School of Computing Science) and Emily Cross (School of Psychology)

Cultivating and maintaining mental health is a significant challenge for many residents in care homes. Depression, loneliness, self-isolation and low levels of life satisfaction are common among elderly care home residents, who are also more likely to suffer from other physical health problems than community-dwelling elderly citizens. Many care homes do not (and cannot) provide one-on-one, person-centred care due to lack of resources. This project aims to build data-driven AI models using multimodal machine learning to create a comprehensive picture of care home residents’ mental health. The aim will be to use these models to predict the emergence of potential mental health problems as early as possible, based on analysing multimodal data (audio, video, wearables) collected from care homes.

Main objectives and novelty.

There has been increasing interest in the automatic detection of mental health problems over the past several years, mainly focussing on (1) signals collected from mobile phones and wearables; and (2) younger, tech-savvy populations. Audio-visual signals, however, provide a vast array of extra cues that can improve inference models (Lin et al. 2021, Zhang et al. 2020) but are currently underused.
The main aims of this project are to:

  1. Build a dataset of structured interviews collected at care homes using multimodal sensors (audio, video and wearable sensors).
  2. Devise novel machine learning models that extend state-of-the-art methods to analyse the collected multimodal signals and use them to predict mental health conditions.
  3. Validate and evaluate the accuracy of these models via quantitative and qualitative measures of loneliness, depression and anxiety among aged care residents, and build a clear picture of aged care residents’ feelings and personal experience with this technology through qualitative interview methods.

Methods.

This project will use experimental methods for collecting, validating and evaluating multimodal data related to mental health (Lin et al. 2021, Laban et al. 2021, 2022). Using the collected data, it will also build on and extend state-of-the-art approaches to multimodal data representation and feature selection, devising inference models that predict and correlate with mental health conditions and risks identified within care homes.
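To make the modelling step more concrete, here is a minimal, purely illustrative sketch of a late-fusion network over audio, video and wearable features; the feature dimensions, modality encoders and binary label are assumptions for illustration, not the project's actual design.

```python
# Illustrative sketch only: a simple late-fusion network for multimodal
# mental-health inference. Feature dimensions, modalities and labels are
# hypothetical placeholders, not the project's actual design.
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, audio_dim=128, video_dim=512, wearable_dim=32,
                 hidden=64, n_classes=2):
        super().__init__()
        self.audio = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.video = nn.Sequential(nn.Linear(video_dim, hidden), nn.ReLU())
        self.wear = nn.Sequential(nn.Linear(wearable_dim, hidden), nn.ReLU())
        # Concatenate per-modality embeddings and classify.
        self.head = nn.Linear(3 * hidden, n_classes)

    def forward(self, audio_x, video_x, wear_x):
        z = torch.cat([self.audio(audio_x), self.video(video_x),
                       self.wear(wear_x)], dim=-1)
        return self.head(z)

# Example forward pass with random features for a batch of 4 interviews.
model = LateFusionClassifier()
logits = model(torch.randn(4, 128), torch.randn(4, 512), torch.randn(4, 32))
```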

Likely outcome and impact.

Nearly half a million UK residents currently live in care homes, representing 4% of the population older than 65 and 15% of those aged 85 and over. The onset of the global coronavirus pandemic, and the resulting restrictions on face-to-face meetings with friends, family and loved ones, has highlighted how fragile human mental health is when faced with even short-term restrictions on socialising, and these effects have been experienced even more acutely by older individuals living in residential care settings (Cross and Henschel 2020). If we achieve the goals of this project, we will be able to develop tools that assist residents as well as care home staff in identifying when individuals are at risk of deteriorating mental health and/or in need of extra 1:1 care or companionship from staff.

References:

[1] Lin, W., Orton, I., Li, Q., Pavarini, G. and Mahmoud, M. (2021). Looking At The Body: Automatic Analysis of Body Gestures and Self-Adaptors in Psychological Distress. IEEE Transactions on Affective Computing.

[2] Zhang, Z., Lin, W., Liu, M. and Mahmoud, M. (2020). Multimodal Deep Learning Framework for Mental Disorder Recognition. IEEE International Conference on Automatic Face and Gesture Recognition.

[3] Laban, G., Ben-Zion, Z. & Cross, E. S. (2022). Social robots for supporting post-traumatic stress disorder diagnosis and treatment. Frontiers in Psychiatry (in press).

[4] Laban, G., George, J.-N., Morrison, V. & Cross, E. S. (2021). Tell me more! Assessing interactions with social robots from speech. Paladyn: Journal of Behavioral Robotics, 12(1), 136-159.

[5] Cross, E. and Henschel, A. (2020) The neuroscience of loneliness – and how technology is helping us.
https://theconversation.com/the-neuroscience-of-loneliness-and-how-technology-is-helping-us-136093

An adaptive agent dialogue framework for driving sustainable dietary behaviour change

Supervisors:
Mathieu Chollet (School of Computing Science) and Esther Papies (School of Psychology)

Context

The food system contributes 34% of greenhouse gas emissions, the majority of which come from animal agriculture [1], which also contributes disproportionately to deforestation, water scarcity, biodiversity loss, and ecosystem pollution [2]. Despite this, most consumers are resistant to substantially reducing their meat consumption, even when considering the accompanying health benefits. Efforts to improve eating habits are traditionally approached through behaviour change counselling sessions with dieticians. Such approaches are time- and resource-consuming, while digital intervention alternatives lack the essential component of human interaction and social support that drives the effectiveness of behaviour change counselling [3]. Virtual agents hold the potential to fill that gap; however, past approaches have typically been only loosely coupled to existing social science on behaviour change [4].

Objectives and novelty

The project will focus on designing a virtual agent dialogue framework for longitudinal behaviour change interactions rooted in established psychological theory. The adaptive dialogue agent will guide users through their journey towards dietary change, interspersing activities from behaviour change programmes with social dialogue aimed at reinforcing the user-agent relationship while simultaneously probing users’ preferences and attitudes. These preference-inferring exchanges will help maintain and update user models, including idiosyncratic sensitivities to variables identified as key drivers of the transition to more plant-based foods [5]: taste expectations (i.e. meat-based foods are expected to be tastier), availability (i.e. plant-based foods are less widely available in many settings), skills (many consumers don’t know how to prepare meat-free meals), identity (vegetarian/vegan social identities are not seen as positive by many consumers and contribute to the polarization of perspectives on sustainable eating) and social norms (consuming meat is seen as normative, and these norms are communicated through features of the food environment and others’ behaviour). These user models will in turn shape task-related and relationship-building dialogue, altering elements such as the agent’s food presentation strategies. A key research challenge will be to design dialogue policies that reconcile concurrent but interlinked dialogue goals: preference inference, relationship building, and delivering task-related dialogue.

Methods & Timeline

After a literature review, the student will extend an existing socially-aware recipe recommender agent framework developed at UofG [6] with a baseline rule-based dialogue model for inferring user preferences and attitudes and integrating these variables to alter subsequent dialogue. The model will be used to collect initial data and train further model iterations, considering supervised and reinforcement learning approaches. The resulting dialogue models will be deployed in a series of user experiments to evaluate their effectiveness at promoting user engagement and motivation, inferring accurate user models, and driving effective and long-lasting behaviour change.
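As a purely illustrative sketch (under assumed driver names, scales and update rules, not the framework of [6]), a baseline rule-based user model of the kind described above might look like this:

```python
# Illustrative sketch: a minimal rule-based user model for a dietary-change
# dialogue agent. The drivers, scales and update rule are hypothetical
# placeholders, not the framework described in [6].
from dataclasses import dataclass, field

DRIVERS = ["taste", "availability", "skills", "identity", "social_norms"]

@dataclass
class UserModel:
    # Each driver is scored in [0, 1]; higher = less of a barrier.
    scores: dict = field(default_factory=lambda: {d: 0.5 for d in DRIVERS})

    def update(self, driver: str, evidence: float, rate: float = 0.3):
        """Nudge a driver score towards observed evidence (also in [0, 1])."""
        old = self.scores[driver]
        self.scores[driver] = (1 - rate) * old + rate * evidence

    def next_dialogue_move(self) -> str:
        """Address the weakest driver; otherwise build the relationship."""
        driver, score = min(self.scores.items(), key=lambda kv: kv[1])
        if score < 0.4:
            return f"task:address_{driver}"  # e.g. suggest an easy recipe for low 'skills'
        return "social:relationship_building"

model = UserModel()
model.update("skills", evidence=0.1)   # user says they can't cook meat-free meals
print(model.next_dialogue_move())      # -> "task:address_skills"
```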

Outputs and impact

The project is expected to contribute novel dialogue models and policies for human-agent interactions as well as methodological and experimental insights on technologically-mediated behaviour change frameworks. The project’s findings may further feed back into theory formation on habit change and maintenance. The project will have societal impact, both locally through deployments of the resulting behaviour change framework, and further afield through dissemination with academic and institutional partners.

References:

[1] Xu, X., Sharma, P., Shu, S., Lin, T.-S., Ciais, P., Tubiello, F. N., Smith, P., Campbell, N., & Jain, A. K. (2021). Global greenhouse gas emissions from animal-based foods are twice those of plant-based foods. Nature Food, 1–9. https://doi.org/10.1038/s43016-021-00358-x

[2] Poore, J., & Nemecek, T. (2018). Reducing food’s environmental impacts through producers and consumers. Science, 360(6392), 987–992. https://doi.org/10.1126/science.aaq0216

[4] Graça, J., Godinho, C. A., & Truninger, M. (2019). Reducing meat consumption and following plant-based diets: Current evidence and future directions to inform integrated transitions. Trends in Food Science & Technology, 91, 380–390. https://doi.org/10.1016/j.tifs.2019.07.046

[3] Schippers, M., et al. “A meta‐analysis of overall effects of weight loss interventions delivered via mobile phones and effect size differences according to delivery mode, personal contact, and intervention intensity and duration.” Obesity reviews 18.4 (2017): 450-459.

[4] Bickmore, Timothy W., et al. “A randomized controlled trial of an automated exercise coach for older adults.” Journal of the American Geriatrics Society 61.10 (2013): 1676-1683.

[5] Papies, E. K., Johannes, N., Daneva, T., Semyte, G., & Kauhanen, L.-L. (2020). Using consumption and reward simulations to increase the appeal of plant-based foods. Appetite, 155, 104812. https://doi.org/10.1016/j.appet.2020.104812

[6] Florian Pecune, Lucile Callebert, and Stacy Marsella. 2020. A Socially-Aware Conversational Recommender System for Personalized Recipe Recommendations. In Proceedings of the 8th International Conference on Human-Agent Interaction (HAI ’20). Association for Computing Machinery, New York, NY, USA, 78–86. DOI:https://doi.org/10.1145/3406499.3415079

Better Look Away :-): Using AI methods to understand Gaze Aversion in Real and Mixed Reality Settings (exploring the Tell-Tale Task)

Supervisors:
Monika Harvey (School of Psychology) and Mohamed Khamis (School of Computing Science)

Main aims and objectives

The eyes are said to be a window to the brain [1]. The way we move our eyes reflects our cognitive processes and visual interests, and we use our eyes to coordinate social interactions (e.g., taking turns in conversations) [2]. While there is a lot of research on attentive user interfaces that respond to the user’s gaze [3], and on directing the user’s gaze towards targets [4], there is relatively little work on understanding and eliciting gaze aversion. This is unfortunate, as the ability to not look is a classic psychological and neural measure of how much voluntary control people have over their environment [5]. In fact, people often avert their eyes to alleviate a negative social experience (such as avoiding a fight), and in some cultures looking someone directly in the eyes can be seen as disrespectful. Efficient gaze aversion is thus an essential adaptive response, and its brain correlates have been mapped extensively [6]. The main aim of this project is to investigate and enhance/train gaze aversion using virtual environments. Two potential examples will be considered in the first instance: first, cultural gaze aversion training to accustom users to cultural norms before they encounter such situations; second, integrating gaze elicitation and aversion into augmented reality glasses to nudge the user to avert (or instead direct, as appropriate) their gaze while encountering, for example, an aggressive or socially desirable scenario. Another example could be the use of gaze aversion in mixed reality applications: in particular, guiding the user’s gaze and nudging them to look at some targets and away from others can help guide them in virtual environments, or ensure they see important elements of 360° videos.

Proposed methods

This research is at the intersection of eye tracking, psychology and human-computer interaction. It will involve both empirical and technical work, exploring the opportunities and challenges of detecting and eliciting intentional and unintentional gaze aversion. Using an eye tracker as well as a virtual reality headset, we will a) investigate and evaluate methods for eliciting explicit and implicit gaze aversion, guided by previous research on gaze direction [4,6]; b) study the impact of intentional and unintentional gaze aversion on the brain by measuring its effect on saccadic reaction times, error rates, and other metrics; and c) utilize the findings and developed methods in one or more application areas. Programming skills are required for this project, and previous experience in conducting controlled empirical studies is also a plus.

Likely outputs and impact

The results will inform knowledge and generate state-of-the-art tools on how best to design virtual environments that optimize and measure eye-movement control. The topic spans Psychology, Neuro- and Computing Science, and we thus envisage publications in journals and conferences that reach a wide academic audience spanning a range of expertise (e.g. Psychological Science, PNAS, ACM CHI, PACM IMWUT, ACM TOCHI).
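To give a flavour of the low-level eye-tracking processing such methods would build on, the following sketch flags gaze-aversion episodes as intervals in which gaze samples leave a face area of interest; the AOI coordinates, sampling rate and minimum duration are assumed values, not project specifications.

```python
# Illustrative sketch: detecting gaze-aversion intervals from eye-tracker
# samples as periods when gaze leaves a face area-of-interest (AOI).
# The AOI, sampling rate and minimum duration are hypothetical values.
import numpy as np

def aversion_intervals(gaze_xy, face_aoi, fs=120, min_dur=0.15):
    """gaze_xy: (N, 2) screen coordinates; face_aoi: (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = face_aoi
    outside = ~((gaze_xy[:, 0] >= x0) & (gaze_xy[:, 0] <= x1) &
                (gaze_xy[:, 1] >= y0) & (gaze_xy[:, 1] <= y1))
    intervals, start = [], None
    for i, out in enumerate(outside):
        if out and start is None:
            start = i
        elif not out and start is not None:
            if (i - start) / fs >= min_dur:          # ignore blinks/noise
                intervals.append((start / fs, i / fs))
            start = None
    if start is not None and (len(outside) - start) / fs >= min_dur:
        intervals.append((start / fs, len(outside) / fs))
    return intervals  # list of (onset_s, offset_s) aversion episodes

# Example: 2 s of synthetic gaze data at 120 Hz on a 1920x1080 screen.
gaze = np.random.uniform(0, 1080, size=(240, 2))
print(aversion_intervals(gaze, face_aoi=(800, 400, 1120, 700)))
```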

References

[1] Ellis, S., Candrea, R., Misner, J., Craig, C. S., Lankford, C. P., & Hutchinson, T. E. (1998, June). Windows to the soul? What eye movements tell us about software usability. In Proceedings of the usability professionals’ association conference (pp. 151-178).

[2] Majaranta, P., & Bulling, A. (2014). Eye tracking and eye-based human–computer interaction. In Advances in physiological computing (pp. 39-65). Springer, London.

[3] Khamis, M., Alt, F., & Bulling, A. (2018, September). The past, present, and future of gaze-enabled handheld mobile devices: Survey and lessons learned. In Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services (pp. 1-17).

[4] Rothe, S., Althammer, F., & Khamis, M. (2018, November). GazeRecall: Using gaze direction to increase recall of details in cinematic virtual reality. In Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia (pp. 115-119).

[5] Butler, S.H., Rossit, R., Gilchrist, I.D., Ludwig, C.J., Olk, B., Muir, R., Reeves, I. and Harvey, M. (2009) Non-lateralised deficits in anti-saccade performance in patients with hemispatial neglect. Neuropsychologia, 47, 2488-2495.

[6] Salvia, E., Harvey M., Nazarian, B. and Grosbras, M-H. (2020). Social perception drives eye-movement related brain activity: evidence from pro- and anti-saccades to faces. Neuropsychologia, 139, 107360.

Brain Based Inclusive Design

Supervisors:
Monika Harvey (School of Psychology) and Alessandro Vinciarelli (School of Computing Science)

It is clear to everybody that people differ widely, but the underlying assumption of current technology designs is that all users are equal. The large cost of this is the exclusion of users who fall far from the average that technology designers use as their ideal abstraction (Holmes, 2019). In some cases the mismatch is evident (e.g., a mouse typically designed for right-handed people is more difficult to use for left-handers) and attempts have been made to accommodate the differences. In other cases the differences are more subtle and difficult to observe, and no attempt has been made, to the best of our knowledge, to take them into account. This is the case, in particular, for change blindness (Rensink, 2004) and inhibition of return (Posner & Cohen, 1984), two brain phenomena that limit our ability to process stimuli presented too closely in space and time. The overarching goal of the project is thus to design Human-Computer Interfaces capable of adapting to the limits of every user, in view of a fully inclusive design capable of putting every user at ease, i.e., enabling them to interact with technology according to their processing speed and not according to the speed imposed by technology designers. The proposed approach includes four steps:

  1. Development of methodologies for the automatic measurement of the phenomena described above through their effect on EEG signals (e.g., changes in the P1 and N1 components; McDonald et al., 1999) and on behavioural performance (e.g., increased/decreased accuracy, increased/decreased reaction times), as illustrated in the sketch after this list;
  2. Identification of the relationship between the phenomena above and observable factors such as the age, education level, computer familiarity, etc. of the user;
  3. Adaptation of the technology design to the factors above;
  4. Analysis of the improvement of the users’ experience.
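As a purely illustrative sketch of step 1, the fragment below estimates P1/N1 amplitudes from stimulus-locked EEG epochs; the sampling rate, component windows and synthetic data are assumptions.

```python
# Illustrative sketch: measuring P1/N1 amplitudes from epoched EEG, as one
# possible way to quantify effects of change blindness / inhibition of
# return on early visual components. Sampling rate, time windows and data
# are hypothetical.
import numpy as np

def erp_components(epochs, fs=500, baseline_s=0.2):
    """epochs: (n_trials, n_samples) single-channel EEG, stimulus-locked,
    including a pre-stimulus baseline of `baseline_s` seconds."""
    evoked = epochs.mean(axis=0)                       # average over trials
    t = np.arange(evoked.size) / fs - baseline_s       # time axis in seconds
    evoked = evoked - evoked[t < 0].mean()             # baseline correction
    p1_win = (t >= 0.08) & (t <= 0.13)                 # ~80-130 ms positivity
    n1_win = (t >= 0.13) & (t <= 0.20)                 # ~130-200 ms negativity
    return {"P1_amplitude": evoked[p1_win].max(),
            "N1_amplitude": evoked[n1_win].min()}

# Example with synthetic data: 40 trials, 0.2 s baseline + 0.5 s post-stimulus.
rng = np.random.default_rng(0)
epochs = rng.normal(0, 1, size=(40, int(0.7 * 500)))
print(erp_components(epochs))
```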

The main expected outcome is that technology will become more inclusive and capable of accommodating the individual needs of its users in terms of processing speed and ease of use. This will be particularly beneficial for those groups of users that, for different reasons, tend to be penalised in terms of processing speed, in particular older adults and special populations (e.g., children with developmental issues, stroke survivors, and related cohorts). The project is of great industrial interest because, ultimately, improving the inclusiveness of technology design greatly increases user satisfaction, a crucial requirement for every company that aims to commercialise technology.

References:

[HOL19] Holmes, K. (2019). Mismatch, MIT Press. 

[MCD99] McDonald, J., Ward, L. M. & Kiehl, A. H. (1999). An event-related brain potential study of inhibition of return. Perception and Psychophysics, 61, 1411–1423.

[POS84] Posner, M.I. & Cohen, Y. (1984). “Components of visual orienting”. In Bouma, H.; Bouwhuis, D. (eds.). Attention and performance X: Control of language processes. Hillsdale, NJ: Erlbaum. pp. 531–56. 

[RES04] Rensink, R.A. (2004). Visual Sensing without Seeing. Psychological Science, 15, 27-32. 

Bridging the Uncanny Valley with Decoded Neurofeedback

Supervisors:
Frank Pollick (School of Psychology) and Fani Deligianni (School of Computing Science)

A problem with artificial characters that appear nearly human in appearance is that they can sometimes lead users to report that they feel uncomfortable, and that the character is creepy. An explanation for this phenomenon comes from the Uncanny Valley Effect (UVE), which holds that characters approaching human likeness elicit a strong negative response (Mori et al., 2012; Pollick, 2009). Empirical research into the UVE has grown over the past 15 years, and the conditions needed to produce a UVE and reliably measure its effect have been extensively examined (Diel & MacDorman, 2021). These empirical studies inform design standards for artificial characters (Lay et al., 2016), but deep theoretical questions of why the UVE exists and what its underlying mechanisms are remain elusive.

One technique that has shown promise in answering these questions is neuroimaging, where brain measurements are obtained while the UVE is experienced (Saygin et al., 2012). In this research we propose to use the technique of realtime fMRI neurofeedback, which allows fMRI experiments to go beyond correlational evidence by enabling the manipulation of brain processing to study the effect of brain state on behaviour. In particular, we plan to use the technique of decoded neurofeedback (DecNef), which employs methods of machine learning to build a decoder of brain activity. Previous experiments have used DecNef to alter facial preferences (Shibata et al., 2016), and this study by Shibata and colleagues will guide our efforts to develop a decoder that can be used during fMRI scanning to influence how the UVE is experienced. It is hoped that these experiments will reveal the brain circuits involved in experiencing the UVE and lead to a deeper theoretical understanding of its basis, which can be exploited in the design of successful artificial characters. The project will develop skills in 1) the use of animation tools to create virtual characters, 2) the design and conduct of psychological assessments of people’s attitudes and behaviours towards these characters, 3) the use of machine learning in the design of decoded neurofeedback algorithms, and finally 4) how to perform realtime fMRI neurofeedback experiments.
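To illustrate the machine learning component of decoded neurofeedback in general terms (not the specific protocol of Shibata et al.), the sketch below trains a multi-voxel pattern decoder and converts its output into a feedback score; data shapes and labels are hypothetical.

```python
# Illustrative sketch of the machine-learning component of decoded
# neurofeedback: train a decoder on multi-voxel activity patterns, then
# turn its probability for a target state into a feedback score shown to
# the participant. Data shapes and labels are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Offline decoder-construction session: 200 trials x 500 voxels,
# labelled 1 = "uncanny" percept, 0 = "comfortable" percept (assumed labels).
X_train = rng.normal(size=(200, 500))
y_train = rng.integers(0, 2, size=200)

decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
decoder.fit(X_train, y_train)

def feedback_score(current_pattern):
    """Map the decoded likelihood of the target state to a 0-100 feedback value."""
    p_target = decoder.predict_proba(current_pattern.reshape(1, -1))[0, 1]
    return int(round(100 * p_target))

# During (simulated) neurofeedback, each new volume yields a feedback score.
print(feedback_score(rng.normal(size=500)))
```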

  1. Diel, A., & MacDorman, K. F. (2021). Creepy cats and strange high houses: Support for configural processing in testing predictions of nine uncanny valley theories. Journal of Vision.
  2. Lay, S., Brace, N., Pike, G., & Pollick, F. (2016). Circling around the uncanny valley: Design principles for research into the relation between human likeness and eeriness. i-Perception, 7(6), 2041669516681309.
  3. Mori, M., MacDorman, K. F., & Kageki, N. (2012). The uncanny valley [from the field]. IEEE Robotics & Automation Magazine, 19(2), 98-100. (Original work published in 1970).
  4. Pollick, F. E. (2009). In search of the uncanny valley. In International Conference on User Centric Media (pp. 69-78). Springer, Berlin, Heidelberg.
  5. Saygin, A. P., Chaminade, T., Ishiguro, H., Driver, J., & Frith, C. (2012). The thing that should not be: predictive coding and the uncanny valley in perceiving human and humanoid robot actions. Social cognitive and affective neuroscience, 7(4), 413-422.
  6. Shibata, K., Watanabe, T., Kawato, M., & Sasaki, Y. (2016). Differential activation patterns in the same brain region led to opposite emotional states. PLoS biology, 14(9), e1002546.

Deep Learning feature extraction for social interaction prediction in movies and visual cortex

Supervisors:
Lars Muckli (School of Psychology) and Fani Deligianni (School of Computing Science)

While watching a movie, a viewer is immersed in the spatiotemporal structure of the movie’s audiovisual and high-level conceptual content [Raz19]. The nature of movies induces a natural waxing and waning of more and less socially immersive content. This immersion can be exploited during brain imaging experiments to emulate as closely as possible everyday human life experience, including the brain processes involved in social perception. The human brain is a prediction machine: in addition to receiving sensory information, it actively generates sensory predictions. It implements this by creating internal models of the world which are used to predict upcoming sensory inputs. This basic but powerful concept is used in several studies in Artificial Intelligence (AI) to perform different types of prediction: from intermediate video frames for video interpolation [Bao19], to irregularity detection [Sabokrou18], to future sound prediction [Oord18]. Despite different AI studies focusing on how to use visual features to detect and track actors in a movie [Afouras20], it is not clear how cortical networks for social cognition in the brain engage layers of the visual cortex to process the social interaction cues occurring between actors. Several studies suggest that biological motion recognition (the visual processing of others’ actions) is central to understanding interactions between agents and involves the integration of top-down social cognition with bottom-up visual processing.

We will use cortical layer-specific fMRI at Ultra High Field to read brain activity during movie stimulation. Using the latest advances in Deep Learning [Bao19, Afouras20], we will study how the interaction between two people in a movie is processed, analysing the predictions that occur between frames. The comparison between the two representation sets (the analysis of the movie video with Deep Learning and its response measured within the brain) will be performed through model comparison with Representational Similarity Analysis (RSA) [Kriegeskorte08]. The work and its natural extensions will help clarify how the early visual cortex is responsible for guiding attention in social scene understanding.

The student will spend time in both domains: studying and analysing state-of-the-art methods in pose estimation and scene understanding in Artificial Intelligence, and, in brain imaging, learning how to perform a brain imaging study with fMRI, from data collection and understanding to analysis methods. These two fields will provide a solid background in both brain imaging and artificial intelligence, teaching the student the ability to transfer skills and draw conclusions across domains.
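As an illustration of the RSA model comparison mentioned above, the sketch below correlates a representational dissimilarity matrix built from deep network features with one built from voxel patterns; the feature dimensions and data are placeholders.

```python
# Illustrative sketch of Representational Similarity Analysis (RSA):
# compare a representational dissimilarity matrix (RDM) built from deep
# network features of movie frames with an RDM built from voxel patterns
# for the same frames. Shapes and data are hypothetical.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_frames = 50
dnn_features = rng.normal(size=(n_frames, 2048))   # e.g. penultimate-layer activations
brain_patterns = rng.normal(size=(n_frames, 300))  # voxels in one cortical layer/ROI

# RDMs: pairwise correlation distance between conditions (here, frames).
rdm_dnn = pdist(dnn_features, metric="correlation")
rdm_brain = pdist(brain_patterns, metric="correlation")

# Model comparison: rank correlation between the two RDMs.
rho, p = spearmanr(rdm_dnn, rdm_brain)
print(f"RSA correlation: rho={rho:.3f}, p={p:.3f}")
```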

References:

[Afouras20] Afouras, T., Owens, A., Chung, J. S., & Zisserman, A. (2020). Self-supervised learning of audio-visual objects from video. European Conference on Computer Vision (ECCV 2020).

[Bao19] Bao, W., Lai, W. S., Ma, C., Zhang, X., Gao, Z., & Yang, M. H. (2019). Depth-aware video frame interpolation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 3703-3712).

[Kriegeskorte08] Kriegeskorte, N., Mur, M., & Bandettini, P. A. (2008). Representational similarity analysis-connecting the branches of systems neuroscience. Frontiers in systems neuroscience, 2, 4.

[Oord18] Oord, A. V. D., Li, Y., & Vinyals, O. (2018). Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748.

[Raz19] Raz, G., Valente, G., Svanera, M., Benini, S., & Kovács, A. B. (2019). A Robust Neural Fingerprint of Cinematic Shot-Scale. Projections, 13(3), 23-52.

[Sabokrou18] Sabokrou, M., Pourreza, M., Fayyaz, M., Entezari, R., Fathy, M., Gall, J., & Adeli, E. (2018, December). Avid: Adversarial visual irregularity detection. In Asian Conference on Computer Vision (pp. 488-505). Springer, Cham.

Designing Mindful Intervention with Therapeutic Music on Earables to Manage Occupational Fatigue

Supervisors:
Fahim Kawsar (Nokia Bell Labs) and Tanaya Guha (School of Computing Science)

Remember the moments when you found yourself tweaking the same line of code over and over, or reading the same paragraph again and again! Those are the moments when your brain was overtaxed or, clinically, you had an acute mental fatigue episode. While effective fatigue mitigation strategies are a subject of intense research, recent studies have shown that therapeutic music can help mitigate fatigue and the associated sleep disorder and memory decline. In this research, we first ask what contributes to mental fatigue, and aspire to devise techniques to measure these factors accurately using sensory earables. Next, we ask what makes music therapeutic, and aim to study the therapeutic features of music towards the automatic transformation of songs and music into their therapeutic versions. Finally, we want to bring these two facets together, i.e., identifying and predicting fatigue episodes and contextually offering mindful interventions to help manage fatigue with therapeutic music. We will leverage sensory earables for modeling fatigue building upon observed biomarkers, and for real-time generation and playback of therapeutic music using acoustic channels. We will evaluate the developed solution initially in controlled lab settings, followed by ecologically valid in-the-wild settings assessing both the efficacy and the usability of the solution. We anticipate that our findings will uncover a set of unique physiological dynamics that explain why we feel what we feel, and will advocate the principles to design a practical fatigue management toolkit with earables.

References:

Fahim Kawsar, Chulhong Min, Akhil Mathur, and Alessandro Montanari. “Earables for Personal-scale Behaviour Analytics”, IEEE Pervasive Computing, Volume 17, Issue 3, 2018.

Andrea Ferlini, Alessandro Montanari, Chulhong Min, Hongwei Li, Ugo Sassi, and Fahim Kawsar. “In-Ear PPG for Vital Signs”, IEEE Pervasive Computing, 2021.

Greer et al. A Multimodal View into Music’s Effect on Human Neural, Physiological, and Emotional Experience. In Proceedings of the 27th ACM International Conference on Multimedia (MM ’19). 167–175. DOI:https://doi.org/10.1145/3343031.3350867

Digital user representations and perspective taking in mediated communication

Supervisors:
Dale Barr (School of Psychology) and Mary Ellen Foster (School of Computing Science)

Human social interaction is increasingly mediated by technology, with many of the signals present in traditional face-to-face interaction being replaced by digital representations (e.g., avatars, nameplates, and emojis). To communicate successfully, participants in a conversational interaction must keep track of the identities of their co-participants, as well as the “common ground” they share with each: the dynamically changing set of mutually held beliefs, knowledge, and suppositions. Perceptual representations of interlocutors may serve as important memory cues to shared information in communicative interaction (Horton & Gerrig, 2016; O’Shea, Martin, & Barr, 2021). Our main question concerns how digital representations of users across different interaction modalities (text, voice, video chat) influence the development of and access to common ground during communication. To examine the impact of digital user representations on real-time language production and comprehension, the project will use a variety of behavioural methods including visual-world eye-tracking (Tanenhaus et al., 1995) and latency measures, as well as analysis of speech/text content.

In the first phase of the project, we will examine how well people can keep track of who said what during a discourse depending on the abstract versus rich nature of user representations (e.g., from abstract symbols to dynamic avatar-based user representations), and how these representations impact people’s ability to tailor messages to their interlocutors, as well as to correctly interpret a communicator’s intended meaning. For example, in one such study, we will test participants’ ability to track “conceptual pacts” (Brennan & Clark, 1996) with a pair of interlocutors during an interactive task where each partner appears (1) through a video stream; (2) as an animated avatar; or (3) as a static user icon. In the second phase, we will examine whether the nature of the user representation during encoding affects the long-term retention of common ground information.

In support of the behavioural experiments, this project will also involve developing a range of conversational agents, both embodied and speech-only, and defining appropriate behaviour models to allow those agents to take part in the studies. The defined behaviour will incorporate both verbal interaction and non-verbal actions, to replicate the full richness of human face-to-face conversation (Foster, 2019; Bavelas et al., 1997). Insights and techniques developed during the project are intended to improve interfaces for computer-mediated human communication.

References

  1. Bavelas, J. B., Hutchinson, S., Kenwood, C., & Matheson, D. H. (1997). Using Face-to-face Dialogue as a Standard for Other Communication Systems. Canadian Journal of Communication, 22(1).
  2. Brennan, S. E., & Clark, H. H. (1996). Conceptual pacts and lexical choice in conversation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 1482.
  3. Foster, M. E. (2019). Face-to-face conversation: why embodiment matters for conversational user interfaces. Proceedings of the 1st International Conference on Conversational User Interfaces – CUI ’19. the 1st International Conference.
  4. Horton, W. S., & Gerrig, R. J. (2016). Revisiting the memory‐based processing approach to common ground. Topics in Cognitive Science, 8, 780-795.
  5. O’Shea, K. J., Martin, C. R., & Barr, D. J. (2021). Ordinary memory processes in the design of referring expressions. Journal of Memory and Language, 117, 104186.
  6. Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M., & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268, 1632-1634.

Enhancing Social Interactions via Physiologically-Informed AI

Supervisors:
Marios Philiastides (School of Psychology) and Alessandro Vinciarelli (School of Computing Science).

Over the past few years major developments in machine learning (ML) have enabled important advancements in artificial intelligence (AI). First, the field of deep learning (DL), which enables models to learn complex input-output functions (e.g. pixels in an image mapped onto object categories), has emerged as a major player in this area. DL builds upon neural network theory and design architectures, expanding these in ways that enable more complex function approximations. The second major advance in ML has combined advances in DL with reinforcement learning (RL) to enable new AI systems for learning state-action policies, in what is often referred to as deep reinforcement learning (DRL), to enhance human performance in complex tasks. Despite these advancements, however, critical challenges still exist in incorporating AI into a team with human(s). One of the most important challenges is the need to understand how humans value intermediate decisions (i.e. before they generate a behaviour) through internal models of their confidence, expected reward, risk, etc. Critically, such information about human decision-making is not only expressed through overt behaviour, such as speech or action, but more subtly through physiological changes, small changes in facial expression and posture, etc. Socially and emotionally intelligent people are excellent at picking up on this information to infer the current disposition of one another and to guide their decisions and social interactions.

In this project, we propose to develop a physiologically-informed AI platform, utilizing neural and systemic physiological information (e.g. arousal, stress) ([Fou15][Pis17][Ghe18]) together with affective cues from facial features ([Vin09][Bal16]) to infer latent cognitive and emotional states from humans interacting in a series of social decision-making tasks (e.g. trust game, prisoner’s dilemma). Specifically, we will use these latent states to generate rich reinforcement signals to train AI agents (specifically DRL agents) and allow them to develop a “theory of mind” ([Pre78][Fri05]) in order to make predictions about upcoming human behaviour. The ultimate goal of this project is to deliver advancements towards “closing the loop”, whereby the AI agent feeds back its own predictions to the human players in order to optimise behaviour and social interactions.
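As a toy, purely illustrative sketch of how decoded human signals could shape an agent's reward signal (here with tabular Q-learning standing in for DRL, and a random placeholder in place of a real physiological decoder):

```python
# Illustrative sketch: tabular Q-learning in which the reward is augmented
# with a (hypothetical) confidence estimate decoded from physiological data,
# as a toy stand-in for "physiologically-informed" deep RL.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2          # toy social decision task (assumed sizes)
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def decoded_confidence(state, action):
    """Placeholder for a decoder mapping EEG/arousal features to confidence."""
    return rng.uniform(0.0, 1.0)

def step(state, action):
    """Toy environment: random next state, task reward for action 1."""
    return rng.integers(n_states), float(action == 1)

state = 0
for t in range(5000):
    action = rng.integers(n_actions) if rng.random() < eps else int(Q[state].argmax())
    next_state, task_reward = step(state, action)
    # Shape the reward with the decoded human signal.
    reward = task_reward + 0.5 * decoded_confidence(state, action)
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(np.round(Q, 2))
```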

References

[Ghe18] S Gherman, MG Philiastides, “Human VMPFC encodes early signatures of confidence in perceptual decisions”, eLife, 7: e38293, 2018.

[Pis17] MA Pisauro, E Fouragnan, C Retzler, MG Philiastides, “Neural correlates of evidence accumulation during value-based decisions revealed via simultaneous EEG-fMRI”, Nature Communications, 8: 15808, 2017.

[Fou15] E Fouragnan, C Retzler, KJ Mullinger, MG Philiastides, “Two spatiotemporally distinct value systems shape reward-based learning in the human brain”, Nature Communications, 6: 8107, 2015.

[Vin09] A. Vinciarelli, M. Pantic, and H. Bourlard, “Social Signal Processing: Survey of an Emerging Domain”, Image and Vision Computing Journal, Vol. 27, no. 12, pp. 1743-1759, 2009.

[Bal16] T.Baltrušaitis, P.Robinson, and L.-P. Morency. “Openface: an open source facial behavior analysis toolkit.” Proceedings of the IEEE Winter Conference on Applications of Computer Vision, 2016.

[Pre78] D. Premack, G. Woodruff, “Does the chimpanzee have a theory of mind?”, Behavioral and Brain Sciences, Vol. 1, no. 4, pp. 515-526, 1978.

[Fri05] C. Frith, U. Frith, “Theory of Mind”, Current Biology, Vol. 15, no. 17, R644-R646, 2005.

Evaluating and Enhancing Human-Robot Interaction for Multiple Diverse Users in Real-World Contexts

Supervisors:
Mary Ellen Foster (School of Computing Science) and Jane Stuart Smith (School of Critical Studies)

The increasing availability of socially-intelligent robots with functionality for a range of purposes, from guidance in museums [Geh15], to companionship for the elderly [Heb16], has motivated a growing number of studies attempting to evaluate and enhance Human-Robot Interaction (HRI). But, as Honig and Oron-Gilad’s review of recent work on understanding and resolving failures in HRI observes [Hon18], most research has focussed on technical ways of improving robot reliability. They argue that progress requires a “holistic approach” in which “[t]he technical knowledge of hardware and software must be integrated with cognitive aspects of information processing, psychological knowledge of interaction dynamics, and domain-specific knowledge of the user, the robot, the target application, and the environment” (p.16). Honig and Oron-Gilad point to a particular need to improve the ecological validity of evaluating user communication in HRI, by moving away from experimental, single-person environments, with low-relevance tasks, mainly with younger adult users, to more natural settings, with users of different social profiles and communication strategies, where the outcome of successful HRI matters.

The main contribution of this PhD project is to develop an interdisciplinary approach to evaluating and enhancing communication efficacy of HRI, by combining state-of-the-art social robotics with theory and methods from socially-informed linguistics [Cou14] and conversation analysis [Cli16]. Specifically, the project aims to deploy a state-of-the-art HRI system similar to the recent MultiModal Mall Entertainment Robot [Fos16], which was successfully deployed in a Finnish shopping mall for 14 weeks in the autumn of 2019 [Fos19]. Deploying a robot in a public context requires an interaction model which is socially acceptable, helpful and entertaining for multiple, diverse users in a real-world context. As part of the project, a similar social robot system will be developed and deployed in a new sociolinguistic and educational context in The Hunterian, the Museum and Art Gallery at the University of Glasgow. Glasgow is Scotland’s largest, and most socially and ethnically-diverse city, and deployment in The Hunterian provides a unique opportunity to test HRI with users from a wide range of demographic backgrounds. The robot deployments will continue throughout the PhD project in order for the impact of any technical and design modifications to be assessed.

Project objectives are to:

  • Carry out a series of sociolinguistically-informed observational studies of HRI in situ with users from a range of social, ethnic, and language backgrounds, using direct and indirect methods
  • Identify the minimal requirements (dialogue, non-verbal, other) to optimise HRI in this context, and thereby enhance user experience and engagement, also considering indices such as visitor surveys and attendance
  • Implement the identified modifications to the robot system, and re-evaluate with new users.

References

[Cli16] Clift, R. (2016). Conversation Analysis. Cambridge: Cambridge University Press.

[Cou14] Coupland, N., Sarangi, S., & Candlin, C. N. (2014). Sociolinguistics and social theory. Routledge.

[Fos16] Foster M.E., Alami, R., Gestranius, O., Lemon, O., Niemela, M., Odobez, J-M., Pandey, A.M. (2016) The MuMMER Project: Engaging Human-Robot Interaction in Real-World Public Spaces. In: Agah A., Cabibihan J., Howard A., Salichs M., He H. (eds) Social Robotics. ICSR 2016. Lecture Notes in Computer Science, vol 9979. Springer, Cham

[Fos19] Foster M.E. et al. (2019) MuMMER: Socially Intelligent Human-Robot Interaction in Public Spaces. In Proceedings of AI-HRI 2019.

[Geh15] Gehle R., Pitsch K., Dankert T., Wrede S. (2015). Trouble-based group dynamics in real-world HRI – Reactions on unexpected next moves of a museum guide robot., in 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2015 (Kobe), 407–412.

[Heb16] Hebesberger, D., Dondrup, C., Koertner, T., Gisinger, C., Pripfl, J. (2016). Lessons learned from the deployment of a long-term autonomous robot as companion in physical therapy for older adults with dementia: A mixed methods study. In: The Eleventh ACM/IEEE International Conference on Human Robot Interaction, 27–34.

[Hon18] Honig, S., & Oron-Gilad, T. (2018). Understanding and Resolving Failures in Human-Robot Interaction: Literature Review and Model Development. Frontiers in Psychology, 9, 861.

Exploring the multimodal dynamics of bodily emotional expressions with a generative body grammar

Supervisors:
Mathieu Chollet (School of Computing Science) and Rachael Jack (School of Psychology)

Context and Main Aims

Human communication is inherently multimodal, involving the face, body and voice in intricate coordination to express wide ranges of social and emotional phenomena. Untangling the complexity of multimodal communicative cues seems intractable at first glance, and much of the research on behaviour and emotion has studied communicative modalities in isolation, focusing on gesture, facial expressions, postures or voice, progressing from fixed sets of simple static stimuli to complex dynamic models. Research on the human face, arguably our most powerful communication channel, has progressed immensely since early models based on small static image datasets of stereotypical expressions [Ekman & Friesen, 1978]. In particular, reverse correlation methods and virtual human stimuli have recently been applied with great success to advance our understanding of facial expressions [Jack & Schyns, 2017]. Such methods hold great promise to also further our understanding of bodily and multimodal expressions, but this proposition poses a number of challenges, from the combinatorial explosion of the expression space to sample from, to the choice of an adequate representation of body movements [Dael et al., 2012; Fourati & Pelachaud, 2016; Laban, 1971]. In this ambitious PhD project, you will contribute novel approaches and methodologies for applying reverse correlation methods to bodily emotional expressions. You will design a representation scheme of body movement primitives suitable for reverse correlation methods, and leverage this scheme in a series of studies to further our understanding of the complex dynamics of multimodal expressions of emotions.

Methods & Timeline

After a literature review, the student will propose representation schemes of bodily expressions and body movement primitives (e.g. expansion/contraction; approach/retreat; tension/relaxation; orienting towards/away from) that are economical in scale yet expressive enough to be usable in reverse-correlation studies. This representation scheme will be transposed onto a generative model of bodily expressions based on a virtual agent platform, enabling the generation of batches of videos in which a virtual agent’s initially neutral pose is systematically varied along the representation scheme’s dimensions with several temporal dynamics. Initial experiments will investigate the role of the model’s dimensions in expressing typical categories and dimensions of emotions. The refined model will be combined in later studies with an existing reverse-correlation-based model of facial expressions of emotions, with a particular focus on the temporal dynamics between the facial and body modalities.
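As a purely illustrative sketch of the reverse-correlation logic (with a hypothetical parameter set and a simulated observer standing in for real participants):

```python
# Illustrative sketch of a reverse-correlation analysis over body-movement
# parameters: random parameter vectors drive a generative body model, an
# observer labels each resulting animation, and the "classification image"
# is the difference between the mean parameters of the two response classes.
# The parameter set and the simulated observer are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
PARAMS = ["expansion", "approach", "tension", "head_orientation"]
n_trials = 2000

# Each trial: random movement parameters in [-1, 1] fed to the body model.
stimuli = rng.uniform(-1, 1, size=(n_trials, len(PARAMS)))

def observer_says_angry(params):
    """Stand-in for a participant's judgement of the rendered animation."""
    return (0.8 * params[2] - 0.5 * params[0] + rng.normal(0, 0.5)) > 0

responses = np.array([observer_says_angry(s) for s in stimuli])

# Classification image: which parameters drive "angry" judgements?
kernel = stimuli[responses].mean(axis=0) - stimuli[~responses].mean(axis=0)
for name, w in zip(PARAMS, kernel):
    print(f"{name:>16}: {w:+.2f}")
```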

Outputs and impact

The project is expected to contribute methodological findings to reverse-correlation approaches and their use in the behavioural sciences (e.g., social perception) and in the design of virtual agent technologies. The project contributions will be relevant for industrial applications involving social robotics and virtual agents equipped with emotional expression capabilities. Further, the project will contribute to the literature on bodily expressions of emotions, including long-standing questions in multimodal expression research such as cross-modal effects when multimodal cues interact or conflict, gender effects, or intercultural differences [de Gelder & den Stock, 2011], thereby advancing central theories in the behavioural sciences.

References

  1. Dael, N., Mortillaro, M., & Scherer, K. R. (2012). The body action and posture coding system (BAP): Development and reliability. Journal of Nonverbal Behavior, 36(2), 97-121.
  2. de Gelder, B. & den Stock, J. V. “Real faces, real emotions: perceiving facial expressions in naturalistic contexts of voices, bodies and scenes.” The handbook of face perception (2011): 535-550.
  3. Ekman, P., & Friesen, W. V. (1978). Facial action coding system. Environmental Psychology & Nonverbal Behavior.
  4. Ennis, C., Hoyet, L., Egges, A., & McDonnell, R. (2013). Emotion capture: Emotionally expressive characters for games. In Proceedings of motion on games (pp. 53-60).
  5. Fourati, N., & Pelachaud, C. (2016). Perception of emotions and body movement in the Emilya database. IEEE Transactions on Affective Computing, 9(1), 90-101.
  6. Jack, R. E., & Schyns, P. G. (2017). Toward a social psychophysics of face communication. Annual review of psychology, 68, 269-297.
  7. Laban, R., & Ullmann, L. (1971). The mastery of movement.

Fashion Analytics Based on Deep Learning Visual Processing

Supervisors:
Marco Cristani (Humatics Srl) and Alessandro Vinciarelli (School of Computing Science)

Understanding and anticipating future trends is crucial for fashion companies looking to maximise their profit. Many machine learning approaches have been devoted to fashion forecasting, all of them with a strong limitation: they model fashion styles as sets of textual attributes. For example, “dotted t-shirt with skinny jeans” defines an outfit that may correspond to many real outfits, since it misses the color, the size of the dots, the type of neckline, etc. In fact, the description does not capture the crucial part: the appearance. A picture is worth a thousand words, especially when it comes to fashion, where subtle, fine-grained variations of a pattern may define a style. A few instants are needed to distinguish a female outfit of 1920 from one of recent years, but both of them can have the same textual description: “below-knee length drop-waist dresses with a loose, straight fit” describes a 1920s style, yet when copy-pasted into Google it brings you to Zalando’s contemporary products! The devil is in the detail, and this detail is visual and cannot be described by words.

With this PhD project, we want to model fashion by exploiting visual patterns, as if they were letters of a new artistic vocabulary, within deep network architectures. Deep learning allows complicated patterns, including images, to be mapped into a mathematical space without the need for words. In this space, similarities can be computed which are far more effective than written descriptions, clearly differentiating the latest trends from those of a century ago. Deep learning is particularly effective when large amounts of data are available, and fashion nowadays comes together with social media, where images are the new oil of communication, presenting clothing items in pictures and videos at a pace of hundreds of thousands of items each day. This is the scenario in which we will work: the PhD will deal with fashion images collected on social media, in order to give deep learning the capability of thoroughly understanding a style.

Finally, the PhD will aim at forecasting fashion trends, in order to predict the rise and fall of a particular visual trend. This will be made possible by social signal processing, which treats the images together with the “likes” associated with them, predicting when an image of a clothing item will become viral and understanding which images are more important than others in defining a trend, its rise and its fall. The PhD will put the student in contact with Humatics, a young Italian start-up currently working with important fast-fashion companies such as Nunalie and Sirmoney, providing forecasting services, and looking for international collaborations to improve its services and to create specialised professionals in the field of computational fashion and aesthetics.
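As a purely illustrative sketch of the idea of computing visual similarities in an embedding space (the backbone and stand-in images are assumptions, not the project pipeline):

```python
# Illustrative sketch: embed clothing images in a vector space with a
# pretrained CNN and compare styles by cosine similarity. The backbone
# choice and the stand-in images are hypothetical placeholders.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained backbone with the classification head removed -> 2048-d embeddings.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

def embed(image):
    """Return an L2-normalised visual embedding for a PIL image."""
    with torch.no_grad():
        x = preprocess(image.convert("RGB")).unsqueeze(0)
        return torch.nn.functional.normalize(backbone(x), dim=-1)

# Stand-ins for photos of a 1920s outfit and a contemporary one.
img_1920 = Image.fromarray((np.random.rand(400, 300, 3) * 255).astype("uint8"))
img_2022 = Image.fromarray((np.random.rand(400, 300, 3) * 255).astype("uint8"))

sim = (embed(img_1920) * embed(img_2022)).sum().item()
print(f"visual style similarity: {sim:.2f}")
```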

Improving Human and Quadruped Robot Interaction

Supervisors:
Emily Cross (School of Psychology) and Emma Liying Li (School of Computing Science)

It is a clear trend that over the coming decade we can expect to see quadruped robots, which offer great mobility, become widely adopted across different sectors, including industry, logistics, hospitality, and healthcare/social care. People across these sectors will be expected to work closely alongside such quadruped robots. However, it remains unknown (1) how humans can best interact and collaborate with quadruped robots; and (2) how to establish and maintain social bonds between humans and quadruped robots. In this project, we aim to tackle these two challenges. We plan to conduct experiments in the advanced motion capture laboratory with state-of-the-art quadruped and animal-like robots (e.g., Spot from Boston Dynamics, Miro and Aibo). We will also bring pet dogs into the laboratory for experiments. We will aim to understand what physical behaviors people want to see in quadruped robot companions; what physical behaviors will lead to building social relationships between humans and companion robots; and the extent to which these physical behaviors can be implemented on quadruped robots (e.g., Spot, Miro, Aibo).

Proposed methods

From a computing science perspective, the student will engage with system development and integration (developing operational models of animal behaviors and implementing them on quadruped robot platforms). From a psychology/social science perspective, the student will measure human-robot interaction via qualitative measures (such as questionnaires and participatory design interview approaches), non-invasive mobile brain imaging (recording human brain activity while interacting with the quadruped robots), physical response times, and pupillometry (eye tracking while people engage with these robots).

Expected outcomes and impact

The outcome of this project will be new knowledge, strategies and technologies to support harmonious interaction between humans and quadruped robots. The topic spans Psychology, Neuroscience and Computing Science. Empirical research papers and demos will be published in journals and conferences that reach a wide academic audience, e.g., Psychological Science, Cognition (for a psychology/neuroscience audience), ACM Transactions, ACM/IEEE HRI and ACM CHI (for a computing science audience) and PNAS (for a multidisciplinary audience).

References:

[Hua 2021] Huang L., Meng Z., Deng Z., Wang C., Li L., Zhao G. (2021, Oct.). Extracting human behavioral biometrics from robot motions. In Proc. 27th Annual International Conference on Mobile Computing and Networking (MobiCom 2021). DOI: 10.1145/3447993.3482860

[Hua 2021] Huang L., Meng Z., Deng Z., Wang C., Li L., Zhao G. (2021, Oct.). Towards verifying the user of motion-controlled robotic arm systems via the robot behavior. IEEE IoT Journal, Special Issue on Security, Privacy, and Trustworthiness in Intelligent Cyber-Physical Systems and Internet-of-Things. DOI: 10.1109/JIOT.2021.3121623

[Cro 2021] Cross, E. S. & Ramsey, R. (2021). Mind meets machine: Toward a cognitive science of human-machine interactions. Trends in Cognitive Sciences, 25(3), 200-212.

Learning to Play: Assessing Music Playing Skill from AudioVisual Data

Supervisors:
Tanaya Guha (School of Computing Science) and Subarna Tripathi (Intel Labs San Diego)

Motivation & novelty:

Humans can often assess how well someone performs at a given task simply by watching (and hearing) them in action. The task of ‘skill assessment’, if automated, can potentially create assistive technology for humans to learn and practice independently, achieving eventual mastery. Although several learning apps and tools are available these days, few can offer automated feedback on the learners’ skill level.

Aims and methodology:

This project will develop a multimodal (audiovisual) AI tool to assess human skills from video streams accompanied by audio. In particular, our aim is to assess the skill of a learner playing a musical instrument using both audio and video as inputs. This is a fine-grained video understanding problem, where the input videos contain similar actions while the audio can differ. The project will develop new deep learning models that combine information from the two modalities and attend to them appropriately in both space and time. A relevant database will need to be curated from YouTube and labeled in a semi-automated fashion.
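A minimal, purely illustrative sketch of audio-visual fusion with temporal attention for skill scoring follows; the feature dimensions, sequence lengths and the regression target are assumptions, not the project's actual model.

```python
# Illustrative sketch: fuse audio and video feature sequences with temporal
# attention pooling, then regress a skill score. Dimensions and the score
# range are hypothetical placeholders.
import torch
import torch.nn as nn

class AVSkillModel(nn.Module):
    def __init__(self, video_dim=512, audio_dim=128, hidden=128):
        super().__init__()
        self.v_proj = nn.Linear(video_dim, hidden)
        self.a_proj = nn.Linear(audio_dim, hidden)
        self.attn = nn.Linear(hidden, 1)          # scores each time step
        self.head = nn.Linear(2 * hidden, 1)      # skill score regressor

    def attend(self, x):
        w = torch.softmax(self.attn(x), dim=1)    # temporal attention weights
        return (w * x).sum(dim=1)                 # weighted pooling over time

    def forward(self, video_feats, audio_feats):
        # video_feats: (B, T_v, video_dim); audio_feats: (B, T_a, audio_dim)
        v = self.attend(torch.relu(self.v_proj(video_feats)))
        a = self.attend(torch.relu(self.a_proj(audio_feats)))
        return self.head(torch.cat([v, a], dim=-1)).squeeze(-1)

model = AVSkillModel()
score = model(torch.randn(2, 60, 512), torch.randn(2, 100, 128))
print(score.shape)  # torch.Size([2]) -> one predicted skill score per clip
```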

Alignment with industrial interests:

Multimodal sensing and sense-making technologies are at the heart of Intel’s effort to build smart and personalized learning spaces. For example, Intel’s deployment of such technologies in ‘Kid Space’ showed encouraging results in terms of students’ engagement and learning effectiveness.

Timeline:

This is envisioned as a full-time PhD project involving the following activities: Literature survey, database curation, baseline model development, new model development, testing and evaluation, dissemination of results (e.g., publication, presentation) and thesis writing.

Desired skills:

Python, Machine Learning, prior experience of working with video/audio.

References:

[1] Doughty et al., ‘The Pros and Cons: Rank-aware Temporal Attention for Skill Determination in Long Videos’, CVPR 2019.

[2] Parmar and Morris, ‘What and how well you performed? A multitask approach to action quality assessment,’ in Proc. CVPR 2019.

[3] Aslan et al. Exploring Kid Space in the wild: a preliminary study of multimodal and immersive collaborative play-based.

Neural Networks models to predict individual behaviour from Multimodal MRI Data

Supervisors:
Cassandra Sampaio-Baptista (School of Psychology) and Tanaya Guha (School of Computing Science)

Understanding and predicting individual complex behaviour in healthy and pathological conditions is a key goal in neuroscience and psychology. Magnetic Resonance Imaging (MRI) allows us to image functional and structural human brain properties in vivo and to relate them to behavioural performance. Furthermore, multimodal MRI, such as functional MRI (fMRI), Diffusion Tensor Imaging (DTI) and Multiparameter Mapping (MPM) can be acquired in the same session, capturing different brain tissue properties (Lazari et al., 2021). However, multimodal MRI data remains underused to explore brain-behaviour relationships. The majority of unimodal MRI studies use simple correlation methods either voxel-wise or based on region of interest (ROI).

Machine Learning (ML) methods have seen major breakthroughs in the last decade in the domain of natural image understanding, and are making their way into medical image analysis. So far, most ML-based MRI analyses use large datasets with hundreds or thousands of individuals, making them less than ideal for MRI-based studies that, with some exceptions (e.g. the Human Connectome Project, Biobank), rely on small sample sizes. Recent advances in ML, in an attempt to address the criticism of being ‘data-hungry’, focus on learning from smaller datasets through approaches such as self-supervised/unsupervised learning and data augmentation.

In this PhD project the student will leverage multimodal MRI (task and resting-state fMRI, DTI, MPM, T1-weighted) using ML methods to perform data augmentation and to discover participant-specific attributes (biomarkers) that relate to performance in different cognitive-motor tasks in healthy individuals in small-sample studies.
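As a purely illustrative sketch of small-sample prediction with simple augmentation (the feature sets, sample size and noise-based augmentation scheme are assumptions, not the project's chosen methods):

```python
# Illustrative sketch: predicting a behavioural score from concatenated
# multimodal MRI features in a small sample, with noise-based data
# augmentation inside a cross-validation loop. All data are synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_subjects = 30
X = np.hstack([rng.normal(size=(n_subjects, 50)),   # e.g. resting-state fMRI features
               rng.normal(size=(n_subjects, 40)),   # e.g. DTI features
               rng.normal(size=(n_subjects, 20))])  # e.g. MPM features
y = rng.normal(size=n_subjects)                     # behavioural score

def augment(X, y, n_copies=5, sigma=0.1):
    """Add jittered copies of each subject's features to enlarge the training set."""
    Xa = np.vstack([X + rng.normal(0, sigma, X.shape) for _ in range(n_copies)])
    return np.vstack([X, Xa]), np.concatenate([y, np.tile(y, n_copies)])

preds = np.zeros_like(y)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    X_tr, y_tr = augment(X[train], y[train])
    model = Ridge(alpha=1.0).fit(X_tr, y_tr)
    preds[test] = model.predict(X[test])

print(f"cross-validated R^2: {r2_score(y, preds):.2f}")
```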

The main objectives of this project are as follows:

  • Objective 1: To identify the biomarkers from multimodal MRI that relate to behaviour/impairment in small sample studies
  • Objective 2: To effectively fuse information from multiple modalities in order to achieve Objective 1

Building on this initial project, the student will then develop models to predict impairment in stroke survivors from multimodal MRI.

Expected outcome/impact

This project will develop tools and models that can be applied to small-sample studies to understand individual differences in complex behaviour, and to patient studies, which are typically small, to make predictions about prognosis and recovery. The resulting predictive models can potentially be used to understand how brain traits relate to individual behavioural and learning characteristics.

References:

  • Lazari, A., Salvan, P., Cottaar, M., Papp, D., Jens van der Werf, O., Johnstone, A., Sanders, Z.B., Sampaio-Baptista, C., Eichert, N., Miyamoto, K., Winkler, A., Callaghan, M.F., Nichols, T.E., Stagg, C.J., Rushworth, M.F.S., Verhagen, L., Johansen-Berg, H., 2021. Reassessing associations between white matter and behaviour with multimodal microstructural imaging. Cortex 145, 187-200.
  • Doersch, C., Zisserman, A. 2017. Multi-task Self-Supervised Visual Learning. IEEE International Conference on Computer Vision (ICCV). 2070–2079.
  • Shorten, C., & Khoshgoftaar, T. M. (2019). A survey on Image Data Augmentation for Deep Learning. Journal of Big Data, 6, 60. https://doi.org/10.1186/s40537-019-0197-0

Optimizing habit development with adaptive digital interventions

Supervisors:
Esther Papies (School of Psychology) and Mark Bowles (PUL Hydration)

Aims and Objectives

This project will establish key features of just-in-time adaptive interventions (JITAIs; Nahum-Shani et al., 2018) that contribute to habit formation. While JITAIs have been shown to be more effective than statically controlled interventions (Wang & Miller, 2020), little is known about the precise interaction of adaptive intervention features and psychological processes that lead to lasting health habits. The research will be conducted in the domain of hydration, i.e., water drinking. Water drinking is an ideal domain to study the effect of JITAIs, given that water drinking is a relatively simple health behaviour, compared to, for example, eating or physical activity; water drinking needs to happen frequently each day, so that it is susceptible to habit formation; and indeed, healthy water drinkers seem to rely heavily on habits (Rodger et al., 2021). However, many people are underhydrated, with implications for cognitive functioning, mood, and physical health (e.g., risk of diabetes, overweight, kidney damage; see Muñoz et al., 2015; Perrier et al., 2020).

Working with the PUL smartcap and accompanying smartphone app, we will examine how habit formation occurs using an intervention that provides goal setting, monitoring, feedback, as well as situated and personalized reminders. We will address questions such as: Which intervention features predict habit formation, and how can the intervention be optimized to facilitate this process? Given that habits form in response to stable context cues, do reminders at specific, fixed times facilitate habit formation compared to “smart”, adaptive reminders? What is the role of rewarding feedback in habit formation and habit maintenance (cf. Papies et al., 2020)? Does the intervention lead to “specific” habit formation (i.e., drinking water with the PUL device) or to “generalized” habit formation (i.e., drinking water)? Which intervention features (e.g., dynamic goals, smart reminders, visual reward signals) or intervention effects (e.g., reduced dehydration symptoms) predict continued engagement with the app? In addressing these questions, we will conduct research that speaks to JITAI development, as well as to fundamental psychological questions about situated learning and habit change.
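To make the fixed-versus-adaptive contrast concrete, here is a minimal Python sketch of the two kinds of reminder policy such a study might compare. All field names, thresholds, and clock times are purely illustrative assumptions; the actual intervention would be driven by the PUL smartcap and app data.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Context:
    """Hypothetical snapshot of the data a JITAI might see at decision time."""
    now: datetime
    last_drink: datetime
    daily_goal_ml: int
    intake_so_far_ml: int
    user_is_free: bool      # e.g. inferred from calendar or activity pattern

FIXED_TIMES = [(9, 0), (12, 0), (15, 0), (18, 0)]   # static comparison arm

def fixed_reminder_due(ctx: Context) -> bool:
    """Static schedule: remind at pre-set clock times only."""
    return (ctx.now.hour, ctx.now.minute) in FIXED_TIMES

def adaptive_reminder_due(ctx: Context) -> bool:
    """Adaptive rule: remind when behind on the goal, idle for a while,
    and the moment looks receptive (illustrative thresholds only)."""
    behind_on_goal = ctx.intake_so_far_ml < ctx.daily_goal_ml * (ctx.now.hour / 24)
    idle_too_long = ctx.now - ctx.last_drink > timedelta(hours=2)
    return behind_on_goal and idle_too_long and ctx.user_is_free

if __name__ == "__main__":
    ctx = Context(datetime(2022, 3, 1, 15, 0), datetime(2022, 3, 1, 11, 30),
                  daily_goal_ml=2000, intake_so_far_ml=700, user_is_free=True)
    print(fixed_reminder_due(ctx), adaptive_reminder_due(ctx))
```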

Methods, Outputs, and Impact

We will work closely with the PUL Hydration team to conduct qualitative, quantitative, and mixed-methods experimental studies to address the questions outlined above in real-life settings. We will present and publish our findings as three empirical subprojects (on average one per year) in both Computer Science and Psychology conferences and journals. In addition to academic and industry users, the findings on how to develop healthy water drinking habits will be of interest to the general public. The Healthy Cognition Lab regularly engages in knowledge exchange activities, in which the student would participate. We also regularly engage with industry and third-sector partners, such as Danone and the British Dietetic Association. Finally, the student on this project would work closely with other early career researcher (ECR) lab members working on hydration, and on healthy and sustainable eating behaviours.

References

Muñoz, C. X., Johnson, E. C., McKenzie, A. L., Guelinckx, I., Graverholt, G., Casa, D. J., … Armstrong, L. E. (2015). Habitual total water intake and dimensions of mood in healthy young women. Appetite, 92, 81–86. https://doi.org/10.1016/j.appet.2015.05.002

Nahum-Shani, I., Smith, S. N., Spring, B. J., Collins, L. M., Witkiewitz, K., Tewari, A., & Murphy, S. A. (2018). Just-in-Time Adaptive Interventions (JITAIs) in Mobile Health: Key Components and Design Principles for Ongoing Health Behavior Support. Annals of Behavioral Medicine: A Publication of the Society of Behavioral Medicine, 52(6), 446–462. https://doi.org/10.1007/s12160-016-9830-8

Papies, E. K., Barsalou, L. W., & Rusz, D. (2020). Understanding Desire for Food and Drink: A Grounded-Cognition Approach. Current Directions in Psychological Science, 29(2), 193–198. https://doi.org/10.1177/0963721420904958

Perrier, E. T., Armstrong, L. E., Bottin, J. H., Clark, W. F., Dolci, A., Guelinckx, I., Iroz, A., Kavouras, S. A., Lang, F., Lieberman, H. R., Melander, O., Morin, C., Seksek, I., Stookey, J. D., Tack, I., Vanhaecke, T., Vecchio, M., & Péronnet, F. (2020). Hydration for health hypothesis: A narrative review of supporting evidence. European Journal of Nutrition. https://doi.org/10.1007/s00394-020-02296-z

Rodger, A., Wehbe, L. H., & Papies, E. K. (2021). “I know it’s just pouring it from the tap, but it’s not easy”: Motivational processes that underlie water drinking. Appetite, 164, 105249. https://doi.org/10.1016/j.appet.2021.105249

Wang, L., & Miller, L. C. (2020). Just-in-the-Moment Adaptive Interventions (JITAI): A Meta-Analytical Review. Health Communication, 35(12), 1531–1544. https://doi.org/10.1080/10410236.2019.1652388

Sharing the road: Cyclists and automated vehicles

Supervisors:
Steve Brewster (School of Computing Science) and Frank Pollick (School of Psychology).

Automated vehicles must share the road with pedestrians and cyclists, and drive safely around them. Autonomous cars, therefore, must have some form of social intelligence if they are to function correctly around other road users. There has been work looking at how pedestrians may interact with future autonomous vehicles [ROT15], and potential solutions have been proposed (e.g. displays on the outside of cars to indicate that the car has seen the pedestrian). However, there has been little work on automated cars and cyclists. When there is no driver in the car, social cues such as eye contact, waving, etc., are lost [ROT15]. This changes the social interaction between the car and the cyclist, and may cause accidents if it is no longer clear, for example, who should proceed. Automated cars also behave differently to cars driven by humans, e.g. they may appear more cautious in their driving, which the cyclist may misinterpret.

The aim of this project is to study the social cues used by drivers and cyclists, and to create multimodal solutions that enable safe cycling around autonomous vehicles. The first stage of the work will be observation of the communication between human drivers and cyclists, through literature review and fieldwork. The second stage will be to build a bike into our driving simulator [MAT19] so that we can test interactions between cyclists and drivers safely in simulation. We will then look at how we can facilitate the social interaction between autonomous cars and cyclists. This will potentially involve visual displays on cars, or audio feedback from them, to indicate state information to nearby cyclists (e.g. whether they have been detected, or whether the car is letting the cyclist go ahead). We will also investigate interactions and displays for cyclists, for example multimodal displays in cycling helmets [MAT19] to give them information about car state (which could be collected by V2X software on the cyclist's phone, for example), or direct communication with the car through input on the handlebars or via gestures. These will be experimentally tested in the simulator and, if time allows, in highly controlled real driving scenarios.

The output of this work will be a set of new techniques to support the social interaction between autonomous vehicles and cyclists. We currently work with companies such as Jaguar Land Rover and Bosch, and our results will have direct application in their products.
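As a purely illustrative sketch of the kind of state information discussed above (whether the cyclist has been detected, whether the car is yielding), the Python example below defines a hypothetical vehicle-to-cyclist message and maps it to a simple helmet cue. None of the fields, values, or cue mappings come from the project description; they are placeholders for design work the PhD would carry out.

```python
from dataclasses import dataclass
from enum import Enum
import json

class Intent(Enum):
    YIELDING = "yielding"       # the AV is letting the cyclist go ahead
    PROCEEDING = "proceeding"   # the AV will continue on its path

@dataclass
class VehicleStateMessage:
    """Hypothetical state broadcast from a (simulated) AV to nearby cyclists."""
    vehicle_id: str
    cyclist_detected: bool
    intent: Intent
    speed_kmh: float

    def to_json(self) -> str:
        return json.dumps({
            "vehicle_id": self.vehicle_id,
            "cyclist_detected": self.cyclist_detected,
            "intent": self.intent.value,
            "speed_kmh": self.speed_kmh,
        })

def helmet_feedback(msg: VehicleStateMessage) -> str:
    """Map the message to a simple multimodal cue for a helmet display."""
    if not msg.cyclist_detected:
        return "red light + warning tone: not detected"
    if msg.intent is Intent.YIELDING:
        return "green light: car is giving way"
    return "amber light: car proceeding, hold position"

if __name__ == "__main__":
    msg = VehicleStateMessage("sim-car-01", True, Intent.YIELDING, 12.0)
    print(msg.to_json())
    print(helmet_feedback(msg))
```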

References

[ROT15] Rothenbucher, D., Li, J., Sirkin, D. and Ju, W., Ghost driver: a platform for investigating interactions between pedestrians and driverless vehicles, Adjunct Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications, pp. 44–49, 2015.

[MAT19] Matviienko, A., Brewster, S., Heuten, W. and Boll, S. Comparing unimodal lane keeping cues for child cyclists (https://doi.org/10.1145/3365610.3365632), Proceedings of the 18th International Conference on Mobile and Ubiquitous Multimedia, 2019.

Situating mobile interventions for healthy hydration habits

Supervisors:
Esther Papies (School of Psychology) and Matthew Chalmers (School of Computing Science)

Aims and Objectives.

This project will examine which kinds of data are best used to integrate a digital mobile health intervention into a user's daily life so that it leads to habit formation. Previous research has shown that just-in-time adaptive interventions (JITAIs) are more effective than statically controlled interventions (Wang & Miller, 2020). In other words, health interventions are more likely to lead to behaviour change if they are well situated, i.e., with agency adapted to specific user characteristics, and applied in situations where behaviour change should happen. However, there is limited evidence on how best to design JITAIs for health apps so as to create artificial agents that lead to lasting behaviour change through new habit formation. In addition, there is no systematic evidence as to which features of situations a health app should use to support a user in performing a healthy behaviour (e.g., time of day, location, mood, activity pattern, social context). We will address these issues in the under-researched domain of hydration behaviours. The aim is to establish, given the same intervention, which type of contextual data, or which heterogeneous mix of types of data, is most effective at increasing water consumption and at establishing situated water drinking habits that persist once initial engagement with the intervention has ceased.

Background and Novelty.

Mobile health interventions are a powerful new tool in the domain of individual health behaviour change.  Health apps can reach large numbers of users at relatively low cost, and can be tailored to an individual’s health goals and adapted to support users in specific, critical situations.    Identifying the right contextual features to trigger an intervention is critical, because context plays a key role both in triggering unhealthy behaviours, and in developing habits that support the long-term maintenance of healthy behaviours.  A particular challenge,  which existing theories typically don’t yet address, lies in the dynamic nature of health behaviours and their contextual triggers, and in establishing how these behaviours and contexts can best be monitored (Nahum-Shani et al., 2018).  This project will take on these challenges in the domain of hydration, because research suggests that many adults may be chronically dehydrated, with implications for cognitive functioning, mood, and physical health (e.g., risk of diabetes, overweight, kidney damage; see Muñoz et al., 2015; Perrier et al., 2020). Our previous work has shown that healthy hydration is associated with drinking water habitually across many different situations each day (Rodger et al., 2020).  This underlines the particular importance of establishing dynamic markers of situations that are cognitively associated with healthy behaviours so that they can support habit formation.

Methods.

(1) We will examine the internal (e.g., motivation, mood, interoception) and external (e.g., time of day, location, activity pattern, social context) markers of situations in which high water drinkers consume water, using objective intake monitors.  Then, integrating these findings with theory on habit formation and motivated behaviour (Papies et al., 2020), and using an existing app platform (e.g. AWARE-Light),

(2) we will test which types of data or mixes of data types are most effective in an intervention to increase water consumption in a sample of low water drinkers in the short term, and

(3) whether those same data types are effective at creating hydration habits that persist in the longer term.
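As a toy illustration of the kind of analysis step (1) implies, the sketch below screens a set of assumed contextual features (hour of day, an at-home flag, recent activity, social context) for their value in predicting moments when water is drunk, using synthetic data and scikit-learn. The feature names, the synthetic labelling rule, and the model choice are all assumptions made only so the example runs end to end; real labels would come from the objective intake monitors described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500

# Hypothetical per-moment context features logged by a phone sensing framework:
# hour of day, at-home flag, minutes of recent physical activity, social context flag.
hour = rng.integers(0, 24, n)
at_home = rng.integers(0, 2, n)
activity_min = rng.integers(0, 60, n)
social = rng.integers(0, 2, n)
X = np.column_stack([hour, at_home, activity_min, social])

# Synthetic label: "participant reported drinking water in this moment".
# The toy rule only exists to give the demo some structure to learn.
logits = 0.08 * activity_min + 0.8 * at_home - 0.05 * np.abs(hour - 13)
y = (logits + rng.normal(0, 1, n) > 2.0).astype(int)

model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5)
print("cross-validated accuracy:", scores.mean().round(3))
```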

Outputs.

This project will lead to presentations and papers on three quantitative subprojects at both Computer Science and Psychology conferences, as well as a possible qualitative contribution on the dynamic nature of habit formation.

Impact.

Results from this work will have implications for the design of health behaviour interventions across domains. The work will further contribute to the emerging theoretical understanding of the formation and context sensitivity of the cognitive processes that support healthy habits. It will explore how sensing and adaptive user modelling can situate both the user and the AI system in a common contextual frame, and whether this facilitates engagement and behaviour change.

References:

  1. Muñoz, C. X., Johnson, E. C., McKenzie, A. L., Guelinckx, I., Graverholt, G., Casa, D. J., … Armstrong, L. E. (2015). Habitual total water intake and dimensions of mood in healthy young women. Appetite, 92, 81–86. https://doi.org/10.1016/j.appet.2015.05.002
  2. Nahum-Shani, I., Smith, S. N., Spring, B. J., Collins, L. M., Witkiewitz, K., Tewari, A., & Murphy, S. A. (2018). Just-in-Time Adaptive Interventions (JITAIs) in Mobile Health: Key Components and Design Principles for Ongoing Health Behavior Support. Annals of Behavioral Medicine: A Publication of the Society of Behavioral Medicine, 52(6), 446–462. https://doi.org/10.1007/s12160-016-9830-8
  3. Papies, E. K., Barsalou, L. W., & Rusz, D. (2020). Understanding Desire for Food and Drink: A Grounded-Cognition Approach. Current Directions in Psychological Science, 29(2), 193–198. https://doi.org/10.1177/0963721420904958
  4. Perrier, E. T., Armstrong, L. E., Bottin, J. H., Clark, W. F., Dolci, A., Guelinckx, I., Iroz, A., Kavouras, S. A., Lang, F., Lieberman, H. R., Melander, O., Morin, C., Seksek, I., Stookey, J. D., Tack, I., Vanhaecke, T., Vecchio, M., & Péronnet, F. (2020). Hydration for health hypothesis: A narrative review of supporting evidence. European Journal of Nutrition. https://doi.org/10.1007/s00394-020-02296-z
  5. Rodger, A., Wehbe, L., & Papies, E. K. (2020). “I know it’s just pouring it from the tap, but it’s not easy”: Motivational processes that underlie water drinking. Under Review. https://psyarxiv.com/grndz
  6. Wang, L., & Miller, L. C. (2020). Just-in-the-Moment Adaptive Interventions (JITAI): A Meta-Analytical Review. Health Communication, 35(12), 1531–1544. https://doi.org/10.1080/10410236.2019.1652388

Social and Behavioural Markers of Hydration States

Supervisors:
Esther K. Papies (School of Psychology) and Matthew Chalmers (School of Computing Science)

Aims and Objectives.

This project will explore whether data derived from a person's smartphone can be used to establish that person's hydration status so that, in a well-guided and responsive way, a system can prompt the person to drink water. Many people are frequently underhydrated, which has negative physical and mental health consequences. Low hydration states can manifest in impaired cognitive and physical performance, experiences of fatigue or lethargy, and negative affect (e.g., Muñoz et al., 2015; Perrier et al., 2020). Here, we will establish whether such social and behavioural markers of dehydration can be inferred from a user's smartphone, and which of these markers, or which combination of them, best predict hydration state (Aim 1). Sophisticated user models of hydration states could also be adapted over time, and help to predict possible instances of dehydration in advance (Aim 2). This would be useful because many individuals find it difficult to identify when they need to drink, and could benefit from clear, personalized indicators of dehydration. In addition, smartphones could then be used to prompt users to drink water once a state of dehydration has been detected, or when dehydration is likely to occur. Thus, we will also test how hydration information should be communicated to users to prompt attitude and behaviour change and, ultimately, to improve hydration behaviour (Aim 3). Throughout, we will implement data collection, modelling, and feedback on smartphones in a secure way that respects and protects a user's privacy.

Background and Novelty.

The data that can be derived from smartphones (and related digital services) range from low-level sensor data (e.g. accelerometers) to patterns of app usage and social interaction. As such, 'digital phenotyping' is a rich source of information on an individual's social and physical behaviours and affective states. Some recent surveys of this burgeoning field include Thieme et al. on machine learning in mental health (2020), Chancellor and De Choudhury on using social media data to predict mental health status (2020), Melcher et al. on digital phenotyping of college students (2020), and Kumar et al. on toolkits and frameworks for data collection (2020). Here, we propose that these types of data may also reflect a person's hydration state. Part of the project's novelty lies in exploring a wider range of phone-derived data as a resource for system agency than prior work in this general area, as well as in pioneering work specifically on hydration. We will relate cognitive and physical performance, fatigue, lethargy and affect to patterns in phone-derived data. We will test whether such data can be harnessed to provide people with personalized, external, actionable indicators of their physiological state, i.e. to facilitate useful behaviour change. This would have clear advantages over existing indicators of dehydration, such as thirst cues or urine colour, which are easy to ignore or override, and/or difficult for individuals to interpret (Rodger et al., 2020).

Methods.

We will build on an existing mobile computing framework (e.g. AWARE-Light) to collect reports of a participant’s fluid intake, and to integrate them with phone-derived data.  We will attempt to model users’ hydration states, and validate this against self-reported thirst and urine frequency, and self-reported and photographed urine colour (Paper 1).  We will then examine in prospective studies if these models can be used to predict future dehydration states (Paper 2).  Finally, we will examine effective ways to provide feedback and prompt water drinking, based on individual user models (Paper 3).
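The sketch below is a toy, synthetic-data version of the Paper 1 idea: aggregate assumed phone-derived features per day, model a hydration proxy, and check the model's predictions against a self-report validation measure (here, urine colour). Every feature, the synthetic relationship, and the model choice are illustrative assumptions, not part of the project.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
days = 200

# Hypothetical daily aggregates of phone-derived signals per participant-day.
screen_time_h = rng.uniform(1, 10, days)
step_count = rng.integers(1000, 15000, days)
typing_speed = rng.normal(200, 30, days)        # characters per minute
outgoing_msgs = rng.integers(0, 60, days)
X = np.column_stack([screen_time_h, step_count, typing_speed, outgoing_msgs])

# Synthetic validation target: self-reported urine colour (1 = pale, 8 = dark).
# The toy relationship only exists to make the demo run end to end.
urine_colour = np.clip(
    4 - 0.0002 * step_count + 0.01 * (220 - typing_speed) + rng.normal(0, 1, days),
    1, 8)

model = RandomForestRegressor(n_estimators=200, random_state=0)
pred = cross_val_predict(model, X, urine_colour, cv=5)
rho, _ = spearmanr(pred, urine_colour)
print(f"Spearman correlation between predicted and reported colour: {rho:.2f}")
```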

Outputs.

This project will lead to presentations and papers at both Computer Science and Psychology conferences outlining the principles of using sensing data to understand physiological states, and to facilitate health behaviour change.

Impact.

Results from this work will have implications for the use of a broad range of data in health behaviour interventions across domains, as well as for our understanding of the processes underlying behaviour change. This project would also outline new research directions for studying the effects of hydration in daily life.

References

Chancellor, S., & De Choudhury, M. (2020). Methods in predictive techniques for mental health status on social media: a critical review. Npj Digital Medicine, 3(1), 1–11. http://doi.org/10.1038/s41746-020-0233-7

Melcher, J., Hays, R., & Torous, J. (2020). Digital phenotyping for mental health of college students: a clinical review. Evidence Based Mental Health. http://doi.org/10.1136/ebmental-2020-300180

Muñoz, C. X., Johnson, E. C., McKenzie, A. L., Guelinckx, I., Graverholt, G., Casa, D. J., … Armstrong, L. E. (2015). Habitual total water intake and dimensions of mood in healthy young women. Appetite, 92, 81–86. https://doi.org/10.1016/j.appet.2015.05.002

Rodger, A., Wehbe, L., & Papies, E. K. (2020). “I know it’s just pouring it from the tap, but it’s not easy”: Motivational processes that underlie water drinking. Under Review. https://psyarxiv.com/grndz

Perrier, E. T., Armstrong, L. E., Bottin, J. H., Clark, W. F., Dolci, A., Guelinckx, I., Iroz, A., Kavouras, S. A., Lang, F., Lieberman, H. R., Melander, O., Morin, C., Seksek, I., Stookey, J. D., Tack, I., Vanhaecke, T., Vecchio, M., & Péronnet, F. (2020). Hydration for health hypothesis: A narrative review of supporting evidence. European Journal of Nutrition. https://doi.org/10.1007/s00394-020-02296-z

Thieme, A., Belgrave, D., & Doherty, G. (2020). Machine Learning in Mental Health. ACM Transactions on Computer-Human Interaction (TOCHI), 27(5), 1–53. http://doi.org/10.1145/3398069

Social Interaction via Touch-Interactive Volumetric 3D Virtual Agents

Supervisors:
Ravinder Dahiya (School of Engineering) and Philippe Schyns (School of Psychology)

Vision- and touch-based interactions are fundamental modes of interaction between humans, and between humans and the real world. Several portable devices use these modes to display gestures that communicate social messages such as emotions. Recently, non-volumetric 3D displays have attracted considerable interest because they give users a 3D visual experience; for example, 3D movies provide viewers with a perceptual sensation of depth via a pair of glasses. Using a newly developed haptics-based holographic 3D volumetric display, this project will develop new forms of social interaction with virtual agents. Unlike various VR tools that require headsets (which can lead to motion sickness), here the interaction with 3D virtual objects will be less restricted, closer to its natural form, and, critically, will give the user the illusion that the virtual agent is physically present. The experiments will involve interactions with holographically displayed virtual human faces and bodies engaging in various social gestures. To this end, simulated 2D images showing these various gestures will be displayed mid-air in 3D. For enriched interaction and enhanced realism, the project will also involve hand gesture recognition and the control of haptic feedback (i.e. air patterns) to simulate the surfaces of several classes of virtual objects. This fundamental study has the potential to be transformative for sectors where physical interaction with virtual objects is critical, including medicine, mental health, sports, education, heritage, security, and entertainment.

Sustainable Me: A persuasive social AI for adopting sustainable lifestyles

Supervisors:
Martin Lages (School of Psychology) and Simone Stumpf (School of Computing Science)

Adopting a sustainable lifestyle in a modern society is challenging because it requires not only changing the status quo but also digesting a wealth of information spanning many domains, for example diet, shopping, transport, waste management, heating, and leisure activities. Persuasive technology has been studied for its role in behaviour change [1] and decision-making [2], where the technology is seen as a persuasive social actor that can exploit physical, psychological, language, and social cues to persuade people to take a desired action [3]. This research is set against a background of human-human persuasion and decision-making, which has a long history of study. Individual changes can have a dramatic impact on sustainability, for example reducing meat consumption, switching to more environmentally friendly transportation, or changing everyday practices, all of which can reduce a person's carbon footprint. However, a 'one-size-fits-all' approach to behaviour change does not work well for people in different circumstances, with different preferences and motivations. This can be addressed by building an AI system that communicates with an individual as a social actor [4] and suggests the most suitable sustainable decisions and behaviours in a personalised conversation with the user. Previous research in the energy domain has shown the feasibility of this approach [5,6].

Main aims and objectives

In this PhD project we set out to develop an interactive AI system that is tailored to individual preferences and maximises individual behaviour change while customizing the interaction with the user. As part of this work, the PhD student will gather background on persuasive technology and human-human persuasion to build a conceptual framework for persuasive social interactions that can be applied to adopting a more sustainable lifestyle. The student will design and implement an AI agent that builds a user model of an individual’s circumstances and preferences, and then instantiates and customises the persuasive dialogue with the user to generate individualised interventions. A major aspect of this work will be evaluating the acceptability and usability of this AI agent and its effects on decision-making and behaviour change.

Proposed methods

This PhD will draw on a theoretical basis of behaviour change and persuasion to build a practical system which can be empirically evaluated. It will combine skills in AI development, the design of interactive systems/HCI and experimental studies.
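As a very small sketch of how a user model might drive personalised suggestions, the Python example below selects applicable interventions and frames them around an assumed motivation profile. It is a hand-written, rule-based stand-in for illustration only; the actual agent, its dialogue capabilities, and its persuasion strategies are exactly what the PhD would design and evaluate.

```python
from dataclasses import dataclass

@dataclass
class UserModel:
    """Hypothetical user profile an agent might maintain (all fields assumed)."""
    owns_car: bool
    diet: str               # "meat", "flexitarian", "vegetarian"
    main_motivation: str    # "cost", "health", "environment"

# Candidate interventions: (action, applicability rule, motivation framings).
INTERVENTIONS = [
    ("Try two meat-free days this week",
     lambda u: u.diet == "meat",
     {"cost": "it can also cut your food bill",
      "health": "it is linked to better heart health",
      "environment": "it is one of the larger single carbon savings"}),
    ("Swap one regular car trip for the bus or bike",
     lambda u: u.owns_car,
     {"cost": "and save on fuel and parking",
      "health": "and build activity into your day",
      "environment": "and shrink your weekly transport footprint"}),
]

def suggest(user: UserModel) -> list:
    """Select applicable interventions and frame them around the user's motivation."""
    messages = []
    for action, applies, framings in INTERVENTIONS:
        if applies(user):
            framing = framings.get(user.main_motivation, framings["environment"])
            messages.append(f"{action} – {framing}.")
    return messages

if __name__ == "__main__":
    print(suggest(UserModel(owns_car=True, diet="meat", main_motivation="cost")))
```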

Likely outputs and impacts

This PhD will contribute to a better understanding of how to influence decision-making and change behaviour. It will provide a conceptual framework for building and evaluating persuasive AI agents that make personalized suggestions and engage effectively in a dialogue with the user. It will demonstrate how to design and implement such a socially interactive and persuasive AI. Finally, it will provide guidelines for developing effective socially interactive and persuasive AI agents. We envisage that this AI system could be made available to individuals, households, local communities, and local government to reduce greenhouse gas emissions, in line with the goal of keeping global average temperature rise below 1.5°C reaffirmed at COP26 in Glasgow.

References

[1] Thaler RH and Sunstein CR (2008) Nudge: improving decisions about health wealth and happiness. Yale University Press.

[2] Lages M, and Jaworska K (2012) How predictable are “spontaneous decisions” and “hidden intentions”. Comparing classification results based on previous responses with multivariate pattern analysis of fMRI BOLD signals. Frontiers in Psychology, 3, 56.

[3] Fogg BJ (2002) Persuasive technology: using computers to change what we think and do. Ubiquity 2002, December: 5:2.

[4] Crosswhite J, Fox J, Reed C, Scaltsas T, and Stumpf S (2004) Computational Models of Rhetorical Argument. In Argumentation Machines: New Frontiers in Argument and Computation, Chris Reed and Timothy J. Norman (eds.). Springer Netherlands, Dordrecht, 175–209.

[5] Mogles N, Padget J, Gabe-Thomas E, Walker I, and Lee JH (2018) A computational model for designing energy behaviour change interventions. User Modeling and User-Adapted Interaction 28, 1: 1–34.

[6] Skrebe S and Stumpf S (2017) An exploratory study to design constrained engagement in smart heating systems. In Proceedings of the 31st British Human Computer Interaction Conference.

Thoughtful gestures: Designing a formal framework to code and decode hand-over-face gestures

Supervisors:
Marwa Mahmoud (School of Computing Science) and Rachael Jack (School of Psychology)

Human faces provide a wealth of social information for non-verbal communication. From observing the complex dynamic patterns of facial expressions and associated gestures, such as hand movements, both human and non-human agents can make myriad inferences about the person’s emotions, mental states, personality traits, cultural background, or even certain medical conditions. Specifically, people often place their hands on their face as a gesture during social interaction and conversation, which could provide information over and above what is provided by their facial expressions. Indeed, some specific hand-over-face gestures serve as salient cues for the recognition of cognitive mental states, such as thinking, confusion, curiosity, frustration, and boredom (Mahmoud et al 2016). Such hand gestures therefore provide a valuable additional channel for multi-modal inference (Mahmoud & Robinson, 2011).

Knowledge gap/novelty.

However, the systematic study of hand-over-face gestures, i.e., the complex combination of face and hand gestures, remains limited due to two main empirical challenges: 1. the lack of large, objectively labelled datasets, and 2. the demands of coding signals that have a high degree of freedom. Thus, while early studies have provided initial quantitative analyses and interpretations of hand-over-face gestures (Mahmoud et al. 2016, Nojavanasghari et al. 2017), these challenges have hindered the development of coding models and interpretation frameworks.

Aims/objectives.

This project aims to address this gap by designing and implementing the first formal objective model for hand-over-face gestures by achieving three main aims:

1) Build a formal naturalistic synthetic dataset of hand-over-face gestures by extending generative models of dynamic social face signals (Jack & Schyns, 2017) and modelling dynamic hand-over-face gestures as an additional social cue in combination with facial expressions and eye and head movements (Year 1)

2) Use state-of-the-art methodologies from human perception science to systematically model the specific face and hand gesture cues that communicate social and emotion signals within and across cultures e.g., Western European vs. Eastern Asian population (Year 2)

3) Produce the first formal objective model for coding hand-over-face gestures (Year 3)
Methods.

The project will build on and extend state-of-the-art 3D modelling of dynamic hand gestures and facial expressions to produce an exhaustive dataset of hand-over-face gestures based on natural expressions. It will also use theories and experimental methods from emotion research to run human perception experiments that identify taxonomies and coding schemes for these gestures, and to validate interpretations within and across cultures.
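One small ingredient of a coding pipeline for such gestures might be a frame-level check of whether hand keypoints overlap the face region. The sketch below is a toy version of that idea; it assumes hand keypoints and a face bounding box have already been produced by upstream detectors, which are not specified here and are not part of the project description.

```python
import numpy as np

def hand_over_face_fraction(hand_keypoints, face_box):
    """Fraction of 2D hand keypoints falling inside a face bounding box.

    hand_keypoints: (N, 2) array of (x, y) pixel coordinates from an
                    assumed upstream hand-pose estimator.
    face_box:       (x_min, y_min, x_max, y_max) from an assumed face detector.
    """
    x_min, y_min, x_max, y_max = face_box
    x, y = hand_keypoints[:, 0], hand_keypoints[:, 1]
    inside = (x >= x_min) & (x <= x_max) & (y >= y_min) & (y <= y_max)
    return inside.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    keypoints = rng.uniform(80, 240, size=(21, 2))   # 21 synthetic hand points
    face = (100, 100, 220, 260)
    frac = hand_over_face_fraction(keypoints, face)
    print(f"{frac:.0%} of hand keypoints overlap the face region")
```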

Outputs/impact/industrial interests.

The formal objective framework produced from this project will serve as a vital building block for vision-based facial expression and gesture inference systems and applications in many domains, including emotion recognition, mental health applications, online education platforms, robotics, and marketing research.

References

  1. Mahmoud, M. & Robinson, P. (2011). Interpreting hand-over-face gestures. International Conference on Affective Computing and Intelligent Interaction (ACII).
  2. Mahmoud, M., Baltrušaitis, T. & Robinson, P. (2016). Automatic analysis of naturalistic hand-over-face gestures. ACM Transactions on Interactive Intelligent Systems, Vol. 6, Issue 2.
  3. Nojavanasghari B., Hughes C.E., Baltrušaitis T., Morency L.P. (2017). Hand2Face: Automatic synthesis and recognition of hand over face occlusions. International Conference on Affective Computing and Intelligent Interaction (ACII).
  4. Jack, R.E., & Schyns, P.G. (2017). Toward a social psychophysics of face communication. Annual review of psychology, 68, 269-297.
  5. Jack, R.E., Garrod, O.G.B., Yu, H., Caldara, R., & Schyns, P.G. (2012). Facial expressions of emotion are not culturally universal. Proceedings of the National Academy of Sciences, 109(19), 7241-7244.
  6. Jack, R.E., & Schyns, P.G. (2015). The human face as a dynamic tool for social communication. Current Biology, 25(14), R621-R634

Towards a Culturally Inclusive Facial Expression Recognition System

Supervisors:
Rachael Jack (School of Psychology) and Tanaya Guha (School of Computing Science)

Motivation & novelty:

The automatic recognition of facial expressions of the six classic basic emotions from images/videos is considered a mature technology. However, this paradigm assumes that such facial expressions are culturally universal. In contrast, growing evidence shows considerable systematic variability in how these facial expressions are represented across cultures (Jack et al. 2012; Chen et al. 2021; see Jack, 2013 for a review). Such findings challenge the validity, accuracy, and fairness of existing Facial Expression Recognition (FER) models. Consequently, it is now crucial to investigate exactly how facial expressions vary across cultures, how to model and quantify such differences, and, finally, how to incorporate this knowledge into a culturally flexible and inclusive FER system.

Aims:

The objectives of this project are to:

  • Enable an objective understanding of how facial expressions vary across cultures
  • Build a culturally flexible and inclusive FER system

Methodology:

This project will use a data-driven, computational approach to achieve the above objectives. Starting with a database of facial expressions produced across cultures (e.g., image- or video-based), we will develop a visual information retrieval system to discover the most (dis)similar facial expressions across cultures (Phase I; see the illustrative sketch below). Using state-of-the-art methodologies from psychology and perception science, we will formally characterize the dynamic facial expression signals within and across cultures to identify the specific facial signals that support accurate cross-cultural communication and those that give rise to confusions. Insights from this and the retrieval results will inform the development of an inclusive FER system (Phase II).

Impact:

The project addresses the considerable limitations of current FER systems, which rest on the long-held assumption that facial expressions are culturally universal, an assumption that calls into question their validity, accuracy, and fairness. Successful completion of the project therefore has the potential for far-reaching impact in the fields of Psychology, Affective Computing, and HCI, by contributing to our understanding of both cross-cultural facial expressions and the validity and utility of Facial AI. In practice, the project outcomes also have the potential to directly improve the experience of the broad range of users who interact with FER-related applications, and to expand their utility and marketability.
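The following is a toy sketch of the Phase I retrieval step described above: given expression embeddings for two cultural groups, it ranks the most dissimilar cross-cultural pairs by cosine distance. The embeddings, their dimensionality, and the sample sizes are assumptions standing in for whatever representation the project ultimately adopts.

```python
import numpy as np

def most_dissimilar_pairs(emb_a, emb_b, top_k=3):
    """Rank cross-cultural expression pairs by cosine dissimilarity.

    emb_a, emb_b: (n_a, d) and (n_b, d) arrays of expression embeddings for
    two cultural groups (assumed to come from some upstream encoder).
    Returns the top_k (index_a, index_b, dissimilarity) triples.
    """
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    dissim = 1.0 - a @ b.T                      # cosine distance matrix
    flat = np.argsort(dissim, axis=None)[::-1][:top_k]
    rows, cols = np.unravel_index(flat, dissim.shape)
    return [(int(r), int(c), float(dissim[r, c])) for r, c in zip(rows, cols)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    group_a = rng.normal(size=(50, 128))   # synthetic stand-ins for embeddings
    group_b = rng.normal(size=(60, 128))
    for i, j, d in most_dissimilar_pairs(group_a, group_b):
        print(f"expression {i} (culture A) vs {j} (culture B): distance {d:.2f}")
```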

Alignment with Industrial Interests:

FER has wide applications in healthcare, robotics, and entertainment, spanning web, mobile, and wearable platforms. Further, fairness and bias mitigation in AI is a key challenge for many industries eager to offer responsible and inclusive AI solutions. Thus, the project is relevant to and well aligned with the interests of AI industries in various sectors.

Timeline:

This full-time PhD project will involve the following activities: literature review, database creation, retrieval system development, data analysis, inclusive FER system development, testing and evaluation, dissemination of results (e.g., publications and conference presentations), and thesis writing.

Desired skills:

Python, Matlab, Machine Learning, prior experience of working with complex images and videos.

References:

Jack, R. E., Garrod, O. G. B., Yu, H., Caldara, R., & Schyns, P. G. (2012). Facial expressions of emotion are not culturally universal. Proceedings of the National Academy of Science of the USA, 109(19), 7241–7244. https://doi.org/10.1073/pnas.1200155109

Chen, C., Garrod, O. G., Schyns, P. G., & Jack, R. E. (October 2020). Dynamic Face Movement Texture Enhances the Perceived Realism of Facial Expressions of Emotion. In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents (pp. 1–3). https://doi.org/10.1145/3383652.3423912

Xu et al. Investigating Bias and Fairness in Facial Expression Recognition. ECCV Workshops, 2020.

Vision-based AI for automatic detection of individual and social behaviour in Rodents

Supervisors:
Marwa Mahmoud (School of Computing Science) and Cassandra Sampaio Batista (School of Psychology)

Rodents are the most extensively used models for understanding the cellular and molecular underpinnings of behaviour and of neurodegenerative and psychiatric disorders, as well as for the development of interventions and pharmacological treatments. Screening behavioural phenotypes in rodents is very time consuming, as a large battery of cognitive and motor behavioural tests is necessary. Further, standard behavioural testing usually requires removing the animal from its home-cage environment and testing it individually, thereby excluding the assessment of spontaneous social interactions. Monitoring of home-cage spontaneous behaviours, such as eating, grooming, sleeping and social interactions, has already proven to be sensitive to different models of neurodevelopmental and neurodegenerative disorders. For instance, home-cage monitoring can distinguish different mouse strains and models of autistic-like behaviour (Jhuang et al., 2010) and detect early alterations in sleep patterns before behavioural alterations emerge in a rodent model of amyotrophic lateral sclerosis (ALS) (Golini et al., 2020).

Most traditional home-cage monitoring systems use sensors and are therefore restricted in the types of activities they can detect, requiring the animals to interact with the sensors (Goulding et al., 2008; Kiryk et al., 2020; Voikar and Gaburro, 2020). Recent developments in vision-based computing and machine learning open up the possibility of monitoring, and potentially labelling, all home-cage behaviours automatically (Jhuang et al., 2010; Mathis et al., 2018). Still, most machine learning-based work on automatic detection has focused on movement, mainly joint positions and movement trajectories (Mathis et al., 2018), rather than on social or group behaviour.

Aims/objectives/novelty.

The aim of this PhD is to leverage advances in computer vision for animal behaviour understanding (Pessanha et al. 2020) and to build machine learning models that can automatically interpret and classify different individual and social behaviours by analysing videos collected through continuous monitoring.

Objectives:

    1. Define a set of behavioural and social cues that are relevant to understanding rodents' interactions and group behaviour. This will include building a dataset of their spontaneous social behaviour.
    2. Develop computer vision and machine learning models to automatically detect and classify these behaviours (see the illustrative sketch below).
    3. Validate and evaluate the developed tools on disorder models (e.g. learning deficits, stroke, etc.).

Expected outcome/impact

The models developed in this project will have wide applications, both in academic research and in industry, not only by providing tools for automatic behavioural phenotyping but also as a means of measuring animal welfare during experiments and procedures.
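As a toy illustration of objective 2, the sketch below summarises short windows of two animals' keypoint tracks into simple features (speed, inter-animal distance) and trains a classifier to separate "social" from "solitary" windows on synthetic data. In practice the keypoints would come from a tracker such as DeepLabCut (Mathis et al., 2018); the features, labels, and window length here are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def window_features(track_a, track_b):
    """Summarise a short window of two animals' (T, 2) keypoint tracks."""
    speed_a = np.linalg.norm(np.diff(track_a, axis=0), axis=1).mean()
    speed_b = np.linalg.norm(np.diff(track_b, axis=0), axis=1).mean()
    distance = np.linalg.norm(track_a - track_b, axis=1).mean()
    return np.array([speed_a, speed_b, distance])

# Synthetic windows: in "social" windows the animals stay closer together.
X, y = [], []
for label in (0, 1):                       # 0 = solitary, 1 = social interaction
    for _ in range(150):
        base = rng.uniform(0, 100, size=2)
        offset = rng.uniform(2, 10, 2) if label else rng.uniform(30, 80, 2)
        a = base + rng.normal(0, 1.5, size=(30, 2)).cumsum(axis=0)
        b = base + offset + rng.normal(0, 1.5, size=(30, 2)).cumsum(axis=0)
        X.append(window_features(a, b))
        y.append(label)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, np.array(X), y, cv=5).mean())
```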

References:

Golini, E., Rigamonti, M., Iannello, F., De Rosa, C., Scavizzi, F., Raspa, M., Mandillo, S., 2020. A Non-invasive Digital Biomarker for the Detection of Rest Disturbances in the SOD1G93A Mouse Model of ALS. Front Neurosci 14, 896.

Goulding, E.H., Schenk, A.K., Juneja, P., MacKay, A.W., Wade, J.M., Tecott, L.H., 2008. A robust automated system elucidates mouse home cage behavioral structure. Proc Natl Acad Sci U S A 105, 20575-20582.

Jhuang, H., Garrote, E., Mutch, J., Yu, X., Khilnani, V., Poggio, T., Steele, A.D., Serre, T., 2010. Automated home-cage behavioural phenotyping of mice. Nat Commun 1, 68.

Kiryk, A., Janusz, A., Zglinicki, B., Turkes, E., Knapska, E., Konopka, W., Lipp, H.P., Kaczmarek, L., 2020. IntelliCage as a tool for measuring mouse behavior – 20 years perspective. Behav Brain Res 388, 112620.

Mathis, A., Mamidanna, P., Cury, K.M., Abe, T., Murthy, V.N., Mathis, M.W., Bethge, M., 2018. DeepLabCut: markerless pose estimation of user-defined body parts with deep learning. Nat Neurosci 21, 1281-1289.

Voikar, V., Gaburro, S., 2020. Three Pillars of Automated Home-Cage Phenotyping of Mice: Novel Findings, Refinement, and Reproducibility Based on Literature and Experience. Front Behav Neurosci 14, 575434.

Pessanha, F., McLennan, K., Mahmoud, M. Towards automatic monitoring of disease progression in sheep: A hierarchical model for sheep facial expressions analysis from video. IEEE International Conference on Automatic Face and Gesture Recognition, Buenos Aires, May 2020.

Who you gonna call? Developing rat-to-rat communication interfaces

Supervisors:
Cassandra Sampaio Batista (School of Psychology) and Ilyena Hirskyj-Douglas (School of Computing Science)

Main aims and objectives

Many laboratory rats live in cages in small groups or alone (for example after surgery, or because of dominance issues, safety concerns, or research requirements). Yet rats are highly social animals with complex social skills who need the company of others; in the wild, rats become attached, form strong bonds, and live in large communities. Thus, while the research done on laboratory rats is vital to human and animal health, their social living conditions are not always ideal. The main aim of this project is to increase rats' sociality by exploring how rats can use computers with audio, visual and olfactory output to communicate with other rats. We will then use this output to develop an artificial rat agent to support lonely rats autonomously. While researchers have investigated how rats react to screen systems [1], and have built dog-to-dog interfaces [2] and dog-to-human video interfaces [3], no research has yet been undertaken on rat-to-rat or rat-to-computer interaction.

Proposed Methods and Outputs

This research sits at the intersection of animal-computer interaction and neuroscience, exploring rats' behaviour, brains, vocalisations, and computer usage. Using novel devices that enable rats to call each other virtually, we will look at a) how connecting to other rats virtually can improve a rat's life, b) how different modalities (audio, visual and olfactory) support rats' social communication, and c) how rats interact virtually with known and unknown rats. To enable rat-to-rat communication, novel remote calling devices will be developed that allow rats to trigger and answer calls. The rats' behaviour, vocalisations (analysed with tools such as [4]) and brain neuroimaging [5] will be studied to assess the impact of these remote interactions. The results of these studies will inform how to support rat communication virtually and with virtual agents, and will show the impact of rat-computer interfaces on behaviour, social interactions, and brain function and structure. The student will need to apply for a Personal Home Office Licence (PIL) in their first year of study and will work directly with laboratory rats, programming, building devices and coding the rats' behaviours. The project is of great industrial and academic interest for social behaviour research and for laboratory and domesticated rodent welfare, as it will provide key insights and tools for supporting their social needs.
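To give a flavour of the calling-device logic, the sketch below shows a minimal polling loop that starts and logs a call when a trigger is activated. Every function here is a placeholder: the sensor, the calling hardware, and the logging format are assumptions, since the project description does not specify the device platform.

```python
import time
from datetime import datetime

def read_trigger() -> bool:
    """Placeholder for reading the physical trigger (e.g. a nose-poke sensor)."""
    return False   # replace with real sensor input on the actual device

def start_call(peer_id: str) -> None:
    """Placeholder for opening the audio/video/olfactory link to a paired cage."""
    print(f"{datetime.now().isoformat()} call started with {peer_id}")

def log_event(event: str, path: str = "call_log.csv") -> None:
    """Append a timestamped event so call patterns can be analysed later."""
    with open(path, "a") as f:
        f.write(f"{datetime.now().isoformat()},{event}\n")

def run_device(peer_id: str, poll_seconds: float = 0.1, max_iterations: int = 100):
    """Poll the trigger and start a call whenever the rat activates it."""
    for _ in range(max_iterations):          # bounded loop for the sketch
        if read_trigger():
            start_call(peer_id)
            log_event(f"call_started,{peer_id}")
        time.sleep(poll_seconds)

if __name__ == "__main__":
    run_device("cage-B", max_iterations=10)
```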

References

[1] Yakura T, Yokota H, Ohmichi Y, Ohmichi M, Nakano T, et al. (2018) Visual recognition of mirror, video-recorded, and still images in rats. PLOS ONE 13(3): e0194215.https://doi.org/10.1371/journal.pone.0194215

[2] Hirskyj-Douglas, I and Lucero, A.(2019). On the Internet, Nobody Knows You’re a Dog… Unless You’re Another Dog. In 2019 CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2019), May 4–9, 2019, Glasgow, Scotland, UK. ACM, New York, NY, USA. https://doi.org/10.1145/3290605.3300347

[3] Hirskyj-Douglas, I., Piituanen, R., Lucero, A. (2021). Forming the Dog Internet: Prototyping a Dog-to-Human Video Call Device. Proc. ACM Hum.-Comput. Interact. 5, ISS, Article 494 (November 2021), 20 pages. DOI:https://doi.org/10.1145/3488539

[4] Coffey, K.R., Marx, R.G. & Neumaier, J.F. DeepSqueak: a deep learning-based system for detection and analysis of ultrasonic vocalizations. Neuropsychopharmacol. 44, 859–868 (2019). https://doi.org/10.1038/s41386-018-0303-6

[5] Sallet, J., Mars, R.B., Noonan, M.P., Andersson, J.L., O’Reilly, J.X., Jbabdi, S., Croxson, P.L., Jenkinson, M., Miller, K.L., Rushworth, M.F., 2011. Social network size affects neural circuits in macaques. Science 334, 697-700.

You Never get a Second Chance to Make a First Impression – Establishing how best to align human expectations about a robot’s performance based on the robot’s appearance and behaviour

Supervisors:
Mary Ellen Foster (School of Computing Science) and Emily Cross (School of Psychology)

Aims and objectives:

  • A major aim of social robotics is to create embodied agents that humans can instantly and automatically understand and interact with, using the same mechanisms that they use when interacting with each other. While considerable research attention has been invested in this endeavour, it is still the case that when humans encounter robots, they need time to understand how the robot works; in other words, people need time to learn to read the signals the robot generates. People consistently have expectations that are far too high for the artificial agents they encounter, which often leads to confusion and disappointment.
  • If we can better understand human expectations about robot capabilities based on the robot’s appearance (and/or initial behaviours) and ensure that those are aligned with the actual robot abilities, this should accelerate progress in human-robot interaction, specifically in the domains of human acceptance of robots in social settings and cooperative task performance between humans and robots. This project will combine expertise in robotic design and the social neuroscience of how we perceive and interact with artificial agents to develop a socially interactive robot designed for use in public spaces that requires (little or) no learning or effort for humans to interact with while carrying out tasks such as guidance, cooperative navigation, and interactive problem-solving tasks.

Proposed methods:

  • Computing Science: System development and integration (Developing operational models of interactive behaviour and implementing them on robot platforms); deployment of robot systems in lab-based settings and in real-world public spaces
  • Psychology/Brain Science: Behavioural tasks (questionnaires and measures of social perception, such as the Social Stroop task), non-invasive mobile brain imaging (functional near infrared spectroscopy) to record human brain activity when encountering the artificial agent in question.

Likely outputs:

  • Empirically based principles for social robot design to optimize alignment between the robot's appearance, user expectations, and robot performance, based on brain and behavioural data
  • A publicly available, implemented, and validated robot system embodying these principles
  • Empirical research papers detailing findings for a computing science audience (e.g., ACM Transactions on Human-Robot Interaction), a psychology/neuroscience audience (e.g., Psychological Science, Cognition), and a general audience that draws on the multidisciplinary aspects of the work (e.g., PNAS, Current Biology), as well as papers at appropriate conferences and workshops such as Human-Robot Interaction, Intelligent Virtual Agents, CHI, and similar venues.

References

[Fos17] Foster, M. E.; Gaschler, A.; and Giuliani, M. Automatically Classifying User Engagement for Dynamic Multi-party Human–Robot Interaction. International Journal of Social Robotics. July 2017.

[Fos16] Foster, M. E.; Alami, R.; Gestranius, O.; Lemon, O.; Niemelä, M.; Odobez, J.; and Pandey, A. K. The MuMMER project: Engaging human-robot interaction in real-world public spaces. In Proceedings of the Eighth International Conference on Social Robotics, 2016.

[Cro19] Cross, E. S., Riddoch, K. A., Pratts, J., Titone, S., Chaudhury, B. & Hortensius, R. (2019). A neurocognitive investigation of the impact of socialising with a robot on empathy for pain. Philosophical Transactions of the Royal Society B.

[Hor18] Hortensius, R. & Cross, E.S. (2018). From automata to animate beings: The scope and limits of attributing socialness to artificial agents. Annals of the New York Academy of Science: The Year in Cognitive Neuroscience.