Projects

The application process for entry in 2021-2022 is now closed. Please see the following page for instructions on how to apply in future rounds: https://socialcdt.org/how-to-apply/

EXAMPLE PROJECTS FOR THE ACADEMIC YEAR 2021-2022 CAN BE VIEWED BELOW. Should you have any enquiries regarding project applications, please contact us at social-cdt@glasgow.ac.uk.

Better Look Away :-): Using AI methods to understand Gaze Aversion in Real and Mixed Reality Settings (exploring the Tell-Tale Task)

Supervisors:
Monika Harvey (School of Psychology) and Mohamed Khamis (School of Computing Science)

Main aims and objectives
The eyes are said to be a window to the brain [1]. The way we move our eyes reflects our cognitive processes and visual interests, and we use our eyes to coordinate social interactions (e.g., to take turns in conversations) [2]. While there is a lot of research on attentive user interfaces that respond to users’ gaze [3], and on directing users’ gaze towards targets [4], there is relatively little work on understanding and eliciting gaze aversion. This is unfortunate, as the ability to not look is a classic psychological and neural measure of how much voluntary control people have over their environment [5]. In fact, people often avert their eyes to alleviate a negative social experience (such as avoiding a fight), and in some cultures looking someone directly in the eyes can be seen as disrespectful.

Efficient gaze aversion is thus an essential adaptive response, and its brain correlates have been mapped extensively [6]. The main aim of this project is to investigate and enhance/train gaze aversion using virtual environments. Two potential examples will be considered in the first instance. First, cultural gaze aversion training will accustom users to cultural norms before they encounter such a situation. Second, gaze elicitation and aversion will be integrated into augmented reality glasses to nudge the user to avert (or instead direct, as appropriate) their gaze while encountering, for example, an aggressive or socially desirable scenario. A further example could be the use of gaze aversion in mixed reality applications. In particular, guiding the user’s gaze and nudging them to look at some targets and away from others can help guide them in virtual environments, or ensure they see important elements of 360° videos.

Proposed methods
This research is at the intersection of eye tracking, psychology and human-computer interaction. It will involve both empirical and technical work, exploring the opportunities and challenges of detecting and eliciting intentional and unintentional gaze aversion. Using an eye tracker as well as a virtual reality headset, we will a) investigate and evaluate methods for eliciting explicit and implicit gaze aversion, guided by previous research on gaze direction [4,6]; b) study the impact of intentional and unintentional gaze aversion on the brain by measuring its effect on saccadic reaction times, error rates, and other metrics; and c) utilize the findings and developed methods in one or more application areas. Programming skills are required for this project, and previous experience in conducting controlled empirical studies is also a plus.
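
As an illustration of the kind of analysis involved, the sketch below shows one common way to extract saccadic reaction times from raw gaze samples using a simple velocity threshold; the function, data layout and threshold value are illustrative assumptions rather than part of the project specification.

```python
import numpy as np

def saccadic_reaction_time(t, x, y, stim_onset, velocity_threshold=30.0):
    """Latency (ms) of the first saccade after stimulus onset, or None.

    t           : sample timestamps in ms
    x, y        : gaze position in degrees of visual angle
    stim_onset  : stimulus (or distractor) onset time in ms
    velocity_threshold : deg/s above which a sample counts as saccadic
                         (the value here is illustrative, not a recommendation)
    """
    t, x, y = map(np.asarray, (t, x, y))
    dt = np.diff(t) / 1000.0                              # seconds per sample
    velocity = np.hypot(np.diff(x), np.diff(y)) / dt      # deg/s
    candidates = (t[1:] >= stim_onset) & (velocity > velocity_threshold)
    if not candidates.any():
        return None
    return float(t[1:][candidates][0] - stim_onset)


# Toy usage: 500 Hz samples in which gaze jumps away ~180 ms after onset
t = np.arange(0, 400, 2.0)            # ms
x = np.where(t < 180, 0.0, 8.0)       # gaze leaves fixation at 180 ms
y = np.zeros_like(t)
print(saccadic_reaction_time(t, x, y, stim_onset=0.0))    # -> 180.0
```

The same velocity-based segmentation can be used to flag gaze-aversion events, i.e. saccades that move gaze away from a designated region of interest.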

Likely outputs and impact
The results will inform knowledge and generate state-of-the-art tools for designing virtual environments that optimize and measure eye-movement control. The topic spans Psychology, Neuro- and Computing Science, and we thus envisage publications in journals and conferences that reach a wide academic audience, spanning a range of expertise (e.g. Psychological Science, PNAS, ACM CHI, PACM IMWUT, ACM TOCHI).

References
[1] Ellis, S., Candrea, R., Misner, J., Craig, C. S., Lankford, C. P., & Hutchinson, T. E. (1998, June). Windows to the soul? What eye movements tell us about software usability. In Proceedings of the usability professionals’ association conference (pp. 151-178).
[2] Majaranta, P., & Bulling, A. (2014). Eye tracking and eye-based human–computer interaction. In Advances in physiological computing (pp. 39-65). Springer, London.
[3] Khamis, M., Alt, F., & Bulling, A. (2018, September). The past, present, and future of gaze-enabled handheld mobile devices: Survey and lessons learned. In Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services (pp. 1-17).
[4] Rothe, S., Althammer, F., & Khamis, M. (2018, November). GazeRecall: Using gaze direction to increase recall of details in cinematic virtual reality. In Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia (pp. 115-119).
[5] Butler, S.H., Rossit, R., Gilchrist, I.D., Ludwig, C.J., Olk, B., Muir, R., Reeves, I. and Harvey, M. (2009) Non-lateralised deficits in anti-saccade performance in patients with hemispatial neglect. Neuropsychologia, 47, 2488-2495.
[6] Salvia, E., Harvey M., Nazarian, B. and Grosbras, M-H. (2020). Social perception drives eye-movement related brain activity: evidence from pro- and anti-saccades to faces. Neuropsychologia, 139, 107360.

Brain Based Inclusive Design

Supervisors:
Monika Harvey (School of Psychology) and Alessandro Vinciarelli (School of Computing Science)

It is clear to everybody that people differ widely, but the underlying assumption of current technology designs is that all users are equal. The large cost of this is the exclusion of users who fall far from the average that technology designers use as their ideal abstraction (Holmes, 2019). In some cases, the mismatch is evident (e.g., a mouse typically designed for right-handed people is more difficult to use for left-handers) and attempts have been made to accommodate the differences. In other cases, the differences are more subtle and difficult to observe, and, to the best of our knowledge, no attempt has yet been made to take them into account. This is the case, in particular, for change blindness (Rensink, 2004) and inhibition of return (Posner & Cohen, 1984), two brain phenomena that limit our ability to process stimuli presented too closely in space and time.

The overarching goal of the project is thus to design Human-Computer Interfaces capable of adapting to the limits of every user, in view of a fully inclusive design capable of putting every user at ease, i.e., enabling him/her to interact with technology according to her/his processing speed and not according to the speed imposed by technology designers.

The proposed approach includes four steps:

  1. Development of methodologies for the automatic measurement of the phenomena described above through their effect on EEG signals (e.g., changes in the P1 and N1 components; McDonald et al., 1999) and on behavioural performance (e.g., increased/decreased accuracy and increased/decreased reaction times), as illustrated in the sketch after this list;
  2. Identification of the relationship between the phenomena above and observable factors such as age, education level, computer familiarity, etc. of the user; 
  3. Adaptation of the technology design to the factors above;
  4. Analysis of the improvement of the users’ experience. 
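
As a concrete illustration of step 1, the sketch below computes the mean amplitude of an ERP component (e.g. P1 or N1) from pre-cut, single-channel EEG epochs; the array layout and time windows are illustrative assumptions only.

```python
import numpy as np

def mean_component_amplitude(epochs, times, window):
    """Mean amplitude of an ERP component for each epoch.

    epochs : array (n_epochs, n_samples) of single-channel EEG in microvolts,
             time-locked to stimulus onset
    times  : array (n_samples,) of sample times in ms relative to onset
    window : (start_ms, end_ms) of the component of interest,
             e.g. roughly 80-130 ms for P1 (illustrative values)
    """
    mask = (times >= window[0]) & (times <= window[1])
    return epochs[:, mask].mean(axis=1)          # one value per epoch


# Toy usage: 40 epochs sampled at 500 Hz, from -100 to 400 ms
rng = np.random.default_rng(0)
times = np.arange(-100, 400, 2.0)
epochs = rng.normal(0, 5, (40, times.size))      # placeholder EEG data
p1 = mean_component_amplitude(epochs, times, (80, 130))
n1 = mean_component_amplitude(epochs, times, (140, 200))
print(p1.mean(), n1.mean())
```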

The main expected outcome is that technology will become more inclusive and capable of accommodating the individual needs of its users in terms of processing speed and ease of use. This will be particularly beneficial for those groups of users that, for different reasons, tend to be penalised in terms of processing speed, in particular older adults and special populations (e.g., children with developmental issues, stroke survivors, and related cohorts).

The project is of great industrial interest because, ultimately, improving the inclusiveness of technology design greatly increases user satisfaction, a crucial requirement for every company that aims to commercialise technology.

[HOL19] Holmes, K. (2019). Mismatch, MIT Press. 

[MCD99] McDonald, J., Ward, L. M., & Kiehl, A. H. (1999). An event-related brain potential study of inhibition of return. Perception and Psychophysics, 61, 1411–1423.

[POS84] Posner, M.I. & Cohen, Y. (1984). “Components of visual orienting”. In Bouma, H.; Bouwhuis, D. (eds.). Attention and performance X: Control of language processes. Hillsdale, NJ: Erlbaum. pp. 531–56. 

[RES04] Rensink, R.A. (2004). Visual Sensing without Seeing. Psychological Science, 15, 27-32. 

Bridging the Uncanny Valley with Decoded Neurofeedback

Supervisors:
Frank Pollick (School of Psychology) and Fani Deligianni (School of Computing Science)

A problem with artificial characters that appear nearly human is that they can sometimes lead users to report that they feel uncomfortable, and that the character is creepy. An explanation for this phenomenon comes from the Uncanny Valley Effect (UVE), which holds that characters approaching human likeness elicit a strong negative response (Mori, et al., 2012; Pollick, 2009). Empirical research into the UVE has grown over the past 15 years, and the conditions needed to produce a UVE and reliably measure its effect have been extensively examined (Diel & MacDorman, 2021). These empirical studies inform design standards for artificial characters (Lay et al., 2016), but deep theoretical questions of why the UVE exists and what its underlying mechanisms are remain elusive. One technique that has shown promise in answering these questions is neuroimaging, where brain measurements are obtained while the UVE is experienced (Saygin, et al., 2012). In this research we propose to use the technique of realtime fMRI neurofeedback, which allows fMRI experiments to go beyond correlational evidence by enabling the manipulation of brain processing to study the effect of brain state on behaviour.

In particular, we plan to use the technique of decoded neurofeedback (DecNef), which employs methods of machine learning to build a decoder of brain activity. Previous experiments have used DecNef to alter facial preferences (Shibata, et al., 2016) and this study by Shibata and colleagues will guide our efforts to develop a decoder that can be used during fMRI scanning to influence how the UVE is experienced. It is hoped that these experiments will reveal the brain circuits involved in experiencing the UVE, and lead to a deeper theoretical understanding of the basis of the UVE, which can be exploited in the design of successful artificial characters.

The project will develop skills in 1) the use of animation tools to create virtual characters, 2) the ability to design and perform psychological assessment of people’s attitudes and behaviours towards these characters, 3) the use of machine learning in the design of decoded neurofeedback algorithms, and finally 4) how to perform realtime fMRI neurofeedback experiments.

  1. Diel, A., & MacDorman, K. F. (2021). Creepy cats and strange high houses: Support for configural processing in testing predictions of nine uncanny valley theories. Journal of Vision.
  2. Lay, S., Brace, N., Pike, G., & Pollick, F. (2016). Circling around the uncanny valley: Design principles for research into the relation between human likeness and eeriness. i-Perception, 7(6), 2041669516681309.
  3. Mori, M., MacDorman, K. F., & Kageki, N. (2012). The uncanny valley [from the field]. IEEE Robotics & Automation Magazine, 19(2), 98-100. (Original work published in 1970).
  4. Pollick, F. E. (2009). In search of the uncanny valley. In International Conference on User Centric Media (pp. 69-78). Springer, Berlin, Heidelberg.
  5. Saygin, A. P., Chaminade, T., Ishiguro, H., Driver, J., & Frith, C. (2012). The thing that should not be: predictive coding and the uncanny valley in perceiving human and humanoid robot actions. Social cognitive and affective neuroscience, 7(4), 413-422.
  6. Shibata, K., Watanabe, T., Kawato, M., & Sasaki, Y. (2016). Differential activation patterns in the same brain region led to opposite emotional states. PLoS biology, 14(9), e1002546.

Deep Learning feature extraction for social interaction prediction in movies and visual cortex

Supervisors:
Lars Muckli (School of Psychology) and Fani Deligianni (School of Computing Science)

While watching a movie, a viewer is immersed in the spatiotemporal structure of the movie’s audiovisual and high-level conceptual content [Raz19]. The nature of movies induces a natural waxing and waning of more and less socially immersive content. This immersion can be exploited during brain imaging experiments to emulate as closely as possible everyday human life experience, including the brain processes involved in social perception.

The human brain is a prediction machine: in addition to receiving sensory information, it actively generates sensory predictions. It implements this by creating internal models of the world, which are used to predict upcoming sensory inputs. This basic but powerful concept is used in several studies in Artificial Intelligence (AI) to perform different types of prediction: from intermediate video frames for frame interpolation [Bao19], to irregularity detection [Sabokrou18], to future sound prediction [Oord18].

Although several AI studies have focused on using visual features to detect and track actors in a movie [Afouras20], it is not yet clear how cortical networks for social cognition engage layers of the visual cortex when processing the social interaction cues occurring between actors. Several studies suggest that biological motion recognition (the visual processing of others’ actions) is central to understanding interactions between agents and combines top-down social cognition with bottom-up visual processing. We will use cortical layer-specific fMRI at Ultra High Field to read brain activity during movie stimulation. Using the latest advances in Deep Learning [Bao19, Afouras20], we will study how the interaction between two people in a movie is processed, analysing the predictions that occur between frames. The comparison between the two representation sets – the analysis of the movie with Deep Learning and the response measured in the brain – will be carried out through model comparison with Representational Similarity Analysis (RSA) [Kriegeskorte08].
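
To make the RSA step concrete, the sketch below compares a deep-network representation with a brain representation by correlating their representational dissimilarity matrices; the feature dimensions and data are placeholders, not project specifications.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Representational dissimilarity matrix (condensed form).

    patterns : array (n_conditions, n_features), e.g. one row per movie
               segment; columns are deep-network features or voxel responses.
    """
    return pdist(patterns, metric='correlation')   # 1 - Pearson r per pair

def rsa_similarity(model_patterns, brain_patterns):
    """Spearman correlation between model and brain RDMs (second-order RSA)."""
    rho, _ = spearmanr(rdm(model_patterns), rdm(brain_patterns))
    return rho


# Toy usage: 20 movie segments, hypothetical feature/voxel dimensions
rng = np.random.default_rng(1)
dnn_features = rng.normal(size=(20, 512))      # e.g. deep-network activations
voxel_patterns = rng.normal(size=(20, 300))    # e.g. one cortical layer / ROI
print(rsa_similarity(dnn_features, voxel_patterns))
```

A higher Spearman correlation would indicate that the model’s feature space and the measured cortical responses order the movie segments in a similar way.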

The work and its natural extensions will help clarify how the early visual cortex contributes to guiding attention in social scene understanding. The student will spend time in both domains: in Artificial Intelligence, studying and analysing state-of-the-art methods in pose estimation and scene understanding; and in brain imaging, learning how to carry out an fMRI study, from data collection and understanding through to analysis methods. Together, the two fields will provide a solid background in both brain imaging and artificial intelligence, teaching the student to transfer skills and draw conclusions across domains.

References:

[Afouras20] Afouras, T., Owens, A., Chung, J. S., & Zisserman, A. (2020). Self-supervised learning of audio-visual objects from video. European Conference on Computer Vision (ECCV 2020).
[Bao19] Bao, W., Lai, W. S., Ma, C., Zhang, X., Gao, Z., & Yang, M. H. (2019). Depth-aware video frame interpolation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 3703-3712).
[Kriegeskorte08] Kriegeskorte, N., Mur, M., & Bandettini, P. A. (2008). Representational similarity analysis-connecting the branches of systems neuroscience. Frontiers in systems neuroscience, 2, 4.
[Oord18] Oord, A. V. D., Li, Y., & Vinyals, O. (2018). Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748.
[Raz19] Raz, G., Valente, G., Svanera, M., Benini, S., & Kovács, A. B. (2019). A Robust Neural Fingerprint of Cinematic Shot-Scale. Projections, 13(3), 23-52.
[Sabokrou18] Sabokrou, M., Pourreza, M., Fayyaz, M., Entezari, R., Fathy, M., Gall, J., & Adeli, E. (2018, December). Avid: Adversarial visual irregularity detection. In Asian Conference on Computer Vision (pp. 488-505). Springer, Cham.

Developing a public-space robot: MuMMER in Glasgow

Supervisors:
Mary Ellen Foster (School of Computing Science) and Jane Stuart-Smith (School of Critical Studies)

The increasing availability of socially intelligent robots with functionality for a range of purposes, from guidance in museums (Gehle et al 2015) to companionship for the elderly (Hebesberger et al 2016), has motivated a growing number of studies attempting to evaluate and enhance Human-Robot Interaction (HRI). But, as Honig and Oron-Gilad’s (2018) review of recent work on understanding and resolving failures in HRI observes, most research has focussed on technical ways of improving robot reliability. They argue that progress requires a ‘holistic approach’ in which ‘[t]he technical knowledge of hardware and software must be integrated with cognitive aspects of information processing, psychological knowledge of interaction dynamics, and domain-specific knowledge of the user, the robot, the target application, and the environment’ (p.16). Honig and Oron-Gilad point to a particular need to improve the ecological validity of evaluating user communication in HRI, by moving away from experimental, single-person environments with low-relevance tasks, mainly involving younger adult users, towards more natural settings with users of different social profiles and communication strategies, where the outcome of successful HRI matters.

This project will combine current advances in the development of real-world social robots with methods and insights from sociolinguistic theory. Specifically, it will make use of the MuMMER robot system, a humanoid robot designed to interact naturally and autonomously in public spaces (Foster et al., 2016; Foster et al., 2019). MuMMER was originally designed to entertain and engage visitors to a shopping mall, thereby enhancing their overall experience in the mall. For a robot to be successful in this context, it must support human-robot interaction that is socially acceptable, helpful and entertaining for multiple, diverse users in a real-world context. The sociolinguistic context for enhancing human-robot interaction in a real-world setting will be Scotland’s largest city, Glasgow, home to a substantial socially and ethnically diverse population, with its own range of distinctive dialects and accents, from broad Glaswegian vernacular to educated Scottish Standard English (e.g. Stuart-Smith 1999; Macaulay 2005), as well as ‘Glaswasian’, spoken by Glasgow’s South Asian heritage communities (e.g. Lambert et al 2007). Glasgow is also one of the most researched dialect areas in the English-speaking world, and so provides a wealth of comparative sociolinguistic material as a basis for the project.

The work on the PhD project will draw on sociolinguistically-informed observational studies of the MuMMER robot deployed in various locations across Glasgow, interacting with users from a range of social, ethnic, and language backgrounds. Based on the findings of these studies, the student will identify necessary technical modifications to the robot’s interaction strategy to respond to and address issues identified when the robot is interacting with a diverse set of users. The modified robot will be deployed in a new set of observational studies; if time permits, this process will be repeated with different deployment locations and different sets of users to ensure that as many Glaswegians as possible are ultimately able to interact comfortably with the robot, whatever their background.

References

  1. Clift, R. (2016). Conversation Analysis. Cambridge: Cambridge University Press.
  2. Coupland, N., Sarangi, S., & Candlin, C. N. (2014). Sociolinguistics and social theory. Routledge.
  3. Foster M.E., Alami, R., Gestranius, O., Lemon, O., Niemela, M., Odobez, J-M., Pandey, A.M. (2016) The MuMMER Project: Engaging Human-Robot Interaction in Real-World Public Spaces. In: Agah A., Cabibihan J., Howard A., Salichs M., He H. (eds) Social Robotics. ICSR 2016. Lecture Notes in Computer Science, vol 9979. Springer, Cham
  4. Foster, M.E. et al. (2019). MuMMER: Socially Intelligent Human-Robot Interaction in Public Spaces. Proceedings of the AAAI Fall Symposium of Artificial Intelligence for Human-Robot Interaction (AI-HRI 2019).
  5. Gehle, R., Pitsch, K., Dankert, T., & Wrede, S. (2015). Trouble-based group dynamics in real-world HRI – Reactions on unexpected next moves of a museum guide robot. In 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2015 (Kobe), 407–412.
  6. Hebesberger, D., Dondrup, C., Koertner, T., Gisinger, C., Pripfl, J. (2016). Lessons learned from the deployment of a long-term autonomous robot as companion in physical therapy for older adults with dementia: A mixed methods study. In: The Eleventh ACM/IEEE International Conference on Human Robot Interaction, 27–34.
  7. Honig, S., & Oron-Gilad, T. (2018). Understanding and Resolving Failures in Human-Robot Interaction: Literature Review and Model Development. Frontiers in Psychology, 9, 861.
  8. Lambert, K., Alam, F., & Stuart-Smith, J. (2007). Investigating British Asian accents: studies from Glasgow. In J. Trouvain & W. Barry (Eds.), 16th International Congress of Phonetic Sciences (Issue August, pp. 1509–1512). Universität des Saarlandes.
  10. Macaulay, R. K. S. (2005). Talk that Counts: Age, Gender, and Social Class Differences in Discourse. Oxford: Oxford University Press.
  10. Stuart-Smith, J. (1999). Glasgow: Accent and voice quality. In P. Foulkes & G. J. Docherty (Eds.), Urban voices: Accent Studies in the British Isles (pp. 203–222). Arnold.

Digital user representations and perspective taking in mediated communication

Supervisors:
Dale Barr (School of Psychology) and Mary Ellen Foster (School of Computing Science)

Human social interaction is increasingly mediated by technology, with many of the signals present in traditional face-to-face interaction being replaced by digital representations (e.g., avatars, nameplates, and emojis). To communicate successfully, participants in a conversational interaction must keep track of the identities of their co-participants, as well as the “common ground” they share with each—the dynamically changing set of mutually held beliefs, knowledge, and suppositions. Perceptual representations of interlocutors may serve as important memory cues to shared information in communicative interaction (Horton & Gerrig, 2016; O’Shea, Martin, & Barr, 2021). Our main question concerns how digital representations of users across different interaction modalities (text, voice, video chat) influence the development of and access to common ground during communication.

To examine the impact of digital user representations on real-time language production and comprehension, the project will use a variety of behavioral methods including visual world eye-tracking (Tanenhaus, et al. 1995), latency measures, as well as analysis of speech/text content. In the first phase of the project, we will examine how well people can keep track of who said what during a discourse depending on the abstract versus rich nature of user representations (e.g., from abstract symbols to dynamic avatar-based user representations), and how these representations impact people’s ability to tailor messages to their interlocutors, as well as to correctly interpret a communicator’s intended meaning. For example, in one such study, we will test participants’ ability to track “conceptual pacts” (Brennan & Clark, 1996) with a pair of interlocutors during an interactive task where each partner appears (1) through a video stream; (2) as an animated avatar; or (3) as a static user icon. In the second phase, we will examine whether the nature of the user representation during encoding affects the long-term retention of common ground information.
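
As an illustration of the visual-world analysis pipeline, the sketch below aggregates gaze samples into fixation proportions per interest area over time; the column names and bin size are hypothetical.

```python
import pandas as pd

def fixation_proportions(samples, bin_ms=50):
    """Proportion of gaze samples on each interest area over time.

    samples : DataFrame with columns
              'trial' - trial identifier (kept for completeness)
              'time'  - ms from the onset of the critical word
              'roi'   - interest area the gaze falls in
                        (e.g. 'target', 'competitor', 'other')
    Returns a table of proportions per time bin and interest area.
    """
    samples = samples.copy()
    samples['bin'] = (samples['time'] // bin_ms) * bin_ms
    counts = samples.groupby(['bin', 'roi']).size().unstack(fill_value=0)
    return counts.div(counts.sum(axis=1), axis=0)


# Toy usage with a few hand-made samples
samples = pd.DataFrame({
    'trial': [1, 1, 1, 1, 2, 2, 2, 2],
    'time':  [10, 60, 110, 160, 20, 70, 120, 170],
    'roi':   ['other', 'other', 'target', 'target',
              'other', 'target', 'target', 'target'],
})
print(fixation_proportions(samples))
```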

In support of the behavioural experiments, this project will also involve developing a range of conversational agents, both embodied and speech-only, and defining appropriate behaviour models to allow those agents to take part in the studies. The defined behaviour will incorporate both verbal interaction as well as non-verbal actions, to replicate the full richness of human face-to-face conversation (Foster, 2019; Bavelas et al., 1997). Insights and techniques developed during the project are intended to improve interfaces for computer-mediated human communication.

References

  1. Bavelas, J. B., Hutchinson, S., Kenwood, C., & Matheson, D. H. (1997). Using Face-to-face Dialogue as a Standard for Other Communication Systems. Canadian Journal of Communication, 22(1).
  2. Brennan, S. E., & Clark, H. H. (1996). Conceptual pacts and lexical choice in conversation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 1482.
  3. Foster, M. E. (2019). Face-to-face conversation: why embodiment matters for conversational user interfaces. Proceedings of the 1st International Conference on Conversational User Interfaces – CUI ’19. the 1st International Conference.
  4. Horton, W. S., & Gerrig, R. J. (2016). Revisiting the memory‐based processing approach to common ground. Topics in Cognitive Science, 8, 780-795.
  5. O’Shea, K. J., Martin, C. R., & Barr, D. J. (2021). Ordinary memory processes in the design of referring expressions. Journal of Memory and Language, 117, 104186.
  6. Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M., & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268, 1632-1634.

Enhancing Social Interactions via Physiologically-Informed AI

Supervisors:
Marios Philiastides (School of Psychology) and Alessandro Vinciarelli (School of Computing Science).

Over the past few years, major developments in machine learning (ML) have enabled important advancements in artificial intelligence (AI). First, the field of deep learning (DL) – which has enabled models to learn complex input-output functions (e.g. pixels in an image mapped onto object categories) – has emerged as a major player in this area. DL builds upon neural network theory and design architectures, expanding these in ways that enable more complex function approximations.

The second major advance in ML has combined DL with reinforcement learning (RL) – in what is often referred to as deep reinforcement learning (DRL) – to enable new AI systems that learn state-action policies and enhance human performance in complex tasks. Despite these advancements, however, critical challenges still exist in incorporating AI into a team with human(s).

One of the most important challenges is the need to understand how humans value intermediate decisions (i.e. before they generate a behaviour) through internal models of their confidence, expected reward, risk, etc. Critically, such information about human decision-making is not only expressed through overt behaviour, such as speech or action, but more subtly through physiological changes and small changes in facial expression and posture. Socially and emotionally intelligent people are excellent at picking up on this information to infer the current disposition of one another and to guide their decisions and social interactions.

In this project, we propose to develop a physiologically-informed AI platform, utilizing neural and systemic physiological information (e.g. arousal, stress) ([Fou15][Pis17][Ghe18]) together with affective cues from facial features ([Vin09][Bal16]) to infer latent cognitive and emotional states from humans interacting in a series of social decision-making tasks (e.g. trust game, prisoner’s dilemma, etc.). Specifically, we will use these latent states to generate rich reinforcement signals to train AI agents (specifically DRL agents) and allow them to develop a “theory of mind” ([Pre78][Fri05]) in order to make predictions about upcoming human behaviour. The ultimate goal of this project is to deliver advancements towards “closing the loop”, whereby the AI agent feeds back its own predictions to the human players in order to optimise behaviour and social interactions.
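
One way to picture the proposed reinforcement signal is as reward shaping, where decoded human states modulate the reward the DRL agent learns from. The sketch below is a minimal illustration under that assumption; the signal names and weights are placeholders, not the project’s actual model.

```python
def shaped_reward(task_reward, decoded_confidence, decoded_arousal,
                  w_conf=0.5, w_arousal=-0.2):
    """Fold decoded human states into the reinforcement signal of a DRL agent.

    task_reward        : payoff of the joint decision (e.g. trust-game outcome)
    decoded_confidence : estimate in [0, 1] decoded from EEG/physiology
    decoded_arousal    : estimate in [0, 1] of stress/arousal
    The weights are placeholders to be tuned empirically.
    """
    return task_reward + w_conf * decoded_confidence + w_arousal * decoded_arousal


# Example: the human cooperated (task reward 1.0), but the decoded signals
# suggest low confidence and high stress, so the shaped signal is reduced.
print(shaped_reward(task_reward=1.0, decoded_confidence=0.2, decoded_arousal=0.8))
# In a full system this value would replace the raw task reward in the
# agent's DRL update (e.g. the reward term of a Q-learning or policy-gradient step).
```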

[Ghe18] S Gherman, MG Philiastides, “Human VMPFC encodes early signatures of confidence in perceptual decisions”, eLife, 7: e38293, 2018.

[Pis17] MA Pisauro, E Fouragnan, C Retzler, MG Philiastides, “Neural correlates of evidence accumulation during value-based decisions revealed via simultaneous EEG-fMRI”, Nature Communications, 8: 15808, 2017.

[Fou15] E Fouragnan, C Retzler, KJ Mullinger, MG Philiastides, “Two spatiotemporally distinct value systems shape reward-based learning in the human brain”, Nature Communications, 6: 8107, 2015.

[Vin09] A. Vinciarelli, M. Pantic, and H. Bourlard, “Social Signal Processing: Survey of an Emerging Domain”, Image and Vision Computing Journal, Vol. 27, no. 12, pp. 1743-1759, 2009.

[Bal16] T.Baltrušaitis, P.Robinson, and L.-P. Morency. “Openface: an open source facial behavior analysis toolkit.” Proceedings of the IEEE Winter Conference on Applications of Computer Vision, 2016.

[Pre78] D. Premack, G. Woodruff, “Does the chimpanzee have a theory of mind?”, Behavioral and brain sciences Vol. 1, no. 4, pp. 515-526, 1978.

[Fri05] C. Frith, U. Frith, “Theory of Mind”, Current Biology Vol. 15, no. 17, R644-646, 2005.

Fashion Analytics Based on Deep Learning Visual Processing

Supervisors:
Marco Cristani (Humatics Srl) and Alessandro Vinciarelli (School of Computing Science)

Understanding and anticipating future trends is crucial for fashion companies looking to maximise their profit. Many machine learning approaches have been devoted to fashion forecasting, all of them with a strong limitation: they model fashion styles as sets of textual attributes. For example, “dotted t-shirt with skinny jeans” defines an outfit which may correspond to many real outfits, since it misses the colour, the size of the dots, the type of the neckline, etc. The description does not capture the crucial part: the appearance. A picture is worth a thousand words, especially when it comes to fashion, where subtle, fine-grained variations of a pattern may define a style. A few moments are enough to distinguish a female outfit of 1920 from one of recent years, yet both may share the same textual description: “below-knee length drop-waist dresses with a loose, straight fit” describes a 1920s style, but when copy-pasted into Google it leads to Zalando’s contemporary products! The devil is in the detail, and this detail is visual and cannot be described by words.

With this PhD project, we want to model fashion by exploiting visual patterns, as if they were letters of a new artistic vocabulary, within deep network architectures. Deep learning makes it possible to map complicated patterns, including images, into a mathematical space without the need for words. In this space, similarities can be computed which are far more effective than written descriptions, clearly differentiating the latest trends from those of a century ago. Deep learning is particularly effective when large amounts of data are available, and fashion nowadays comes together with social media, where images have become the new oil of communication, presenting clothing items in pictures and videos at a pace of hundreds of thousands of items each day.

This is the scenario in which the project is located: the PhD will deal with fashion images collected on social media, in order to give deep learning the capability of understanding a style. Finally, the PhD will aim at forecasting fashion trends, predicting the rise and fall of a particular visual trend. This will be made possible by social signal processing, which treats the images together with the “likes” associated with them, predicting when an image of a clothing item will become viral and identifying which images are most important in defining a trend, its rise and its fall. The PhD will put the student in contact with Humatics, a young Italian start-up currently working with important fast-fashion companies such as Nunalie and Sirmoney, providing forecasting services, and looking to international collaborations to improve their services and to create specialised professionals in the field of computational fashion and aesthetics.
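
As a minimal illustration of mapping outfits into such a space, the sketch below embeds images with a pretrained backbone and compares them by cosine similarity (assuming PyTorch/torchvision ≥ 0.13; the model choice and file names are hypothetical, not the project’s actual pipeline).

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained backbone used as a generic visual feature extractor
# (ResNet-18 is illustrative; any modern backbone would do).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()        # keep the 512-d embedding
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path):
    """Map an outfit photo to a unit-norm point in the embedding space."""
    img = preprocess(Image.open(path).convert('RGB')).unsqueeze(0)
    return torch.nn.functional.normalize(backbone(img), dim=1)

# Cosine similarity between two hypothetical outfit photos:
# values near 1 indicate visually similar styles.
# sim = (embed('outfit_1920s.jpg') * embed('outfit_2021.jpg')).sum().item()
```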

Sharing the road: Cyclists and automated vehicles

Supervisors:
Steve Brewster (School of Computing Science) and Frank Pollick (School of Psychology).

Automated vehicles must share the road with pedestrians and cyclists, and drive safely around them. Autonomous cars, therefore, must have some form of social intelligence if they are to function correctly around other road users. There has been work looking at how pedestrians may interact with future autonomous vehicles [ROT15] and potential solutions have been proposed (e.g. displays on the outside of cars to indicate that the car has seen the pedestrian). However, there has been little work on automated cars and cyclists.

When there is no driver in the car, social cues such as eye contact, waving, etc., are lost [ROT15]. This changes the social interaction between the car and the cyclist, and may cause accidents if it is no longer clear, for example, who should proceed. Automated cars also behave differently to cars driven by humans, e.g. they may appear more cautious in their driving, which the cyclist may misinterpret. The aim of this project is to study the social cues used by drivers and cyclists, and create multimodal solutions that can enable safe cycling around autonomous vehicles.

The first stage of the work will be observation of the communication between human drivers and cyclists through literature review and fieldwork. The second stage will be to build a bike into our driving simulator [MAT19] so that we can test interactions between cyclists and drivers safely in a simulation.

We will then start to look at how we can facilitate the social interaction between autonomous cars and cyclists. This will potentially involve visual displays on cars, or audio feedback from them, to indicate state information to nearby cyclists (e.g. whether they have been detected, or whether the car is letting the cyclist go ahead). We will also investigate interactions and displays for cyclists, for example multimodal displays in cycling helmets [MAT19] that give them information about car state (which could be collected by V2X software on the cyclist’s phone, for example), or ways of communicating directly with the car via input on the handlebars or gestures. These will be experimentally tested in the simulator and, if we have time, in highly controlled real driving scenarios.
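
A minimal sketch of how car-state information received on the cyclist’s phone might be mapped to helmet cues is shown below; the message fields and cue choices are purely illustrative and would be determined by the experiments described above.

```python
# Hypothetical car-state message a cyclist's phone might receive over V2X;
# field names and cue choices are illustrative only.
CUES = {
    'cyclist_detected': {'audio': 'soft chime',   'haptic': 'single pulse'},
    'yielding':         {'audio': 'rising tone',  'haptic': 'double pulse'},
    'not_detected':     {'audio': 'warning tone', 'haptic': 'continuous buzz'},
}

def helmet_cues(car_state: dict) -> dict:
    """Choose multimodal helmet cues from a decoded car-state message."""
    if not car_state.get('cyclist_detected', False):
        return CUES['not_detected']
    if car_state.get('yielding', False):
        return CUES['yielding']
    return CUES['cyclist_detected']

print(helmet_cues({'cyclist_detected': True, 'yielding': True}))
```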

The output of this work will be a set of new techniques to support the social interaction between autonomous vehicles and cyclists. We currently work with companies such as Jaguar Land Rover and Bosch, and our results will have direct application in their products.

[ROT15] Rothenbucher, D., Li, J., Sirkin, D. and Ju, W., Ghost driver: a platform for investigating interactions between pedestrians and driverless vehicles, Adjunct Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications, pp. 44–49, 2015.

[MAT19] Matviienko, A., Brewster, S., Heuten, W. and Boll, S. Comparing unimodal lane keeping cues for child cyclists (https://doi.org/10.1145/3365610.3365632), Proceedings of the 18th International Conference on Mobile and Ubiquitous Multimedia, 2019.

Social and Behavioural Markers of Hydration States

Supervisors:
Esther K. Papies (School of Psychology) and Matthew Chalmers (School of Computing Science)

Aims and Objectives.  This project will explore whether data derived from a person’s smartphone can be used to establish that person’s hydration status so that, in a well-guided and responsive way, a system can prompt the person to drink water.  Many people are frequently underhydrated, which has negative physical and mental health consequences.  Low hydration states can manifest in impaired cognitive and physical performance, experiences of fatigue or lethargy, and negative affect (e.g., Muñoz et al., 2015; Perrier et al., 2020).  Here, we will establish whether such social and behavioural markers of dehydration can be inferred from a user’s smartphone, and which of these markers, or their combination, are the best predictors of hydration state (Aim 1).  Sophisticated user models of hydration states could also be adapted over time, and help to predict possible instances of dehydration in advance (Aim 2).  This would be useful because many individuals find it difficult to identify when they need to drink, and could benefit from clear, personalized indicators of dehydration.  In addition, smartphones could then be used to prompt users to drink water, once a state of dehydration has been detected, or when dehydration is likely to occur.  Thus, we will also test how hydration information should be communicated to users to prompt attitude and behaviour change and, ultimately, improve hydration behaviour (Aim 3).  Throughout, we will implement data collection, modelling, and feedback on smartphones in a secure way that respects and protects a user’s privacy.

Background and Novelty.  The data that can be derived from smartphones (and related digital services) ranges from low-level sensor data (e.g. accelerometers) to patterns of app usage and social interaction. As such, ‘digital phenotyping’ is a rich source of information on an individual’s social and physical behaviours, and affective states. Recent surveys of this burgeoning field include Thieme et al. on machine learning in mental health (2020), Chancellor and De Choudhury on using social media data to predict mental health status (2020), Melcher et al. on digital phenotyping of college students (2020), and Kumar et al. on toolkits and frameworks for data collection (2020).

Here, we propose that these types of data may also reflect a person’s hydration state. Part of the project’s novelty lies in its exploration of a wider range of phone-derived data as a resource for system agency than prior work in this general area, as well as in its pioneering focus specifically on hydration.  We will relate cognitive and physical performance, fatigue, lethargy and affect to patterns in phone-derived data.  We will test whether such data can be harnessed to provide people with personalized, external, actionable indicators of their physiological state, i.e. to facilitate useful behaviour change. This would have clear advantages over existing indicators of dehydration, such as thirst cues or urine colour, which are easy to ignore or override, and/or difficult for individuals to interpret (Rodger et al, 2020).

Methods.  We will build on an existing mobile computing framework (e.g. AWARE-Light) to collect reports of a participant’s fluid intake, and to integrate them with phone-derived data.  We will attempt to model users’ hydration states, and validate this against self-reported thirst and urine frequency, and self-reported and photographed urine colour (Paper 1).  We will then examine in prospective studies if these models can be used to predict future dehydration states (Paper 2).  Finally, we will examine effective ways to provide feedback and prompt water drinking, based on individual user models (Paper 3).
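
As a rough illustration of the modelling step in Paper 1, the sketch below fits a simple classifier that predicts a binary hydration label from phone-derived features; the features, labels and data here are placeholders, not the project’s actual variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical daily feature vectors derived from the phone (e.g. typing
# speed, step count, screen time, app switches) and labels derived from
# self-reported urine colour (1 = likely underhydrated).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                          # placeholder features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 200) > 0).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(model, X, y, cv=5)            # within-sample validation only
print(scores.mean())
```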

Outputs.  This project will lead to presentations and papers at both Computer Science and Psychology conferences outlining the principles of using sensing data to understand physiological states, and to facilitate health behaviour change.

Impact.  Results from this work will have implications for the use of a broad range of data in health behaviour interventions across domains, as well as for our understanding of the processes underlying behaviour change. This project would also outline new research directions for studying the effects of hydration in daily life.

References

Chancellor, S., & De Choudhury, M. (2020). Methods in predictive techniques for mental health status on social media: a critical review. Npj Digital Medicine, 3(1), 1–11. http://doi.org/10.1038/s41746-020-0233-7

Melcher, J., Hays, R., & Torous, J. (2020). Digital phenotyping for mental health of college students: a clinical review. Evidence Based Mental Health, 4, ebmental–2020–300180–6. http://doi.org/10.1136/ebmental-2020-300180

Muñoz, C. X., Johnson, E. C., McKenzie, A. L., Guelinckx, I., Graverholt, G., Casa, D. J., … Armstrong, L. E. (2015). Habitual total water intake and dimensions of mood in healthy young women. Appetite, 92, 81–86. https://doi.org/10.1016/j.appet.2015.05.002

Rodger, A., Wehbe, L., & Papies, E. K. (2020). “I know it’s just pouring it from the tap, but it’s not easy”: Motivational processes that underlie water drinking. Under Review. https://psyarxiv.com/grndz

Perrier, E. T., Armstrong, L. E., Bottin, J. H., Clark, W. F., Dolci, A., Guelinckx, I., Iroz, A., Kavouras, S. A., Lang, F., Lieberman, H. R., Melander, O., Morin, C., Seksek, I., Stookey, J. D., Tack, I., Vanhaecke, T., Vecchio, M., & Péronnet, F. (2020). Hydration for health hypothesis: A narrative review of supporting evidence. European Journal of Nutrition. https://doi.org/10.1007/s00394-020-02296-z

Thieme, A., Belgrave, D., & Doherty, G. (2020). Machine Learning in Mental Health. ACM Transactions on Computer-Human Interaction (TOCHI), 27(5), 1–53. http://doi.org/10.1145/3398069

Situating mobile interventions for healthy hydration habits

Supervisors:
Esther Papies (School of Psychology) and Matthew Chalmers (School of Computing Science)

Aims and Objectives.  This project will examine which kinds of data are best used to integrate a digital mobile health intervention into a user’s daily life, so as to lead to habit formation.  Previous research has shown that just-in-time adaptive interventions (JITAIs) are more effective than statically controlled interventions (Wang & Miller, 2020).  In other words, health interventions are more likely to lead to behaviour change if they are well situated, i.e., with agency adapted to specific user characteristics, and applied in situations where behaviour change should happen.  However, there is limited evidence on how best to design JITAIs for health apps, so as to create artificial agents that lead to lasting behaviour change through novel habit formation.  In addition, there is no systematic evidence as to which features of situations a health app should use to support a user in performing a healthy behaviour (e.g., time of day, location, mood, activity pattern, social context).  We will address these issues in the under-researched domain of hydration behaviours.  The aim is to establish—given the same intervention—which type of contextual data, or which heterogeneous mix of types of data, is most effective at increasing water consumption, and at establishing situated water drinking habits that persist when the initial engagement with the intervention has ceased.

Background and Novelty.  Mobile health interventions are a powerful new tool in the domain of individual health behaviour change.  Health apps can reach large numbers of users at relatively low cost, and can be tailored to an individual’s health goals and adapted to support users in specific, critical situations.  Identifying the right contextual features to trigger an intervention is critical, because context plays a key role both in triggering unhealthy behaviours, and in developing habits that support the long-term maintenance of healthy behaviours.  A particular challenge, which existing theories typically don’t yet address, lies in the dynamic nature of health behaviours and their contextual triggers, and in establishing how these behaviours and contexts can best be monitored (Nahum-Shani et al., 2018).  This project will take on these challenges in the domain of hydration, because research suggests that many adults may be chronically dehydrated, with implications for cognitive functioning, mood, and physical health (e.g., risk of diabetes, overweight, kidney damage; see Muñoz et al., 2015; Perrier et al., 2020). Our previous work has shown that healthy hydration is associated with drinking water habitually across many different situations each day (Rodger et al., 2020).  This underlines the particular importance of establishing dynamic markers of situations that are cognitively associated with healthy behaviours so that they can support habit formation.

Methods.  (1) We will examine the internal (e.g., motivation, mood, interoception) and external (e.g., time of day, location, activity pattern, social context) markers of situations in which high water drinkers consume water, using objective intake monitors.  Then, integrating these findings with theory on habit formation and motivated behaviour (Papies et al., 2020), and using an existing app platform (e.g. AWARE-Light), (2) we will test which types of data or mixes of data types are most effective in an intervention to increase water consumption in a sample of low water drinkers in the short term, and (3) whether those same data types are effective at creating hydration habits that persist in the longer term.
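
To illustrate the kind of decision rule a JITAI might apply at each decision point, the sketch below combines a few contextual markers into a prompt/no-prompt choice; the features and thresholds are hypothetical and are exactly what steps (2) and (3) would test empirically.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """A snapshot of contextual markers at a potential decision point."""
    hours_since_last_drink: float
    at_usual_drinking_location: bool   # e.g. kitchen, desk, gym
    in_meeting: bool                   # inferred from calendar / phone use
    local_hour: int

def should_prompt(ctx: Context) -> bool:
    """Just-in-time rule: prompt only in receptive, habit-relevant moments.

    The rule and thresholds are illustrative; the project would compare
    different feature sets (internal vs. external markers) empirically.
    """
    if ctx.in_meeting or not (8 <= ctx.local_hour <= 22):
        return False                   # avoid unreceptive moments
    return ctx.hours_since_last_drink > 2 and ctx.at_usual_drinking_location

print(should_prompt(Context(3.0, True, False, 14)))   # -> True
```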

Outputs.  This project will lead to presentations and papers of three quantitative subprojects at both Computer Science and Psychology conferences, as well as a possible qualitative contribution on the dynamic nature of habit formation.

Impact.  Results from this work will have implications for the design of health behaviour interventions across domains. This work will further contribute to the emerging theoretical understanding of the formation and context sensitivity of the cognitive processes that support healthy habits.  It will explore how sensing and adaptive user modelling can situate both user and AI system in a common contextual frame, and whether this facilitates engagement and behaviour change.

References:

  1. Muñoz, C. X., Johnson, E. C., McKenzie, A. L., Guelinckx, I., Graverholt, G., Casa, D. J., … Armstrong, L. E. (2015). Habitual total water intake and dimensions of mood in healthy young women. Appetite, 92, 81–86. https://doi.org/10.1016/j.appet.2015.05.002
  2. Nahum-Shani, I., Smith, S. N., Spring, B. J., Collins, L. M., Witkiewitz, K., Tewari, A., & Murphy, S. A. (2018). Just-in-Time Adaptive Interventions (JITAIs) in Mobile Health: Key Components and Design Principles for Ongoing Health Behavior Support. Annals of Behavioral Medicine: A Publication of the Society of Behavioral Medicine, 52(6), 446–462. https://doi.org/10.1007/s12160-016-9830-8
  3. Papies, E. K., Barsalou, L. W., & Rusz, D. (2020). Understanding Desire for Food and Drink: A Grounded-Cognition Approach. Current Directions in Psychological Science, 29(2), 193–198. https://doi.org/10.1177/0963721420904958
  4. Perrier, E. T., Armstrong, L. E., Bottin, J. H., Clark, W. F., Dolci, A., Guelinckx, I., Iroz, A., Kavouras, S. A., Lang, F., Lieberman, H. R., Melander, O., Morin, C., Seksek, I., Stookey, J. D., Tack, I., Vanhaecke, T., Vecchio, M., & Péronnet, F. (2020). Hydration for health hypothesis: A narrative review of supporting evidence. European Journal of Nutrition. https://doi.org/10.1007/s00394-020-02296-z
  5. Rodger, A., Wehbe, L., & Papies, E. K. (2020). “I know it’s just pouring it from the tap, but it’s not easy”: Motivational processes that underlie water drinking. Under Review. https://psyarxiv.com/grndz
  6. Wang, L., & Miller, L. C. (2020). Just-in-the-Moment Adaptive Interventions (JITAI): A Meta-Analytical Review. Health Communication, 35(12), 1531–1544. https://doi.org/10.1080/10410236.2019.1652388

To style-shift is human? Testing the limits of adaptable conversational interfaces

Supervisors:
Jane Stuart-Smith (School of Critical Studies) and Mary Ellen Foster (School of Computing Science)

An important element of successful human interaction is speakers’ ability to adapt their talk in response to social context and their human interlocutors (Giles et al 1991). Sociolinguistic research on context-driven style-shifting in spoken language shows a range of adaptive linguistic behaviours, from switching to and from standard/non-standard dialects to fine-grained variation in speech sounds, rate and pitch (Coupland 2007). It also reveals limits on socially appropriate accommodation: imagine the social consequences for a standard Southern English speaker trying to adopt Glaswegian accent features in an informal context like a bar; linguistic hyper-correction in highly formal contexts such as interviews can be equally problematic (Labov 1966).

A generally-held view is that enabling machines to communicate more effectively with humans, as humans do with each other, requires artificial agents also to be responsive, adapting to their human interlocutors in ways which are appropriate for the specific communication goal (e.g., Axelsson and Skantze 2020). However, the development of socially acceptable, adaptive AI requires information about human-AI interaction which is currently lacking. For linguistic human-AI accommodation in particular, we need to know:

  1. How do humans style-shift when speaking to artificial agents/conversational interfaces (e.g. digital assistants such as Alexa, Siri etc)? The few small-scale studies which exist suggest that humans may adapt to artificial agents, but little is known for certain (e.g. Staum-Casasanto et al 2010; Ferenc et al 2019)
  2. What is the impact of human style-shifting on communicative efficacy in human-AI interaction, especially shifts to and from standard dialects? Early work suggests that artificial agents are poorly equipped to deal with non-standard dialects such as African-American Vernacular English (Lopez-Lloreda, 2020). No formal evidence exists for style-shifting involving non-standard dialects in the UK context.
  3. What kinds of adaptive linguistic behaviours from artificial agents are socially acceptable and/or communicatively effective for humans (Cassell, 2009)? Are there limits on what human interlocutors can tolerate in terms of linguistic adaptation in artificial agents?

This project will work in a specific sociolinguistic context – the Central Belt of Scotland, with a well-known standard/non-standard dialect continuum and predictable context-induced style-shifting (e.g. Macaulay 2005). It will model human-artificial agent interaction in the context of a specific communicative goal, e.g. a service encounter, incorporating a set number of conversational pragmatic routines bounded by opening and closing speech acts (Clift 2016). The project will consist of a series of experiments which test human style-shifting in interaction with a custom-developed and continually refined conversational agent; in particular, the experiments will measure the impact of adapting the agent’s linguistic responses on both social acceptability and communicative efficacy.
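
As a simple illustration of the experimental manipulation, the sketch below shows an agent selecting between a standard and a local (Glaswegian-style) variant of the same pragmatic routine; the wordings and routine labels are illustrative placeholders, not validated stimuli.

```python
import random

# Illustrative paired variants of the same pragmatic routines; the real
# experiments would use carefully controlled, linguistically validated stimuli.
ROUTINES = {
    'greeting': {'standard': 'Good afternoon, how can I help you?',
                 'local':    'Awright, whit can ah dae for ye?'},
    'closing':  {'standard': 'Thank you, goodbye.',
                 'local':    'Cheers, see ye later.'},
}

def agent_turn(routine: str, condition: str) -> str:
    """Pick the agent's wording for a routine under a given style condition."""
    return ROUTINES[routine][condition]

# Each participant is assigned a style condition; the agent's responses are
# then logged and rated for social acceptability and communicative efficacy.
condition = random.choice(['standard', 'local'])
print(agent_turn('greeting', condition))
```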

References

  1. Axelsson, N. and Skantze, G. Using knowledge graphs and behaviour trees for feedback-aware presentation agents. Proceedings of Intelligent Virtual Agents 2020. https://doi.org/10.1145/3383652.3423884
  2. Cassell J. (2009) Social Practice: Becoming Enculturated in Human-Computer Interaction. UAHCI 2009. https://doi.org/10.1007/978-3-642-02713-0_32
  3. Clift, R. (2016). Conversation Analysis. Cambridge: Cambridge University Press.
  4. Coupland, N. (2007). Style: Language Variation and Identity. CUP.
  5. Ferenc, B., Cohn, M., & Zellou, G. (2019). Perceptual adaptation to device and human voices: learning and generalization of a phonetic shift across real and voice-AI talkers. https://doi.org/10.21437/Interspeech.2019-1433
  6. Giles, H., Coupland, N., & Coupland, J. (1991). Contexts of accommodation. CUP.
  7. Labov, W. (1966). The effect of social mobility on linguistic behavior. Sociological Inquiry, 36(2), 186–203.
  8. Lopez-Lloreda, C. (2020). How Speech-Recognition Software Discriminates against Minority Voices. Scientific American, 323(4).
  9. Macaulay, R. K. S. (2005). Talk that Counts: Age, Gender, and Social Class Differences in Discourse. Oxford: Oxford University Press.
  10. Staum Casasanto, L., Jasmin, K., & Casasanto, D. (2010). Virtual accommodating: Speech rate accommodation to a virtual interlocutor. In S. Ohlsson & R. Catrambone (Eds.), Proc. 32nd Ann Conf Cog Sci Soc., Austin (pp. 127–132).

Towards modelling of biological and artificial perspective taking

Supervisors:
Lars Muckli (School of Psychology) and Michele Sevegnani (School of Computing Science)

Context and objectives
Visual imagery, i.e. the ability to form a visual representation of unseen stimuli, is a fundamental developmental step in social cognition. Being able to take the perspective of another observer is the focus of classic paradigms in theory-of-mind research such as Piaget’s landscape task: overcoming an egocentric world view is achieved around the age of 4, when children learn to simulate another person’s perspective on a visual scene and imagine what is in sight of that person (Piaget, 2013).

Visual imagery might be one of the cognitive processes supported by extensive feedback connections from higher order areas and other modalities to the visual system (Clavagnier et al., 2004), as evidenced by the fact that sound content can be decoded from brain activity patterns in the early visual cortex of blindfolded participants (Vetter et al. 2014). Preliminary data from Muckli’s lab also suggests that this result cannot be reproduced in aphantasic participants who report an inability to generate visual imagery (Zeman, Dewar and Della Sala, 2015).

Our project aims to further explore the neural correlates of visual imagery and aphantasia by using neural decoding techniques, which allow the reconstruction of perceived features from human functional magnetic resonance imaging (fMRI) data (Raz et al, 2017). This method will allow us to detect representational networks shared between visual imagery and actual visual perception of the same objects, to test whether these networks are shared across participants, and to test whether they differ between aphantasics and non-aphantasics.

Proposed methods and expected results
We will use Ultra High Field fMRI to read brain activity while participants (aphantasics and non-aphantasics) are presented with either single-sentence descriptions of object categories (e.g. “a red chair”) or different visual exemplars from the same categories.

Our hypotheses are that, in the visual system, representations of the same categories (1) will be generalizable between the auditory and visual conditions for the non-aphantasic group, but not for the aphantasic group, and (2) will be less generalizable across aphantasics than non-aphantasics in the auditory condition; (3) the previous two points should allow us to discriminate between aphantasic and non-aphantasic participants.
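
Hypothesis (1) could be tested with cross-condition decoding: train a classifier on visual-condition activity patterns and test it on auditory-condition patterns. The sketch below illustrates this under the assumption of preprocessed pattern matrices; the data here are random placeholders.

```python
import numpy as np
from sklearn.svm import LinearSVC

def cross_decoding_accuracy(X_train, y_train, X_test, y_test):
    """Train on one condition, test on the other (cross-condition decoding)."""
    clf = LinearSVC(max_iter=5000)
    clf.fit(X_train, y_train)
    return clf.score(X_test, y_test)

# Placeholder data: voxel patterns for 5 object categories x 20 trials each.
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(5), 20)
visual_patterns   = rng.normal(size=(100, 500))   # responses to seen exemplars
auditory_patterns = rng.normal(size=(100, 500))   # responses to spoken descriptions

# Above-chance accuracy would indicate a shared (imagery-like) representation;
# the prediction is that such transfer is reduced or absent in aphantasics.
print(cross_decoding_accuracy(visual_patterns, labels,
                              auditory_patterns, labels))
```
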
In Human-Computer Interaction (HCI), we recently developed computational models capable of representing physical and virtual space, solving the problem of how to recognise virtual spatial regions starting from the detected physical position of the users (Benford et al., 2016). We used these models to investigate cognitive dissonance, namely the inability or difficulty to interact with the virtual environment. In this project, we will adapt these computational models and apply them to cognitive processes to test hypotheses 1-3. The end goal is to embed them within AI agents to enable empathic-seeming behaviours.

Impact for artificial social intelligence
Our proposal is relevant to the future development of artificial agents with and without imagery, which can be created and contrasted, not only making AI more human-like, but also adding the layer of complexity that imagery-based representations provide. We outline a number of key questions where we hypothesize that imagery has a function in social cognition, and where imagery-based artificially intelligent machines can be applied to social phenomena: To what extent is visual imagery an advantage in social AI? Can an agent simulate the perspective another agent has on a view, and match that perspective?

References

  1. Clavagnier, S., Falchier, A. & Kennedy, H. (2004) “Long-distance feedback projections to area V1: Implications for multisensory integration, spatial awareness, and visual consciousness”, Cognitive, Affective, & Behavioral Neuroscience 4, 117–126
  2. Piaget, J. (2013). Child’s Conception of Space: Selected Works vol 4 (Vol. 4). Routledge.
  3. Vetter, P., Smith, F.W., and Muckli, L. (2014), “Decoding Sound and Imagery Content in the Early Visual Cortex”, Current Biology, 24, 1256-1262.
  4. Zeman, A., Dewar, M., and Della Sala, S. (2015) “Lives without imagery — Congenital aphantasia”, Cortex, 73, 378-380.
  5. Raz, G., Svanera, M., Singer, N., Gilam G., Bleich, M., Lin, T., Admon, R., Gonen, T., Thaler, A., Granot, R.Y., Goebel, R., Benini, S., Valente, G. (2017) “Robust inter-subject audiovisual decoding in functional magnetic resonance imaging using high-dimensional regression”, Neuroimage, 163, 244-263
  6. Benford, S., Calder, M., Rodden, T., & Sevegnani, M. (2016). On lions, impala, and bigraphs: Modelling interactions in physical/virtual spaces. ACM Transactions on Computer-Human Interaction (TOCHI), 23(2), 9.

Social Interaction via Touch-Interactive Volumetric 3D Virtual Agents

Supervisors:
Ravinder Dahiya (School of Engineering) and Philippe Schyns (School of Psychology)

Vision- and touch-based interactions are fundamental modes of interaction between humans, and between humans and the real world. Several portable devices use these modes to display gestures that communicate social messages such as emotions. Recently, non-volumetric 3D displays have attracted considerable interest because they give users a 3D visual experience – for example, 3D movies provide viewers with a perceptual sensation of depth via a pair of glasses. Using a newly developed haptics-based holographic 3D volumetric display, this project will develop new forms of social interaction with virtual agents. Unlike various VR tools that require headsets (which can lead to motion sickness), here the interaction with 3D virtual objects will be less restricted, closer to its natural form, and, critically, give the user the illusion that the virtual agent is physically present. The experiments will involve interactions with holographically displayed virtual human faces and bodies engaging in various social gestures. To this end, simulated 2D images showing these various gestures will be displayed mid-air in 3D. For enriched interaction and enhanced realism, the project will also involve hand gesture recognition and the control of haptic feedback (i.e. air patterns) to simulate the surfaces of several classes of virtual objects. This fundamental study is transformative for sectors where physical interaction with virtual objects is critical, including medical, mental health, sports, education, heritage, security, and entertainment.

You Never get a Second Chance to Make a First Impression – Establishing how best to align human expectations about a robot’s performance based on the robot’s appearance and behaviour

Supervisors:
Mary Ellen Foster (School of Computing Science) and Emily Cross (School of Psychology)

Aims and objectives:

  • A major aim of social robotics is to create embodied agents that humans can instantly and automatically understand and interact with, using the same mechanisms that they use when interacting with each other. While considerable research attention has been invested in this endeavour, it is still the case that when humans encounter robots, they need time to understand how the robot works; in other words, people need time to learn to read the signals the robot generates. People consistently have expectations that are far too high for the artificial agents they encounter, which often leads to confusion and disappointment.
  • If we can better understand human expectations about robot capabilities based on the robot’s appearance (and/or initial behaviours) and ensure that those are aligned with the actual robot abilities, this should accelerate progress in human-robot interaction, specifically in the domains of human acceptance of robots in social settings and cooperative task performance between humans and robots. This project will combine expertise in robotic design and the social neuroscience of how we perceive and interact with artificial agents to develop a socially interactive robot designed for use in public spaces that requires (little or) no learning or effort for humans to interact with while carrying out tasks such as guidance, cooperative navigation, and interactive problem-solving tasks.

Proposed methods:

  • Computing Science: System development and integration (Developing operational models of interactive behaviour and implementing them on robot platforms); deployment of robot systems in lab-based settings and in real-world public spaces
  • Psychology/Brain Science: Behavioural tasks (questionnaires and measures of social perception, such as the Social Stroop task), non-invasive mobile brain imaging (functional near infrared spectroscopy) to record human brain activity when encountering the artificial agent in question.

Likely outputs:

  • Empirically-based principles for social robot design to optimize alignment between a robot’s appearance, user expectations, and robot performance, based on brain and behavioural data
  • A publicly available, implemented, and validated robot system embodying these principles
  • Empirical research papers detailing findings for a computing science audience (e.g., ACM Transactions on Human-Robot Interaction), a psychology/neuroscience audience (e.g., Psychological Science, Cognition), and a general audience, drawing on the multidisciplinary aspects of the work (PNAS, Current Biology), as well as papers at appropriate conferences and workshops such as Human-Robot Interaction, Intelligent Virtual Agents, CHI, and similar.

[Fos17] Foster, M. E.; Gaschler, A.; and Giuliani, M. Automatically Classifying User Engagement for Dynamic Multi-party Human–Robot Interaction. International Journal of Social Robotics. July 2017.

[Fos16] Foster, M. E.; Alami, R.; Gestranius, O.; Lemon, O.; Niemelä, M.; Odobez, J.; and Pandey, A. K. The MuMMER project: Engaging human-robot interaction in real-world public spaces. In Proceedings of the Eighth International Conference on Social Robotics, 2016.

[Cro19] Cross, E. S., Riddoch, K. A., Pratts, J., Titone, S., Chaudhury, B. & Hortensius, R. (2019). A neurocognitive investigation of the impact of socialising with a robot on empathy for pain. Philosophical Transactions of the Royal Society B.

[Hor18] Hortensius, R. & Cross, E.S. (2018). From automata to animate beings: The scope and limits of attributing socialness to artificial agents. Annals of the New York Academy of Science: The Year in Cognitive Neuroscience.