Projects

The full list of projects available for the academic year 2020-2021 will be available on January 24th, 2020. Since the application process includes the selection of three projects (https://socialcdt.org/application/), we suggest submitting your application only after that date.

Should you have any enquiries, please contact us at social-cdt@glasgow.ac.uk (please write “application enquiry” in the subject header).


Brain Based Inclusive Design

Supervisors: Monika Harvey (School of Psychology) and Alessandro Vinciarelli (School of Computing Science).

It is clear to everybody that people differ widely, but the underlying assumption of current technology design is that all users are equal. The cost of this assumption is high: it excludes users who fall far from the average that technology designers use as their ideal abstraction (Holmes, 2019). In some cases the mismatch is evident (e.g., a mouse designed for right-handed people is more difficult to use for left-handers) and attempts have been made to accommodate the differences. In other cases the differences are more subtle and difficult to observe, and, to the best of our knowledge, no attempt has yet been made to take them into account. This is the case, in particular, for change blindness (Rensink, 2004) and inhibition of return (Posner & Cohen, 1984), two brain phenomena that limit our ability to process stimuli presented too closely in space and time.

The overarching goal of the project is thus to design Human-Computer Interfaces capable of adapting to the limits of every user, in view of a fully inclusive design capable of putting every user at ease, i.e., enabling them to interact with technology according to their own processing speed rather than the speed imposed by technology designers.

The proposed approach includes four steps:

  1. Development of methodologies for the automatic measurement of the phenomena described above through their effects on EEG signals (e.g., changes in the P1 and N1 components; McDonald et al., 1999) and on behavioural performance (e.g., increased/decreased accuracy and reaction times), as illustrated in the sketch after this list;
  2. Identification of the relationship between the phenomena above and observable factors such as age, education level, computer familiarity, etc. of the user;
  3. Adaptation of the technology design to the factors above;
  4. Analysis of the improvement of the users’ experience.
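
As a rough illustration of step 1, the sketch below (Python, using synthetic data) measures P1/N1 amplitudes in two cueing conditions and compares reaction times. The array shapes, latency windows and 250 Hz sampling rate are assumptions for illustration, not project specifications.

    # Illustrative sketch (not project code): measuring an inhibition-of-return
    # effect from epoched EEG and reaction times. Assumes synthetic arrays with
    # hypothetical shapes: erp_cued/erp_uncued are (trials, samples) at 250 Hz,
    # rt_cued/rt_uncued are reaction times in seconds.
    import numpy as np
    from scipy import stats

    sfreq = 250.0                              # sampling rate (Hz), assumed
    times = np.arange(-0.1, 0.5, 1 / sfreq)    # epoch from -100 ms to +500 ms

    rng = np.random.default_rng(0)
    erp_cued = rng.normal(0, 1e-6, (80, times.size))    # placeholder EEG (V)
    erp_uncued = rng.normal(0, 1e-6, (80, times.size))
    rt_cued = rng.normal(0.42, 0.05, 80)                # placeholder RTs (s)
    rt_uncued = rng.normal(0.38, 0.05, 80)

    def component_amplitude(epochs, tmin, tmax, polarity):
        """Peak amplitude of an ERP component inside a latency window."""
        window = (times >= tmin) & (times <= tmax)
        grand_average = epochs.mean(axis=0)[window]
        return grand_average.max() if polarity > 0 else grand_average.min()

    # P1 (~80-130 ms, positive) and N1 (~130-200 ms, negative) per condition
    for name, epochs in [("cued", erp_cued), ("uncued", erp_uncued)]:
        p1 = component_amplitude(epochs, 0.08, 0.13, +1)
        n1 = component_amplitude(epochs, 0.13, 0.20, -1)
        print(f"{name}: P1={p1:.2e} V, N1={n1:.2e} V")

    # Behavioural side: slower responses at cued locations indicate IOR
    t, p = stats.ttest_ind(rt_cued, rt_uncued)
    print(f"RT difference (cued - uncued): {rt_cued.mean() - rt_uncued.mean():.3f} s, p={p:.3f}")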

The main expected outcome is that technology will become more inclusive and capable of accommodating the individual needs of its users in terms of processing speed and ease of use. This will be particularly beneficial for those groups of users that, for different reasons, tend to be penalised in terms of processing speed, in particular older adults and special populations (e.g., children with developmental issues, stroke survivors, and related cohorts).

The project is of great industrial interest because, ultimately, improving the inclusiveness of technology design greatly increases user satisfaction, a crucial requirement for every company that aims to commercialise technology.

[HOL19] Holmes, K. (2019). Mismatch, MIT Press.

[MCD99] McDonald, J., Ward, L.M. & Kiehl, A.H. (1999). An event-related brain potential study of inhibition of return. Perception & Psychophysics, 61, 1411–1423.

[POS84] Posner, M.I. & Cohen, Y. (1984). “Components of visual orienting”. In Bouma, H.; Bouwhuis, D. (eds.). Attention and performance X: Control of language processes. Hillsdale, NJ: Erlbaum. pp. 531–56.

[RES04] Rensink, R.A. (2004). Visual Sensing without Seeing. Psychological Science, 15, 27-32.


Developing and assessing social interactions with virtual agents in a digital interface for reducing personal stress

Supervisors: Larry Barsalou (School of Psychology) and Stacy Marsella (School of Psychology).

Aims: One of the key goals of artificial social intelligence is to use the technology in tailored health interventions. In line with that goal, this CDT project will assess whether social interactions with a virtual agent increase engagement with a stress app and the effectiveness of using it, as well as explore how the design of the virtual agent impacts those outcomes.

Novel Elements: This work will exploit a validated model of stress response, the Situated Assessment Method (SAM2), as the basis for a user model that will tailor the intervention, and will explore how the design of the agent and its behavior, described below, interact with the effectiveness of that model. The SAM2 instrument has been developed in our previous work and is based on a state-of-the-art model of the stress response. Besides working with classic stress (more aptly termed “distress”), users would be invited to work with positive stress that they feel excited about and that enables them to grow (“eustress”).

Approach: Three variables associated with social interaction will be manipulated between participant groups.  First, we will manipulate levels of social interaction:  (a) a virtual agent guides users through interactions with the app, (b) a text dialogue guides these interactions instead, or (c) only simple text instructions control app use.  Second, we will manipulate whether the app collects a model of the user and uses it during interactions (theory of mind) to tailor the interaction. Manipulating presence versus absence of user models can be implemented across the three levels of social interaction just described. Third, in the version of the app that implements social interaction with a virtual agent, we will manipulate features of the agent, including its physical appearance/behavior, personality, and perceived social connectedness.  Of interest across these manipulations is the impact on engagement and effectiveness.
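
A minimal sketch of how the three between-group manipulations could be enumerated and participants assigned to them; the condition labels and the round-robin assignment are illustrative assumptions, not the study’s final design.

    # Illustrative sketch (not project code): enumerating the between-subjects
    # conditions described above and randomly assigning participants to them.
    # Condition names are hypothetical labels.
    import itertools
    import random

    interaction_levels = ["virtual_agent", "text_dialogue", "plain_instructions"]
    user_model = ["with_user_model", "without_user_model"]
    agent_features = ["baseline_agent", "varied_agent"]   # only applies when an agent is present

    conditions = []
    for level, model in itertools.product(interaction_levels, user_model):
        if level == "virtual_agent":
            # agent features are only manipulated when an agent is present
            conditions += [(level, model, feat) for feat in agent_features]
        else:
            conditions.append((level, model, "no_agent"))

    def assign(participant_ids, seed=42):
        """Round-robin assignment after shuffling, to balance group sizes."""
        rng = random.Random(seed)
        ids = list(participant_ids)
        rng.shuffle(ids)
        return {pid: conditions[i % len(conditions)] for i, pid in enumerate(ids)}

    print(len(conditions), "conditions")
    print(assign(range(12)))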

Users would work with the app over at least a two-week period.  At the start, we will collect detailed data on the user’s past and current stress experience using the SAM2. For the following two weeks, participants would provide brief reports of their daily stress experience each evening. At the end of the two-week period, participants would perform another SAM2 assessment like the one that began the study, providing us with pre- and post-intervention data.  Engagement will be assessed by how often and how long participants engage with the app, and how much they enjoy doing so.  Effectiveness will be assessed by how much stress changes over the two-week period, and potentially beyond.
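
A minimal sketch, with made-up numbers, of how the engagement and effectiveness measures described above could be computed from app logs and pre/post SAM2-style scores.

    # Illustrative sketch (not project code): computing engagement and
    # effectiveness from hypothetical app logs and pre/post SAM2-style
    # stress scores (all data below is made up).
    import numpy as np
    from scipy import stats

    # One session log per participant: list of (duration_minutes, enjoyment_1_to_7)
    session_logs = {
        "p01": [(4.0, 5), (6.5, 6), (3.0, 6)],
        "p02": [(2.0, 4), (5.0, 5)],
    }
    sam2_pre = {"p01": 5.8, "p02": 6.4}    # hypothetical stress scores (higher = worse)
    sam2_post = {"p01": 4.9, "p02": 6.1}

    def engagement(log):
        sessions = len(log)
        total_minutes = sum(d for d, _ in log)
        mean_enjoyment = sum(e for _, e in log) / sessions
        return sessions, total_minutes, mean_enjoyment

    for pid, log in session_logs.items():
        print(pid, engagement(log))

    # Effectiveness: within-participant change in stress, tested with a paired t-test
    pids = sorted(sam2_pre)
    pre = np.array([sam2_pre[p] for p in pids])
    post = np.array([sam2_post[p] for p in pids])
    t, p = stats.ttest_rel(pre, post)
    print(f"mean change = {(post - pre).mean():.2f}, p = {p:.3f}")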

Outputs and Impact: The app itself, once validated, is an output with potentially high impact. Stress affects many health outcomes and therefore public health costs. SAM2 has been studied and evaluated across a wide range of health-related domains, so effective ways of using it have broad potential for apps beyond stress. The creation of the app and its fielding promise a wealth of data for further study. Interesting AI problems associated with building the app include structuring dialogue and interaction between the virtual agent and user, constructing effective user models, and developing strategies to integrate user models, dialogue and virtual agent embodiment in effective health interventions.

[DUT19] Dutriaux, L., Clark, N., Papies, E. K., Scheepers, C., & Barsalou, L. W. (2019). Using the Situated Assessment Method (SAM2) to assess individual differences in common habits. Manuscript under review.

[EPE18] Epel, E. S., Crosswell, A. D., Mayer, S. E., Prather, A. A., Slavich, G. M., Puterman, E., & Mendes, W. B. (2018). More than a feeling: A unified view of stress measurement for population science. Frontiers in Neuroendocrinology, 49, 146–169.

[LEB16] Lebois, L. A. M., Hertzog, C., Slavich, G. M., Barrett, L. F., & Barsalou, L. W. (2016). Establishing the situated features associated with perceived stress. Acta Psychologica, 169,119–132.

[MAR14] Stacy Marsella and Jonathan Gratch. Computationally Modeling Human Emotion. Communications of the ACM, December, 2014.

[MAR04] Marsella, S.C., Pynadath, D.V., and Read, S.J. PsychSim: Agent-based modeling of social interactions and influence. In Proceedings of the International Conference on Cognitive Modeling, pp. 243-248, Pittsburgh, 2004.

[LYN11] Lynn C. Miller, Stacy Marsella, Teresa Dey, Paul Robert Appleby, John L. Christensen, Jennifer Klatt and Stephen J. Read. Socially Optimized Learning in Virtual Environments (SOLVE). The Fourth International Conference on Interactive Digital Storytelling (ICIDS), Vancouver, Canada, Nov. 2011.


Into the thick of it: Situating digital health behaviour interventions

Supervisors: Esther Papies (School of Psychology) and Stacy Marsella (School of Psychology).

Aims and Objectives.  This project will examine how best to integrate a digital health intervention into a user’s daily life.  Are digital interventions more effective if they are situated, i.e., adapted to the specific users and situations where behaviour change should happen?  If so, which features of situations should a health (phone) app use to remind a user to perform a healthy behaviour (e.g., time of day, location, mood, activity pattern)? From a Social AI perspective, how do we make inferences about those situations from sensing data and prior models of users’ situated behaviours, how and when should the app socially interact with the user to improve the situated behaviour, and how do we adapt the user model over time to improve the app’s tailored interaction with a specific user? We will test this in the domain of hydration, with an intervention to increase the consumption of water.

Background and Novelty.  Digital interventions are a powerful new tool in the domain of individual health behaviour change.  Health apps can reach large numbers of users at relatively low cost, and can be tailored to an individual’s health goals.  So far, however, digital health interventions have not exploited a key strength compared to traditional interventions delivered by human health practitioners, namely the ability to situate interventions in critical situations in a user’s daily life.  Rather than being presented statically at pre-set times, situated interventions respond and adapt to the key contextual features that affect a user’s health behaviour.  Previous work has shown that context features have a powerful influence on health behaviour, for example by triggering habits, impulses, and social norms. Therefore, it is vital for effective behaviour change interventions to take the specific context of a user’s health behaviours into account.  The current proposal will test whether situating a mobile health intervention, i.e., designing it to respond adaptively to contextual features, increases its effectiveness compared to unsituated interventions.  We will do this in the domain of hydration, because research suggests that many adults may be chronically dehydrated, with implications for cognitive functioning, mood, and physical health (e.g., risk of diabetes, overweight, kidney damage).

Methods.  We will build an app to increase water intake and compare a static version of this app with a dynamic version that responds to time, a user’s activity level, location, social context, mood, and other possible features that may be linked to hydration (Paper 1).  We will assess whether an app that responds actively to such features leads over time to more engagement and behaviour change than a static app (Paper 2), and which contextual inferences work best to situate an app for effective behaviour change (Paper 3).
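
A minimal sketch of the contrast between the static and the situated versions of the app: a fixed-schedule prompt versus a rule that checks contextual features before prompting. The feature names and thresholds are hypothetical; the real app would learn and adapt these from sensing data and the user model.

    # Illustrative sketch (not project code): the kind of rule a "situated"
    # version of the app could use to decide when to prompt the user to drink,
    # versus a static version that prompts at fixed times. Feature names and
    # thresholds are hypothetical.
    from dataclasses import dataclass
    from datetime import time

    @dataclass
    class Context:
        clock: time            # current time of day
        activity_level: float  # e.g. steps in the last hour, normalised 0..1
        at_usual_drink_spot: bool
        mood: float            # self-reported or inferred, 0 (low) .. 1 (high)
        minutes_since_last_drink: int

    def static_prompt(clock: time) -> bool:
        """Static app: prompt at fixed times only."""
        return clock.hour in (10, 13, 16, 19) and clock.minute == 0

    def situated_prompt(ctx: Context) -> bool:
        """Situated app: prompt when contextual features suggest a good moment."""
        overdue = ctx.minutes_since_last_drink > 120
        good_moment = ctx.at_usual_drink_spot or ctx.activity_level > 0.6
        not_too_early = ctx.clock.hour >= 8
        return overdue and good_moment and not_too_early and ctx.mood > 0.2

    ctx = Context(time(15, 30), 0.7, False, 0.6, 140)
    print(situated_prompt(ctx), static_prompt(ctx.clock))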

Outputs.  This project will lead to presentations and papers at both AI and Psychology conferences outlining the principles and results of situating health behaviour interventions, using the tested healthy hydration app.

Impact.  Results from this work will have implications for the design of health behaviour interventions across domains, as well as for our understanding of the processes underlying behaviour change. It will explore how sensing and adaptive user modelling can situate both user and AI system in a common contextual frame, and whether this facilitates engagement and behaviour change.

Alignment with Industrial Interests.  This work will be of interest to industry collaborators interested in personalised health behaviour, such as Danone.

[MUN15] Muñoz, C. X., Johnson, E. C., McKenzie, A. L., Guelinckx, I., Graverholt, G., Casa, D. J., … Armstrong, L. E. (2015). Habitual total water intake and dimensions of mood in healthy young women. Appetite, 92, 81–86.

[PAP17] Papies, E. K. (2017). Situating interventions to bridge the intention–behaviour gap: A framework for recruiting nonconscious processes for behaviour change. Social and Personality Psychology Compass, 11(7), n/a-n/a.

[RIE13] Riebl, S. K., & Davy, B. M. (2013). The Hydration Equation: Update on Water Balance and Cognitive Performance. ACSM’s health & fitness journal, 17(6), 21–28.

[WAN17] Wang and S. Marsella, “Assessing personality through objective behavioral sensing,” in Proceedings of the 7th international conference on affective computing and intelligent interaction, 2017.

[LYN11] Lynn C. Miller, Stacy Marsella, Teresa Dey, Paul Robert Appleby, John L. Christensen, Jennifer Klatt and Stephen J. Read. Socially Optimized Learning in Virtual Environments (SOLVE). The Fourth International Conference on Interactive Digital Storytelling (ICIDS), Vancouver, Canada, Nov. 2011.

[PYN07] Pynadath, David V.; Marsella, Stacy C. Minimal mental models. In Proceedings of the 22nd National Conference on Artificial Intelligence (AAAI), pp. 1038-1044, 2007.


Optimising interactions with virtual environments

Supervisors: Michele Sevegnani (School of Computing Science) and Monika Harvey (School of Psychology).

Virtual and Mixed Reality systems are socio-technical applications in which users experience different configurations of digital media and computation that give different senses of how a “virtual environment” relates to their local physical environment. In Human-Computer Interaction (HCI), we recently developed computational models capable of representing physical and virtual space, solving the problem of how to recognise virtual spatial regions starting from the detected physical position of the users (Benford et al., 2016). The models are bigraphs [MIL09], the universal computational model introduced by Turing Award Laureate Robin Milner. Bigraphs encapsulate both the dynamic and the spatial behaviour of agents that interact and move among each other, or within each other. We used the models to investigate cognitive dissonance, namely users’ inability or difficulty in interacting with the virtual environment.
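
For illustration only (this is not the bigraph model of [BEN16]), the toy sketch below mimics the nesting that a bigraph’s place graph captures: virtual regions nested inside a physical room, and a lookup from a detected physical position to the chain of regions containing it. The region names and coordinates are made up.

    # Illustrative sketch (not the bigraph models used in [BEN16]): a toy
    # nesting structure in the spirit of a bigraph's place graph, mapping a
    # user's detected physical position to the virtual regions containing it.

    # Each region: (name, bounding box as (xmin, ymin, xmax, ymax), children)
    ROOM = ("room", (0, 0, 10, 10), [
        ("virtual_zone_A", (0, 0, 5, 10), [
            ("hotspot_A1", (1, 1, 2, 2), []),
        ]),
        ("virtual_zone_B", (5, 0, 10, 10), []),
    ])

    def containing_regions(region, x, y, path=()):
        """Return the chain of nested regions containing point (x, y)."""
        name, (xmin, ymin, xmax, ymax), children = region
        if not (xmin <= x <= xmax and ymin <= y <= ymax):
            return ()
        path = path + (name,)
        for child in children:
            deeper = containing_regions(child, x, y, path)
            if deeper:
                return deeper
        return path

    print(containing_regions(ROOM, 1.5, 1.5))   # ('room', 'virtual_zone_A', 'hotspot_A1')
    print(containing_regions(ROOM, 7.0, 3.0))   # ('room', 'virtual_zone_B')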

How the brain represents physical versus virtual environments is also an issue much debated within Psychology and Neuroscience, with some researchers arguing that the brain makes little distinction between the two [BOZ12]. Yet, more in line with Sevegnani’s work, Harvey and colleagues have shown that different brain areas represent these different environments and that they are processed on different time scales [HAR12; ROS09]. Moreover, special populations struggle more with virtual than with real environments [ROS11].

The overarching goal of this PhD project is, therefore, to adapt the computational models developed in HCI and apply them to psychological scenarios, to test whether environmental processing within the brain differs as proposed. The findings will then be used to refine the HCI model and, ideally, allow a refined application to special populations.

[BEN16] Benford, S., Calder, M., Rodden, T., & Sevegnani, M., On lions, impala, and bigraphs: Modelling interactions in physical/virtual spaces. ACM Transactions on Computer-Human Interaction (TOCHI), 23(2), 9, 2016.

[BOZ12] Bozzacchi, C., Giusti, M.A., Pitzalis, S., Spinelli, D., & Di Russo, F., Similar Cerebral Motor Plans for Real and Virtual Actions. PLOS ONE, 7(10), e47783, 2012.

[HAR12] Harvey, M. and Rossit, S., Visuospatial neglect in action. Neuropsychologia, 50, 1018-1028, 2012.

[MIL09] Milner, R.,  The space and motion of communicating agents. Cambridge University Press, 2009.

[ROS11] Rossit, S., Malhotra, P., Muir, K., Reeves, I., Duncan G. and Harvey, M., The role of right temporal lobe structures in off-line action: evidence from lesion-behaviour mapping in stroke patients. Cerebral Cortex, 21 (12), 2751-2761, 2011.

[ROS09] Rossit, S., Malhotra, P., Muir, K., Reeves, I., Duncan, G., Livingstone, K., Jackson, H., Hogg, C., Castle, P., Learmonth, G. and Harvey, M., No neglect-specific deficits in reaching tasks. Cerebral Cortex, 19, 2616-2624, 2009.


Sharing the road: Cyclists and automated vehicles

Supervisors: Steve Brewster (School of Computing Science) and Frank Pollick (School of Psychology).

Automated vehicles must share the road with pedestrians and cyclists, and drive safely around them. Autonomous cars, therefore, must have some form of social intelligence if they are to function correctly around other road users. There has been work looking at how pedestrians may interact with future autonomous vehicles [ROT15], and potential solutions have been proposed (e.g. displays on the outside of cars to indicate that the car has seen the pedestrian). However, there has been little work on automated cars and cyclists.

When there is no driver in the car, social cues such as eye contact, waving, etc., are lost [ROT15]. This changes the social interaction between the car and the cyclist, and may cause accidents if it is no longer clear, for example, who should proceed. Automated cars also behave differently to cars driven by humans, e.g. they may appear more cautious in their driving, which the cyclist may misinterpret. The aim of this project is to study the social cues used by drivers and cyclists, and create multimodal solutions that can enable safe cycling around autonomous vehicles.

The first stage of the work will be observation of the communication between human drivers and cyclists through literature review and fieldwork. The second stage will be to build a bike into our driving simulator [MAT19] so that we can test interactions between cyclists and drivers safely in a simulation.

We will then start to look at how we can facilitate the social interaction between autonomous cars and cyclists. This will potentially involve visual displays on cars or audio feedback from them, to indicate state information to cyclists nearby (e.g. whether they have been detected, or whether the car is letting the cyclist go ahead). We will also investigate interactions and displays for cyclists, for example multimodal displays in cycling helmets [MAT19] to give them information about car state (which could be collected by V2X software on the cyclist’s phone, for example), or direct communication with the car via input on the handlebars or gestures. These will be experimentally tested in the simulator and, if we have time, in highly controlled real driving scenarios.
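
A minimal sketch of the kind of car-to-cyclist state message and helmet cue mapping described above; the message fields, cue names and thresholds are assumptions for illustration, not a proposed standard.

    # Illustrative sketch (not project code): a hypothetical message format for
    # car-state information reaching the cyclist (e.g. via V2X relayed by the
    # cyclist's phone) and a mapping from that state to helmet display cues.
    from dataclasses import dataclass
    from enum import Enum, auto

    class CarIntent(Enum):
        YIELDING = auto()        # the car is letting the cyclist go ahead
        PROCEEDING = auto()      # the car will go first
        UNKNOWN = auto()

    @dataclass
    class CarStateMessage:
        cyclist_detected: bool
        intent: CarIntent
        distance_m: float

    def helmet_cues(msg: CarStateMessage) -> dict:
        """Map a car-state message to multimodal cues in the helmet."""
        if not msg.cyclist_detected:
            return {"led": "red", "audio": "warning_tone", "vibration": True}
        if msg.intent is CarIntent.YIELDING:
            return {"led": "green", "audio": "chime", "vibration": False}
        urgency = msg.distance_m < 15.0
        return {"led": "amber", "audio": "spoken_hold_back" if urgency else None,
                "vibration": urgency}

    print(helmet_cues(CarStateMessage(True, CarIntent.YIELDING, 20.0)))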

The output of this work will be a set of new techniques to support the social interaction between autonomous vehicles and cyclists. We currently work with companies such as Jaguar Land Rover and Bosch, and our results will have direct application in their products.

[ROT15] Rothenbucher, D., Li, J., Sirkin, D. and Ju, W., Ghost driver: a platform for investigating interactions between pedestrians and driverless vehicles, Adjunct Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications, pp. 44–49, 2015.

[MAT19] Matviienko, A., Brewster, S., Heuten, W. and Boll, S., Comparing unimodal lane keeping cues for child cyclists (https://doi.org/10.1145/3365610.3365632), Proceedings of the 18th International Conference on Mobile and Ubiquitous Multimedia, 2019.


Human-car interaction

Supervisors: Steve Brewster (School of Computing Science) and Frank Pollick (School of Psychology).

The aim of this project is to investigate, in the context of social interactions, the interaction between a driver and an autonomous vehicle. Autonomous cars are sophisticated agents that can handle many driving tasks. However, they may have to hand control back to the human driver in different circumstances, for example if sensors fail or weather conditions are bad [MCA16, BUR19]. This is potentially difficult for the driver, who may not have been driving the car for a long period and has to take control quickly [POL15]. This is an important issue for car companies, as they want to add more automation to vehicles in a safe manner. Key to this problem is whether the handover interface would benefit from conceptualising the exchange between human and car as a social interaction.

This project will study how best to handle handovers, from the car indicating to the driver that it is time to take over, to the takeover event itself, and then the return to automated driving. The key factors to investigate are: situational awareness (the driver needs to know what the problem is and what must be done when they take over), responsibility (whose task is it to drive at which point), the in-car context (what is the driver doing: are they asleep, talking to another passenger?), and driver skills (is the driver competent to drive or are they under the influence?).
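
As an illustration of how the four factors could feed a handover decision, the sketch below combines them in a simple heuristic; the scores, lead times and escalation labels are hypothetical and would in practice be informed by the simulator experiments.

    # Illustrative sketch (not project code): a heuristic that combines the four
    # factors above into a decision about whether, and with how much lead time,
    # to request a handover. Factor scores and thresholds are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class HandoverContext:
        situational_awareness: float  # 0 (no idea what is happening) .. 1 (fully aware)
        driver_responsible: bool      # is the human currently the responsible party?
        in_car_activity: str          # e.g. "attentive", "talking", "asleep"
        driver_fit_to_drive: bool     # e.g. not under the influence

    def handover_plan(ctx: HandoverContext, time_to_event_s: float):
        """Return (request_handover, lead_time_s, escalation) for the interface."""
        if not ctx.driver_fit_to_drive:
            return (False, None, "safe_stop")        # never hand over to an unfit driver
        base_lead = 8.0                              # assumed minimum lead time (s)
        if ctx.in_car_activity == "asleep":
            base_lead += 20.0
        elif ctx.in_car_activity == "talking":
            base_lead += 5.0
        base_lead += (1.0 - ctx.situational_awareness) * 10.0
        if time_to_event_s < base_lead:
            return (True, time_to_event_s, "urgent_multimodal_alert")
        return (True, base_lead, "staged_social_cue")

    print(handover_plan(HandoverContext(0.4, False, "talking", True), 30.0))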

We will conduct a range of experiments in our driving simulator to test different types of handover situations and different types of multimodal interactions involving social cues to support the four factors outlined above.

The output will be experimental results and guidelines that can help automotive designers know how best to communicate and deal with handover situations between car and driver. We currently work with companies such as Jaguar Land Rover and Bosch, and our results will have direct application in their products.

[MCA16] McCall, R., McGee, F., Meschtscherjakov, A. and Engel, T., Towards a Taxonomy of Autonomous Vehicle Handover Situations, Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications, pp. 193–200, 2016.

[BUR19] Burnett, G., Large, D. R. & Salanitri, D., How will drivers interact with vehicles of the future? Royal Automobile Club Foundation for Motoring Report, 2019.

[POL15] Politis, I., Pollick, F. and Brewster, S., Language-based multimodal displays for the handover of control in autonomous cars, Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications, pp. 3–10, 2015.


Effective Facial Actions for Artificial Agents

Supervisors: Rachael Jack (School of Psychology) and Stacy Marsella (School of Psychology).

Facial signals play a critical role in face-to-face social interactions because a wide range of inferences are made about a person from their face, including their emotional, mental and physiological state, their culture, ethnicity, age, sex, social class, and their personality traits (e.g., see [Jac17] for relevant references). These judgments in turn impact how people react to and interact with others, oftentimes with significant consequences such as who is hated or loved, hired or fired (e.g., [Ebe06]). However, given that the human face is a highly complex dynamic information space comprising numerous variations of facial expressions, complexions, and face shapes, understanding what specific face information (or combinations) drives social judgment is empirically very challenging.

In the absence of a formal model of social face signalling, the design of artificial agents’ faces has largely been ad hoc and, in particular, has neglected how facial dynamics shape social judgments, which can limit agents’ performance (e.g., [Che19]). This project aims to address this knowledge gap by delivering a formal model of face signalling for use in artificial agents.

Specifically, the goal of this project is to a) model the space of face signals that drive social judgments during social interactions, b) incorporate this model into artificial agents and c) evaluate the model in different human-artificial agent interactions. The result promises to provide a powerful improvement in the design of artificial agents’ face signalling and social interaction capabilities with broad potential for applications in wider society (e.g., social skills training; challenging stereotyping/prejudice).

The space of facial signals will be modelled using methods from human psychophysical perception studies (e.g., see [Jac17]) and will extend the work of Dr Jack to include a wider range of social signals that are required for face-to-face social interactions (e.g., empathy, agreeableness, skepticism). Face signals that go beyond natural boundaries, such as hyper-realistic or super stimuli, will also be explored. The resulting model will initially be incorporated into artificial agents using the public-domain SmartBody animation platform [Thi08] and may be extended to other platforms. Finally, the model will be evaluated in human-agent interaction using the SmartBody platform and may be combined with other modalities including head and eye movements, hand/arm gestures, and transient facial changes such as blushing, pallor, or sweating (e.g., [Mar13]).
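
A minimal sketch of the reverse-correlation logic behind such psychophysical modelling: random combinations of facial action units (AUs) are shown, judgments are collected (simulated here), and the AUs most associated with a given judgment are estimated. The AU labels, trial counts and simulated observer are illustrative assumptions, not the project’s stimuli or analysis pipeline.

    # Illustrative sketch (not project code): psychophysical reverse correlation.
    # On each trial a random on/off combination of facial action units is
    # animated and an observer judges the face (e.g. "trustworthy" yes/no);
    # AUs whose presence predicts "yes" responses are taken to drive the
    # judgment. Responses here are simulated.
    import numpy as np

    rng = np.random.default_rng(1)
    n_trials, n_aus = 2000, 20
    au_names = [f"AU{i:02d}" for i in range(n_aus)]        # hypothetical AU labels

    stimuli = rng.integers(0, 2, size=(n_trials, n_aus))   # random on/off AU patterns

    # Simulated observer: responds "yes" more often when AU06 and AU12 are present
    true_weights = np.zeros(n_aus)
    true_weights[[6, 12]] = 1.5
    p_yes = 1 / (1 + np.exp(-(stimuli @ true_weights - 1.5)))
    responses = rng.random(n_trials) < p_yes

    # Classification-image style estimate: mean AU pattern on "yes" minus "no" trials
    kernel = stimuli[responses].mean(axis=0) - stimuli[~responses].mean(axis=0)
    top = np.argsort(kernel)[::-1][:3]
    print("AUs most associated with the judgment:", [au_names[i] for i in top])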

[Jac17] Jack, R. E., & Schyns, P. G. (2017). Toward a social psychophysics of face communication. Annual review of psychology, 68, 269-297.

[Ebe06] Eberhardt, J. L., Davies, P. G., Purdie-Vaughns, V. J., & Johnson, S. L. (2006). Looking deathworthy: Perceived stereotypicality of Black defendants predicts capital-sentencing outcomes. Psychological science, 17(5), 383-386.

[Che19] Chen, C., Hensel, L. B., Duan, Y., Ince, R., Garrod, O. G., Beskow, J., Jack, R. E. & Schyns, P. G. (2019). Equipping Social Robots with Culturally-Sensitive Facial Expressions of Emotion Using Data- Driven Methods. In: 14th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2019), Lille, France, 14-18 May 2019, (Accepted for Publication).

[Mar13] Stacy Marsella, Yuyu Xu, Margaux Lhommet, Andrew Feng, Stefan Scherer, and Ari Shapiro, “Virtual Character Performance From Speech”, in Symposium on Computer Animation, July 2013.

[Mar14] Stacy Marsella and Jonathan Gratch, “Computationally modeling human emotion”, Communications of the ACM, vol. 57, Dec. 2014, pp. 56-67

[Thi08] Marcus Thiebaux, Andrew Marshall, Stacy Marsella, and Marcelo Kallmann, “SmartBody Behavior Realization for Embodied Conversational Agents”, in Proceedings of Autonomous Agents and Multi- Agent Systems (AAMAS), 2008.


Multimodal Interaction and Huggable Robot

Supervisors: Stephen Brewster (School of Computing Science) and Frank Pollick (School of Psychology).

The aim of the project is to investigate the combination of Human Computer Interaction and social/huggable robots for care, the reduction of stress and anxiety, and emotional support. Existing projects, such as Paro (www.parorobots.com) and the Huggable (www.media.mit.edu/projects/huggable-a-social-robot-for-pediatric-care), focus on very simple interactions. The goal of this PhD project will be to create more complex feedback and sensing to enable a richer interaction between the human and the robot.

The plan would be to study two different aspects of touch: thermal feedback and squeeze input/output. These are key aspects of human-human interaction but have not been studied in human-robot settings where robots and humans come into physical contact.

Thermal feedback has strong associations with emotion and social cues [Wil17]. We use terms like ‘warm and loving’ or ‘cold and distant’ in everyday language. By investigating different uses of warm and cool feedback we can facilitate different emotional relationships with a robot. (This could be used alongside more familiar vibration feedback, such as purring). A series of studies will be undertaken looking at how we can use warming/cooling, rate of change and amount of change in temperature to change responses to robots. We will study responses in terms of, for example, valence and arousal.

We will also look at squeeze interaction from the device. Squeezing in real life offers comfort and support. One half of this task will look at squeeze input, with the human squeezing the robot. This can be done with simple pressure sensors on the robot. The second half will investigate the robot squeezing the arm of the human. For this we will need to build some simple hardware. The studies will look at human responses to squeezing, the social acceptability of these more intimate interactions, and emotional responses to them.
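
A minimal sketch of squeeze detection from a stream of pressure-sensor readings, grading squeezes as light or firm from peak pressure and duration; the sensor units, thresholds and sampling rate are assumptions for illustration.

    # Illustrative sketch (not project code): detecting and grading a squeeze
    # from pressure-sensor readings using a simple threshold plus duration rule.
    def detect_squeezes(samples, sample_rate_hz=50,
                        on_threshold=0.3, firm_threshold=0.7, min_duration_s=0.2):
        """Yield (start_s, duration_s, 'light'|'firm') for each detected squeeze."""
        start = None
        peak = 0.0
        for i, value in enumerate(samples + [0.0]):    # trailing 0 closes any open squeeze
            if value >= on_threshold:
                if start is None:
                    start, peak = i, value
                peak = max(peak, value)
            elif start is not None:
                duration = (i - start) / sample_rate_hz
                if duration >= min_duration_s:
                    kind = "firm" if peak >= firm_threshold else "light"
                    yield (start / sample_rate_hz, duration, kind)
                start, peak = None, 0.0

    stream = [0.0] * 10 + [0.5] * 15 + [0.0] * 10 + [0.9] * 30 + [0.0] * 5
    print(list(detect_squeezes(stream)))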

The output of this work will be a series of design prototypes and UI guidelines to help robot designers use new interaction modalities in their robots. The impact of this work will be to enable robots to have a richer and more natural interaction with the humans they touch. This has many practical applications for the acceptability of robots for care and emotional support.

[Wil17] Wilson, G., and Brewster, S.: Multi-moji: Combining Thermal, Vibrotactile & Visual Stimuli to Expand the Affective Range of Feedback. In Proceedings of the 35th Conference on Human Factors in Computing Systems – CHI ’17, ACM Press, 2017.


Soft eSkin with Embedded Microactuators

Supervisors: Ravinder Dahiya (School of Engineering) and Philippe Schyns (School of Psychology).

Research on tactile skin or e-skin has attracted significant interest recently because it is important for safe human-robot interaction. The focus thus far has been on imitating some of the features of human touch sensing. However, skin is not just the medium through which we feel the real world; it is also a medium for expressing one’s feelings through gestures. For example, the skin on the face is needed to express emotions such as varying degrees of happiness, sadness or anger. This important role of skin has not received attention so far, and for the first time this project will explore this direction by developing programmable soft e-skin patches with embedded micro-actuators. Building on the flexible and soft electronics research in the School of Engineering and the social robotics research in the Institute of Neuroscience & Psychology, this project will attempt to achieve the following scientific and technological goals:

  • To identify a suitable actuation method to generate simple emotive features such as wrinkles on the forehead;
  • To develop a soft eSkin patch with embedded micro-actuators;
  • To use the model developed for a soft eSkin patch with embedded micro-actuators;
  • To develop an AI approach aimed at programming and controlling the actuators (a minimal sketch follows this list).
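
By way of illustration of the last goal, the sketch below maps a requested expressive feature (a forehead wrinkle of a given intensity) onto drive levels for a hypothetical grid of embedded micro-actuators; the grid layout and control interface are assumptions, not the patch design.

    # Illustrative sketch (not project code): mapping a requested expressive
    # feature onto activation levels for a small grid of embedded
    # micro-actuators. Grid size, layout and control interface are hypothetical.
    import numpy as np

    GRID_ROWS, GRID_COLS = 4, 8      # assumed actuator layout on the eSkin patch

    def wrinkle_pattern(intensity: float, row: int = 1, spread: float = 1.0):
        """Return a (rows x cols) array of actuator drive levels in [0, 1].

        A horizontal band of actuators around `row` is driven, with a Gaussian
        fall-off across rows so neighbouring actuators pull less strongly.
        """
        intensity = float(np.clip(intensity, 0.0, 1.0))
        rows = np.arange(GRID_ROWS)[:, None]             # column vector of row indices
        falloff = np.exp(-((rows - row) ** 2) / (2 * spread ** 2))
        return intensity * np.repeat(falloff, GRID_COLS, axis=1)

    pattern = wrinkle_pattern(0.8, row=1)
    print(np.round(pattern, 2))
    # A controller would then send each value to the corresponding actuator driver.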

Conversational Venue Recommendation

Supervisors: Craig Macdonald (School of Computing Science) and Phil McAleer (School of Psychology).

Increasingly, location-based social networks such as Foursquare, Facebook or Yelp are replacing traditional static travel guidebooks. Indeed, personalised venue recommendation is an important task for location-based social networks. This task aims to suggest interesting venues that a user may visit, personalised to their tastes and current context, as might be detected from their current location, recent venue visits and historical venue visits. Recent developments in venue recommendation models have encompassed deep learning techniques, able to make effective personalised recommendations.

Venue recommendation is typically deployed such that the user interacts with a mobile phone application. To the best of our knowledge, voice-based venue recommendation has seen considerably less research, but it is a rich area for potential improvement. In particular, a venue recommendation agent may be able to elicit further preferences, ask whether the user prefers one venue over another, or ask for clarification about the type of venue or the distance to be travelled to the next venue.
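
A minimal sketch of the preference-elicitation loop such an agent might run: ask a clarifying question, filter the candidate venues by the answer, and recommend once one venue remains. The venues, attributes and clarification policy are made up; a real system would rank candidates with a trained recommendation model and speech interfaces.

    # Illustrative sketch (not project code): a skeleton preference-elicitation
    # loop for conversational venue recommendation, with hypothetical venues.
    CANDIDATES = [
        {"name": "Cafe North", "type": "cafe", "distance_km": 0.4},
        {"name": "Curry House", "type": "restaurant", "distance_km": 1.2},
        {"name": "Riverside Bar", "type": "bar", "distance_km": 0.8},
    ]

    def clarification_question(candidates):
        """Pick the attribute that best splits the remaining candidates."""
        types = {c["type"] for c in candidates}
        if len(types) > 1:
            return "type", f"Are you after a {' or a '.join(sorted(types))}?"
        return "distance_km", "How far are you happy to travel (in km)?"

    def filter_candidates(candidates, attribute, answer):
        if attribute == "type":
            return [c for c in candidates if c["type"] == answer]
        return [c for c in candidates if c["distance_km"] <= float(answer)]

    remaining = list(CANDIDATES)
    simulated_answers = {"type": "cafe", "distance_km": "1.0"}   # stand-in for user replies
    while len(remaining) > 1:
        attribute, question = clarification_question(remaining)
        print("AGENT:", question)
        remaining = filter_candidates(remaining, attribute, simulated_answers[attribute])
    print("AGENT: How about", remaining[0]["name"], "?")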

This proposal aims to:

  • Develop and evaluate models for making venue recommendations using chatbot interfaces that can be adapted to voice through the integration of text-to-speech technology, building upon recent neural network architectures for venue recommendation.
  • Integrate additional factors, such as the personality of the user or other voice-based context signals (stress, urgency, group interactions), that can inform the venue recommendation agent.

Venue recommendation is an information access scenario for citizens within a “smart city” – indeed, smart city sensors can be used to augment venue recommendation with information about which areas of the city are busy.

[Man18] Contextual Attention Recurrent Architecture for Context-aware Venue Recommendation. Jarana Manotumruksa, Craig Macdonald and Iadh Ounis. In Proceedings of SIGIR 2018.

[Man17] A Deep Recurrent Collaborative Filtering Framework for Venue Recommendation. Jarana Manotumruksa, Craig Macdonald and Iadh Ounis. In Proceedings of CIKM 2017.

[Dev15] Experiments with a Venue-Centric Model for Personalised and Time-Aware Venue Suggestion. Romain Deveaud, Dyaa Albakour, Craig Macdonald, Iadh Ounis. In Proceedings of CIKM 2015.


Language Independent Conversation Modelling

Supervisors: Olga Perepelkina (Neurodata Lab) and Alessandro Vinciarelli (School of Computing Science).

According to Emanuel Schegloff, one of the most important linguists of the 20th Century, conversation is the “primordial site of human sociality”, the setting that has shaped human communicative skills from neural processes to expressive abilities [TUR16]. This project focuses on the latter and, in particular, on the use of nonverbal behavioural cues such as laughter, pauses, fillers and interruptions during dyadic interactions. The project targets the following main goals:

  • To develop approaches for the automatic detection of laughter, pauses, fillers, overlapping speech and back-channel events in speech signals;
  • To analyse the interplay between the cues above and social-psychological phenomena such as emotions, agreement/disagreement, negotiation, personality, etc.

The experiments will be performed over two existing corpora. One includes roughly 12 hours of spontaneous conversations involving 120 persons [VIN15] that have been fully annotated in terms of the cues and phenomena above. The other is the Russian Acted Multimodal Affective Set (RAMAS), the first multimodal corpus in the Russian language, including approximately 7 hours of high-quality close-up video recordings of faces, speech, motion-capture data and physiological signals such as electro-dermal activity and photoplethysmogram [PER18].

The main motivation behind the focus on nonverbal behavioural cues is that, while they tend to be used differently in different cultural contexts, they can still be detected independently of the language being used. In this respect, an approach based on nonverbal communication promises to be more robust when applied to data collected in different countries and linguistic areas. In addition, while the importance of nonverbal communication is widely recognised in social psychology, the way certain cues interplay with social and psychological phenomena still requires full investigation [VIN19].

From a methodological point of view, the project involves the following main aspects:

  • Development of corpus analysis methodologies (observational statistics) for the investigation of the relationships between nonverbal behaviour and social phenomena;
  • Development of signal processing methodologies for the conversion of speech signals into measurements suitable for computer processing;
  • Development of Artificial Intelligence techniques (mainly based on deep networks) for the inference of information from raw speech signals (a minimal sketch follows this list).
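
As an illustration of the third aspect, the sketch below defines a small 1D convolutional network that classifies short speech segments into nonverbal-cue classes; the input features (log-mel spectrograms rather than raw waveforms), label set and architecture are placeholders, not the project’s eventual design.

    # Illustrative sketch (not project code): a small 1D convolutional network
    # that classifies short speech segments into nonverbal-cue classes
    # (laughter, filler, pause, overlapping speech, back-channel, none).
    # Input is assumed to be a log-mel spectrogram of shape (batch, n_mels, frames).
    import torch
    import torch.nn as nn

    CUE_CLASSES = ["laughter", "filler", "pause", "overlap", "backchannel", "none"]

    class CueDetector(nn.Module):
        def __init__(self, n_mels=40, n_classes=len(CUE_CLASSES)):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv1d(n_mels, 64, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.Conv1d(64, 64, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),   # pool over time to a single vector
            )
            self.classifier = nn.Linear(64, n_classes)

        def forward(self, x):              # x: (batch, n_mels, frames)
            h = self.conv(x).squeeze(-1)   # (batch, 64)
            return self.classifier(h)      # unnormalised class scores

    model = CueDetector()
    dummy_batch = torch.randn(8, 40, 100)  # 8 random "log-mel" segments
    scores = model(dummy_batch)
    print(scores.shape)                    # torch.Size([8, 6])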

From a scientific point of view, the impact of the project will be mainly in Affective Computing and Social Signal Processing [VIN09], while, from an industrial point of view, the impact will be mainly in the areas of conversational interfaces (e.g., Alexa and Siri), multimedia content analysis and, in more general terms, Social AI, the application domain encompassing all attempts to make machines capable of interacting with people the way people do with one another. For this reason, the project is based on the collaboration between the University of Glasgow and Neurodata Lab (http://www.neurodatalab.com), one of the top companies in Social and Emotion AI.

[PER18] Perepelkina O., Kazimirova E., Konstantinova M. RAMAS: Russian Multimodal Corpus of Dyadic Interaction for Affective Computing. In: Karpov A., Jokisch O., Potapova R. (eds) Speech and Computer. Lecture Notes in Computer Science, vol 11096. Springer, 2018.

[TUR16] S.Turkle, “Reclaiming conversation: The power of talk in a digital age”, Penguin, 2016.

[VIN19] M.Tayarani, A.Esposito and A.Vinciarelli, “What an `Ehm’ Leaks About You: Mapping Fillers into Personality Traits with Quantum Evolutionary Feature Selection Algorithms“, accepted for publication by IEEE Transactions on Affective Computing, to appear, 2019.

[VIN15] A.Vinciarelli, E.Chatziioannou and A.Esposito, “When the Words are not Everything: The Use of Laughter, Fillers, Back-Channel, Silence and Overlapping Speech in Phone Calls“, Frontiers in Information and Communication Technology, 2:4, 2015.

[VIN09] A.Vinciarelli, M.Pantic, and H.Bourlard, “Social Signal Processing: Survey of an Emerging Domain“, Image and Vision Computing Journal, Vol. 27, no. 12, pp. 1743-1759, 2009.