Projects

This page lists the projects available in the SOCIAL CDT. If you are applying for a Scholarship (see http://www.socialcdt.org/application), please do not forget to mention the project title in your cover letter. Should you have any enquiries, please contact us at info@socialcdt.org (please write “application enquiry” in the subject line).


Social Perception of Speech.

Supervisors: Philip McAleer (School of Psychology) and Alessandro Vinciarelli (School of Computing Science).

Short vocalizations like “ehm” and “uhm”, known as fillers in linguistics terminology, are common in everyday conversations (up to one every 10.9 seconds according to the analysis presented in [Vin15]). For this reason, it is important to understand whether the fillers uttered by a person convey personality impressions, i.e., whether people form a different opinion about a given individual depending on how she/he utters them. This project will use an existing corpus of 2988 fillers (uttered by 120 persons interacting with one another) to achieve the following scientific and technological goals:

  • To establish the vocal parameters that lead to consistent percepts of speaker personality both within and across listeners and the neural areas involved in these attributions from brief fillers.
  • To develop an AI approach aimed at predicting the traits people attribute to an individual when they hear her/his fillers.

The first goal will be achieved through behavioural [Mah18] and neuroimaging [Tod08] experiments that pinpoint how, and where in the brain, stable personality percepts are processed. From there, acoustical analysis and data-driven approaches using cutting-edge acoustical morphing techniques will allow the generation of hypotheses to feed the subsequent AI networks [McA14]. This part of the project will develop the skills necessary to design, implement, and analyse behavioural and neural experiments for establishing social percepts from speech and voice.

The final goal will be achieved through the development of an end-to-end automatic approach that maps the speech signal underlying a filler into the traits that listeners attribute to the speaker. This part of the project will develop the skills necessary to design and implement deep neural networks capable of modelling sequences of physical measurements (with an emphasis on speech signals).
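To make the intended pipeline concrete, the following Python sketch (using PyTorch) shows one plausible form such an end-to-end model could take: a recurrent network that maps a variable-length sequence of acoustic frames extracted from a filler to a handful of trait scores. The feature type, layer sizes, number of traits and choice of a GRU are illustrative assumptions, not the architecture the project will necessarily adopt.

import torch
import torch.nn as nn

class FillerTraitRegressor(nn.Module):
    """Toy end-to-end model: acoustic frames of one filler -> trait scores."""
    def __init__(self, n_features=40, hidden=128, n_traits=5):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_traits)   # one score per trait

    def forward(self, frames):                # frames: (batch, time, n_features)
        _, h = self.encoder(frames)           # h: (2, batch, hidden)
        summary = torch.cat([h[0], h[1]], dim=-1)
        return self.head(summary)             # (batch, n_traits)

model = FillerTraitRegressor()
dummy_fillers = torch.randn(8, 120, 40)       # 8 fillers, 120 frames of 40 features
print(model(dummy_fillers).shape)             # torch.Size([8, 5])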

The project is relevant to the emerging domain of personality computing [Vin14], and its main application is the synthesis of “personality colored” speech, i.e., artificial voices that give the impression of a personality and, as a result, not only sound more realistic but are also better at performing the task they are developed for [Nas05].

[Mah18]. G. Mahrholz, P. Belin and P. McAleer, “Judgements of a speaker’s personality are correlated across differing content and stimulus type”, PLOS ONE, 13(10): e0204991. 2018

[McA14]. P. McAleer, A. Todorov and P. Belin, “How Do You Say ‘Hello’? Personality Impressions from Brief Novel Voices”, PLOS ONE, 9(3): e90779. 2014

[Tod08]. A. Todorov, S. G. Baron and N. N. Oosterhof, “Evaluating face trustworthiness: A model-based approach”, Social Cognitive and Affective Neuroscience, 3(2), pp. 119-127. 2008

[Vin15] A. Vinciarelli, E. Chatziioannou and A. Esposito, “When the Words are not Everything: The Use of Laughter, Fillers, Back-Channel, Silence and Overlapping Speech in Phone Calls”, Frontiers in Information and Communication Technology, 2:4, 2015.

[Vin14] A. Vinciarelli and G. Mohammadi, “A Survey of Personality Computing”, IEEE Transactions on Affective Computing, Vol. 5, no. 3, pp. 273-291, 2014.

[Nas05] C. Nass and S. Brave, “Wired for Speech: How Voice Activates and Advances the Human-Computer Relationship”, MIT Press, 2005.


Evaluating and Enhancing Human-Robot Interaction for Multiple Diverse Users in a Real-World Context.

Supervisors: Mary Ellen Foster (School of Computing Science) and Jane Stuart-Smith (School of Critical Studies).

The increasing availability of socially intelligent robots with functionality for a range of purposes, from guidance in museums [Geh15] to companionship for the elderly [Heb16], has motivated a growing number of studies attempting to evaluate and enhance Human-Robot Interaction (HRI). But, as Honig and Oron-Gilad observe in their review of recent work on understanding and resolving failures in HRI [Hon18], most research has focussed on technical ways of improving robot reliability. They argue that progress requires a “holistic approach” in which “[t]he technical knowledge of hardware and software must be integrated with cognitive aspects of information processing, psychological knowledge of interaction dynamics, and domain-specific knowledge of the user, the robot, the target application, and the environment” (p. 16). Honig and Oron-Gilad point to a particular need to improve the ecological validity of evaluating user communication in HRI, by moving away from experimental, single-person environments with low-relevance tasks, conducted mainly with younger adult users, towards more natural settings with users of different social profiles and communication strategies, where the outcome of successful HRI matters.

The main contribution of this PhD project is to develop an interdisciplinary approach to evaluating and enhancing the communicative efficacy of HRI, by combining state-of-the-art social robotics with theory and methods from socially-informed linguistics [Cou14] and conversation analysis [Cli16]. Specifically, the project aims to improve HRI with the newly-developed MultiModal Mall Entertainment Robot (MuMMER). MuMMER is a humanoid robot, based on SoftBank Robotics’ Pepper robot, designed to interact naturally and autonomously in the communicatively challenging space of a public shopping centre/mall, with an unrestricted range of possible users of differing social backgrounds and communication styles [Fos16]. MuMMER’s role is to entertain and engage visitors to the shopping mall, thereby enhancing their overall experience. This in turn requires HRI which is socially acceptable, helpful and entertaining for multiple, diverse users in a real-world context. As of June 2019, the technical development of the MuMMER system is nearly complete, and the final robot system will be located in a shopping mall in Finland for 3 months during the autumn of 2019.

The PhD project will evaluate HRI with MuMMER in a new setting: a large shopping mall in an English-speaking environment, in Scotland’s largest and most socially and ethnically diverse city, Glasgow. Project objectives are to:

  • Design a set of sociolinguistically-informed observational studies of HRI with MuMMER in situ with users from a range of social, ethnic, and language backgrounds, using direct and indirect methods
  • Identify the minimal technical modifications (dialogue, non-verbal, other) needed to optimise HRI, and thereby user experience and engagement, also considering indices such as consumer footfall in the mall
  • Implement technical alterations, and re-evaluate with new users.

[Cli16] Clift, R. (2016). Conversation Analysis. Cambridge: Cambridge University Press.

[Cou14] Coupland, N., Sarangi, S., & Candlin, C. N. (2014). Sociolinguistics and social theory. Routledge.

[Fos16] Foster, M.E., Alami, R., Gestranius, O., Lemon, O., Niemelä, M., Odobez, J.-M., Pandey, A.K. (2016) The MuMMER Project: Engaging Human-Robot Interaction in Real-World Public Spaces. In: Agah A., Cabibihan J., Howard A., Salichs M., He H. (eds) Social Robotics. ICSR 2016. Lecture Notes in Computer Science, vol 9979. Springer, Cham.

[Geh15] Gehle R., Pitsch K., Dankert T., Wrede S. (2015). Trouble-based group dynamics in real-world HRI – reactions on unexpected next moves of a museum guide robot. In: 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2015 (Kobe), 407–412.

[Heb16] Hebesberger, D., Dondrup, C., Koertner, T., Gisinger, C., Pripfl, J. (2016). Lessons learned from the deployment of a long-term autonomous robot as companion in physical therapy for older adults with dementia: A mixed methods study. In: The Eleventh ACM/IEEE International Conference on Human Robot Interaction, 27–34.

[Hon18] Honig, S., & Oron-Gilad, T. (2018). Understanding and Resolving Failures in Human-Robot Interaction: Literature Review and Model Development. Frontiers in Psychology, 9, 861.


Enhancing Social Interactions via Physiologically-Informed AI.

Supervisors: Marios Philiastides (School of Psychology) and Alessandro Vinciarelli (School of Computing Science).

Over the past few years, major developments in machine learning (ML) have enabled important advances in artificial intelligence (AI). First, the field of deep learning (DL), which enables models to learn complex input-output functions (e.g. pixels in an image mapped onto object categories), has emerged as a major player in this area. DL builds upon neural network theory and design architectures, expanding these in ways that enable more complex function approximations.

The second major advance has combined DL with reinforcement learning (RL) to yield new AI systems for learning state-action policies, often referred to as deep reinforcement learning (DRL), with the aim of enhancing human performance in complex tasks. Despite these advances, however, critical challenges remain in incorporating AI into a team with humans.

One of the most important challenges is the need to understand how humans value intermediate decisions (i.e. before they generate a behaviour) through internal models of their confidence, expected reward, risk, etc. Critically, such information about human decision-making is not only expressed through overt behaviour, such as speech or action, but also, more subtly, through physiological changes, small changes in facial expression and posture, etc. Socially and emotionally intelligent people are excellent at picking up on this information to infer one another’s current disposition and to guide their decisions and social interactions.

In this project, we propose to develop a physiologically-informed AI platform, utilizing neural and systemic physiological information (e.g. arousal, stress) [Fou15][Pis17][Ghe18] together with affective cues from facial features [Vin09][Bal16] to infer latent cognitive and emotional states from humans interacting in a series of social decision-making tasks (e.g. the trust game, the prisoner’s dilemma). Specifically, we will use these latent states to generate rich reinforcement signals to train AI agents (specifically DRL agents) and allow them to develop a “theory of mind” [Pre78][Fri05] in order to make predictions about upcoming human behaviour. The ultimate goal of this project is to deliver advances towards “closing the loop”, whereby the AI agent feeds its own predictions back to the human players in order to optimise behaviour and social interactions.
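As a concrete illustration of the “rich reinforcement signals” idea, the Python sketch below shows how a task reward could be augmented with a latent state (e.g., decoded confidence or arousal) inferred from the human partner’s physiology before being passed to a DRL learner. The decoder, the weighting and the feature vector are hypothetical placeholders, not components specified by the project.

import numpy as np

def shaped_reward(task_reward, physio_features, decoder, weight=0.1):
    """Combine the task reward with a physiologically decoded latent state."""
    latent = decoder(physio_features)        # e.g., estimated confidence in (0, 1)
    return task_reward + weight * latent

# Toy decoder: a fixed linear read-out standing in for a trained EEG/arousal model.
rng = np.random.default_rng(0)
w = rng.normal(size=16)
decoder = lambda x: float(1.0 / (1.0 + np.exp(-x @ w)))   # squash to (0, 1)

physio_window = rng.normal(size=16)          # one window of physiological features
print(shaped_reward(task_reward=1.0, physio_features=physio_window, decoder=decoder))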

[Ghe18] S Gherman, MG Philiastides, “Human VMPFC encodes early signatures of confidence in perceptual decisions”, eLife, 7: e38293, 2018.

[Pis17] MA Pisauro, E Fouragnan, C Retzler, MG Philiastides, “Neural correlates of evidence accumulation during value-based decisions revealed via simultaneous EEG-fMRI”, Nature Communications, 8: 15808, 2017.

[Fou15] E Fouragnan, C Retzler, KJ Mullinger, MG Philiastides, “Two spatiotemporally distinct value systems shape reward-based learning in the human brain”, Nature Communications, 6: 8107, 2015.

[Vin09] A. Vinciarelli, M. Pantic, and H. Bourlard, “Social Signal Processing: Survey of an Emerging Domain”, Image and Vision Computing Journal, Vol. 27, no. 12, pp. 1743-1759, 2009.

[Bal16] T. Baltrušaitis, P. Robinson, and L.-P. Morency, “OpenFace: An open source facial behavior analysis toolkit”, Proceedings of the IEEE Winter Conference on Applications of Computer Vision, 2016.

[Pre78] D. Premack and G. Woodruff, “Does the chimpanzee have a theory of mind?”, Behavioral and Brain Sciences, Vol. 1, no. 4, pp. 515-526, 1978.

[Fri05] C. Frith, U. Frith, “Theory of Mind”, Current Biology Vol. 15, no. 17, R644-646, 2005.


You Never Get a Second Chance to Make a First Impression – Establishing how best to align human expectations about a robot’s performance based on the robot’s appearance and behaviour.

Supervisors: Mary Ellen Foster (School of Computing Science) and Emily Cross (School of Psychology).

Main aims and objectives:

  • A major aim of social robotics is to create embodied agents that humans can instantly and automatically understand and interact with, using the same mechanisms that they use when interacting with each other. While considerable research attention has been invested in this endeavour, it is still the case that when humans encounter robots, they need time to understand how the robot works; in other words, people need time to learn to read the signals the robot generates. People consistently have expectations that are far too high for the artificial agents they encounter, which often leads to confusion and disappointment.
  • If we can better understand human expectations about robot capabilities based on the robot’s appearance (and/or initial behaviours) and ensure that those are aligned with the actual robot abilities, this should accelerate progress in human-robot interaction, specifically in the domains of human acceptance of robots in social settings and cooperative task performance between humans and robots. This project will combine expertise in robotic design and the social neuroscience of how we perceive and interact with artificial agents to develop a socially interactive robot designed for use in public spaces that requires (little or) no learning or effort for humans to interact with while carrying out tasks such as guidance, cooperative navigation, and interactive problem-solving tasks.

Proposed methods:

  • Computing Science: System development and integration (Developing operational models of interactive behaviour and implementing them on robot platforms); deployment of robot systems in lab-based settings and in real-world public spaces
  • Psychology/Brain Science: Behavioural tasks (questionnaires and measures of social perception, such as the Social Stroop task), non-invasive mobile brain imaging (functional near infrared spectroscopy) to record human brain activity when encountering the artificial agent in question.

Likely outputs:

  • Empirically-based principles for social robot design to optimize alignment between the robot’s appearance, user expectations, and robot performance, based on brain and behavioural data
  • A publicly available, implemented, and validated robot system embodying these principles
  • Empirical research papers detailing findings for a computing science audience (e.g., ACM Transactions on Human-Robot Interaction), a psychology/neuroscience audience (e.g., Psychological Science, Cognition) and a general audience, drawing on the multidisciplinary aspects of the work (e.g., PNAS, Current Biology), as well as papers at appropriate conferences and workshops such as Human-Robot Interaction, Intelligent Virtual Agents, CHI, and similar.

[Fos17] Foster, M. E.; Gaschler, A.; and Giuliani, M. Automatically Classifying User Engagement for Dynamic Multi-party Human–Robot Interaction. International Journal of Social Robotics. July 2017.

[Fos16] Foster, M. E.; Alami, R.; Gestranius, O.; Lemon, O.; Niemelä, M.; Odobez, J.; and Pandey, A. K. The MuMMER project: Engaging human-robot interaction in real-world public spaces. In Proceedings of the Eighth International Conference on Social Robotics, 2016.

[Cro19] Cross, E. S., Riddoch, K. A., Pratts, J., Titone, S., Chaudhury, B. & Hortensius, R. (2019). A neurocognitive investigation of the impact of socialising with a robot on empathy for pain. Philosophical Transactions of the Royal Society B.

[Hor18] Hortensius, R. & Cross, E.S. (2018). From automata to animate beings: The scope and limits of attributing socialness to artificial agents. Annals of the New York Academy of Sciences: The Year in Cognitive Neuroscience.


Robust, Efficient, Dynamic Theory of Mind.

Supervisors: Stacy Marsella (School of Psychology) and Larry Barsalou (School of Psychology).

Background: The ability to function effectively in social settings is a critical human skill, and providing such skills to artificial agents is a core challenge for these technologies. The aim of this work is to improve the social skills of artificial agents, making them more robust, by giving them a skill that is fundamental to effective human social interaction: the ability to possess and use beliefs about the mental processes and states of others, commonly called Theory of Mind (ToM) [Whi91]. Theory of Mind skills are predictive of social cooperation and collective intelligence, as well as key to cognitive empathy, emotional intelligence, and the use of shared mental models in teamwork [references omitted]. Although people typically develop ToM at an early age, research has shown that even adults with a fully formed capability for ToM are limited in their capacity to employ it [Key03][Lin10].

From a computational perspective, there are sound explanations as to why this may be the case. As critical as they are, forming, maintaining and using models of others in decision making can be computationally intractable. Pynadath & Marsella [Pin07] presented an approach, called minimal mental models, that sought to reduce these costs by exploiting criteria such as prediction accuracy and the utility costs associated with prediction errors as a way to limit model complexity. That work is clearly related to work in psychology on ad hoc categories formed in order to achieve goals [Bar83], as well as to ideas on motivated inference [Kin90].
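As a toy illustration of the minimal-mental-models idea (our own simplification, not code from [Pin07]), the Python snippet below picks the least complex model of the other agent whose combined cost, the expected loss from prediction errors plus the cost of reasoning with the model, is smallest. The numbers and cost function are invented for illustration.

def pick_model(candidates, error_cost_per_mistake):
    """Return the candidate model with the lowest total expected cost."""
    def total_cost(m):
        expected_errors = 1.0 - m["accuracy"]
        return expected_errors * error_cost_per_mistake + m["compute_cost"]
    return min(candidates, key=total_cost)

candidates = [
    {"name": "coarse stereotype", "accuracy": 0.70, "compute_cost": 0.1},
    {"name": "detailed ToM model", "accuracy": 0.95, "compute_cost": 2.0},
]

# When prediction errors are cheap the coarse model wins; when they are costly,
# it pays to reason with the richer model.
print(pick_model(candidates, error_cost_per_mistake=1.0)["name"])    # coarse stereotype
print(pick_model(candidates, error_cost_per_mistake=10.0)["name"])   # detailed ToM model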

Approach: This effort seeks to develop more robust artificial agents with ToM using an approach that collects data on human ToM performance, analyzes the data and then constructs a computational model based on the analyses. The resulting model will provide artificial agents with a robust, efficient capacity to reason about others.

a) Study the nature of mental model formation and adaptation in people during social interaction – specifically, how one’s own goals, as well as the other’s goals, influence and make tractable the process of forming and using such models.

b) Develop a tractable computational model of this process that takes into account the artificial agent’s and the human’s goals, as well as their models of each other, in an interaction. Tractability is, of course, fundamental in face-to-face social interactions, where agents must respond rapidly.

c) Evaluate the model in artificial agent – human interactions.

We see this work as fundamental to taking embodied social agents beyond their current limited, inflexible approaches to social interaction with us, towards a significantly more robust capacity. Key to that will be making theory-of-mind reasoning in artificial agents more tractable by taking into account both the agent’s goals and the human’s goals in the interaction.

[Kin90] Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480-498.

[Bar83] Barsalou, L. W. (1983). Ad hoc categories. Memory & Cognition, 11, 211–227.

[Key03] Keysar, B., Lin, S., & Barr, D. (2003). Limits on theory of mind use in adults. Cognition, 89, 25–41.

[Lin10] Lin, S., Keysar, B., & Epley, N. (2010). Reflexively mindblind: Using theory of mind to interpret behavior requires effortful attention. Journal of Experimental Social Psychology, 46, 551–556.

[Pin07] David V. Pynadath & Stacy C. Marsella (2007). Minimal Mental Models. In: AAAI, pp. 1038-1046.

[Whi91] Whiten, Andrew (ed). Natural Theories of Mind. Oxford: Basil Blackwell, 1991


Effective Facial Actions for Artificial Agents.

Supervisors: Rachael Jack (School of Psychology) and Stacy Marsella (School of Psychology).

Facial signals play a critical role in face-to-face social interactions because a wide range of inferences are made about a person from their face, including their emotional, mental and physiological state, their culture, ethnicity, age, sex, social class, and their personality traits (e.g., see [Jac17] for relevant references). These judgments in turn impact how people react to and interact with others, oftentimes with significant consequences such as who is hated or loved, hired or fired (e.g., [Ebe06]). However, given that the human face is a highly complex dynamic information space comprising numerous variations of facial expressions, complexions, and face shapes, understanding what specific face information (or combinations) drives social judgment is empirically very challenging.

In the absence of a formal model of social face signalling, the design of artificial agents’ faces has largely been ad hoc and, in particular, has neglected how facial dynamics impact social judgments, which can in turn limit agents’ performance (e.g., [Che19]). This project aims to address this knowledge gap by delivering a formal model of face signalling for use in artificial agents.

Specifically, the goal of this project is to a) model the space of face signals that drive social judgments during social interactions, b) incorporate this model into artificial agents and c) evaluate the model in different human-artificial agent interactions. The result promises to provide a powerful improvement in the design of artificial agents’ face signalling and social interaction capabilities with broad potential for applications in wider society (e.g., social skills training; challenging stereotyping/prejudice).

The space of facial signals will be modelled using methods from human psychophysical perception studies (e.g., see [Jac17]) and will extend the work of Dr Jack to include a wider range of social signals that are required for face-to-face social interactions (e.g., empathy, agreeableness, skepticism). Face signals that go beyond natural boundaries, such as hyper-realistic or super stimuli, will also be explored. The resulting model will initially be incorporated into artificial agents using the public-domain SmartBody animation platform [Thi08] and may be extended to other platforms. Finally, the model will be evaluated in human-agent interaction using the SmartBody platform and may be combined with other modalities including head and eye movements, hand/arm gestures, and transient facial changes such as blushing, pallor, or sweating (e.g., [Mar13]).

[Jac17] Jack, R. E., & Schyns, P. G. (2017). Toward a social psychophysics of face communication. Annual review of psychology, 68, 269-297.

[Ebe06] Eberhardt, J. L., Davies, P. G., Purdie-Vaughns, V. J., & Johnson, S. L. (2006). Looking deathworthy: Perceived stereotypicality of Black defendants predicts capital-sentencing outcomes. Psychological science, 17(5), 383-386.

[Che19] Chen, C., Hensel, L. B., Duan, Y., Ince, R., Garrod, O. G., Beskow, J., Jack, R. E. & Schyns, P. G. (2019). Equipping Social Robots with Culturally-Sensitive Facial Expressions of Emotion Using Data-Driven Methods. In: 14th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2019), Lille, France, 14-18 May 2019, (Accepted for Publication).

[Mar13] Stacy Marsella, Yuyu Xu, Margaux Lhommet, Andrew Feng, Stefan Scherer, and Ari Shapiro, “Virtual Character Performance From Speech”, in Symposium on Computer Animation, July 2013.

[Mar14] Stacy Marsella and Jonathan Gratch, “Computationally modeling human emotion”, Communications of the ACM, vol. 57, Dec. 2014, pp. 56-67

[Thi08] Marcus Thiebaux, Andrew Marshall, Stacy Marsella, and Marcelo Kallmann, “SmartBody Behavior Realization for Embodied Conversational Agents”, in Proceedings of Autonomous Agents and Multi-Agent Systems (AAMAS), 2008.


Multimodal Interaction and Huggable Robot

Supervisors: Stephen Brewster (School of Computing Science) and Frank Pollick (School of Psychology).

The aim of the project is to investigate the combination of Human Computer Interaction and social/huggable robots for care, the reduction of stress and anxiety, and emotional support. Existing projects, such as Paro (www.parorobots.com) and the Huggable (www.media.mit.edu/projects/huggable-a-social-robot-for-pediatric-care), focus on very simple interactions. The goal of this PhD project will be to create more complex feedback and sensing to enable a richer interaction between the human and the robot.

The plan would be to study two different aspects of touch: thermal feedback and squeeze input/output. These are key aspects of human-human interaction but have not been studied in human-robot settings where robots and humans come into physical contact.

Thermal feedback has strong associations with emotion and social cues [Wil17]. We use terms like ‘warm and loving’ or ‘cold and distant’ in everyday language. By investigating different uses of warm and cool feedback we can facilitate different emotional relationships with a robot. (This could be used alongside more familiar vibration feedback, such as purring). A series of studies will be undertaken looking at how we can use warming/cooling, rate of change and amount of change in temperature to change responses to robots. We will study responses in terms of, for example, valence and arousal.

We will also look at squeeze interaction from the device. Squeezing in real life offers comfort and support. One half of this task will look at squeeze input, with the human squeezing the robot. This can be done with simple pressure sensors on the robot. The second half will investigate the robot squeezing the arm of the human. For this we will need to build some simple hardware. The studies will look at human responses to squeezing, the social acceptability of these more intimate interactions, and emotional responses to them.
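By way of illustration, the short Python sketch below shows one simple way squeeze input could be detected from a stream of pressure-sensor readings, using a threshold with hysteresis; the sensor range and thresholds are assumptions for illustration only, not a specification of the hardware the project will build.

def detect_squeezes(samples, on_threshold=0.6, off_threshold=0.4):
    """Yield (start_index, end_index, peak_pressure) for each detected squeeze."""
    start, peak = None, 0.0
    for i, pressure in enumerate(samples):
        if start is None and pressure >= on_threshold:    # squeeze begins
            start, peak = i, pressure
        elif start is not None:
            peak = max(peak, pressure)
            if pressure <= off_threshold:                 # squeeze ends (hysteresis)
                yield start, i, peak
                start, peak = None, 0.0

readings = [0.1, 0.2, 0.7, 0.9, 0.8, 0.3, 0.1, 0.65, 0.5, 0.35]
print(list(detect_squeezes(readings)))    # [(2, 5, 0.9), (7, 9, 0.65)]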

The output of this work will be a series of design prototypes and UI guidelines to help robot designers use new interaction modalities in their robots. The impact of this work will be to enable robots to have a richer and more natural interaction with the humans they touch. This has many practical applications for the acceptability of robots for care and emotional support.

[Wil17] Wilson, G., and Brewster, S.: Multi-moji: Combining Thermal, Vibrotactile & Visual Stimuli to Expand the Affective Range of Feedback. In Proceedings of the 35th Conference on Human Factors in Computing Systems – CHI ’17, ACM Press, 2017.


Soft eSkin with Embedded Microactuators

Supervisors: Ravinder Dahiya (School of Engineering) and Philippe Schyns (School of Psychology).

Research on tactile skin, or e-skin, has recently attracted significant interest because it is important for safe human-robot interaction. The focus thus far has been on imitating some of the features of human touch sensing. However, skin is not just a medium for feeling the world; it is also a medium for expressing feelings through gestures. For example, the skin on the face is essential for expressing emotions such as varying degrees of happiness, sadness or anger. This important role of skin has received no attention so far, and this project will explore this direction for the first time by developing programmable soft e-skin patches with embedded micro-actuators. Building on the flexible and soft electronics research in the School of Engineering and the social robotics research in the Institute of Neuroscience & Psychology, this project will attempt to achieve the following scientific and technological goals:

  • To identify a suitable actuation method for generating simple emotive features such as wrinkles on the forehead.
  • To develop a soft eSkin patch with embedded microactuators.
  • To use the model developed for a soft eSkin patch with embedded microactuators.
  • To develop an AI approach aimed at programming and controlling the actuators.

Conversational Venue Recommendation

Supervisors: Craig Macdonald (School of Computing Science) and Phil McAleer (School of Psychology).

Increasingly, location-based social networks such as Foursquare, Facebook or Yelp are replacing traditional static travel guidebooks. Indeed, personalized venue recommendation is an important task for location-based social networks. This task aims to suggest interesting venues that a user may visit, personalized to their tastes and current context, as might be detected from their current location, recent venue visits and historical venue visits. Recent venue-recommendation models have employed deep learning techniques to make effective personalized recommendations.

Venue recommendation is typically deployed such that the user interacts with a mobile phone application. To the best of our knowledge, voice-based venue recommendation has seen considerably less research, but it is a rich area for potential improvement. In particular, a venue recommendation agent may be able to elicit further preferences, ask whether the user prefers one venue or another, or ask for clarification on the type of venue or the distance to be travelled to the next venue.

This proposal aims to:

  • Develop and evaluate models for making venue recommendations using chatbot interfaces that can be adapted to voice through the integration of text-to-speech technology, building upon recent neural network architectures for venue recommendation (a minimal illustrative sketch follows this list).
  • Integrate additional factors, such as the personality of the user or other voice-based contextual signals (stress, urgency, group interactions), that can inform the venue recommendation agent.
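To make the starting point concrete, the Python sketch below (using PyTorch) shows a minimal sequence-based recommender that encodes a user’s recent venue visits and scores candidate venues for the next recommendation; the vocabulary size, embedding width and dot-product scoring are illustrative assumptions, not the architectures of [Man17] or [Man18].

import torch
import torch.nn as nn

class NextVenueRecommender(nn.Module):
    """Toy recommender: recent check-in sequence -> a score for every venue."""
    def __init__(self, n_venues=10000, dim=64):
        super().__init__()
        self.venue_emb = nn.Embedding(n_venues, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)

    def forward(self, visit_history):                  # (batch, seq_len) of venue ids
        _, h = self.encoder(self.venue_emb(visit_history))
        user_state = h.squeeze(0)                      # (batch, dim)
        return user_state @ self.venue_emb.weight.T    # (batch, n_venues) scores

model = NextVenueRecommender()
history = torch.randint(0, 10000, (4, 12))             # 4 users, 12 recent check-ins
top5 = model(history).topk(5, dim=-1).indices          # 5 highest-scoring venues each
print(top5.shape)                                      # torch.Size([4, 5])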

 

Venue recommendation is an information access scenario for citizens within a “smart city” – indeed, smart city sensors can be used to augment venue recommendation with information about which areas of the city are busy.

[Man18] Contextual Attention Recurrent Architecture for Context-aware Venue Recommendation. Jarana Manotumruksa, Craig Macdonald and Iadh Ounis. In Proceedings of SIGIR 2018.

[Man17] A Deep Recurrent Collaborative Filtering Framework for Venue Recommendation. Jarana Manotumruksa, Craig Macdonald and Iadh Ounis. In Proceedings of CIKM 2017.

[Dev15] Experiments with a Venue-Centric Model for Personalised and Time-Aware Venue Suggestion. Romain Deveaud, Dyaa Albakour, Craig Macdonald, Iadh Ounis. In Proceedings of CIKM 2015.