Aligned Students
As the Social AI CDT has grown as a centre of excellence for postgraduate training, we have invited University of Glasgow PhD students undertaking related research to join the CDT as Aligned Students. These students benefit from membership of a multidisciplinary cohort of PhD students with increased peer support, invitations to bespoke events funded and administered by the Social AI CDT and our partners, access to the wider Social AI CDT stakeholder network(s) for career collaboration and opportunities, and use of the Social AI CDT Laboratory within the Advanced Research Centre (ARC).
If you are a PhD student at the University of Glasgow whose research aligns with the Social AI CDT research programme and you would like to be considered for Aligned Student status, please contact the CDT for more information.
Alia Ahmed Aldossary
Stress is a pervasive problem that affects people at all levels of society. While there are several methods to manage stress, detecting it remains a challenge, and this is where AI technologies can help. The use of physiological signals and machine learning algorithms to identify stress has become increasingly popular in recent years. In my PhD, I investigate the feasibility and application of stress detection using these techniques. My work contributes to the development of AI technologies that can positively impact human well-being. This research aligns with the principles of the Social AI CDT, which emphasises the creation of AI systems that are not only technically competent but also socially aware and tailored to meet human needs. By developing AI technologies that can detect stress, we can improve the well-being of individuals and communities.
Stress Detection through Analysis of Electrodermal Activity (EDA) in a Public Speaking Context
This project aims to detect stress through the application of self-learning models, focusing on Electrodermal Activity (EDA) analysis alongside other physiological signals. The research explores the efficacy and application of various self-supervised learning approaches, built on architectures such as Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNNs), in creating stress detection models tailored to individual physiological profiles.
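As an illustration of the kind of preprocessing such models typically build on, the sketch below (not the project's actual pipeline; the window length, sampling rate, and the crude SCR-peak heuristic are all assumptions) slides a window over a synthetic EDA trace and extracts simple per-window features:

```python
import numpy as np

def eda_features(signal, fs=4, win_s=30, step_s=15):
    """Slide a window over a raw EDA trace and compute simple
    per-window features often fed to stress classifiers: tonic
    level (mean), phasic variability (std), and a crude
    skin-conductance-response count (ends of rising edges in
    the first difference)."""
    win, step = int(win_s * fs), int(step_s * fs)
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        d = np.diff(w)
        scr_peaks = np.sum((d[:-1] > 0.01) & (d[1:] <= 0.01))
        feats.append([w.mean(), w.std(), scr_peaks])
    return np.array(feats)

# Synthetic 5-minute EDA trace sampled at 4 Hz (illustrative only).
rng = np.random.default_rng(0)
eda = 2.0 + 0.1 * rng.standard_normal(5 * 60 * 4)
X = eda_features(eda)
print(X.shape)  # one row of [mean, std, SCR count] per 30 s window
```

Feature matrices like `X` would then be the input to a downstream classifier or self-supervised encoder.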
Abdulrahman Alshememry
Enabling quadruped robots, i.e., AI agents, to cooperate with humans to perform joint tasks involves considering the social interaction between the human agent and the embodied artificial agent. Moreover, designing a cognitive model for cooperative quadruped robots would also help improve the integration of artificial agents into different environments more naturally. The design and evaluation of such a cognitive model should also take into account human responses to cooperative quadruped robots. Therefore, this research aligns with many of the core aspects of the Social AI CDT research programme, e.g., to outline principles and laws that govern social interactions between human and artificial agents, and to improve the impact of artificial agents through their integration into wider and more complex technological systems and infrastructures.
Developing a Cognitive Model for Quadruped Robots to Improve Human-Robot Interaction
In this project, I will work with a quadruped robot platform and explore how to develop a cognitive model for the robot. I will survey existing cognitive models, choose one suitable for a mobile quadruped robot, and then redesign and implement the chosen model on the platform. The purpose of applying the cognitive model to the quadruped robot is to enable it to cooperate with humans in performing different tasks autonomously, which would improve human-robot interaction.
Ebtihal Abdulmaeen Althubiti
My alignment with the Social AI CDT is evident in the use of my bigraphical framework to design, analyse and prove the adherence of AI systems to different privacy regulations, including the European Union (EU) General Data Protection Regulation (GDPR), the Australian Privacy Principles, the California Consumer Privacy Act (CCPA), the Saudi Arabian Personal Data Protection Law (PDPL), the Georgia Computer Data Privacy Act (GCDPA), and the American Data Privacy and Protection Act (ADPPA). In particular, I prove the adherence of AI systems to the privacy regulations governing data collection, the purpose of data processing, sharing data with third parties, and the requirements for cross-border data transfer. My framework enables cooperation between developers of AI systems and privacy experts in designing systems around these regulations, as the diagrammatic notation helps privacy experts understand the system and offer their advice.
Formal Analysis of Privacy Properties with Bigraphs
Many governments have imposed legislation regulating the handling of user data, such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). Ensuring that organisations comply with these regulations is essential. Formal methods, which are techniques for modelling systems mathematically, provide strong guarantees for proving an organisation's adherence to privacy regulations. My project focuses on defining a framework based on Bigraphical Reactive Systems (BRSs) to prove compliance with privacy regulations within such systems. BRSs are a universal model of computation for modelling system behaviours in which connectivity and locality change as the system evolves. BRSs feature user-specified reaction rules that are defined both mathematically and diagrammatically. I focus on using the diagrammatic notation to allow developers and privacy experts (privacy lawyers) to visualise systems explicitly and to describe updates through rewriting rules that capture system behaviour. My project concentrates on modelling and proving notions of providing consent, withdrawing consent, purpose limitation, the right to access, sharing data with third parties, and international data transfers.
Tongfei Bian
My work helps socially intelligent agents learn the principles and laws of human face-to-face social interaction, understand and analyse users and information in ongoing social interactions, and predict the likely direction of an interaction as it unfolds, making the Social AI CDT a great fit for me and my project. My research will lay a solid foundation for socially intelligent agents to participate naturally and appropriately in human social interactions, making them credible and effective partners for human users in social situations.
Social Interaction Understanding from an Egocentric Perspective
My project aims to explore the potential for socially intelligent agents to learn human social intelligence and apply it in face-to-face social scenarios. Specifically, I will train socially intelligent agents to observe and participate in human social interactions from an egocentric perspective; to analyse social information across feature and time dimensions based on multimodal signals such as sequences of postural features, voice features, and semantic information; to recognise and forecast users' social intentions, attitudes, and behaviours; and to provide appropriate and timely feedback accordingly.
Andrew Blair
My alignment with the Social AI CDT is clear, as my research makes use of Large Language Models and Computer Vision techniques, both falling within the domain of Deep Learning and thus Artificial Intelligence. A social robot's success depends on its ability to “sense” the user, whether through audio or vision, and to accurately predict the user's intent in order to achieve task success. In addition, my research uses robots similar to those of other Social AI CDT students, with the potential for increased peer support and collaboration.
Designing for Robust and Inclusive Public-Space Human-Robot Interaction
This research explores how to design a social robot for public spaces. I seek to do this in three ways: (1) embedding and adapting existing software engineering practices in conjunction with all stakeholders; (2) implementing robust dialogue capability, such as accurate ASR and the ability to respond in some capacity to any query posed; and (3) making the robot inclusive to all, as public spaces are frequented by a diverse population. I plan to evaluate success through real-world deployments of my robot systems.
Kan Chen
Telerobotics combines remote operation with robotic technology, enabling highly precise control of robots through remote communication. It gives operators a higher degree of control and offers significant advantages in high-risk environments, and it is widely applied in fields such as healthcare, deep-sea research, and industrial production. This breadth makes my involvement with the Social AI CDT valuable for collaboration with other CDT students working in similar research areas.
Communication and Control Co-design for Telerobotics
Future wireless communication systems (beyond-5G and 6G) are expected to provide digital connectivity for intelligent systems, such as robots, vehicles, and drones, and to support a wide range of future applications in transportation, industry, healthcare, and beyond. However, these applications impose extreme communication requirements, such as low latency, high reliability, and high data rates, which pose significant challenges for future wireless systems. In my project I address these scalability and sustainability challenges by jointly designing, with my supervisors, control and communication systems. The goal is to provide an effective communication solution for future connected intelligent systems.
Xuan Cui
Metacognition, which encompasses self-evaluation and self-monitoring, plays a critical role in how individuals assess their own knowledge and guide further action. Investigating these mechanisms could facilitate the development of artificial agents that not only recognise sensory inputs but also adapt their responses based on self-assessments of the environment and of their own uncertainty. This capability could enhance agents' interactions with users, enabling them to respond more effectively to varying contexts and user needs.
Behavioural Modelling and Neural Representation of Metacognition
In my PhD project, I investigate visual perception and metacognition through behavioural modelling, aiming to develop a comprehensive model that integrates sensory evidence, trial-by-trial variability, and task interactions. To uncover the cognitive and neural mechanisms underlying these processes, I further examine the neural representation of visual perception and metacognition by applying information-theoretic methods and reverse correlation to magnetoencephalography (MEG) data.
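As a minimal illustration of the information-theoretic side, the sketch below (a generic example, not the project's actual analysis; the binary stimulus and binarised response are assumptions) estimates mutual information between a stimulus label and a neural response from empirical frequencies:

```python
import numpy as np

def mutual_information(x, y):
    """Mutual information I(X;Y) in bits between two discrete
    sequences, estimated from the empirical joint distribution."""
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            px, py = np.mean(x == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * py))
    return mi

rng = np.random.default_rng(1)
stim = rng.integers(0, 2, 1000)           # e.g. two stimulus conditions
resp = stim ^ (rng.random(1000) < 0.1)    # binarised MEG feature, 10% flips
print(round(mutual_information(stim, resp), 3))
```

With a 10% flip rate the estimate lands near 1 − H(0.1) ≈ 0.53 bits; in practice such estimators need bias correction for small trial counts.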
Connor Dalby
Given that my research applies Artificial Intelligence within the field of medicine, as a Social AI CDT Aligned Student I am excited to share ideas and collaborate with my fellow CDT students in an interdisciplinary way, and I look forward to contributing to and learning from the various training and networking events the Social AI CDT has on offer.
DeepThickness: A Novel Deep Learning Method for Estimating Cortical Thickness Trajectories in Alzheimer’s Patients and Healthy Populations
Alzheimer’s disease (AD) is a neurodegenerative disease that presents critical challenges in diagnosis and treatment. Emerging research indicates that AD-related cortical changes, such as cortical thickness (CTh), can appear up to a decade before cognitive symptoms. Accurately measuring CTh can therefore offer a significant avenue for early AD diagnosis and for monitoring clinical progression. Current methods for measuring CTh, however, are time-consuming and methodologically flawed, limiting their utility in research and clinical settings. This underscores the need for a rapid, accurate and scalable CTh measurement technique suitable for both large datasets and individualised applications. The current PhD project aims to utilise recent advances in deep learning to build a tool that can generate accurate whole-brain CTh estimates from MR images in under a minute. Leveraging extensive longitudinal data, we aim to map CTh and clinical trajectories over time for healthy and AD populations, ultimately personalising these trajectories to an individual level. Finally, we will evaluate the diagnostic and prognostic utility of CTh estimates for Alzheimer’s disease.
Yufeng Diao
My research aligns with the Social AI CDT as it focuses on improving how machines understand and respond to their environments, specifically in the context of autonomous systems. By enhancing communication efficiency and integrating task-oriented designs, my work contributes to more effective real-time decision-making and interaction between AI-driven systems and the physical world. These improvements are vital for the future of socially aware AI systems, particularly in areas like autonomous driving, where machine interactions with complex environments are critical.
Task-Oriented Communication for Edge Intelligence Enabled Connected Robotics Systems
The 5th and 6th generation cellular networks (5G and 6G), with their impressive performance, offer great potential for exchanging data and skills over wireless networks. The integration of advanced communication systems with robotic technology is driving the development of connected autonomous robotic systems, such as edge-enabled autonomous driving. These advances hold immense potential across sectors including Industry 4.0, healthcare, and education, as highlighted during the COVID-19 pandemic, when autonomous robots were instrumental in mitigating socio-economic impacts in emergency scenarios where human presence was limited. In this project, task-oriented design techniques are studied to address these challenges from the perspective of co-designing communication protocols, robotic systems, and inference engines.
Austin Dibble
My PhD project aligns with the Social AI CDT’s mission by developing AI methods that interpret complex human conditions like Alzheimer’s through neuroimaging data analysis. This research aims to advance AI’s capability in recognising and adapting to human cognitive health, and to enhance interaction between AI systems and clinicians by providing actionable insights based on minimal data. The project will foster AI’s understanding of human neuropsychological states, supporting the goal of creating socially intelligent artificial agents that respond effectively to human needs and conditions in healthcare settings.
A novel Artificial Intelligence Method for inferring impaired cognition trajectories from MRI data in patients with Alzheimer’s dementia
The PhD project focuses on developing AI methods to predict and monitor cognitive impairment trajectories in Alzheimer’s patients using MRI data. It aims to create predictive models that use minimal initial data, such as one or two MRI scans, to map the progression of cognitive decline. This approach seeks to expand predictive understanding of neurodegenerative diseases, providing precise, proactive healthcare solutions. The project promises potential advances in early detection and tailored treatment strategies, potentially improving outcomes for those affected by Alzheimer’s and similar conditions.
Morad Elfleet
My research aligns with the Social AI CDT by enhancing the non-verbal communication abilities of virtual agents, specifically through gaze behaviour. Gaze is crucial for conveying attention, engagement, and emotional states in human interactions, making it equally essential for embodied conversational agents. By leveraging saliency mapping and real-time user feedback, the project aims to make virtual agents more socially aware and context-sensitive, improving interactions in immersive environments like VR. This work contributes to creating agents that better mimic human social behaviours, increasing user trust, engagement, and overall social presence in human-agent interactions.
Enhancing Immersive Virtual Interactions with Real-time Behaviour-Driven Virtual Agents
My research focuses on developing and evaluating gaze behaviours in virtual agents within virtual reality (VR). Specifically, it explores two primary approaches: (1) using saliency mapping to drive the agent’s gaze towards high-interest areas in real-time and (2) user experience evaluation to study how varying gaze patterns influence user comfort, immersion, and social presence during interactions. The project aims to create more natural and socially aware agents by integrating intelligent gaze behaviours, grounded in visual cues and user feedback, improving how virtual agents engage with users in dynamic, immersive VR environments.
Bishal Ghosh
The idea of using imitation learning and a developmental approach stems from a combination of the innateness hypothesis, behaviourist theory, cognitive development theory, and interaction theory. Perspective taking and social referencing play a major role during the developmental period by acting as guides to learning, and simulation theory provides vital insights into the explainability of behaviours. These psychological theories form the cornerstone of this research. My intent to model them computationally brings together computing science and psychology in the domain of social robotics, resulting in my clear alignment with the Social AI CDT.
Developmental Approach to Imitation Learning: Adapting Nonverbal Communication Dynamics to Human-Robot Social Interaction
Given the current trend in gesture research, we are compelled to ask whether the role of gesture in human-robot social interaction is limited to a supporting role, or whether it can convey deeper meanings. To overcome the current impasse, I would like to explore the gesture generation problem by drawing inspiration from the emergent behaviour paradigm. The main objective of this research is to enable robots to learn how to communicate efficiently by observing human-human interaction. This objective will be achieved by (1) developing a baseline method for learning the principles of human-human interaction autonomously from visual data and (2) allowing the robot to learn and develop in an online manner during human-robot interaction sessions in the wild.
Junle Li
My research sits at the intersection of Human-Computer Interaction and Artificial Intelligence, focusing on integrating social signal processing into secure system design. I leverage approaches from the Social AI discipline to model complex human behaviours—specifically gaze and head pose—to interpret user intent within immersive environments. Furthermore, my work on LLM agent alignment explores how socially intelligent systems can be built to process sensitive behavioural signals securely. This combination of sensor-based behavioural analysis and robust AI safety frameworks drives my contribution to the field of Artificial Social Intelligence.
Leveraging Behavioural Biometrics for Secure and Usable Authentication in Immersive Virtual Environments
This research investigates how behavioural biometrics can be leveraged to enhance both usability and security in immersive virtual environments. By utilising rich user data captured by standard VR sensors—such as gaze fixations and saccades from eye-tracking and movement dynamics from Inertial Measurement Units (IMUs)—the project models fundamental social signals naturally used in human interaction. The goal is to develop continuous authentication mechanisms that verify identity implicitly, preserving the social presence and interaction flow essential for collaborative virtual spaces.
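A minimal sketch of how such implicit verification could work, assuming hypothetical fixation-duration and saccade-amplitude features and a simple z-score distance to an enrolled template (an illustration only, not the project's actual method):

```python
import numpy as np

def enroll(sessions):
    """Build a user template (feature means and stds) from enrollment
    sessions; each session is an array of rows of
    [fixation duration (ms), saccade amplitude (deg)]."""
    all_feats = np.vstack(sessions)
    return all_feats.mean(axis=0), all_feats.std(axis=0) + 1e-9

def verify(template, session, threshold=2.0):
    """Accept if the probe session's mean feature vector lies within
    `threshold` average z-score units of the enrolled template."""
    mu, sigma = template
    z = np.abs((session.mean(axis=0) - mu) / sigma)
    return z.mean() < threshold

# Synthetic gaze statistics for one user (assumed distributions).
rng = np.random.default_rng(2)
genuine = [rng.normal([250, 4.0], [40, 1.0], size=(200, 2)) for _ in range(3)]
template = enroll(genuine)
probe_genuine = rng.normal([250, 4.0], [40, 1.0], size=(200, 2))
probe_impostor = rng.normal([150, 7.0], [40, 1.0], size=(200, 2))
print(verify(template, probe_genuine), verify(template, probe_impostor))
```

A real continuous-authentication system would re-run this check on a sliding window of recent behaviour rather than on a single probe session.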
Michael Mooney
My project addresses a fundamental social justice issue: the under-diagnosis of, and insufficient support for, dyslexic students. Combining psychology and AI, it explores the cognitive, behavioural, and physiological mechanisms underlying reading difficulties. Advanced deep learning models allow us to automate assessments and highlight specific breakdowns in reading, enabling me to create a dedicated AI agent for each student that offers tailored interventions. This integrated approach ensures equitable access by embedding the screening and support tool within existing educational infrastructures, reaching diverse populations and reducing barriers. The project refines and optimises the screening process, contributing to a new paradigm in which all learners receive the assistance they deserve. Ultimately, my hope is that these artificial agents ensure that no one is truly left behind.
Multi-modal Approach to Screening Reading Disabilities using Deep Learning Methods in High School Aged Students
Dyslexia affects around 10–15% of the UK population, with an estimated six million individuals undiagnosed. Many students pass through the entire education system without receiving appropriate support. Even those who are diagnosed sometimes receive only a binary label rather than personalised help, owing to significant backlogs and high pressure to issue quick diagnoses. This project aims to develop a screening system that identifies specific cognitive breakdowns in reading within twenty minutes. By collecting a large, multimodal dataset and applying advanced deep learning models in real time, I can tailor interventions to each student’s needs, ultimately improving the educational experience for those with dyslexia.
Sundas Rafat Mulkana
My research outcomes aim to contribute significantly to the integration of safe collaborative robots in social, healthcare, and industrial environments. By addressing the critical need for safety guarantees and naturalness in human-robot interaction, this work aims to pave the way for safer and more effective human-robot collaboration. This aligns well with the Social AI CDT theme, as it aims to promote safety and trust in human-robot interaction.
Robot Motion Planning in Dynamic Environment
The future direction of collaborative robots is shifting from predefined rules for human-robot interaction in structured environments, such as industrial settings, towards greater flexibility of action in unstructured environments, such as households and public places. These futuristic robots, trained with learning-based methods, show promising results in simulation and controlled environments. However, when deploying them in the real world, ensuring human physical and cognitive safety raises major concerns. This necessitates the development of methods that impose provable safety constraints on robot motion, making the robot cognisant of its proximity to humans while performing a task and thus providing formal guarantees that a robot trained with learning-based methods will not come into harmful contact with humans. Additionally, studying the effect of these safety constraints on task performance and on the ease of collaboration, through user feedback in joint-action tasks between human and robot, would further help improve robot behaviour.
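One simple way to picture a provable proximity constraint (an illustration only; real approaches such as control barrier functions are considerably more sophisticated, and all parameters here are assumptions) is to cap the robot's closing speed towards a human as a function of their separation:

```python
import numpy as np

def safe_velocity(v_cmd, robot_pos, human_pos, d_min=0.5, gain=1.0):
    """Clip the component of the commanded velocity that points
    toward the human so the robot can never close below d_min:
    the allowed closing speed shrinks linearly to zero as the
    separation approaches the safety margin."""
    diff = human_pos - robot_pos
    d = np.linalg.norm(diff)
    toward = diff / d
    v_toward = np.dot(v_cmd, toward)   # current closing speed
    v_max = gain * (d - d_min)         # allowed closing speed
    if v_toward > v_max:
        v_cmd = v_cmd - (v_toward - v_max) * toward
    return v_cmd

# Robot commanded straight at a human 0.6 m away with a 0.5 m margin:
v = safe_velocity(np.array([1.0, 0.0]),
                  np.array([0.0, 0.0]),
                  np.array([0.6, 0.0]))
print(v)  # closing speed capped at gain * (0.6 - 0.5) = 0.1
```

Motion away from the human passes through unmodified, so the filter constrains only the unsafe component of whatever a learning-based policy outputs.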
Amey Anil Noolkar
A key focus of my research is the social application of AI techniques that help monitor the well-being of employees, making me a great fit for the Social AI CDT. As part of my PhD, I will work on tasks such as data gathering and analysis, which not only have technical value but also advance our understanding of well-being in the workplace. In addition to being a Social AI CDT Aligned Student, I am also part of the University of Glasgow’s Social AI Group.
Digital Sensing and Intervention for Well-being in Workplace
The objective of my project is to apply signal processing and machine learning techniques to data acquired from wearable sensors, such as photoplethysmography (PPG) and electrodermal activity (EDA) sensors, and to contribute towards healthcare and monitoring in a workplace setting. The first research question concerns the quality of signals acquired by PPG sensors and how it varies with the configuration and location of the sensor. The final goal is to design prototypes that acquire and process such signals more effectively than the state of the art.
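As a small illustration of the signal-processing starting point, the sketch below (synthetic data; the band limits are an assumption) estimates heart rate from a PPG segment by locating the dominant spectral peak in the plausible cardiac band:

```python
import numpy as np

def ppg_heart_rate(ppg, fs):
    """Estimate heart rate (BPM) from a PPG segment by finding the
    dominant spectral peak within the plausible cardiac band
    (0.7-3 Hz, i.e. 42-180 BPM)."""
    spectrum = np.abs(np.fft.rfft(ppg - ppg.mean()))
    freqs = np.fft.rfftfreq(len(ppg), d=1 / fs)
    band = (freqs >= 0.7) & (freqs <= 3.0)
    return 60 * freqs[band][np.argmax(spectrum[band])]

# Synthetic 10 s PPG at 100 Hz: a 1.2 Hz cardiac component plus noise.
fs = 100
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(3)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * rng.standard_normal(t.size)
print(ppg_heart_rate(ppg, fs))  # ~72 BPM
```

Signal-quality assessment could reuse the same spectrum, for example by comparing the in-band peak power against the out-of-band noise floor.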
Francesco Perrone
My research is well aligned with the Social AI CDT, as it focuses on those human-robot interactions that require some level of moral capability on the robot’s side. My aim is to investigate, develop, and test theories and applications that imbue robots with such capabilities, facilitating scalable and robust autonomous moral decision-making in artificial agents.
Experimental Methods for Moral Behaviour Analysis in Human-Robot Interactions
TBC
Xiangmin Xu
Integrating autonomous robotics into 3D reconstruction tasks requires careful trajectory planning to optimise performance. However, when deploying these systems in human-populated environments, designers must incorporate human factors into the planning process. This approach ensures a natural and comfortable coexistence between humans and robots, paving the way for potential human-robot interaction designs in 3D reconstruction applications. My research on human-robot interaction and path planning aligns closely with the objectives of the Social AI CDT.
Real-Time Scene Reconstruction for Autonomous Systems
My research focuses on developing real-time 3D scene reconstruction pipelines for autonomous robotic platforms using cutting-edge machine learning methods. It also aims to implement communication schedulers that optimise overall reconstruction performance while balancing the trade-off between timeliness and fidelity of the final output.
Jiaming Yang
My research aligns with the Social AI CDT’s mission to improve the impact of artificial agents by integrating them into wider and more complex technological systems. By developing a task-oriented edge computing framework for autonomous vehicles, my work contributes to the advancement of artificial agents capable of operating autonomously in dynamic environments. This integration improves their interaction within complex systems, such as smart warehouses and healthcare facilities, ensuring safe and efficient operations. The framework’s adaptability and real-time processing capabilities highlight its potential to enhance the functionality and impact of AI in diverse technological infrastructures.
Task-Oriented Cooperative Data Compression and Communications Based on Edge Computing
My project focuses on advancing human-robot (AI agent) interaction, communication efficiency, and data handling for autonomous robots, such as autonomous vehicles in a warehouse, through a task-oriented edge computing framework. The project aims to design a novel framework that integrates reinforcement learning, deep learning, and data compression to control an autonomous vehicle, ensuring human safety.
Junhan Yang
Gestures that accompany speech are an essential part of natural and efficient embodied agent communication. My research enables social agents to produce expressive speech and gesture and to interact better in dyadic conversations, utilising data-driven deep learning approaches and linguistic knowledge both to generate and to evaluate semantically integrated co-speech gestures. This makes me and my project align well with the Social AI CDT’s goal of making social agents trustworthy and effective partners in real-time interactions.
Co-speech Gesture Generation and Evaluation for Embodied Agents
My PhD research focuses on co-speech gesture generation and evaluation for embodied conversational agents. I will develop models that generate natural, context-appropriate hand gestures based on spoken language signals like audio prosody, timing, and transcripts or semantic cues. A key goal is improving semantic relevance and interaction timing, so gestures support meaning rather than merely matching rhythm. Alongside generation, I will build a rigorous evaluation framework combining objective motion/coordination metrics with human-centered assessment of appropriateness, clarity, and social impact. The outcome will be methods and benchmarks that help agents communicate more effectively in various social situations.
Rongyu Yu
My project focuses on investigating robotic mimicry attacks on human behavioural biometrics and developing more robust defence methods against such attacks. By analysing these threats and creating enhanced security measures, my research aims to secure robotic systems and improve the resilience of human-robot interactions. This aligns with the Social AI CDT’s objectives by establishing principles that govern human-AI interactions at cognitive, behavioural, and physiological levels, enhancing the integration of secure artificial agents into complex systems, developing defence strategies for trustworthy human-agent interactions, and examining human responses to intelligent agents in everyday life.
Securing Robotic Systems: Human Behavioural Biometrics and Defence Against Robot Mimicry Attacks
My PhD project focuses on applying human behavioural biometrics to secure robotic systems, with a comprehensive analysis of human behaviour and interactions while humans interact with robots (AI agents). The project also investigates robotic mimicry attacks on these systems, seeking to understand their vulnerabilities and to develop defence strategies that ensure the resilience and trustworthiness of robotic systems in real-world human interactions.