International Workshop on AI and Ethics

  • Organizers: Monika Harvey and Alessandro Vinciarelli (University of Glasgow)
  • Date: July 2nd, 2021
  • Time: 10.00 – 17.00 (UK time)

In his recent novel “Machines Like Me”, trying to imagine how an advanced AI system might live among humans, Ian McEwan identifies ethics as a major obstacle for machines seeking to integrate seamlessly into the lives of their users: “You’ll need to give this mind some rules to live by. How about a prohibition against lying? […] But social life teems with harmless or even helpful untruths […] Who’s going to write the algorithm for the little white lie that spares the blushes of a friend?”. In a similar vein, Brian Cantwell Smith recognises the ability to act ethically as a major difference between machines and humans, with the latter capable of something that, at least for the moment, is not accessible to machines: “[…] a form of dispassionate deliberative thought, grounded in ethical commitment and responsible action, appropriate to the situation in which it is deployed”.

The goal of this workshop is to explore ethical issues revolving around Artificial Intelligence and its real-world applications, especially when these have consequences for people’s lives or involve technologies that disguise themselves as humans to be more persuasive and appealing (e.g., personal assistants and social robots). Special attention will be paid to ethics as an innovation stimulus, i.e., as an “instrument” for the development of technologies that can work better because they are not affected by biases, prejudices and unethical motivations.

Program

  • 10.00 – 11.00: Adam Leon Smith – Bias in Artificial Intelligence;
  • 11.00 – 12.00: David Bodoff – Learning and Biases, in Social Science and Machine Learning;
  • 12.00 – 13.00: Raja Chatila – Responsible Development of AI-based systems;
  • 13.00 – 14.00: Lunch Break;
  • 14.00 – 15.00: Bridgette Wessels – Developing a framework to think about, discuss and design AI informed services with and for society;
  • 15.00 – 16.00: Olivia Gambelin – Ethics as a Tool for Innovation;
  • 16.00 – 17.00: Panel and Final Remarks.

Bias in Artificial Intelligence

Adam Leon Smith
Dragonfly (UK)

Abstract In this talk Adam will unpack the loaded term “bias” and discuss its various forms. He will explain the many ways in which bias can be injected, and outline practical steps to mitigate the risk of unwanted bias. He will also cover the regulatory requirements relating to bias in the GDPR and the upcoming EU AI Regulation.

Bio Adam Leon Smith is Chief Technology Officer of Dragonfly, a European consultancy, training and products firm that specialises in the intersection of AI and quality. Adam also runs regular training courses in testing, test automation and testing issues related to AI systems. In addition, he is the current Chair of the British Computer Society’s Special Interest Group in Software Testing, the largest non-profit testing group in the UK.

He is very active in ISO/IEC’s Artificial Intelligence standardisation community, where he leads the ISO/IEC projects developing a technical report on AI bias and a standard extending systems quality models to cover AI. Adam is a Fellow of the British Computer Society and a Director of ForHumanity, an independent oversight body auditing the use of AI technology. He also hosts the Fuzzy Quality podcast, which explores the latest research on AI and quality.

Learning and Biases, in Social Science and Machine Learning

David Bodoff
University of Haifa (Israel)

Abstract Machine learning is a technology that makes decisions, including decisions that directly affect humans. There is concern that machine-learning decisions are biased. This concern begets numerous questions: What is bias? Can social scientists help to define and identify whether a machine-learning model is biased? How can social scientists and machine-learning professionals work together to help make machine-learning models less biased? Done right, might machine-learning models ultimately lead to less bias than a world based solely on human decision-making? In this pep talk I hope to encourage students with the right mix of skills – from the social sciences and machine learning – to take the lead in addressing these questions and solving them in the real world.

Bio David Bodoff is Senior Lecturer at the University of Haifa School of Business. His PhD is from the NYU Stern School of Business. David worked in symbolic AI until that field collapsed, worked for many years in text search and retrieval, and re-joined the field of AI when text retrieval discovered machine learning. His research has been published in a variety of CS, economics, and Information Systems outlets, such as ACM SIGIR, NIPS, Experimental Economics, ACM Transactions on Information Systems, MIS Quarterly, and Information Systems Research. He serves as Associate Editor of Information & Management (Elsevier).

Developing a framework to think about, discuss and design AI informed services with and for society

Bridgette Wessels
University of Glasgow (UK)

Abstract The promise of AI to enhance economic and social processes is often countered by fears of the risks it poses in areas such as reinforcing bias, invading privacy and job losses. These types of debate are now well entrenched in their respective discourses. One of the questions raised by these positions is: are the right questions being asked about AI, and are there other ways to address how to use AI for good? Following on from this is a second question: if the right questions are not being asked, in what ways can better questions be developed? This talk will explore possible ways of arriving at better questions using a set of tensions in the development and use of algorithms, data and AI. The tensions that will be discussed include: (a) using algorithms to make decisions and predictions more accurate versus ensuring fair and equal treatment; (b) reaping the benefits of increased personalisation in the digital sphere versus enhancing solidarity and citizenship; (c) using data to improve the quality and efficiency of services versus respecting the privacy and informational autonomy of individuals; (d) using automation to make people’s lives more convenient versus promoting self-actualisation and dignity (Whittlestone et al. 2019). The talk will draw on these to consider whether interdisciplinary and participatory research is needed and, if so, what the characteristics of such research and participatory approaches should be.

(Whittlestone, J., Nyrup, R., Alexandrova, A., Dihal, K., and Cave, S. (2019) Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research. London: Nuffield Foundation.)

Bio Bridgette Wessels is Professor of Social Inequality, focusing specifically on the digital age. She has undertaken research into the social and cultural aspects of the innovation and use of digital technologies and services in a wide variety of contexts. Her funded research has, for example, addressed areas such as policing and digital technologies, telehealth, digital technologies in everyday life, digital cultures, digital systems and transport, work and technology, social media and political communication, as well as open data and the knowledge society. She is currently working on a Nuffield Foundation project on data literacies, an AHRC project using computational ontologies and an ESRC project on digital systems and economic productivity.

Responsible Development of AI-based systems

Raja Chatila
Sorbonne University (France)

Abstract Modern AI systems based on Machine Learning techniques process data to predict outcomes and to make decisions. They have come into widespread use in almost all sectors, embedded in systems such as interactive agents (e.g., chatbots) or used as standalone systems, e.g., for visual recognition. However, the very methods on which these AI systems are based are opaque, prone to bias and may produce wrong answers. This raises several ethical, legal and societal issues, to the point that the European Commission has proposed an AI regulation.

Properties such as transparency, explainability, technical robustness and safety are key to building governance frameworks and making them operational, so that the development and use of AI systems are aligned with fundamental values and human rights.

Bio Raja Chatila is Professor Emeritus of Artificial Intelligence, Robotics and IT Ethics at Sorbonne University in Paris. He is former director of the SMART Laboratory of Excellence on Human-Machine Interactions and of the Institute of Intelligent Systems and Robotics. He contributes to several areas of Artificial Intelligence and autonomous and interactive Robotics, as well as to the ethics of information technologies. He is chair of The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. He is an IEEE Fellow and recipient of the IEEE Robotics and Automation Society Pioneer Award.

Ethics as a Tool for Innovation

Olivia Gambelin
Ethical Intelligence (UK)

Abstract Ethics is most often described as a risk mitigator in terms of industry application. This tends to give ethics a negative connotation, when in reality ethics was originally described as ‘the pursuit of the good life’. What this means for industry specifically is that ethics can be used as a tool for innovation, channeling creativity and building trust in our systems.

Bio Olivia Gambelin is the Founder of Ethical Intelligence and an AI Ethicist who works with entrepreneurs to bring ethical analysis into technology development. Originally from Silicon Valley, Olivia began her career working in digital strategy for tech startups. This experience prompted her to pursue a BA in Philosophy and Entrepreneurship from the Honors College of Baylor University, followed by an MSc in Philosophy, with a concentration in AI Ethics, at the University of Edinburgh. Now, besides steering Ethical Intelligence, she sits on the Founding Editorial Board of Springer Nature’s AI & Ethics Journal and the Save the Children US External Ethics Committee, and is co-chair of IEEE’s AI Expert Network Criteria Committee.