"The public is both fascinated and mystified about how AI will shape our future," explained AIES Co-chair Francesca Rossi of IBM Research and the University of Padova. "But no one discipline can begin to answer these questions alone. We've brought together some of the world's leading experts to imagine how AI will transform our future and how we can ensure that these technologies best serve humanity."
Conference organizers encouraged the submission of research papers on a range of topics, including building ethical AI systems, the impact of AI on the workforce, AI and the law, and the societal impact of AI. Of the 200 submissions, 61 papers were selected for presentation at the conference.
The programme of AIES 2018 also includes invited talks by leading scientists, panel discussions on AI ethics standards and the future of AI, and the presentation of the leading professional and student research papers on AI. Co-chairs include Francesca Rossi, a computer scientist and former president of the International Joint Conference on Artificial Intelligence; Jason Furman, a Harvard economist and former Chairman of the Council of Economic Advisers (CEA); Huw Price, a philosopher and Academic Director of the Leverhulme Centre for the Future of Intelligence; and Gary Marchant, Regents Professor of Law and Director of the Center for Law, Science and Innovation at Arizona State University.
AIES 2018 highlights include:
Iyad Rahwan and Edmond Awad describe the Moral Machine, an internet-based serious game exploring the multi-dimensional ethical dilemmas faced by autonomous vehicles. The game they developed enabled them to gather 40 million decisions from 3 million people in 200 countries and territories. They report the various preferences estimated from this data and document interpersonal differences in the strength of those preferences. They also report cross-cultural ethical variation, uncovering major clusters of countries that exhibit substantial differences along key moral preferences. These differences correlate with modern institutions, but also with deep cultural traits. Rahwan and Awad discuss how these three layers of preferences can help progress toward global, harmonious, and socially acceptable principles for machine ethics.
At the dawn of this era of human-machine interaction, human beings have an opportunity to fundamentally shape the ways in which machine learning will expand or contract the human experience, both individually and collectively. As efforts to develop guiding ethical principles and legal constructs for human-machine interaction move forward, how do we address not only what we do with AI, but also the question of who gets to decide, and how? Are guiding principles of 'Liberty and Justice for All' still relevant? Does a new era require new models of open leadership and collaboration around law, ethics, and AI?
When we think about the values AI should have in order to make right decisions and avoid wrong ones, there is a large but hidden third category to consider: decisions that are not wrong but also not right. This is the grey space of judgment calls, where having good values may not help as much as you'd think. Autonomous cars serve as the case study, with lessons for AI more broadly: ethical dilemmas arise in everyday scenarios such as lane positioning and navigation, not just in extreme crash scenarios. This is the space where one good value can conflict with another good value, and there is no "right" answer, or even broad consensus on an answer.
This talk will consider the impact of AI and robots on employment, wages, and the future of work more broadly. The speaker will argue that we should focus on policies that make AI and robotics technology broadly inclusive, in terms of both consumption and ownership, so that billions of people can benefit from higher productivity and get on the path to the coming age of abundance.
Panelists include Takashi Egawa, NEC Corporation; Simson L. Garfinkel, USACM; John C. Havens, IEEE (moderator); Annette Reilly, IEEE; and Francesca Rossi, IBM and University of Padova.
For intelligent and autonomous technologies, safety standards and standardization projects provide detailed guidelines and requirements to help organizations institute new levels of transparency, accountability, and traceability. The panelists will explore how we can build trust and maximize innovation while avoiding negative unintended consequences.
Shared between the following two papers:
For AI systems to be accepted and trusted, users should be able to understand the system's reasoning process and form coherent explanations of its decisions and actions. This paper presents a novel and general method for visualizing the internal states of deep reinforcement learning models, enabling the formation of explanations that are intelligible to humans.
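The paper summary above does not specify the visualization technique, but one common approach in this area is to collect hidden-layer activations of the policy network over an episode and project them to two dimensions for inspection. The sketch below illustrates that general idea with a PCA projection; all names and data are hypothetical placeholders, not the paper's actual method.

```python
# Illustrative sketch only: project hidden activations of a (hypothetical)
# deep RL model to 2-D so that internal states can be plotted and inspected.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for hidden activations collected over an episode:
# 500 timesteps, 64-unit hidden layer.
activations = rng.normal(size=(500, 64))

def pca_2d(x):
    """Project the rows of x onto their top-2 principal components."""
    centered = x - x.mean(axis=0)
    # SVD of the centered data yields principal directions in the rows of vt.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

coords = pca_2d(activations)
print(coords.shape)  # (500, 2) -- one 2-D point per timestep, ready to plot
```

In practice the 2-D points would be colored by the action chosen or the value estimate at each timestep, so clusters in the projection can be read as groups of states the model treats similarly.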
The rhetoric of a race for strategic advantage is increasingly being applied to the development of AI. This paper assesses the potential risks of the AI race narrative, explores the role of the research community in responding to those risks, and discusses alternative ways to develop AI collaboratively and responsibly.
More information is available at the AIES 2018 website.