Posts tagged AI Ethics
Human Experience and AI Regulation: What European Union Law Brings to Digital Technology Ethics

Joanna J. Bryson, Weizenbaum Journal of the Digital Society, 3(3), 2023. 

Joseph Weizenbaum is famous for quitting AI after his secretary thought his chatbot, Eliza, understood her. But his ethical concerns went well beyond that, covering not only the potential abuse of intelligent systems but also culpable failure to use them. Abuse includes making inhuman cruelty and acts of war more emotionally accessible to human operators, or simply solving the problems necessary to make nuclear weapons; negligent lack of use includes failing to solve the social problems of inequality and resource distribution. I was honoured to be asked, for the Weizenbaum centenary, to explain how the EU’s new digital regulations address his concerns. I first discuss whether Europe has the legitimacy or capacity to do so, and then (concluding it might) describe how the Digital Services Act and the General Data Protection Regulation mostly do, though I also spare some words for the Digital Markets Act (which addresses inequality) and the AI Act, which in theory helps by labelling all AI as AI. But Weizenbaum’s secretary knew Eliza was a chatbot, so the GDPR’s and DSA’s transparency provisions may matter more than such labelling.

Read More
Do We Collaborate With What We Design?

Katie D. Evans, Scott A. Robbins, and Joanna J. Bryson, Topics in Cognitive Science, 2023. 

In this paper, we critically assess both the accuracy and desirability of using the term “collaboration” to describe interactions between humans and AI systems. We begin by proposing an alternative ontology of human–machine interaction, one which features not two equivalently autonomous agents, but rather one machine that exists in a relationship of heteronomy to one or more human agents. In this sense, while the machine may have a significant degree of independence concerning the means by which it achieves its ends, the ends themselves are always chosen by at least one human agent, whose interests may differ from those of the individuals interacting with the machine. We finally consider the motivations and risks inherent to the continued use of the term “collaboration,” exploring its strained relation to the concept of transparency, and consequences for the future of work.

Read More
Patiency Is Not a Virtue: The Design of Intelligent Systems and Systems of Ethics

Joanna J. Bryson, Ethics and Information Technology, 20(1):15-26, 2018.

Both AI and ethics are artefacts, so there is no necessary position for AI artefacts in society; rather, we need to decide what we should build and how we should treat what we build. So why build something to compete for the rights we already struggle to offer 8 billion people? Gold open access, paid for by Bath out of our library budget. There are also older versions of this paper, which circulated as a discussion paper for a long time, but this is the archival version.

Read More
Of, For, and By the People: The Legal Lacuna of Synthetic Persons

Joanna J. Bryson, Mihailis E. Diamantis, and Thomas D. Grant, Artificial Intelligence and Law, 25(3):273–291, Sep 2017.

Two professors of law and I argue that it would be a terrible, terrible idea to make something strictly AI (in contrast to an organisation also containing humans) a legal person. In fact, the only good thing about the idea is that it gives us a chance to think about where legal personhood has already been overextended (we give examples). “Gold” open access, not because I think it’s right to make universities or academics pay to do their work, but because Bath has some deal with Springer / has already been coerced into paying. Note that you can read all my papers below, going back to 1993 (when I started academia); I don’t think “green” open access is part of the war on science.

Read More
The Meaning of the EPSRC Principles of Robotics

Joanna J. Bryson, Connection Science, 29(2):130-136, 2017.

In honour of the EPSRC Principles of Robotics’ fifth anniversary in 2016, Tony Prescott and Michael Szollosy ran an AISB symposium, which was followed the next year by a special issue of the journal Connection Science. I explain the principles’ utility as policy, and their intent: to clarify that we are responsible for the AI we build and use. Open access version.

Read More
Artificial Intelligence and Pro-Social Behaviour

Joanna J. Bryson, from the October 2015 Springer volume, Collective Agency and Cooperation in Natural and Artificial Systems: Explanation, Implementation and Simulation, derived from Catrin Misselhorn’s 2013 meeting, Collective Agency and Cooperation in Natural and Artificial Systems.

This brings together all three threads of my research: action selection, natural cognition and collective behaviour, and the mischaracterisation of AI as an active threat. In response to the apocalyptic futurism typified by Bostrom’s Superintelligence, I frame AI as an ordinary part of human culture, which for 10,000 years has included physical artefacts that enhance our cognitive capacities, and is apocalyptic enough in its own right. Open access: here’s the post-review submitted version from September 2014, or email me for the corrected final.

Read More