Document Details

Title: Just Hierarchy and the Ethics of Artificial Intelligence
Subtitle: Two Approaches to a Relational Ethic for Artificial Intelligence
Author(s): ZHU, Qin
Journal: Ethical Perspectives
Volume: 30
Issue: 1
Date: 2023
Pages: 59-76
DOI: 10.2143/EP.30.1.3291696

Abstract:
Dominant approaches to the ethics of artificial intelligence (AI) systems have been based mainly on individualistic, rule-based ethical frameworks central to Western cultures. These approaches have encountered both philosophical and computational limitations: they often struggle to accommodate the remarkably diverse, unstable, and complex contexts of human-AI interactions. Recently there has been growing interest among philosophers and computer scientists in building a relational approach to the ethics of AI. This article engages with Daniel A. Bell and Pei Wang’s most recent book Just Hierarchy and explores how their theory of just hierarchy can be employed to develop a more systematic account of relational AI ethics. Bell and Wang’s theory of just hierarchy acknowledges that there are morally justified situations in which social relations are not equal. Just hierarchy can exist both between humans and between humans and machines such as AI systems. Accordingly, a relational ethic for AI based on just hierarchy comprises two theses: (i) AI systems should be regarded merely as tools, and their relations with humans are hierarchical (e.g. AI systems should be designed with lower moral standing than humans); and (ii) the moral assessment of AI systems should focus on whether they help us realize the role-based moral obligations prescribed by our social relations with others (relations that often involve diverse forms of morally justified hierarchies in communities). Finally, the article discusses the practical implications of such a relational ethical framework for designing socially integrated and ethically responsive AI systems.