MORALITY OF ARTIFICIAL INTELLIGENCE AND ETHICS OF TRUTHS
Abstract
Ethical and legal issues related to contemporary technologies and to the development and application of artificial intelligence (AI) are becoming increasingly relevant. The article examines the problem of AI morality, particularly in the context of the ethics of truths. It is noted that legal and ethical discussions usually take place within an established moral framework, the Christian-modern ethos, which reconciles the relationship between the nation-state and the capitalist economy in the figure of a moral or conscious subject. The problem of AI morality is situated within the context of imperative, normative, utilitarian, sentiocentric, and discursive ethics, which should open the perspective of a certain general or global ethics. Several lines of problematization of AI morality in the contemporary world are defined: the machine as a moral subject, moral epistemology, the morality of the subject-programmer, and moral instrumentalism/utilitarianism. The ethics of truths proposed by A. Badiou, however, creates a new ethical perspective for human interaction with AI. The problem of AI morality and the prospects for its development must take into account the important dimension of the unconscious in the context of the potential analytical interaction of social actors, which include both humans and AI, as well as an understanding of the eventfulness of truth, which involves the formation of subjectivity in accordance with truth. AI, like humans, should be revealed in the perspective of correspondence to truth, of the ability to distinguish truth from its simulacra. Such an ethics can be effective, as it corresponds to the possibility of producing truths, which not only a human being but also a post-human machine can approach in one way or another. It is at their intersection that the subject of the new ethics of truths should be formed.
References
Badiou, A. (2019). Ethics: An Essay on the Understanding of Evil. Kyiv: Komubook. (In Ukrainian).
Baudrillard, J. (1993). Symbolic Exchange and Death. Transl. I. H. Grant. London, Thousand Oaks, New Delhi: Sage Publications.
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
Johnson, D. G. (2006). Computer Systems: Moral Entities but Not Moral Agents. Ethics and Information Technology, 8, 195-204. https://doi.org/10.1007/s10676-006-9111-5
Kissinger, H. A. (2018). How the Enlightenment Ends. The Atlantic, June. URL: https://www.theatlantic.com/magazine/archive/2018/06/henry-kissingerai-could-mean-the-end-of-human-history/559124/
Kuklin, V. (2023). On the Question of the Appearance of Consciousness in Neural Networks. The Journal of V. N. Karazin Kharkiv National University, Series "Philosophy. Philosophical Peripeteias", (68), 32-38. https://doi.org/10.26565/2226-0994-2023-68-3
Ramirez, J. (2021). Machine Ethics: Ethics for Machines. Context-Based Modeling for Machines Making Ethical Decisions. April.
Rees, T. (2019). Why tech companies need philosophers – and how I convinced Google to hire them. Quartz, November 22. URL: https://qz.com/1734381/why-tech-companies-need-to-hire-philosophers
The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023. URL: https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023.
Tolstov, I., & Danilian, V. (2023). Information Society and New Global Ethics. The Journal of V. N. Karazin Kharkiv National University, Series "Philosophy. Philosophical Peripeteias", (68), 39-44. https://doi.org/10.26565/2226-0994-2023-68-4
Verbeek, P.-P. (2006). Materializing Morality: Design Ethics and Technological Mediation. Science, Technology, & Human Values, 31(3), 361–380.
Wallach, W., & Allen, C. (2009). Moral Machines: Teaching Robots Right from Wrong. Oxford: Oxford University Press.
Wiegel, V. (2010). Wendell Wallach and Colin Allen: Moral Machines: Teaching Robots Right from Wrong. Ethics and Information Technology, 12(4), 359–361. https://doi.org/10.1007/s10676-010-9239-1
Zizek, S. (2023). ChatGPT sagt das, was unser Unbewusstes radikal verdrängt [ChatGPT says what our unconscious radically represses]. Berliner Zeitung, April 7. URL: https://www.berliner-zeitung.de/kultur-vergnuegen/slavoj-zizek-chatgpt-sagt-das-was-unser-unbewusstes-radikal-verdraengt-li.335938.
Copyright (c) 2024 Вероніка Храброва
This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal the right of first publication of this work under the terms of the Creative Commons Attribution 4.0 International License (CC BY 4.0).
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.