COURSE "RESEARCH METHODS AND ANALYSIS IN PHILOSOPHICAL RESEARCH": AI AS A TOOL, CHALLENGE AND MIRROR OF HUMAN FAILURES

Keywords: AI, AI philosophy, AI ethics, research methods and analysis, AI technologies, large language models (LLM)

Abstract

The intersection of philosophical research and artificial intelligence (AI) is, at first glance, fertile ground for research, but only at first glance. It is no easy task to delve thoroughly into the fundamental philosophical principles on which AI developments rest and to grasp the full consequences of this relationship. The article is devoted to the search for the fundamental principles that determine the value and effectiveness of AI, as well as the risks it may generate alongside new ideas and experiences. As a case study, the article examines the role of AI in teaching the course "Research Methods and Analysis in Philosophical Studies". The author also analyzes the ethical and cognitive problems of AI, using two student research papers to illustrate how the course outcomes are applied, in particular to the bias of AI and its limited ability to "judge" compared to humans. Deliberately bypassing the history of AI development, the author describes the stage closest to us, in which AI moves into creative processes, especially writing. ChatGPT is used for searching, classifying research objects, generating data, rewriting and editing, and overcoming writer's block, all of which can be accomplished in a matter of minutes. There are also more specialized AI tools for academic work, such as searching and processing literature. Admittedly, one also has to deal with the consequences of such "encyclopedism" of AI: it finds information by drawing on the work of hundreds of thousands of academics (scientists and teachers) as well as writers. Everyone decides for themselves whether to use such a means of creating texts, built as it is on the labor of so many. However, the mechanism by which AI borrows from literary sources requires immediate regulation and constant monitoring of copyright compliance. The article discusses recent approaches to assessing the role of philosophy, which lays down the principles on which artificial intelligence is built and is therefore a fundamental part of effective AI. A field of AI philosophy is emerging that deepens and illuminates important topics, combining theoretical foundations with practical applications in technology. Furthermore, consideration of concepts such as explainable AI highlights the need for users to understand, at least in general terms, the meaning of complex models, including information models. The conclusions emphasize the importance of personal and collective moral responsibility in the development and use of AI.


Author Biography

Natalia Viatkina, H. S. Skovoroda Institute of Philosophy, NAS of Ukraine

Viatkina Natalia B.

PhD in Philosophy, Senior Research Associate

Department of Logic and Methodology of Science

H. S. Skovoroda Institute of Philosophy, NAS of Ukraine

4, Triokhsviatytelska St., Kyiv, 01001, Ukraine


Published
2025-06-30
How to Cite
Viatkina, N. (2025). COURSE "RESEARCH METHODS AND ANALYSIS IN PHILOSOPHICAL RESEARCH": AI AS A TOOL, CHALLENGE AND MIRROR OF HUMAN FAILURES. The Journal of V. N. Karazin Kharkiv National University, Series Philosophy. Philosophical Peripeteias, (72), 107-116. https://doi.org/10.26565/2226-0994-2025-72-10
Section
Articles