ARTIFICIAL INTELLIGENCE IN HUMAN LIFE: PERSON OR INSTRUMENT
Abstract
The article discusses whether machine imitation of human intellect is expedient and possible in principle, with a view to evaluating the prospects of various directions in the development of artificial intelligence systems. It is shown that, even apart from this practical aspect, resolving the question of whether a machine equivalent of the human mind can in principle be created is of great importance for understanding the nature of human thinking, consciousness and the mental in general. It is noted that the accumulated experience of building various artificial intelligence systems, together with the results of philosophical and psychological studies of human intelligence and consciousness available today, permits a preliminary assessment of the prospects of creating an algorithmic artificial system equal in its capabilities to human intelligence.
The article analyses the shortcomings revealed in the use of artificial intelligence systems both by mass users and in scientific research. The key shortcomings of artificial intelligence systems are the inability to set goals independently, to form a consolidated «opinion» when working with divergent data, to objectively evaluate the results obtained, and to generate radically new ideas and approaches. The «second-level» shortcomings are the insufficiency of the information accumulated by humankind for further training of artificial intelligence systems and the resulting training of models on content partly synthesized by artificial intelligence systems themselves, which leads to the «forgetting» of part of the information acquired during training and to more frequent output of unreliable information. This, in turn, makes it necessary to verify each answer given by an artificial intelligence system whenever critical information is processed, which, given the plausibility of the data produced by artificial intelligence systems and the comfortable form of their presentation, requires well-developed critical thinking on the user's part.
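The «forgetting» effect mentioned above is discussed in the technical literature under the name «model collapse». The following minimal sketch (an illustration added here, not taken from the article, using Python and a toy Gaussian «model» in place of a real neural network) shows the mechanism: when each generation of a model is fitted to a finite sample drawn from the previous generation rather than from the original human-generated data, estimation error accumulates and rare (tail) information is gradually lost.

    import numpy as np

    rng = np.random.default_rng(0)

    # Generation 0: "human-generated" data with full variance.
    data = rng.normal(loc=0.0, scale=1.0, size=100)

    for generation in range(1, 51):
        # Fit a toy model (here: a Gaussian) to the current training set.
        mu, sigma = data.mean(), data.std()
        # Train the next generation only on content synthesized by the
        # previous model, not on the original human-generated data.
        data = rng.normal(loc=mu, scale=sigma, size=100)
        if generation % 10 == 0:
            print(f"generation {generation:2d}: mu={mu:+.3f}, sigma={sigma:.3f}")

Because every generation sees only a finite sample of the previous model's output, the estimated variance performs a biased random walk: events from the tails of the original distribution stop being generated and therefore can no longer be learned, an elementary analogue of the «forgetting» described above.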
It is concluded that the main advantage of artificial intelligence systems is that they can significantly increase the efficiency of information retrieval and primary processing, especially when dealing with large data sets. The article substantiates the importance of the ethical component of artificial intelligence and of a regulatory framework that establishes responsibility for the harm that may be caused by the use of artificial intelligence systems, especially multimodal ones. It concludes that the risks associated with the use of multimodal artificial intelligence systems grow steadily as such functions of human consciousness as will, emotions and adherence to moral principles are implemented in them.
Copyright (c) 2025 Lidiia Gazniuk, Mykhailo Beilin, Iryna Soina

This work is licensed under a Creative Commons Attribution 4.0 International License.