RESEARCH ON THE CURRENT STATE AND PROSPECTS OF THE APPLICATION OF ARTIFICIAL INTELLIGENCE IN CYBERSECURITY

  • Yuriy Golikov, DevBrother tech company, USA
  • Yelyzaveta Ostrianska, junior researcher, V. N. Karazin Kharkiv National University, Ukraine, https://orcid.org/0000-0003-1412-8470
Keywords: artificial intelligence, cybersecurity, machine learning, SIEM, AI-based antivirus

Abstract

In the modern world, with the development of new technologies, artificial intelligence (AI) has become an integral component of cybersecurity, which makes the study of its advantages, risks, and potential use cases a highly relevant research topic. In today’s digital environment, where cyber threats are becoming increasingly sophisticated, the implementation of AI technologies significantly enhances the effectiveness of security systems by enabling automated threat detection and response. This study examines the main applications of AI in cybersecurity, including threat detection, malware analysis, cryptographic security enhancement, phishing protection, and attack prediction. One of the key aspects is the integration of AI into Security Information and Event Management (SIEM) systems, which analyze vast amounts of data and help detect anomalies; such systems reduce the workload on security teams and improve the accuracy and speed of threat response. Special attention is given to the analysis of modern AI-powered antivirus solutions, particularly Microsoft Defender for Endpoint and Darktrace. These solutions are based on behavioral analysis algorithms and machine learning, allowing for more effective detection of complex threats and prevention of incidents. Microsoft Defender for Endpoint provides a high level of endpoint protection, while Darktrace uses self-learning models to analyze network traffic, enabling the detection of zero-day threats and internal risks within organizations. The study also examines the major risks associated with the use of AI in cybercrime: malicious actors increasingly leverage AI to automate attacks, significantly increasing their effectiveness and making detection more challenging. The primary AI-related cyber threats discussed include data poisoning attacks, evasion attacks, prompt injection attacks, and AI-based social engineering. To mitigate these risks, the development of AI models robust against adversarial attacks, increased algorithm transparency, and the adoption of international AI regulation standards, such as those developed by NIST, are recommended. Raising awareness among users and cybersecurity specialists is equally important, as the human factor remains one of the most significant vulnerabilities in security systems. The study concludes that AI is a key driver of progress in cybersecurity, offering significant improvements in the protection of information and critical systems; however, without proper regulation and protective measures, it can become a powerful tool for cybercriminals, posing new security challenges in the digital age. Striking a balance between innovation, ethical standards, and security will be essential in shaping a future strategy for the effective use of AI.
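To make the anomaly-detection idea behind AI-assisted SIEM more concrete, the following minimal sketch trains an unsupervised model on numeric features derived from event logs and flags events that deviate from the learned baseline. It is an illustration only, not the implementation used by Microsoft Defender for Endpoint, Darktrace, or any SIEM discussed in the paper; the feature set, the anomaly threshold, and the choice of scikit-learn's IsolationForest are assumptions made for the example.

```python
# Minimal sketch of ML-based anomaly detection over SIEM-style event features.
# Feature names and values are synthetic assumptions, not real telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" telemetry: [requests/min, failed logins/min, MB transferred]
normal_events = rng.normal(loc=[60, 1, 5], scale=[10, 1, 2], size=(1000, 3))

# Hand-crafted anomalies: a brute-force burst, a request flood, a large transfer
anomalies = np.array([
    [65, 40, 6],     # many failed logins at a normal request rate
    [300, 2, 4],     # request flood
    [70, 1, 500],    # unusually large outbound transfer
])

# Train the detector on the baseline, then score a mix of normal and anomalous events
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_events)
events = np.vstack([normal_events[:5], anomalies])
labels = detector.predict(events)            # +1 = consistent with baseline, -1 = anomaly
scores = detector.decision_function(events)  # lower score = more anomalous

for event, label, score in zip(events, labels, scores):
    status = "ALERT" if label == -1 else "ok"
    print(f"{status:5s} score={score:+.3f} features={np.round(event, 1)}")
```

In a production SIEM pipeline the features would come from parsed log sources and the anomaly scores would feed alert triage rather than a simple print loop. A second toy sketch illustrates the data-poisoning (label-flipping) threat mentioned in the abstract: corrupting part of the training labels for one class biases a simple classifier and degrades its accuracy on clean test data. The dataset, model, and attack fraction below are hypothetical and are not drawn from the paper or its references.

```python
# Toy label-flipping (data poisoning) attack: the attacker relabels part of
# the "malicious" training class as benign, biasing the trained classifier.
# All names and parameters here are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoned labels: 40% of class-1 ("malicious") samples are relabeled as class 0
rng = np.random.default_rng(0)
malicious_idx = np.flatnonzero(y_train == 1)
flipped = rng.choice(malicious_idx, size=int(0.4 * len(malicious_idx)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flipped] = 0

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("accuracy on clean test data (clean training):   ", round(clean_model.score(X_test, y_test), 3))
print("accuracy on clean test data (poisoned training):", round(poisoned_model.score(X_test, y_test), 3))
```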

References

NIST standardization process “Post-Quantum Cryptography: Digital Signature Schemes”. https://csrc.nist.gov/Projects/pqc-dig-sig/round-1-additional-signatures

Tao, F., Akhtar, M. S., & Jiayuan, Z. (2021). The future of artificial intelligence in cybersecurity: A comprehensive survey. EAI Endorsed Transactions on Creative Technologies, 8(28), e3. https://doi.org/10.4108/eai.7-7-2021.170285

Leung, B. K. (2021). Security Information and Event Management (SIEM) Evaluation Report. ScholarWorks. https://scholarworks.calstate.edu/downloads/41687p49q

González-Granadillo, G., González-Zarzosa, S., Diaz, R. (2021). Security Information and Event Management (SIEM): Analysis, Trends, and Usage in Critical Infrastructures. Sensors, 21(14). https://doi.org/10.3390/s21144759

Muhammad, S., et al. (2023). Effective Security Monitoring Using Efficient SIEM Architecture. Human-centric Computing and Information Sciences, 13. https://doi.org/10.22967/HCIS.2023.13.017

What is SIEM. Security Information and Event Management Tools. (n.d.). Imperva. https://www.imperva.com/learn/application-security/siem/

IBM Security QRadar. What is security information and event management (SIEM)? https://www.ibm.com/think/topics/siem

Splunk. The Splunk SIEM. https://www.splunk.com/en_us/products/enterprise-security.html

Stellar Cyber. AI SIEM: The 6 Components of AI-Based SIEM. https://stellarcyber.ai/learn/ai-driven-siem/

ISO/IEC 27001:2022. Information technology – Security techniques – Information security management systems – Requirements. International standard, 3rd edition.

Microsoft Defender for Endpoint. (2024). https://learn.microsoft.com/uk-ua/defender-endpoint/microsoft-defender-endpoint

Darktrace. Official website (2024). https://darktrace.com/

Mauri, L., & Damiani, E. (2022). Modeling Threats to AI-ML Systems Using STRIDE. Sensors, 22(17), 6662. https://doi.org/10.3390/s22176662

National Cyber Security Centre (UK). The near-term impact of AI on the cyber threat. https://www.ncsc.gov.uk/report/impact-of-ai-on-cyber-threat

Hassan, N. (2024). What is data poisoning (AI poisoning) and how does it work? Search Enterprise AI, TechTarget. https://www.techtarget.com/searchenterpriseai/definition/data-poisoning-AI-poisoning

Krantz, T., & Jonker, A. What is data poisoning? IBM. https://www.ibm.com/think/topics/data-poisoning

NIST. A Plan for Global Engagement on AI Standards. NIST Trustworthy and Responsible AI, NIST AI 100-5. https://doi.org/10.6028/NIST.AI.100-5

Vassilev, A., Oprea, A., Fordyce, A., & Anderson, H. (2024). Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations. National Institute of Standards and Technology, Gaithersburg, MD. NIST Artificial Intelligence (AI) Report, NIST Trustworthy and Responsible AI, NIST AI 100-2e2023. https://doi.org/10.6028/NIST.AI.100-2e2023

Perdisci, R., Dagon, D., Lee, W., Fogla, P., & Sharif, M. (2006). Misleading worm signature generators using deliberate noise injection. In 2006 IEEE Symposium on Security and Privacy (S&P’06), Berkeley/Oakland, CA. IEEE.

Nelson, B., Barreno, M., Chi, F. J., Joseph, A. D., Rubinstein, B. I. P., Saini, U., Sutton, C., & Xia, K. (2008). Exploiting machine learning to subvert your spam filter. In First USENIX Workshop on Large-Scale Exploits and Emergent Threats (LEET 08), San Francisco, CA. USENIX Association.

Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., & Fergus, R. (2014). Intriguing properties of neural networks. In International Conference on Learning Representations.

Chernikova, A., & Oprea, A. (2022). FENCE: Feasible evasion attacks on neural networks in constrained environments. ACM Transactions on Privacy and Security (TOPS).

Sheatsley, R., Hoak, B., Pauley, E., Beugin, Y., Weisman, M. J., & McDaniel, P. (2021). On the robustness of domain constraints. In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security (CCS ’21), pp. 495–515. New York, NY, USA: Association for Computing Machinery.

Pierazzi, F., Pendlebury, F., Cortellazzi, J., & Cavallaro, L. (2020). Intriguing properties of adversarial ML attacks in the problem space. In 2020 IEEE Symposium on Security and Privacy (S&P), pp. 1308–1325. IEEE Computer Society.

Kang, D., Li, X., Stoica, I., Guestrin, C., Zaharia, M., & Hashimoto, T. (2023). Exploiting programmatic behavior of LLMs: Dual-use through standard security attacks. arXiv preprint arXiv:2302.05733.

Greshake, K., Abdelnabi, S., Mishra, S., Endres, C., Holz, T., & Fritz, M. (2023). Not what you signed up for: Compromising real-world LLM-integrated applications with indirect prompt injection. arXiv preprint arXiv:2302.12173.

Published
2024-12-30
How to Cite
Golikov, Y., & Ostrianska, Y. (2024). RESEARCH ON THE CURRENT STATE AND PROSPECTS OF THE APPLICATION OF ARTIFICIAL INTELLIGENCE IN CYBERSECURITY. Computer Science and Cybersecurity, (2), 51-65. https://doi.org/10.26565/2519-2310-2024-2-05
Section
Articles