XAI Optimization for Low-Latency Neural-Based Intrusion Detection Systems in Network Environments
Abstract
Relevance. In contemporary network environments, deep learning-based intrusion detection systems (IDS) deliver significant improvements in detecting complex and evolving cyber threats. However, their practical deployment in real-time applications is severely limited by computational complexity, latency, and a lack of interpretability, commonly referred to as the "black-box" problem. Integrating eXplainable Artificial Intelligence (XAI) methods into IDS is therefore crucial for enhancing the transparency, trustworthiness, and operational effectiveness of security systems.
Goal. The aim of this research is to explore and optimize XAI methods for low-latency, explainable neural-based intrusion detection suitable for real-time network traffic analysis, balancing interpretability against computational efficiency and detection accuracy.
Research methods. The study conducted a systematic review and comparative analysis of existing deep learning (DL) models (CNN, LSTM, GRU, autoencoders, CNN-LSTM hybrids) and prominent XAI techniques (SHAP, LIME, Integrated Gradients, DeepLIFT, Grad-CAM, Anchors). Optimization strategies were proposed, including hardware acceleration, lightweight gradient-based attribution methods, hybrid architectures, and selective explanation strategies. Empirical validation was performed on standard datasets (CICIDS2017, NSL-KDD, UNSW-NB15).
Results. The analysis revealed that gradient-based attribution methods (DeepLIFT, Integrated Gradients) are optimal for real-time IDS due to their minimal latency and high fidelity. Hybrid explainable-by-design frameworks, specifically CNN-LSTM models enhanced with attention mechanisms (the ELAI framework), demonstrated significant performance gains, with detection accuracy exceeding 98% and inference times below 10 ms. The optimized methods also raised zero-day attack detection rates to 91.6%.
Conclusions. The research demonstrated practical methods for integrating explainability into real-time neural-based IDS, significantly enhancing both detection performance and decision transparency. Future research should focus on standardizing evaluation metrics, refining attention-based models, and extending these optimization approaches to other cybersecurity applications.
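To illustrate why the abstract singles out gradient-based attribution as the low-latency option, the sketch below implements plain Integrated Gradients for a toy flow-level classifier: the explanation needs only one forward and one backward pass over a small batch of interpolated inputs per alert, so its cost stays within a fixed budget. The network architecture, feature count, zero baseline, and synthetic input are illustrative assumptions; this is not the paper's ELAI framework nor the models evaluated on CICIDS2017, NSL-KDD, or UNSW-NB15.

# Minimal Integrated Gradients sketch for a hypothetical neural IDS classifier.
import torch
import torch.nn as nn

class TinyIDS(nn.Module):
    """Hypothetical flow-level classifier (benign vs. attack) used only for illustration."""
    def __init__(self, n_features: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, 2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def integrated_gradients(model: nn.Module, x: torch.Tensor, baseline: torch.Tensor,
                         target: int, steps: int = 32) -> torch.Tensor:
    """Approximate IG: average gradients along the straight-line path from
    `baseline` to `x`, then scale by (x - baseline)."""
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)   # interpolation coefficients
    path = baseline + alphas * (x - baseline)               # (steps, n_features) interpolated inputs
    path.requires_grad_(True)
    scores = model(path)[:, target]                          # target-class score at each step
    grads = torch.autograd.grad(scores.sum(), path)[0]       # d score / d input along the path
    avg_grad = grads.mean(dim=0)                              # Riemann-sum average of gradients
    return (x - baseline).squeeze(0) * avg_grad               # per-feature attribution

if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyIDS().eval()
    flow = torch.rand(1, 8)          # one synthetic flow record
    baseline = torch.zeros(1, 8)     # all-zero baseline, a common (assumed) choice
    pred = model(flow).argmax(dim=1).item()
    attributions = integrated_gradients(model, flow, baseline, target=pred)
    for i, a in enumerate(attributions.tolist()):
        print(f"feature_{i}: {a:+.4f}")

Because the interpolated path is evaluated as a single batch, the overhead per explained alert is roughly one extra forward and backward pass, which is the kind of bounded cost the abstract's latency argument for gradient-based methods relies on.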
References
Otoum Y., Nayak A. AS-IDS: Anomaly and Signature Based IDS for the Internet of Things. Journal of Network and Systems Management. 2021. Vol. 29, no. 3. URL: https://doi.org/10.1007/s10922-021-09589-6 [in English] (date of access: 14.06.2025).
Securing financial data storage: A review of cybersecurity challenges and solutions / Chinwe Chinazo Okoye et al. International Journal of Science and Research Archive. 2024. Vol. 11, no. 1. P. 1968–1983. URL: https://doi.org/10.30574/ijsra.2024.11.1.0267 [in English] (date of access: 15.06.2025).
Federated Learning for intrusion detection system: Concepts, challenges and future directions / S. Agrawal et al. Computer Communications. 2022. URL: https://doi.org/10.1016/j.comcom.2022.09.012 [in English] (date of access: 16.06.2025).
Deep Learning vs. Machine Learning for Intrusion Detection in Computer Networks: A Comparative Study / M. L. Ali et al. Applied Sciences. 2025. Vol. 15, no. 4. P. 1903. URL: https://doi.org/10.3390/app15041903 [in English] (date of access: 16.06.2025).
Deep Learning Approach for Intelligent Intrusion Detection System / R. Vinayakumar et al. IEEE Access. 2019. Vol. 7. P. 41525–41550. URL: https://doi.org/10.1109/access.2019.2895334 [in English] (date of access: 19.06.2025).
Gaspar D., Silva P., Silva C. Explainable AI for Intrusion Detection Systems: LIME and SHAP Applicability on Multi-Layer Perceptron. IEEE Access. 2024. P. 1. URL: https://doi.org/10.1109/access.2024.3368377 [in English] (date of access: 19.06.2025).
Federated XAI IDS: An Explainable and Safeguarding Privacy Approach to Detect Intrusion Combining Federated Learning and SHAP / K. Fatema et al. Future Internet. 2025. Vol. 17, no. 6. P. 234. URL: https://doi.org/10.3390/fi17060234 [in English] (date of access: 21.06.2025).
Arreche O., Guntur T., Abdallah M. XAI-IDS: Toward Proposing an Explainable Artificial Intelligence Framework for Enhancing Network Intrusion Detection Systems. Applied Sciences. 2024. Vol. 14, no. 10. P. 4170. URL: https://doi.org/10.3390/app14104170 [in English] (date of access: 21.06.2025).
Enhancing intrusion detection: a hybrid machine and deep learning approach / M. Sajid et al. Journal of Cloud Computing. 2024. Vol. 13, no. 1. URL: https://doi.org/10.1186/s13677-024-00685-x [in English] (date of access: 24.06.2025).
Stacking Ensemble Deep Learning for Real-Time Intrusion Detection in IoMT Environments / E. Alalwany et al. Sensors. 2025. Vol. 25, no. 3. P. 624. URL: https://doi.org/10.3390/s25030624 [in English] (date of access: 25.06.2025).
Laxmi, Chauhan K. AI-Based Intrusion Detection Systems for Novel Attacks in IoT and APTs: A Deep Learning-Centric Review. International Journal of Computer Science and Information Security (IJCSIS). Vol. 23, no. 3, May–June. URL: https://www.academia.edu/130243382/AI_Based_Intrusion_Detection_Systems_for_Novel_Attacks_in_IoT_and_APTs_A_Deep_Learning_Centric_Review?bulkDownload=true [in English] (date of access: 25.06.2025).
A high performance hybrid LSTM CNN secure architecture for IoT environments using deep learning / P. Sinha et al. Scientific Reports. 2025. Vol. 15, no. 1. URL: https://doi.org/10.1038/s41598-025-94500-5 [in English] (date of access: 27.06.2025).
Ribeiro M. T., Singh S., Guestrin C. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. KDD '16: The 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, California, USA. New York, NY, USA, 2016. URL: https://doi.org/10.1145/2939672.2939778 [in English] (date of access: 27.06.2025).
Lundberg S. M., Lee S.-I. A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems (NIPS). 2017. Vol. 30. URL: https://arxiv.org/abs/1705.07874v2 [in English] (date of access: 27.06.2025).
Mohale V. Z., Obagbuwa I. C. A systematic review on the integration of explainable artificial intelligence in intrusion detection systems to enhancing transparency and interpretability in cybersecurity. Frontiers in Artificial Intelligence. 2025. Vol. 8. URL: https://doi.org/10.3389/frai.2025.1526221 [in English] (date of access: 04.07.2025).
Explainable Intrusion Detection Systems (X-IDS): A Survey of Current Methods, Challenges, and Opportunities / S. Neupane et al. IEEE Access. 2022. Vol. 10. P. 112392–112415. URL: https://doi.org/10.1109/access.2022.3216617 [in English] (date of access: 04.07.2025).
Alomari Y., Andó M. SHAP-based insights for aerospace PHM: Temporal feature importance, dependencies, robustness, and interaction analysis. Results in Engineering. 2024. Vol. 21. P. 101834. URL: https://doi.org/10.1016/j.rineng.2024.101834 [in English] (date of access: 06.07.2025).
Explainable Artificial Intelligence for Intrusion Detection System / S. Patil et al. Electronics. 2022. Vol. 11, no. 19. P. 3079. URL: https://doi.org/10.3390/electronics11193079 [in English] (date of access: 06.07.2025).
Visani G. LIME: explain Machine Learning predictions. Medium. URL: https://medium.com/data-science/lime-explain-machine-learning-predictions-af8f18189bfe [in English] (date of access: 07.07.2025).
Leveraging Grad-CAM to Improve the Accuracy of Network Intrusion Detection Systems / F. P. Caforio et al. Discovery Science. Cham, 2021. P. 385–400. URL: https://doi.org/10.1007/978-3-030-88942-5_30 [in English] (date of access: 08.07.2025).
Evaluating Explainable AI for Deep Learning-Based Network Intrusion Detection System Alert Classification / R. Kalakoti et al. 11th International Conference on Information Systems Security and Privacy, Porto, Portugal, 20–22 February 2025. 2025. P. 47–58. URL: https://doi.org/10.5220/0013180700003899 [in English] (date of access: 09.07.2025).
Ribeiro M. T., Singh S., Guestrin C. Anchors: High-Precision Model-Agnostic Explanations. Proceedings of the AAAI Conference on Artificial Intelligence. 2018. Vol. 32, no. 1. URL: https://doi.org/10.1609/aaai.v32i1.11491 [in English] (date of access: 11.07.2025).
Explainable AI for Comparative Analysis of Intrusion Detection Models / P. M. Corea et al. 2024 IEEE International Mediterranean Conference on Communications and Networking (MeditCom), Madrid, Spain, 8–11 July 2024. 2024. P. 585–590. URL: https://doi.org/10.1109/meditcom61057.2024.10621339 [in English] (date of access: 13.07.2025).
Shendkar B. D. Explainable Machine Learning Models for Real-Time Threat Detection in Cybersecurity. Panamerican Mathematical Journal. 2024. Vol. 35, no. 1s. P. 264–275. URL: https://doi.org/10.52783/pmj.v35.i1s.2313 [in English] (date of access: 13.07.2025).
Rahmati M. Towards Explainable and Lightweight AI for Real-Time Cyber Threat Hunting in Edge Networks. arXiv. 2025. URL: https://doi.org/10.48550/arXiv.2504.16118 [in English] (date of access: 14.07.2025).
Yagiz M. A., Goktas P. LENS-XAI: Redefining Lightweight and Explainable Network Security through Knowledge Distillation and Variational Autoencoders for Scalable Intrusion Detection in Cybersecurity. arXiv. 2025. URL: https://doi.org/10.48550/arXiv.2501.00790 [in English] (date of access: 15.07.2025).