Enhancing Security Against Adversarial Attacks Using Robust Machine Learning
Himanshu Tripathi1, Chandra Kishor Pandey2
1Himanshu Tripathi, Department of Computer Applications, Babu Banarasi Das University, Lucknow (Uttar Pradesh), India.
2Dr. Chandra Kishor Pandey, Department of Computer Applications, Babu Banarasi Das University, Lucknow (Uttar Pradesh), India.
Manuscript received on 24 December 2024 | First Revised Manuscript received on 29 December 2024 | Second Manuscript Accepted on 07 January 2025 | Manuscript Accepted on 15 January 2025 | Manuscript published on 30 January 2025 | PP: 1-4 | Volume-12 Issue-1, January 2025 | Retrieval Number: 100.1/ijaent.A048512010125 | DOI: 10.35940/ijaent.A0485.12010125
© The Authors. Published By: Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Abstract: Adversarial attacks pose a significant threat to machine learning models, particularly in critical domains such as autonomous systems, cybersecurity, and healthcare. These attacks exploit vulnerabilities in models by introducing carefully crafted perturbations to input data, leading to incorrect predictions and system failures. This research focuses on strengthening machine learning systems through robust methodologies, including input normalization, randomization, outlier detection, manual dataset curation, and adversarial training. The study highlights how these strategies collectively enhance the resilience of models against adversarial manipulation, improving their reliability and security in real-world scenarios. Experimental evaluations demonstrate notable gains in robustness, with attack success rates reduced substantially while accuracy on clean inputs remains high. The findings emphasize the importance of a comprehensive, multi-pronged approach to safeguarding machine learning systems, paving the way for secure and trustworthy AI applications in dynamic environments.
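To make the defenses named in the abstract concrete, the following is a minimal, self-contained NumPy sketch (not taken from the paper) of one of them: adversarial training, using FGSM-style input perturbations against a simple logistic-regression classifier. All names (`fgsm`, `train`, the toy Gaussian-blob dataset, and the epsilon value) are illustrative assumptions, not the authors' actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two well-separated Gaussian blobs.
X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(X, y, w, b, eps):
    """FGSM-style attack: step each input along the sign of the
    input-gradient of the logistic loss."""
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w  # d(logloss)/dx for logistic regression
    return X + eps * np.sign(grad_x)

def train(X, y, eps=0.0, lr=0.1, steps=500):
    """Gradient-descent training; with eps > 0, each step also trains on
    adversarial examples crafted against the current parameters."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        if eps > 0:
            Xb = np.vstack([X, fgsm(X, y, w, b, eps)])
            yb = np.concatenate([y, y])
        else:
            Xb, yb = X, y
        p = sigmoid(Xb @ w + b)
        w -= lr * Xb.T @ (p - yb) / len(yb)
        b -= lr * np.mean(p - yb)
    return w, b

def acc(w, b, X, y):
    return np.mean((sigmoid(X @ w + b) > 0.5) == y)

w_std, b_std = train(X, y)           # standard training
w_adv, b_adv = train(X, y, eps=0.3)  # adversarial training

# Attack each model with perturbations crafted against its own parameters.
print("standard model: clean", acc(w_std, b_std, X, y),
      "| attacked", acc(w_std, b_std, fgsm(X, y, w_std, b_std, 0.3), y))
print("adv-trained model: attacked", acc(w_adv, b_adv, fgsm(X, y, w_adv, b_adv, 0.3), y))
```

The same augment-with-perturbed-inputs loop generalizes to neural networks, where the input gradient comes from backpropagation rather than the closed-form logistic expression used here.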
Keywords: Adversarial Attacks, Robust Machine Learning, Input Normalization, Randomization Techniques, Outlier Detection, Dataset Curation, Adversarial Training, Model Security, Resilient AI Systems, Defensive Strategies, Robustness Evaluation, Cybersecurity in AI, Secure AI Applications, ML Defence Mechanisms, Attack Mitigation Strategies.
Scope of the Article: Computer Science and Applications