Secure Code Generation with LLMs: Risk Assessment and Mitigation Strategies

Saved in:
Bibliographic Details
Published in: IUP Journal of Telecommunications vol. 17, no. 1 (Feb 2025), p. 75
Main Author: Bar, Kaushik
Published:
IUP Publications
Subjects:
Online Access: Citation/Abstract
Full Text
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3207242357
003 UK-CbPIL
022 |a 0975-5551 
024 7 |a 10.71329/IUPJTC/2025.17.1.75-95  |2 doi 
035 |a 3207242357 
045 2 |b d20250201  |b d20250228 
084 |a 210450  |2 nlm 
100 1 |a Bar, Kaushik 
245 1 |a Secure Code Generation with LLMs: Risk Assessment and Mitigation Strategies 
260 |b IUP Publications  |c Feb 2025 
513 |a Journal Article 
520 3 |a Artificial intelligence (AI)-powered code generation tools, such as GitHub Copilot and OpenAI Codex, have revolutionized software development by automating code synthesis. However, concerns remain about the security of AI-generated code and its susceptibility to vulnerabilities. This study investigates whether AI-generated code can match or surpass human-written code in security, using a systematic evaluation framework. It analyzes AI-generated code samples from state-of-the-art large language models (LLMs) and compares them against human-written code using static and dynamic security analysis tools. Additionally, adversarial testing was conducted to assess the robustness of LLMs against insecure code suggestions. The findings reveal that while AI-generated code can achieve functional correctness, it frequently introduces security vulnerabilities, such as injection flaws, insecure cryptographic practices, and improper input validation. To mitigate these risks, security-aware training methods and reinforcement learning techniques were explored to enhance the security of AI-generated code. The results highlight the key challenges in AI-driven software development and propose guidelines for integrating AI-assisted programming safely in real-world applications. This paper provides critical insights into the intersection of AI and cybersecurity, paving the way for more secure AI-driven code synthesis models. 
610 4 |a OpenAI 
653 |a Software 
653 |a Malware 
653 |a Large language models 
653 |a Automation 
653 |a Artificial intelligence 
653 |a Machine learning 
653 |a Synthesis 
653 |a Cybersecurity 
653 |a Software development 
653 |a Telecommunications 
773 0 |t IUP Journal of Telecommunications  |g vol. 17, no. 1 (Feb 2025), p. 75 
786 0 |d ProQuest  |t Advanced Technologies & Aerospace Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3207242357/abstract/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text  |u https://www.proquest.com/docview/3207242357/fulltext/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3207242357/fulltextPDF/embedded/L8HZQI7Z43R0LA5T?source=fedsrch
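
Editor's note: the abstract above names injection flaws and improper input validation among the vulnerabilities most often introduced by AI-generated code. The article's own code samples are not reproduced in this record; the short Python sketch below is purely illustrative, contrasting the query pattern that static analyzers typically flag as an SQL injection risk with a parameterized alternative. The function names and table schema are invented for the example and are not taken from the paper.

    import sqlite3

    def find_user_insecure(conn: sqlite3.Connection, username: str):
        # Anti-pattern often seen in generated code: user input is
        # interpolated directly into the SQL string, so an input such as
        # "x' OR '1'='1" changes the query's meaning (SQL injection).
        query = f"SELECT id, username FROM users WHERE username = '{username}'"
        return conn.execute(query).fetchall()

    def find_user_parameterized(conn: sqlite3.Connection, username: str):
        # Safer variant: the value is passed as a bound parameter, so the
        # database driver treats it as data, never as SQL syntax.
        query = "SELECT id, username FROM users WHERE username = ?"
        return conn.execute(query, (username,)).fetchall()

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
        conn.executemany("INSERT INTO users (username) VALUES (?)",
                         [("alice",), ("bob",)])

        payload = "nobody' OR '1'='1"
        print(find_user_insecure(conn, payload))       # returns every row
        print(find_user_parameterized(conn, payload))  # returns no rows

Running the sketch shows why the distinction matters: the interpolated query returns all rows for the crafted input, while the parameterized query returns none, which is the behavior the study's static and dynamic analysis tools would be checking for.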