
Large Language Models for Intelligent Software Engineering, Code Analysis, and Automation Applications

Abstract

Large Language Models (LLMs), particularly transformer-based architectures trained on code corpora, are rapidly transforming software engineering by enabling intelligent code analysis, automated code synthesis, and autonomous software maintenance. These models have shown emergent competence in tasks ranging from syntactic and semantic code understanding to program repair, debugging, and test generation. However, they also face significant limitations, including hallucination, security vulnerabilities in generated code, and challenges in deep semantic comprehension. This paper reviews recent developments in LLM-based software engineering, synthesizing results from surveys, benchmarks, and experimental frameworks that highlight both progress and pain points. We analyze how LLMs support static and dynamic analysis, autonomous bug fixing, automated test generation, and code quality improvement, and consider trends toward multi-agent systems and autonomous program improvement workflows. We also discuss practical considerations, such as integration into DevOps pipelines, reliability concerns, and evaluation metrics for real-world applicability. By providing a unified perspective on state-of-the-art techniques and challenges, this work aims to guide future research in applying LLMs to software engineering tasks that require both high accuracy and robust automation.
