Comparative Analysis of Styles in LLM-Generated Code for LeetCode Problems: A Preliminary Study
IEEE 49th Annual Computers, Software, and Applications Conference (COMPSAC), July 2025
Yifan Zhang, Tsong Yueh Chen, Rubing Huang, Matthew Pike, Dave Towey, Zhihao Ying, Zhi Quan Zhou. 2025. Comparative Analysis of Styles in LLM-Generated Code for LeetCode Problems: A Preliminary Study. In IEEE 49th Annual Computers, Software, and Applications Conference (COMPSAC). DOI:https://doi.org/10.1109/COMPSAC65507.2025.00219
@inproceedings{compsac-2025-1,
title={Comparative Analysis of Styles in LLM-Generated Code for LeetCode Problems: A Preliminary Study},
author={Yifan Zhang and Tsong Yueh Chen and Rubing Huang and Matthew Pike and Dave Towey and Zhihao Ying and Zhi Quan Zhou},
booktitle={IEEE 49th Annual Computers, Software, and Applications Conference (COMPSAC)},
year={2025},
doi={10.1109/COMPSAC65507.2025.00219}
}
Keywords: LLM, Code Generation, Code Style, Software Engineering, Education, ChatGPT, Gemini, LeetCode
Abstract
Large language models (LLMs) have rapidly become a powerful tool for automated code generation, yet most research has focused on the correctness and efficiency of their outputs rather than on their stylistic patterns. In this preliminary study, we analyze the code generated by the free versions of five popular LLMs (ChatGPT, Gemini, Claude, Grok, and DeepSeek) on three LeetCode problems: one top-ranked problem from each of the easy, medium, and hard categories. Our evaluation employs key metrics including inline comment density, naming conventions, and edge-case handling, highlighting both similarities and differences in verbosity, comprehensibility, and robustness among the code generated by the models. The findings of this study have important implications for software engineering and education, suggesting that LLM-generated code can serve both as a tool for rapid prototyping and as an effective learning resource for beginners. Our future work will extend this analysis to a broader set of coding challenges and will compare LLM outputs with human-written code to develop robust criteria for evaluating automated code generation.
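To illustrate one of the metrics mentioned above, the sketch below computes a simple inline-comment-density score for a Python snippet: the fraction of non-blank lines that carry a `#` comment. This is only an assumed approximation of the metric for illustration; the paper's exact definition may differ (e.g., it may exclude `#` characters inside string literals or count comment tokens via a parser).

```python
def inline_comment_density(source: str) -> float:
    """Fraction of non-blank lines containing a '#' comment.

    Simplified proxy for an inline-comment-density metric; for brevity
    it does not distinguish '#' inside string literals from real comments.
    """
    lines = [ln for ln in source.splitlines() if ln.strip()]
    if not lines:
        return 0.0
    commented = sum(1 for ln in lines if "#" in ln)
    return commented / len(lines)


# Hypothetical LLM-generated solution used as input for the metric.
sample = """def two_sum(nums, target):
    seen = {}  # value -> index
    for i, n in enumerate(nums):
        if target - n in seen:  # complement already seen
            return [seen[target - n], i]
        seen[n] = i
"""

print(round(inline_comment_density(sample), 2))  # 2 commented of 6 lines -> 0.33
```

A more faithful implementation could tokenize the source with Python's `tokenize` module and count `COMMENT` tokens, which would avoid miscounting `#` characters inside strings.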