CodeMind Dataset

- Submitted by: Changshu Liu
- Last updated: Tue, 04/15/2025 - 01:18
- DOI: 10.21227/ey9m-cg53
Abstract
Large Language Models (LLMs) have been widely used to automate programming tasks. Their capabilities are typically evaluated by assessing the quality of generated code through tests or proofs. The extent to which they can reason about code is a critical question that reveals important insights about their true capabilities. This paper introduces CodeMind, a framework designed to gauge the code reasoning abilities of LLMs through three explicit and implicit code reasoning tasks: Independent Execution Reasoning (IER), Specification Reasoning (SR), and Dynamic Semantics Reasoning (DSR). The first evaluates the ability of LLMs to simulate the execution of code on given inputs and predict the output (IER). The second assesses the ability of LLMs to incorporate the simulation of test data given in the specification into code generation (SR). Finally, CodeMind evaluates LLMs’ ability to understand overall code semantics given only a specific input/output pair (DSR). Our extensive evaluation of ten LLMs across four widely used benchmarks using CodeMind shows that LLMs, depending on their size and training strategy, can reason about some dynamic aspects of code. However, their performance drops for code with higher complexity, non-trivial logical and arithmetic operators, non-primitive types, and API calls. We show that these reasoning tasks evaluate LLMs differently, and a comprehensive evaluation of code reasoning requires all of them. Finally, we show that the performance of LLMs in bug repair is not correlated with any of the code reasoning tasks, and except for advanced frontier models, LLMs do not incorporate code reasoning when performing bug repair. Given that program repair requires execution reasoning (to determine where the behavior of buggy code diverges from the specified behavior and thereby localize the bug) as well as specification and dynamic semantics reasoning (to rewrite the code such that the patch preserves correct semantics while fixing the semantic mismatch with the specification), this observation raises the question of the extent to which we can trust these models for programming tasks that require code understanding and analysis.
Instructions can be found at https://github.com/Intelligent-CAT-Lab/CodeMind.
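To make the task definitions above concrete, below is a minimal, hypothetical sketch of what IER- and DSR-style instances could look like. The function, inputs, and framing here are illustrative assumptions, not drawn from the dataset; the actual prompts and data formats are documented in the repository.

```python
# Hypothetical task instances, for illustration only; the real dataset
# format is documented in the CodeMind repository linked above.

def f(nums):
    # Keep the even values, then sum their squares.
    return sum(x * x for x in nums if x % 2 == 0)

# IER: given f and the input [1, 2, 3, 4], the model must predict the
# output without executing the code. Ground truth:
assert f([1, 2, 3, 4]) == 20  # 2*2 + 4*4

# DSR: given f together with an observed pair such as [1, 2, 3, 4] -> 20,
# the model must show it grasps the overall semantics of f, e.g. by
# predicting the output for a fresh input:
assert f([3, 5, 6]) == 36  # only 6 is even; 6*6
```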