Recent research indicates that fine-tuning smaller language models on reasoning samples generated by large language models (LLMs) can effectively enhance their performance on complex reasoning tasks. However, small models fine-tuned with the existing Zero-shot-CoT method still exhibit shortcomings in problem understanding, mathematical calculation, and logical reasoning, and tend to omit steps when solving problems.
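To make the distillation setup concrete, the following is a minimal sketch (not the author's actual pipeline) of fine-tuning a small causal language model on LLM-generated chain-of-thought samples with Hugging Face Transformers. The file name human_thinking_data.jsonl, its question/rationale/answer fields, and the use of gpt2 as a stand-in small model are all assumptions for illustration; adapt them to the dataset's real schema.

```python
# Sketch only: fine-tune a small model on (question, rationale, answer) records
# distilled from a larger LLM. Field names and file path are assumed, not
# taken from this dataset's documentation.
import json

import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)


class ReasoningDataset(Dataset):
    """Wraps JSONL reasoning records as causal-LM training examples."""

    def __init__(self, path, tokenizer, max_len=512):
        self.examples = []
        with open(path) as f:
            for line in f:
                rec = json.loads(line)  # assumed keys: question / rationale / answer
                text = (f"Question: {rec['question']}\n"
                        f"Let's think step by step. {rec['rationale']}\n"
                        f"Answer: {rec['answer']}")
                enc = tokenizer(text, truncation=True, max_length=max_len,
                                padding="max_length", return_tensors="pt")
                enc = {k: v.squeeze(0) for k, v in enc.items()}
                # Train the model to reproduce the full reasoning trace,
                # but ignore padding positions in the loss.
                enc["labels"] = enc["input_ids"].clone()
                enc["labels"][enc["attention_mask"] == 0] = -100
                self.examples.append(enc)

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        return self.examples[idx]


tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in small model
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

train_ds = ReasoningDataset("human_thinking_data.jsonl", tokenizer)
args = TrainingArguments(output_dir="cot-distilled", num_train_epochs=3,
                         per_device_train_batch_size=4)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```

In the Zero-shot-CoT setting referenced above, the rationale field would typically be obtained by prompting a large LLM with "Let's think step by step" and recording its reasoning before the final answer.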

Dataset Files

You must be an IEEE Dataport subscriber to access these files.

Citation

[1] Jian Luo, "Human Thinking Data," IEEE Dataport, 2024. [Online]. Available: http://dx.doi.org/10.21227/b4rv-3121. Accessed: Feb. 11, 2025.