This dataset is intended for studying how programming style and IDE usage differ between students who plagiarise their homework and students who solve it honestly. The dataset includes homework submitted by students during two introductory programming courses (A and B), each delivered in two years (2016 and 2017). Course A is taught in the C programming language, while course B is taught in C++. In addition to the homework submissions, the dataset includes full traces of all student activity and keystrokes during homework development.


The provided archive consists of three parts:

SOURCE CODES:
The homework actually submitted by students (i.e. their source code) is stored in the folder "src". Its subfolders are named after the courses: A2016, A2017, B2016 and B2017. These in turn contain subfolders for individual assignments. In each course, students were required to solve 16-22 assignments, labelled "Z1/Z1", "Z1/Z2", "Z2/Z1", etc. Each assignment folder contains the actual C or C++ files, named after the (anonymized) student; real student names were replaced by strings of the form "student1393".

TRACES:
IDE usage traces are stored in the folder "stats". Again, this folder is organized into subfolders named after the courses. These contain one file per (anonymized) student, with the extension .stats, in JSON format. The format of the JSON files is described in the readme.txt file.

GROUND TRUTH:
The ground truth lists students and groups of students considered to have been involved in plagiarism, based on code similarity and on failure to deliver an "oral defense". There are three ground-truth files: ground-truth-anon.txt contains the full list of plagiarisms, ground-truth-static-anon.txt only those based on source-code similarity, and ground-truth-dynamic-anon.txt only those based on failure to do an "oral defense". There is some overlap between the last two files. The format of each file is: a homework assignment in the format
- A2016/Z1/Z1
(dash, space, course name, slash, assignment name), followed by lists of anonymized student names (such as "student3241") or groups of mutually plagiarised students, separated by commas.
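The ground-truth format described above can be read with a short script. The sketch below is a hypothetical parser, not part of the dataset: it assumes that each assignment header is a line starting with "- " and that every following non-empty line (until the next header) holds one comma-separated group of mutually plagiarising students. Verify these assumptions against the actual files before relying on it.

```python
def parse_ground_truth(path):
    """Map each assignment (e.g. 'A2016/Z1/Z1') to a list of plagiarism
    groups, where each group is a list of anonymized student names.

    Assumed layout (a sketch, not confirmed by the dataset docs):
      - A2016/Z1/Z1          <- assignment header: dash, space, course/assignment
      student3241, student1393   <- one comma-separated group per line
    """
    groups = {}
    current = None
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            if line.startswith("- "):
                # New assignment header, e.g. "- A2016/Z1/Z1"
                current = line[2:].strip()
                groups[current] = []
            elif current is not None:
                # A group of mutually plagiarising students
                groups[current].append([s.strip() for s in line.split(",")])
    return groups
```

For example, `parse_ground_truth("ground-truth-anon.txt")["A2016/Z1/Z1"]` would return a list of groups, each group being a list of names like "student3241".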


Once the three courses for the three learning scenarios - C# OOP programming, Sphero Edu visual programming and the VEDILS authoring tool - had been taught, the three student groups were asked to rate, on a scale from one to four (to avoid the selection of a neutral option), their perception of the clarity and interest of the exposition (the CL and IT indicators), as well as the time spent studying the course contents (the ST indicator).