Publication Activity
Record type:
Conference proceedings paper (D)
Home Department:
Department of Mathematics (31100)
Title:
Understanding and Supporting Student Problem Solving in Mathematics Exams with Artificial Intelligence
Citation:
Konečná, P. and Ferdiánová, V. Understanding and Supporting Student Problem Solving in Mathematics Exams with Artificial Intelligence.
In:
24th European Conference on e‑Learning - ECEL 2025: Proceedings of the 24th European Conference on e‑Learning, 2025-10-23, Copenhagen.
pp. 113-121.
Subtitle:
Publication year:
2025
Field:
Number of pages:
9
Page from:
113
Page to:
121
Form of publication:
Electronic version
ISBN code:
ISSN code:
Proceedings title:
Proceedings of the 24th European Conference on e‑Learning
Proceedings:
International
Publisher name:
Not specified
Place of publishing:
Not specified
Country of Publication:
Proceedings published abroad
Conference name:
24th European Conference on e‑Learning - ECEL 2025
Conference venue:
Copenhagen
Conference start date:
Type of event by nationality of participants:
Worldwide event
WoS code:
EID:
Key words in English:
Mathematics Education, National high-stakes assessment, Error Analysis, Student Misconceptions, Artificial Intelligence in Education, Diagnostic Feedback, Conceptual Understanding, Secondary Education, Test Validation, Educational Technology
Annotation in original language:
This paper presents the findings of a pilot study aimed at gaining deeper insights into student errors in solving mathematics tasks from the Czech national school-leaving examination (maturita), while also exploring the potential of artificial intelligence (AI) to support error analysis and provide targeted feedback. The study began with an analysis of publicly available CERMAT data, focusing on tasks that have consistently shown low success rates over the years. Based on this analysis, a subset of tasks was selected and further tested on students preparing for the exam. The results were compared with national statistics to validate the relevance of the identified difficulties. A revised version of the test was then developed and administered to a new cohort of students, enabling the collection of a dataset of real student solutions for qualitative error analysis. The study adopted a nuanced framework for error classification, distinguishing between “slips” (minor, often procedural errors) and “true errors” stemming from a lack of conceptual understanding. Emphasis was placed on understanding the nature and origin of these errors, their recurrence, and implications for learning. Student work was analysed in all phases of the error-handling process, including detection, diagnosis, explanation, and correction. At the same time, the study evaluated selected AI tools (primarily ChatGPT 4.0) for their potential to solve exam-level mathematics tasks at the university level and to identify errors in student solutions. Multiple test items were processed through the AI system, and its responses were compared with those of students. Particular attention was given to the AI's behaviour when confronted with incorrect or incomplete answers. The results revealed both the promise and limitations of current AI models in supporting formative assessment, particularly with respect to misinterpretation of task wording, difficulty in recognising alternative valid strategies, and occasional inconsistency in the quality of feedback. The findings contribute to the broader discussion on how AI can be effectively integrated into educational practice, not as a replacement for teacher judgment, but as a supplementary tool to enhance student understanding, develop metacognitive skills, and improve preparation for high-stakes assessments such as the maturita exam.
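The annotation describes a two-level taxonomy of student errors ("slips" vs. "true errors") and four phases of error handling (detection, diagnosis, explanation, correction). The following is a minimal, illustrative Python sketch of how such observations could be recorded for analysis; it is not taken from the paper, and all identifiers and sample values (ErrorRecord, task_id, "S014", etc.) are hypothetical.

from dataclasses import dataclass
from enum import Enum

# Error kinds mirroring the paper's classification framework:
# "slips" are minor, often procedural mistakes; "true errors" stem
# from a lack of conceptual understanding.
class ErrorKind(Enum):
    SLIP = "slip"
    TRUE_ERROR = "true_error"

# Phases of the error-handling process analysed in the study.
class Phase(Enum):
    DETECTION = "detection"
    DIAGNOSIS = "diagnosis"
    EXPLANATION = "explanation"
    CORRECTION = "correction"

@dataclass
class ErrorRecord:
    task_id: str      # exam task identifier (placeholder)
    student_id: str   # anonymised student code
    kind: ErrorKind   # slip vs. true error
    phase: Phase      # phase in which the error was handled
    note: str = ""    # free-text description of the error

# Example usage: tagging one observed error from a student solution.
record = ErrorRecord(
    task_id="2023-task-07",
    student_id="S014",
    kind=ErrorKind.TRUE_ERROR,
    phase=Phase.DIAGNOSIS,
    note="Confuses the sine rule with the cosine rule.",
)
print(record)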