Abstract
Diagnostic classification modelling (DCM) is a statistical methodology used to estimate students' mastery of different skills (Rupp et al., 2010). In typical assessment practice, a test produces only a single, overall score for each student, which offers limited guidance for teaching and learning. In contrast, DCM can be designed to evaluate multiple skills and therefore provide valuable information to direct future practice (Sessoms & Henson, 2018).
DCM is typically tied to cognitive diagnostic assessment. In the context of education, cognitive diagnostic assessment is an approach that aims to create tests that measure specific skills so that feedback can be given about students' strengths and weaknesses (Leighton & Gierl, 2007). This requires that tests are designed using a cognitive model of the knowledge and processes required to successfully complete each task (Leighton & Gierl, 2007; Zhang et al., 2024). DCM can then be used to analyze student responses to the test and estimate which skills they have mastered.
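To make the link between a cognitive model and DCM concrete, the mapping of tasks to required skills is commonly encoded as a Q-matrix. The following is a minimal illustrative sketch of the deterministic core of a DINA-type model (one common DCM); the items, skills, and Q-matrix entries are hypothetical examples, not drawn from TIMSS or the study itself.

```python
from itertools import product

# Hypothetical Q-matrix: rows = test items, columns = skills required.
Q = [
    [1, 0],  # item 1 requires skill A only
    [0, 1],  # item 2 requires skill B only
    [1, 1],  # item 3 requires both skills
]

def ideal_response(profile, q_row):
    """1 if the student's mastery profile covers every skill the item requires."""
    return int(all(a >= q for a, q in zip(profile, q_row)))

# Enumerate all 2^K mastery profiles and their ideal response patterns.
for profile in product([0, 1], repeat=2):
    pattern = [ideal_response(profile, q) for q in Q]
    print(profile, pattern)
```

In a full DCM, slip and guessing parameters perturb these ideal responses, and estimation inverts the process: observed item responses are used to infer each student's latent mastery profile.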
Although the primary function of DCM is to provide feedback for individuals, DCM has been used to examine learning across different countries by analyzing data from international large-scale assessments (ILSAs). The results of these studies demonstrate the potential value of analyzing ILSA data with the DCM methodology. However, ILSAs are not cognitive diagnostic assessments: they are not created based on a cognitive model. As a result, important issues can arise when analyzing ILSAs with DCM.
This study draws on previous research and an analysis of TIMSS 2023 grade 4 mathematics data to illustrate the challenges that can arise when using DCM to analyze large-scale assessment data. It identifies weaknesses in previous research that can be overcome (e.g., accurate estimation of error) and more fundamental issues that may not be resolvable.
| Original language | English |
|---|---|
| Pages | 158-160 |
| Publication status | Published - 21 Nov 2025 |
| Event | X SEMINAR "Data from and for educational system: tools for research and teaching", Ostia, Italy, 19 Nov 2025 → 21 Nov 2025 |
Conference
| Conference | X SEMINAR "Data from and for educational system: tools for research and teaching" |
|---|---|
| Country/Territory | Italy |
| City | Ostia |
| Period | 19/11/25 → 21/11/25 |