Scaling of competence tests in the National Educational Panel Study – Many questions, some answers, and further challenges
Competence measurement in (longitudinal) large-scale assessments, such as the National Educational Panel Study (NEPS), places specific demands on the scaling of the competence data. These challenges include scaling issues, such as how to deal with different response formats and with missing values in the scaling model. They also include design aspects, such as the incorporation of adaptive testing and the inclusion of students with special educational needs in the assessment. In longitudinal designs in particular, the question arises of how to link competence scores across different cohorts and measurement occasions. With this article we aim to point out some of the challenges that arise when scaling competence data from (longitudinal) large-scale assessments, to give an overview of the research we have conducted within the NEPS to address these challenges, and to outline directions for future research. While for most topics we summarize the research conducted in the NEPS, we describe in more detail our research on the assumptions required for linking different cohorts in the NEPS. Specifically, we address the question of whether the same competence can be measured coherently across the whole lifespan. The results show that, when linking the Grade 9 reading test to the adult reading test, measurement invariance does not hold for all items. We describe how these research results have been implemented in the scaling of NEPS competence data and discuss their applicability to other large-scale studies.
Keywords: Competence tests; Item Response Theory; Scaling; Linking; Lifespan
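To make the linking and invariance question concrete, the following is a minimal, illustrative sketch (not the NEPS scaling procedure) of one common approach: placing Rasch item difficulties from two samples on a common scale via mean/mean linking of anchor items, then flagging anchor items whose difficulties still differ markedly after linking, which would indicate a violation of measurement invariance. All item values and the 0.5-logit threshold below are hypothetical.

```python
import numpy as np

def rasch_prob(theta, b):
    """Rasch model: probability of a correct response given ability
    theta and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def mean_mean_link(b_group1, b_group2):
    """Mean/mean linking: shift group-2 difficulties onto the
    group-1 scale using the common (anchor) items."""
    shift = b_group1.mean() - b_group2.mean()
    return b_group2 + shift, shift

def flag_dif(b_group1, b_linked, threshold=0.5):
    """Flag anchor items whose difficulty differs by more than
    `threshold` logits after linking (a rough invariance check)."""
    return np.abs(b_group1 - b_linked) > threshold

# Hypothetical difficulties of five anchor items in two samples:
b_grade9 = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
b_adult = b_grade9 - 0.8       # a uniform scale shift between samples
b_adult[2] += 0.9              # item 3 functions differently (DIF)

b_linked, shift = mean_mean_link(b_grade9, b_adult)
flags = flag_dif(b_grade9, b_linked)
print(flags)  # only the item with injected DIF is flagged
```

In practice, studies such as the NEPS use more elaborate IRT-based linking and DIF procedures, but the logic is the same: linking is only trustworthy for items whose parameters are invariant across groups, and items flagged here would be excluded from the anchor set or modeled separately.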
Copyright Waxmann 2009-2018
Journal for Educational Research Online/Journal für Bildungsforschung Online (ISSN 1866-6671)