We are seeking submissions of automated scoring models to score constructed-response items for the National Assessment of Educational Progress (NAEP) reading assessment. The purpose of the challenge is to help NAEP determine the existing capabilities of automated scoring, its accuracy metrics, the validity evidence underlying assigned scores, and the costs and efficiencies of using automated scoring with NAEP reading assessment items. The Challenge requires that submissions demonstrate the interpretability of models, provide score predictions using those models, analyze models for potential bias based on student demographic characteristics, and provide cost information for putting an automated scoring system into operational use.
- To ensure the confidentiality of student responses, all participants must confirm that they are able to meet NCES Confidential Data security requirements and complete the required documentation (available in the GitHub repository as “application_documents.zip”) before they will be provided access to the response data.
- A webinar will be held on 10/4/2021 at 12:00 ET to describe the challenge and answer any questions that potential participants may have. To register for the webinar, please email [email protected] to join the meeting. Questions may also be submitted via GitHub “issues” or via email to [email protected].
- Applications to participate must be submitted by 10/20/2021 and will be processed (and data access provided) as they are received. Responses must be submitted by 11/28/2021.
Awards: $30,000
Deadline: 11/28/2021