NC-TEAN, supported by Taskstream-Tk20, will be providing a one-day workshop on September 20, 2017, focusing on the CAEP Evaluation Framework for EPP-Created Assessments. Dr. Margaret Crutchfield will spend the day with faculty going over each section of the Framework, providing examples and multiple opportunities for faculty to work collaboratively to improve their assessments and ensure that they meet CAEP's criteria.
The Council for the Accreditation of Educator Preparation (CAEP) has significantly raised its criteria for the quality of program-created assessments used to demonstrate that an Educator Preparation Provider (EPP) is meeting CAEP's standards. CAEP's criteria are delineated in the CAEP Evaluation Framework for EPP-Created Assessments (available on the CAEP website). CAEP Site Visitors will use the Framework during their Formative Review, and their feedback will be included in the Formative Feedback Report.
It is important to remember that the CAEP Framework defines the minimally sufficient criteria that will be used to judge the quality of the submitted assessments; it is not intended to define the "gold standard" of assessment development. EPPs are always encouraged, when appropriate, to go beyond the sufficient-level criteria.
The Framework is divided into seven sections:
- Section 1, Administration and Purpose: To evaluate an assessment effectively, reviewers need to know when and how often it is administered, its purpose, how it is used to monitor candidate progress, and how well it aligns with the appropriate set of standards.
- Section 2, Content of Assessment: This section examines the quality of the indicators (used here as a generic term for assessment items): what construct, behavior, or question is being evaluated. Most indicators should be directly related to the appropriate standards and written at the same level of difficulty as the standard. Indicators should unambiguously describe the proficiencies to be evaluated, not just name a category (e.g., "assessment," "differentiation"). The quality of the indicators has a direct bearing on the validity of the instrument (see Section 5).
- Section 3, Scoring: This section refers to the proficiency level descriptors, the basis for judging candidate performance. These levels need to form a developmental sequence from level to level and be clearly defined in actionable, performance-based, or observable behavioral terms. The quality of the proficiency level descriptors has a direct bearing on the EPP's ability to evaluate the reliability of the assessment and its raters (see Section 4).
- Section 4, Data Reliability: At a minimum, CAEP requires that EPPs evaluate the inter-rater reliability of the scorers of the assessment. EPPs must provide information on how scorers are trained, how their ratings are tested for inter-rater reliability, and how their scoring is systematically recalibrated.
- Section 5, Data Validity: At a minimum, CAEP expects EPPs to establish the content validity of each assessment using an appropriate methodology that meets accepted research standards; Lawshe's methodology is one example.
- Sections 6 and 7, Surveys: Both sections focus on surveys. CAEP does not hold surveys to the same validity and reliability expectations as other kinds of assessments, but it does have criteria for survey content and data quality.
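To make Section 4's inter-rater reliability requirement concrete, here is a minimal sketch using Cohen's kappa, one common chance-corrected agreement statistic for two raters. The Framework does not mandate a particular statistic, and the rubric scores below are invented purely for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    # Observed agreement: proportion of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement: chance overlap of the raters' score distributions.
    dist_a, dist_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(dist_a[c] * dist_b[c] for c in dist_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two trained scorers rating eight candidate work samples on a 1-3 rubric.
scorer_1 = [1, 2, 2, 3, 3, 3, 1, 2]
scorer_2 = [1, 2, 2, 3, 3, 2, 1, 2]
print(round(cohens_kappa(scorer_1, scorer_2), 2))  # 0.81
```

A low kappa after scorer training would signal the kind of recalibration CAEP asks EPPs to document.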
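Lawshe's methodology, named in Section 5, reduces to simple arithmetic: a panel of N subject-matter experts rates each assessment item as "essential," "useful but not essential," or "not necessary," and the content validity ratio is CVR = (n_e - N/2) / (N/2), where n_e is the number of experts rating the item essential. A minimal sketch, with an invented panel and items for illustration:

```python
def content_validity_ratio(essential_votes, panel_size):
    """Lawshe's CVR: -1 (no expert says essential) to +1 (all experts do)."""
    half = panel_size / 2
    return (essential_votes - half) / half

# A 10-member expert panel reviews three draft rubric items; each count
# is how many experts rated that item "essential" to the construct.
items = [("plans standards-aligned instruction", 10),
         ("uses assessment data to adjust teaching", 8),
         ("knows the campus layout", 3)]
for item, n_e in items:
    print(f"{item}: CVR = {content_validity_ratio(n_e, 10):+.1f}")
# plans standards-aligned instruction: CVR = +1.0
# uses assessment data to adjust teaching: CVR = +0.6
# knows the campus layout: CVR = -0.4
```

Items whose CVR falls below the critical value Lawshe published for the panel size are candidates for revision or removal, which is one way an EPP can document content validity evidence for reviewers.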
The CAEP Framework is a useful tool for faculty to use as they develop new assessments and evaluate assessments currently in the program. The NC-TEAN workshop will give faculty a hands-on opportunity to do just that.
The post Taskstream-Tk20 Supports NC Workshop on CAEP Assessment Criteria appeared first on Watermark.