Massachusetts: One State's Approach to Setting Performance Levels on the Alternate Assessment

Alternate Assessment in Massachusetts

In Massachusetts, about 5,000 students, or one percent of all students being assessed, submit portfolios for the Massachusetts Comprehensive Assessment System (MCAS) Alternate Assessment. In creating portfolios, their teachers must first identify challenging outcomes for each student based on the standards in each content area being assessed. Many states, including Massachusetts, use an "expanded" version of their standards that describes academic outcomes that are appropriate for students with significant disabilities. Teachers then collect "evidence" of their students’ performance on those standards during targeted instructional activities or structured student observations. Portfolios may contain an array of work samples, instructional data sheets, audio- and videotapes, and other evidence organized into "portfolio strands" in each content area.

Once MCAS Alternate Assessments are submitted to the state, they are scored, and a performance level is assigned in each content area so that parents and teachers have information on how well these students are learning the general curriculum relative to their past performance and the performance of other students. The process the Massachusetts Department of Education uses to assign performance levels to alternate assessments is the focus of this report. This technical phase, called standard setting, reflects several steps that typically occur between scoring and reporting (Quenemoen, Rigney, & Thurlow, 2002). However, the process also reflects theoretical debates and decisions that occurred much earlier in the development of the alternate assessment, sometimes years before the first portfolio was compiled and submitted. Several of these earlier conversations and their consequences are also described in this report, since the recommendations form the philosophical basis of much that followed. First among these conceptual discussions was defining who should take an alternate assessment.

A Diverse Group of Advisors

Late in 1998, the Massachusetts Department of Education began convening regular meetings of a task force made up of DOE staff (from the Special Education and Assessment units), the contractor team (Measured Progress and the ILSSA group at the University of Kentucky), and the Massachusetts Alternate Assessment Advisory Committee (a diverse stakeholder group). This task force provided recommendations to the Department on a range of assessment issues, including:

  • how to provide guidance to IEP teams about which students to consider for alternate assessments;
  • what alternate assessments should look like;
  • how alternate assessments should be scored;
  • which scores should "count" toward overall performance; and
  • how to describe and report the performance of students who take alternate assessments.

Guidelines for IEP Teams: Who Should Take Alternate Assessments?

It was assumed from the beginning that students who needed alternate assessments were, for the most part, those who could not take paper-and-pencil tests and whose academic performance was based on the expanded standards appropriate for students with significant disabilities. However, the task force also identified students whose disabilities were not primarily cognitive who they felt should also be considered for alternate assessments by their IEP and 504 teams. Generally, this smaller group of students had disabilities that presented "unique and significant challenges" to participation in standardized statewide testing, regardless of the accommodations they could use on those tests. The task force recommended, for example, that students with severe behavioral and emotional disabilities, or those with cerebral palsy, sensory impairments (deaf, blind, or deaf and blind), or fragile health and medical conditions, also be considered for alternate assessments, regardless of their levels of academic performance. For these students, taking on-demand statewide tests could present insurmountable barriers to participation and therefore deny them access to the assessment (Massachusetts Department of Education, 1999).

Under guidelines provided to Massachusetts IEP teams since 1999, then, students across the full spectrum of academic performance are eligible to take alternate assessments, even when they are able to demonstrate the same (or higher) levels of performance as a tested student; they simply require an alternate format in which to demonstrate their knowledge and skills. The MCAS reporting system therefore required sufficient flexibility and integrity to provide meaningful feedback on students who demonstrate a "comparable performance" to a student who scores at the highest levels on the standard tests. It also became necessary to incorporate a method by which a student could meet the state's graduation requirement through an alternate assessment. The task force strongly advised that the alternate assessment be a different, though not easier, pathway to demonstrate the same performance as a tested student.

Scoring Alternate Assessments

The task force next considered and selected criteria on which to base the scores of alternate assessment portfolios. They advised the Department to develop criteria based primarily on student performance, since that is what the standard assessment measured, rather than on how well the student's program provided opportunities to learn the material. Some on the task force, however, felt that student achievement could not be separated from program effectiveness. In the end, a scoring rubric was developed in which four of the six categories are based on student performance and two reflect the effectiveness of the student's program (see the sketch after the list below):

  • Completeness of the portfolio
  • Level of Complexity: the difficulty of academic tasks and knowledge attempted by the student
  • Demonstration of Skills and Concepts: the accuracy of the student's performance
  • Independence: the cues, prompts, and other assistance required by the student to perform the tasks or activities
  • Self-Evaluation: the extent to which the student is given opportunities to reflect on, set goals for, evaluate, and monitor his or her own performance
  • Generalized Performance: the number of contexts and instructional approaches in which the student performs tasks and demonstrates knowledge
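
The rubric's performance/program split can be made concrete with a small data sketch. The sketch below is illustrative only: the category names follow the rubric above, but the split itself is an inference from the category descriptions (the two program-related categories describe opportunities provided to the student rather than what the student did), and the helper function is hypothetical rather than part of any MCAS documentation.

```python
# Illustrative sketch only. The performance/program split is inferred
# from the rubric descriptions above; it is not quoted from MCAS rules.
RUBRIC_CATEGORIES = {
    "Completeness": "student performance",
    "Level of Complexity": "student performance",
    "Demonstration of Skills and Concepts": "student performance",
    "Independence": "student performance",
    "Self-Evaluation": "program effectiveness",
    "Generalized Performance": "program effectiveness",
}

def split_scores(scores: dict[str, int]) -> tuple[dict[str, int], dict[str, int]]:
    """Separate one portfolio's rubric scores into the four performance-based
    categories and the two program-based categories."""
    perf = {c: s for c, s in scores.items()
            if RUBRIC_CATEGORIES[c] == "student performance"}
    prog = {c: s for c, s in scores.items()
            if RUBRIC_CATEGORIES[c] == "program effectiveness"}
    return perf, prog
```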

Scores are determined and reported in each of the rubric areas listed above. Once numerical scores are obtained for a portfolio in these rubric areas, the raw scores must somehow be combined to identify an overall performance level in the content area. Before performance levels can be determined, however, several important questions must be answered (a hypothetical sketch follows the list):

  • What will each performance level be called; how many performance levels will there be; and how will each be defined?
  • Which numerical scores in which rubric areas will be counted in determining the overall performance level?
  • How will numerical scores in those rubric areas be combined to yield a performance level?
  • What range or combination of scores will yield a particular performance level?
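
To make the last two questions concrete, the sketch below shows one hypothetical way a combination rule might be encoded: sum the scores in the counted rubric areas, then map the total onto a named level through a table of cut ranges. Everything here is invented for illustration; which categories count, how scores are combined, and where the cuts fall are precisely the standard-setting decisions this report goes on to describe.

```python
# Hypothetical combination rule, for illustration only. The counted
# categories, the 1-4 score range, the cut ranges, and the generic
# level names are all assumptions, not actual MCAS-Alt rules.
COUNTED_CATEGORIES = [
    "Level of Complexity",
    "Demonstration of Skills and Concepts",
    "Independence",
]

# (lowest total, highest total, level name) -- invented cut ranges
# for three counted categories scored 1-4 each (totals 3-12).
PERFORMANCE_LEVELS = [
    (3, 5, "Level 1"),
    (6, 8, "Level 2"),
    (9, 11, "Level 3"),
    (12, 12, "Level 4"),
]

def performance_level(scores: dict[str, int]) -> str:
    """Map a portfolio's counted rubric scores to an overall level."""
    total = sum(scores[category] for category in COUNTED_CATEGORIES)
    for low, high, name in PERFORMANCE_LEVELS:
        if low <= total <= high:
            return name
    raise ValueError(f"total {total} is outside the expected 3-12 range")
```

Under this invented rule, a portfolio scoring 3, 4, and 3 in the counted categories would total 10 and be assigned "Level 3"; changing any cut range or adding a counted category would change the outcome, which is why the questions above had to be settled before any performance levels could be reported.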
