Distribution of Proficient Scores that Exceed the 1% Cap: Four Possible Approaches

Synthesized from a variety of sources by Tiffany Martinez and Ken Olsen

Alliance for Systems Change
Mid-South Regional Resource Center: Helping Agencies Make A Difference

Interdisciplinary Human Development Institute – University of Kentucky

The passage of the No Child Left Behind Act and its subsequent regulations have challenged state education agencies (SEAs) in many ways, and ensuring the appropriate inclusion of students with disabilities has been a unique challenge. The recent publication of regulations on alternate assessments with alternate achievement standards (34 CFR Part 200) reinforces and extends the regulations under the Individuals with Disabilities Education Act (IDEA) dealing with alternate assessments (see 34 CFR § 300.138(b)). Under the new regulation, states have special requirements regarding:

  • Standards – States must define how the alternate achievement standards are aligned with the State's content standards and use validated and documented methods to establish proficiency levels [§ 200.1].
  • Guidelines – States must provide clearly defined guidelines for student participation in alternate assessments based on alternate achievement standards [§ 200.6].
  • 1% Cap – Scores must be included in the calculations of adequate yearly progress (AYP), and there is a cap on the scores that can be counted as proficient using alternate achievement standards. At the State and district level, all proficient scores exceeding 1% of the total enrollment in the grades tested must be counted as non-proficient against grade-level standards unless an exception has been approved [§ 200.13].
  • Exceptions – If unique circumstances exist, States and LEAs may apply for exceptions in order to slightly exceed the 1% cap [§ 200.13 (c)(2) and Raymond Simon letter, March 2, 2004].
  • Score Distribution – If an LEA exceeds the cap, then the State must determine which proficient scores are to be counted as non-proficient and distribute these scores [§ 200.13 (c)(4)].

Score Distribution Challenges

Distribution of scores exceeding the cap presents some special challenges. The regulation states that if the percentage of proficient and advanced scores on alternate assessments using alternate achievement standards exceeds the authorized cap at the State or LEA level, the state must:

  1. Include all scores of students with the most significant cognitive disabilities;
  2. Count as non-proficient the proficient and advanced scores exceeding the cap;
  3. Determine which proficient and advanced scores to count as non-proficient in schools and LEAs responsible for students who take an alternate assessment based on alternate achievement standards;
  4. Include those proficient and non-proficient scores in each applicable subgroup (e.g. economically disadvantaged, ethnicity, language minority) at the school, LEA, and state level;
  5. Ensure that parents are informed of the actual achievement levels of their children. [§ 200.13 (c) (4)]
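
The cap arithmetic behind steps 1 and 2 can be illustrated with a small sketch. The Python function below is a hypothetical illustration only; the function name, the figures used, and the rounding treatment of the cap are assumptions made here, not part of the regulation. It simply computes how many proficient or advanced alternate-assessment scores exceed 1% of total enrollment in the grades tested and would therefore be counted as non-proficient.

    def scores_over_cap(total_enrollment, proficient_alt_scores, cap_rate=0.01):
        """Return (cap, excess): how many alternate-assessment scores may count as
        proficient under the cap, and how many must be recounted as non-proficient.
        Hypothetical sketch of the 1% arithmetic; the rounding treatment of the cap
        is an assumption, not a regulatory rule."""
        cap = round(total_enrollment * cap_rate)        # 1% of enrollment in the grades tested
        excess = max(0, proficient_alt_scores - cap)    # scores to recount as non-proficient
        return cap, excess

    # Hypothetical LEA: 5,000 students enrolled in the tested grades and
    # 60 proficient/advanced scores on the alternate assessment.
    cap, excess = scores_over_cap(5000, 60)
    print(cap, excess)   # 50 scores may count as proficient; 10 must be recounted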

Potential Methods

Determining how to fairly and consistently count as non-proficient the proficient and advanced scores that exceed the cap, and how to include those scores in each applicable subgroup, is one of the challenges of implementing the 1% rule. This paper outlines the pros and cons of four potential methods for distributing such scores. These methods, summarized in Table 1 and described below, were gleaned from conversations with Federal and State personnel and with assessment experts. They are presented as illustrations, not as a comprehensive menu. Most importantly, these options are based on the assumption that all appropriate exceptions have been sought and granted.

  • Random assignment – This method randomly distributes scores, but only across schools that tested students against alternate achievement standards. All schools that tested students against alternate achievement standards would have an equal chance of receiving distributed scores, and schools that did not test students against those standards would receive none. This method should be impartial and fair over time if students with significant cognitive disabilities are evenly distributed across schools and schools conscientiously apply participation guidelines. It is easy to apply with software; however, it is seldom regarded as fair when the distribution is uneven in a particular year, and it can be difficult to implement when student populations are very small. It therefore might be important to have a special distribution rule for schools with very few students.
  • Proportional – A proportional distribution method distributes non-proficient scores across schools in proportion to the number of students tested against alternate achievement standards. This means that schools that tested a larger number of students against alternate achievement standards would receive a larger number of non-proficient scores. This might be the best method for deterring inappropriate assignment of students to alternate achievement standards because the schools that test the most students against alternate achievement standards would receive the most non-proficient scores. However, this method might penalize schools that have a large number of students with significant cognitive disabilities who are appropriately tested against alternate achievement standards and attain proficiency due to excellent instruction relative to other schools in the LEA.

    One adaptation to a general proportional approach might be to distribute scores only in proportion to the overall percentage of students taking the alternate assessment with alternate achievement standards who scored proficient and advanced. For example, consider a district with 5000 students in which 100 students took the alternate assessment with alternate achievement standards. If 40 of those 100 students scored below proficient and 60 scored proficient and above, the district would have only 60% of 1% counted as proficient (i.e. 30 scores), not the full 1% (i.e. 50 of the 60 scores); a worked sketch of this arithmetic follows this list. While this might appear to be overly punitive, it might help ensure that students with the most significant cognitive disabilities are assessed and that their scores are included.
  • Strategic – A strategic method identifies for distribution the scores whose distribution will result in the maximum benefit for each school. For example, the scores of students who are members of the fewest subgroups, or of students who are members of subgroups that already exceed the annual measurable objectives, might be distributed; as a result, the school would have a better chance of meeting AYP. However, this method is very difficult to implement: it requires a school-by-school analysis of the demographic distribution, and unless the decision rules were carefully documented and followed, consistency would be hard to maintain. More importantly, it can be seen as unethical and as displaying favoritism. Finally, this method rests on the potentially false assumption that the State's participation guidelines are specific, that decision-making is monitored, and that the State is comfortable that the "correct" students are taking the alternate assessment with alternate achievement standards.
  • Pre-determined School Cap – A State might also establish a cap or formula for each school based on that school's historical percentage of students with significant cognitive disabilities. The maximum number or proportion of proficient scores based on alternate achievement standards would thus be pre-determined for each school, so a school that has historically served a proportionally large number of such students would have greater leeway than a school with a sudden spike in such numbers. Pre-determining numbers for each school would probably be most effective in LEAs with stable populations and stable special education services, and where alternate achievement standards have been applied conservatively. However, small annual population changes may, over time, result in an imbalance among schools. This method may also be seen as perpetuating historical problems; for example, separate programs might be retained in order to keep the school cap high.
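
The proportional adaptation described above can be made concrete with a short sketch. The Python function below works through the district example from the text (5,000 students, 100 alternate-assessment takers, 60 of whom scored proficient or advanced); the function name and rounding choices are assumptions made for illustration, not part of any regulation or State policy.

    def proportional_cap_example(total_enrollment, alt_tested, alt_proficient, cap_rate=0.01):
        """Sketch of the proportional adaptation: only the alternate-assessment
        proficiency rate, applied to the 1% cap, is counted as proficient."""
        full_cap = round(total_enrollment * cap_rate)       # full 1% cap -> 50 scores here
        proficiency_rate = alt_proficient / alt_tested      # 60 / 100 -> 0.60
        adjusted_cap = round(full_cap * proficiency_rate)   # 60% of the 1% cap -> 30 scores
        recounted = alt_proficient - adjusted_cap            # 60 - 30 -> 30 recounted as non-proficient
        return full_cap, adjusted_cap, recounted

    # District example from the text: 5,000 students, 100 alternate-assessment
    # takers, 60 of whom scored proficient or advanced.
    full_cap, adjusted_cap, recounted = proportional_cap_example(5000, 100, 60)
    print(full_cap, adjusted_cap, recounted)   # 50 30 30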

Table 1: Methods for Distributing Scores Exceeding Caps

1. Random
   Pros:
     • Should be impartial and fair over time.
     • Easy to computerize.
     • Easy to understand/communicate.
   Cons:
     • Seldom regarded as fair when distribution is uneven in a particular year.
     • Might be hard to implement in small districts.
2. Proportional
   Pros:
     • Might deter inappropriate assignment of students to alternate achievement standards.
   Cons:
     • Might penalize a school that has a large number of students with significant cognitive disabilities appropriately tested and instructed.
3. Strategic
   Pros:
     • Might be perceived as providing the maximum benefit for schools.
   Cons:
     • Difficult to implement.
     • Can be perceived as unethical or as using favoritism.
     • Consistency might be hard to maintain over time.
     • Assumes "correct" students assessed.
4. Pre-determined School Cap
   Pros:
     • Might be effective in LEAs with stable population and special education services when use of alternate achievement standards has been applied conservatively.
   Cons:
     • Small population changes may result in an imbalance among schools.
     • May perpetuate historical problems.

Summary and Cautions

This paper has outlined only four possible ways of distributing scores when an approved cap (usually 1%) has been exceeded. Although pros and cons were suggested for each method, they are speculative and might not hold up to scrutiny. Experience and research will be essential to assess actual effects. Also, these are by no means the only possible ways to deal with the distribution of scores. States are encouraged to find the method that works best for them while meeting the Federal requirements.

 

The information in this document:

  • incorporates ideas from State staff in the nine-state Mid-South Regional Resource Center Region and from staff of the National Center on Educational Outcomes and
  • was reviewed by staff at the U.S. Office of Elementary and Secondary Education and determined to be consistent with 34 CFR 200.
 

This publication was developed under a grant from the Office of Special Education Programs, U.S. Department of Education. Opinions expressed herein are those of the authors and do not necessarily reflect the position of the U.S. Department of Education, and no official endorsement should be inferred.