# What Is Scientifically-Based Research on Progress Monitoring?
Lynn S. Fuchs and Douglas Fuchs
**Abstract**
When teachers use systematic progress monitoring to track their students' progress in reading, mathematics, or spelling, they are better able to identify students in need of additional or different forms of instruction, they design stronger instructional programs, and their students achieve more. This document first describes the progress monitoring procedures for which experimental evidence demonstrates these effects; it then presents an overview of the research.
**Introduction**
Progress monitoring is a practice in which teachers assess students' academic performance on a regular basis (weekly or monthly) for two purposes: to determine whether children are profiting appropriately from the typical instructional program and to build more effective programs for the children who benefit inadequately from typical instruction.
This document describes research on progress monitoring in the areas of reading, spelling, and mathematics at grades 1-6. Experimental research, which documents how teachers can use progress monitoring to enhance student progress, is available for one form of progress monitoring: Curriculum-Based Measurement (CBM). More than 200 empirical studies published in peer-reviewed journals (a) provide evidence of CBM's reliability and validity for assessing the development of competence in reading, spelling, and mathematics and (b) document CBM's capacity to help teachers improve student outcomes at the elementary grades.
Most classroom assessment relies on mastery measurement. With mastery measurement, teachers test for mastery of a single skill and, after mastery is demonstrated, they assess mastery of the next skill in a sequence. So, at different times of the school year, different skills are assessed. Because the nature and difficulty of the tests keep changing with successive mastery, test scores from different times of the school year cannot be compared (e.g., scores earned in September cannot be compared to scores earned in November or February or May). This makes it impossible to quantify or describe rates of progress. Furthermore, mastery measurement has unknown reliability and validity, and it fails to provide information about whether students are maintaining previously mastered skills.
CBM avoids these problems because, instead of measuring mastery of a series of single short-term objectives, each CBM test assesses all the different skills covered in the annual curriculum. CBM samples the many skills in the annual curriculum in such a way that each weekly test is an alternate form (with different test items, but of equivalent difficulty). So, in September, a CBM mathematics test assesses all of the computation, money, graphs/charts, and problem-solving skills to be covered during the entire year. In November or February or May, the CBM test samples the annual curriculum in exactly the same way (but with different items). Therefore, scores earned at different times during the school year can be compared to determine whether a student's competence is increasing.
CBM also differs from mastery measurement because it is standardized; that is, the progress monitoring procedures for creating tests, for administering and scoring those tests, and for summarizing and interpreting the resulting database are prescribed. By relying on standardized methods and by sampling the annual curriculum on every test, CBM produces a broad range of scores across individuals of the same age. The rank ordering of students on CBM corresponds with rank orderings on other important criteria of student competence (1). For example, students who score high (or low) on CBM are the same students who score high (or low) on the annual state tests. For these reasons, CBM demonstrates strong reliability and validity (2). At the same time, because each CBM test assesses the many skills embedded in the annual curriculum, CBM yields descriptions of students' strengths and weaknesses on each of the many skills contained in the curriculum. These skills profiles also demonstrate reliability and validity (3). The measurement tasks within CBM are as follows:
**Pre-reading**
**Phoneme segmentation fluency:** For 1 minute, the examiner says words; in response to each word, the child says the sounds that constitute the word.
**Letter sound fluency:** The examiner presents the student with a sheet of paper showing the 26 lower case letters displayed in random order; the student has 1 minute to say the sound associated with each letter.
**Reading**
**Word identification fluency:** The examiner presents the student with a list of words, randomly sampled (with replacement) from a list of high-frequency words; the student reads words aloud for 1 minute; the score is the number of words read correctly. (Word identification fluency is appropriate for first graders until the score reaches 40 words read correctly per minute.)
**Passage reading fluency:** The examiner presents the student with a passage of the difficulty expected for year-end competence; the student reads aloud for 1 minute; the score is the number of words read correctly. (Passage reading fluency is appropriate through the fourth-grade instructional level.)
**Maze fluency:** The examiner presents the student with a passage of the difficulty expected for year-end competence for 2.5 minutes; from this passage, every seventh word has been deleted and replaced with three possible choices; the student reads the passage while selecting the meaningful choice for every seventh word; the score is the number of correct replacements.
**Mathematics**
**Computation:** The examiner presents the student with items systematically sampling the problems covered in the annual curriculum (adding, subtracting, multiplying, dividing whole numbers, fractions, and decimals, depending on grade); the student has a fixed time (depending on grade) to write answers; the score is the number of correct digits written in answers.
**Concepts and applications:** The examiner presents the student with items systematically sampling the problems covered in the annual curriculum (measurement, money, charts/graphs, problem solving, numeration, number concepts); the student has a fixed time (depending on grade) to write answers; the score is the number of correct answers written.
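The correct-digits metric used for computation gives partial credit for partially correct answers. As a rough illustration only (the function name and the right-alignment rule are assumptions of this sketch; the published CBM scoring rules are more detailed), digit-by-digit scoring might look like this:

```python
def correct_digits(answer: str, student: str) -> int:
    """Count digits in the student's written answer that match the
    correct answer when both are aligned on the rightmost place value.

    Hypothetical helper for illustration; actual CBM scoring rules
    cover cases such as remainders and omitted digits in more detail.
    """
    # Reverse both strings so ones, tens, hundreds... line up.
    # For example, scoring student "24" against answer "124"
    # credits the 2 and the 4 but not the missing 1.
    return sum(1 for a, s in zip(answer[::-1], student[::-1]) if a == s)
```

Under this sketch, a student who writes "24" for the answer "124" earns 2 of 3 possible correct digits, reflecting partial mastery rather than an all-or-nothing score.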
**Spelling**
Each test comprises 20 words randomly sampled from the pool of words expected for mastery during the year; the examiner dictates a word while the student spells on paper; the next item is presented after the student completes his/her spelling or after 10 seconds, whichever occurs sooner; the test lasts 2 minutes; the score is the number of correct letter sequences (adjacent pairs of letters spelled in the correct order).
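To make the letter-sequence metric concrete, here is a simplified sketch of correct-letter-sequence scoring. The boundary markers and the position-based comparison are assumptions of this illustration; the published scoring rules handle insertions and omissions more flexibly.

```python
def correct_letter_sequences(target: str, response: str) -> int:
    """Count correct letter sequences in a student's spelling.

    Simplified sketch: boundary markers mean a word of n letters has
    n + 1 possible sequences, so a fully correct 5-letter word scores 6.
    """
    # Bracket both words with boundary markers so the first and last
    # letters each anchor a sequence.
    t = f"^{target.lower()}$"
    r = f"^{response.lower()}$"
    # Count adjacent pairs that match in both identity and position.
    score = 0
    for i in range(len(t) - 1):
        if i + 1 < len(r) and t[i] == r[i] and t[i + 1] == r[i + 1]:
            score += 1
    return score
```

For example, "house" spelled correctly earns 6 sequences, while "hose" earns only 2 under this simplified comparison, so partial knowledge of a word still earns partial credit.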
**Written Expression**
In response to a story starter (i.e., a short topic sentence or phrase to begin the written piece), the student writes for a fixed amount of time (3-10 minutes). The score is the number of correct word sequences.
CBM produces two kinds of information. The *overall CBM score* (i.e., total score on the test) is an overall indicator of competence. The *CBM skills profile* describes strengths and weaknesses on the various skills assessed on each CBM test.
Teachers use the **overall CBM score** in three ways.
First, overall CBM scores are used in *universal screening* to identify students in need of additional or different forms of instruction. For example, CBM can be administered to all students in a class, school, or district at one point in time (e.g., October or January). Then, children in need of additional attention are identified using (a) normative standards (i.e., identifying students who score low compared to other students in the class, school, or nation) or (b) CBM benchmarks (i.e., identifying students whose scores fall below a specific cut-point that predicts future success on state tests).
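As an illustration, the two identification rules could be expressed as a small filter over a class's overall CBM scores. The function name, the 25th-percentile default, and the tie handling are hypothetical choices for this sketch, not prescribed by CBM:

```python
def identify_at_risk(scores, benchmark=None, percentile=25):
    """Flag students in need of additional attention.

    scores: dict mapping student name -> overall CBM score.
    If a benchmark cut-point is given, flag students scoring below it;
    otherwise flag students in the bottom `percentile` of the group
    (a normative standard relative to classmates).
    """
    if benchmark is not None:
        return [name for name, s in scores.items() if s < benchmark]
    ranked = sorted(scores.values())
    k = max(1, round(len(ranked) * percentile / 100))
    cut = ranked[k - 1]
    return [name for name, s in scores.items() if s <= cut]
```

With a benchmark supplied, the rule mirrors cut-point screening against a criterion that predicts later success; without one, it mirrors normative screening relative to the group tested.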
The second way teachers use overall CBM scores is to *monitor students' development of academic competence*. That is, students are measured weekly or monthly, with each student's CBM scores graphed against time. This graph shows the student's progress toward achieving competence on the annual curriculum. If the graphed scores are going up, then the student is developing competence on the annual curriculum; if the scores are flat, then the student is failing to benefit from the instructional program. The rate of weekly improvement is quantified as slope. Research provides estimates of the amount of CBM progress (or slope) students typically make. So, a teacher can compare the slope of his or her own class to the slope of large numbers of typically developing students to determine whether the instructional program is generally successful or requires adjustment. Teachers can also examine the slopes of individual students to determine which children are failing to make the amount of progress other children in the class (or nation) are demonstrating and therefore require additional help.
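The slope statistic referred to above is, in the simplest case, an ordinary least-squares slope of the graphed scores against week number. A minimal sketch, assuming one score per week with no missing weeks (a hypothetical simplification):

```python
def weekly_slope(scores):
    """Ordinary least-squares slope of CBM scores against week number.

    scores: list of overall CBM scores, one per week, in order.
    Returns the average gain in score units per week.
    """
    n = len(scores)
    weeks = range(1, n + 1)
    mean_x = sum(weeks) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, scores))
    den = sum((x - mean_x) ** 2 for x in weeks)
    return num / den
```

A slope of 2.0 would mean the student is gaining about two score units per week; a slope near zero signals a flat graph, i.e., a student who is not benefiting from the current program and whose instruction may need adjustment.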
The third way teachers use overall CBM scores is to *improve instructional programs*. For students who are failing to profit from the standard instructional program (as demonstrated via universal CBM screening or via inadequate CBM progress-monitoring slopes), teachers use CBM to "experiment" with different instructional components. As teachers adjust instructional programs, in an attempt to enhance academic progress for these children, the teachers continue to collect CBM data. They then compare CBM slopes for different instructional components to identify which components optimize academic growth. In this way, teachers use CBM to build effective programs for otherwise difficult-to-teach children.
Teachers use the **CBM skills profiles** to identify which skills in the annual curriculum require additional instruction and which students are experiencing problems maintaining skills after initial mastery has been demonstrated. This kind of information is available from CBM because every test assesses every skill covered in the annual curriculum, so mastery status on every skill can be described directly from each CBM test.