
States Challenged to Meet Special Education Targets for Assessment Indicator

Technical Report 55

Jason Altman • Christopher Rogers • Chris Bremer • Martha Thurlow

February 2010

All rights reserved. Any or all portions of this document may be reproduced and distributed without prior permission, provided the source is cited as:

Altman, J., Rogers, C., Bremer, C., & Thurlow, M. (2010). States challenged to meet special education targets for assessment indicator (Technical Report 55). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.


Executive Summary

All states are required by the Individuals with Disabilities Education Act (IDEA) to submit Annual Performance Reports (APRs) to the federal government. The purpose of this report is to reflect the progress made, according to state APRs submitted in 2008, toward meeting targets for components of APR Indicator 3 on assessments administered in school year 2006–2007. An additional purpose is to present an analysis of improvement activities used by states to facilitate progress in their assessment systems. Results show that states continue to improve in meeting reporting requirements for this Indicator. However, the number of states meeting their targets for Adequate Yearly Progress (AYP), as well as student proficiency, is waning. It appears that this phenomenon is due in many cases not to decreases in performance but rather to not being able to meet the targets set for improvement. Three categories of improvement activities were strongly associated with states’ success in meeting AYP goals: (a) production of document, video, or Web-based development/dissemination/framework; (b) improvements in data collection and reporting; and (c) clarification, examination, or development of policies and procedures. The report offers recommendations for states to consider for improving their assessment systems so that students with disabilities may better show what they know and can do, and so that data demonstrating learning may be better reported.


Introduction

Public education reform efforts of the past two decades have been driven by the notion that the implementation of statewide rigorous academic standards, through improved instructional approaches and access to a challenging curriculum, will yield higher levels of achievement for all students and reduce the achievement gap for students having difficulty reaching grade-level proficiency (Elmore & Rothman, 1999; Shepard, Hannaway, & Baker, 2009). Central to this process is accountability at the school, district, and state level for the educational progress of all students, including those who are typically difficult to educate. Recent policy has made use of standards-based assessments in accountability to advance access to the general curriculum based on the same goals and standards for all students, and improved achievement for all students (Downing, 2008; Kearns, Burdge, Clayton, Denham, & Kleinert, 2006; Nolet & McLaughlin, 2005; Thurlow, 2000; Thurlow, Quenemoen, Lazarus, Moen, Johnstone, Liu, Christensen, Albus, & Altman, 2008).

States have been required to assess and publicly report on the participation and performance of students with disabilities in large-scale assessments since the 1994 reauthorization of the federal Elementary and Secondary Education Act (ESEA), called the Improving America’s Schools Act (IASA). The reauthorization of the Individuals with Disabilities Education Act (IDEA) in 1997 initiated an alignment with ESEA, and specified that each state must report assessment data for children with disabilities “with the same frequency and in the same detail as it reports on the assessment of nondisabled children” (IDEA, 1997). The Individuals with Disabilities Education Improvement Act of 2004 (IDEA, 2004) reinforced the reporting provisions of the 1997 IDEA amendments and systematically aligned these requirements with the mandates of No Child Left Behind (NCLB), the 2001 reauthorization of the Elementary and Secondary Education Act (ESEA). Nevertheless, some subgroups of students historically were excluded from large-scale assessments, including students with disabilities (Thurlow & Ysseldyke, 2002). There is evidence that exclusion of students with disabilities has decreased over time, in part spurred by the NCLB assessment participation requirements (Thurlow et al., 2008). NCLB requires that states annually assess all students in grades 3 through 8, and once during high school, in mathematics and reading. Further, they must report aggregates of the results of these assessments to the public (Thurlow, Bremer, & Albus, 2008; Thurlow et al., 2008), as well as to administrators at the office of the U.S. Secretary of Education (Thurlow, Altman, Cormier, & Moen, 2008). Most students are assessed using a regular assessment based on grade-level achievement standards, either with or without accommodations. 
Accommodations are test changes designed to provide access and improve the validity of assessment results for students with disabilities (Bolt & Roach, 2009; Thurlow, 2007; Thurlow, Lazarus, & Christensen, 2008). A small percentage of students with disabilities, those with significant cognitive disabilities, are tested using an alternate assessment based on alternate achievement standards. In some states, students with disabilities participate in an alternate assessment based on grade-level achievement standards. In a small but growing number of states, students with disabilities participate in an alternate assessment based on modified achievement standards. The analyses of these assessment results, when broken down by content area and grade level, provide a yearly snapshot of the learning progress made by all students (Altman, Cormier, & Crone, in press).

IDEA 2004 and NCLB set requirements for public reporting of participation and performance for students with disabilities in statewide assessments, including assessments with accommodations and alternate assessments. These laws require public reporting of participation of all students, as well as disaggregated reporting on several key subgroups, including students with disabilities. In addition, NCLB requires states to report on student groups in relation to their criteria for Adequate Yearly Progress (AYP) used for accountability purposes, through state annual report cards. Similarly, IDEA 2004 requires states to establish performance goals for students with disabilities and to report annually on progress toward meeting these performance goals through annual performance reports (Thurlow, Altman, & Vang, 2009).

Information about assessment participation and performance is important to students, teachers, parents, and policymakers because it shows stakeholders how well students with disabilities are performing and can identify areas where improvements need to be made (Altman et al., in press; Fast & ASR SCASS, 2002). State assessment data must be reported to the public and generally are disaggregated by content area and grade level for students with disabilities to ensure that their participation and performance are transparent (Thurlow et al., 2008). Often reported on state Web sites or sent to stakeholders as a school or district “report card,” state-reported data have been analyzed annually by the National Center on Educational Outcomes (NCEO) going back to the 2000–2001 school year (Thurlow, Langenfeld, Nelson, Shin, & Coleman, 1998; Thurlow, Nelson, Teelucksingh, & Ysseldyke, 2000; Thurlow, Wiley, & Bielinski, 2003; Ysseldyke, Thurlow, Langenfeld, Nelson, Teelucksingh, & Seyfarth, 1998).

State assessment data for students with disabilities also must be reported each year to the Secretary of Education, or a designated agency within the U.S. Department of Education, as part of an Annual Performance Report (APR). States submit APRs each year for 14 indicators for infants and toddlers from birth through age 2 (Part C), and 20 indicators for students ages 3 through 21 (Part B). Some examples of the Part B indicators include graduation rates, dropout rates, disproportionality, and secondary transition, as well as statewide assessment data for students with disabilities. As part of the information provided for each indicator, states are required to document progress toward targets set forth in State Performance Plans (SPPs) in 2006, and to list and describe activities undertaken in the state to improve progress toward the targets. SPP targets were set independently by each state, through the involvement of stakeholder groups. In many states, targets for students with disabilities match the NCLB targets set for all students.

States must follow several guidelines for the APR Part B Indicator 3 data that they submit. These guidelines include reporting: (a) the percentage of districts (of those meeting minimum cell size requirements for NCLB) that meet the state’s AYP goals for students with disabilities; (b) participation rates of students in various assessment options (regular assessment without accommodations, regular assessment with accommodations, alternate assessment based on grade-level achievement standards and alternate assessments based on alternate achievement standards); and (c) performance of students with disabilities on these assessments. Data on the “alternate assessment based on modified academic achievement standards” option were not requested from the states until February 2009, the 2007–2008 data reporting year. For each of these, states are to compare their data to targets declared in the SPP (or modified with explanation in a subsequent year), and progress or slippage in progress toward targets must be explained. States also report on improvement activities in their APRs.

The purpose of the current report is to highlight 2006–2007 Indicator 3 data, progress toward targets, and improvement activities identified by states. This report summarizes data and progress toward targets for Indicator 3 components: percent of districts meeting AYP, state assessment participation, and state assessment performance, as well as analysis of state-reported improvement activities. This report expands on information that was summarized and submitted to the Office of Special Education Programs (see the 2008 NCEO APR analysis located at http://www.nceo.info/indicator3/default.html).

This report provides an overview of our methodology, followed by findings for each component of Part B Indicator 3: AYP, Participation, and Performance. For each component we include findings, challenges in analyzing the data, and examples of well-presented data. In addition, we address Part B assessment targets and improvement activities as well as their interaction with AYP and student performance. Finally, we conclude by summarizing current assessment trends and identifying overarching characteristics of the collected data.


Methods

State Data

APRs used for this report were obtained from the Regional Resource and Federal Centers (RRFC) Web site in March, April, and May 2008. In addition to submitting information in their APRs for Part B Indicator 3 (Assessment), states were requested to attach Table 6 from their 618 (Child Count) submission. Table 6 requires detailed entries for participation and performance by grade for reading and mathematics. Although AYP data are not included in Table 6, participation and performance data requested in the APR for Part B Indicator 3 should be reflected in each state’s Table 6. For the analyses in this report, we used only the information that states reported for assessments in their APRs.

The data in Part B Indicator 3 that are summarized here comprise three components:

  • 3A is the percentage of districts that meet the state’s Adequate Yearly Progress (AYP) objectives for the disability subgroup (based on those with a disability subgroup meeting the state’s minimum “n” size)
  • 3B is the participation numbers and rates for children with Individualized Education Programs (IEPs) who participate in the various assessment options (Participation)
  • 3C is the performance of children with IEPs, in terms of the numbers and rates of students proficient and above, for the general assessment, and for alternate assessments based on grade-level or alternate achievement standards (Proficiency)

3B (Participation) and 3C (Performance) each have five subcomponents:

  • Number of students with Individualized Education Programs (IEPs)
  • Number of students in a regular assessment with no accommodations
  • Number of students in a regular assessment with accommodations
  • Number of students in an alternate assessment based on grade-level achievement standards
  • Number of students in an alternate assessment based on alternate achievement standards

States also provided information on their use of technical assistance centers and instructional or curricular models or programs. Information about improvement activities included in APRs comprised those activities undertaken during 2006–2007 as well as projected activities for upcoming years. The current analyses centered on 2006–2007 activities only.

Analysis Procedures

State AYP, participation, and performance data were entered into a Microsoft Excel spreadsheet and verified. For this report data for each component are reported overall, by whether the target was met, for regular and unique states, and by Regional Resource Center (RRC) for regular states. Due to the emerging nature of assessments in most unique states, and the fact that most unique states are in Region 6, the unique states were not included in the regional analyses.

We provide data summaries in terms of averages across states. It is important to remember that these averages are sums of the percentages reported by states divided by the number of those states included in the analysis. In this approach, the data provided by each state are given equal weight regardless of state population. This is different from reporting weighted averages by summing raw numbers across states to produce averages. In that approach, states with larger populations, such as California and Texas, have a larger impact on the averages than states with smaller populations, such as Alaska and Wyoming.
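The distinction between the two averaging approaches can be illustrated with a short sketch. The state names, rates, and enrollment counts below are hypothetical, invented only to show the arithmetic; they are not data from this report:

```python
# Hypothetical participation rates and student counts for three states.
states = {
    "State A": {"rate": 95.0, "n_students": 500_000},  # large state
    "State B": {"rate": 85.0, "n_students": 50_000},
    "State C": {"rate": 90.0, "n_students": 5_000},    # small state
}

# Unweighted average (the approach used in this report): each state's
# reported percentage counts equally, regardless of population.
unweighted = sum(s["rate"] for s in states.values()) / len(states)

# Weighted average: sum the raw numbers across states, so states with
# larger populations have a larger impact on the result.
assessed = sum(s["rate"] / 100 * s["n_students"] for s in states.values())
enrolled = sum(s["n_students"] for s in states.values())
weighted = assessed / enrolled * 100

print(f"Unweighted: {unweighted:.1f}%")  # Unweighted: 90.0%
print(f"Weighted:   {weighted:.1f}%")    # Weighted:   94.1%
```

In this made-up example the large state pulls the weighted average well above the unweighted one, which is why the two approaches can tell different stories about the same data.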

The analysis of 2006–2007 improvement activities used the OSEP coding scheme consisting of letters A–J, with J being “other” activities. Coders used 12 subcategories under J to capture specific information about the other types of state activities. The list of states was randomized and each of two coders independently coded five states to determine inter-rater agreement. The coders discussed their differences in coding and came to agreement on criteria for each category. An additional five states were coded independently by each rater and compared. After reaching greater than 80% inter-rater agreement, the two coders independently coded the remaining states and then met to compare codes and reach agreement on final codes for each improvement activity in each state. Many improvement activities were coded in more than one category. Coders were able to reach agreement in every case.
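As an illustration, simple percent agreement between two coders can be computed as in the sketch below. The codes shown are hypothetical, and the report does not specify the exact agreement formula used, so this is only one plausible reading of the 80% criterion:

```python
def percent_agreement(codes_a, codes_b):
    """Share of items (as a percentage) on which two coders assigned the same code."""
    assert len(codes_a) == len(codes_b)
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a) * 100

# Hypothetical codes (categories A-J) assigned to ten improvement activities.
coder_1 = ["A", "B", "J", "C", "C", "F", "A", "J", "D", "B"]
coder_2 = ["A", "B", "J", "C", "E", "F", "A", "J", "D", "H"]

print(percent_agreement(coder_1, coder_2))  # 80.0
```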

Frequency of use of technical assistance from the Regional Resource and Federal Centers (RRFC) network and national technical assistance centers was computed for both regular and unique states. Coders identified instructional or curricular models or programs from the improvement activities listed by the states. In some instances, coders inferred the use of a model or program based on the terminology used by states to describe their improvement activities.


Results

APR data are summarized here for 2006–2007 school year information in the following order: AYP results (Indicator 3 Section A), participation of students with disabilities in state assessments (Indicator 3 Section B), performance of students with disabilities in state assessments (Indicator 3 Section C). We also summarize state assessment targets and improvement activities, as well as use of technical assistance centers and instructional or curricular models or programs identified by states. In addition, we examine the relationship between reported use of improvement activities and states’ AYP and student performance results.

Adequate Yearly Progress (Indicator 3, Section A)

Component 3A (AYP) was defined for states as:

Percent = [(number of districts meeting the State’s AYP objectives for progress for the disability subgroup (i.e., children with IEPs)) divided by (total number of districts that have a disability subgroup that meets the State’s minimum “n” size in the State)] times 100.
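Expressed as code, the calculation reads as follows (a minimal sketch with hypothetical district counts):

```python
def ayp_percent(districts_meeting_ayp, districts_meeting_min_n):
    """Component 3A: percent of districts (of those meeting the state's
    minimum "n" size for the disability subgroup) that met the state's
    AYP objectives for that subgroup."""
    return districts_meeting_ayp / districts_meeting_min_n * 100

# Hypothetical state: 120 districts meet the minimum "n" size for the
# disability subgroup, and 54 of those districts make AYP.
print(round(ayp_percent(54, 120), 1))  # 45.0
```

Note that the denominator is the number of districts meeting the minimum "n" size, not the total number of districts in the state.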

Figure 1 shows the ways in which regular states provided AYP data in their APRs. Forty-eight regular states had data available; of the two states not providing AYP data, one state is a single district and thus is not required to provide data for this component (i.e., AYP does not apply) and another state did not report AYP data because it used a new test and had obtained permission not to compare results from the new test to previous results. However, only 33 states reported AYP data in their APRs in such a way that the data could be combined with data from other states—that is, they provided overall AYP data. Fourteen states provided only data broken down by content area, and one state provided data that could not be used because it was not reported correctly.

Figure 1. Ways in which Regular States Provided AYP Data for 2006–2007


AYP determinations were not provided for the unique states. It was unclear how many of the unique states were required to set and meet the AYP objectives of NCLB, either because they are single districts or because they are not subject to all of the requirements of NCLB.

Table 1 shows information about states’ 2006–2007 AYP data in relation to baseline and target data reported in their SPPs, or as subsequently revised. Six of the 33 regular states that had usable 2006–2007 AYP data lacked either baseline (n = 3) or target data (n = 3). Thus, Table 1 shows data only for the 27 states with complete AYP data. No unique states had complete data for reporting in Table 1.

As shown on the first row of Table 1, the 27 states with sufficient data had an average baseline of 43.7% of eligible districts (those meeting minimum n) making AYP; their average target for 2006–2007 was 51.5%. Actual AYP data for 2006–2007 showed an average of 54.4% of districts in these 27 states making AYP. Thus, across those states for which data were available, the average percentage of districts making AYP was slightly higher than the average target. This is a change from past years when the average percentage was slightly lower than the target. Twelve of the 27 states met their AYP targets. Fifteen states did not meet their targets for the AYP indicator for the 2006–2007 school year. This reduction in the number of states not meeting AYP targets is likely the result of a number of states modifying their targets from initial SPPs. In 2006–2007, nine regular states and one unique state modified their targets. In 2005–2006, 14 regular states and one unique state had modified their targets.

Table 1. Average Percentage of Districts Making AYP in 2006–2007 for States that Provided Baseline, Target, and Actual Data

                           N     Baseline    Target      Actual Data
                                 (Mean %)    (Mean %)    (Mean %)

DATA AVAILABLE
  Regular States          27     43.7%       51.5%       54.4%
  Unique States            0     ---         ---         ---

TARGET (Regular States)
  Met                     12     43.8%       46.6%       67.7%
  Not Met                 15     43.6%       55.4%       43.8%

TARGET (Unique States)
  Met                      0     ---         ---         ---
  Not Met                  0     ---         ---         ---

The comparison of data for states that met their targets and those that did not meet their targets revealed a finding that was first documented for the 2005–2006 school year: the 12 states that met their AYP targets showed an average target just slightly above their average baseline (46.6% vs. 43.8%). Their actual 2006–2007 data showed an average of 67.7% of districts making AYP, which was well above the baseline and target percentages. In contrast, the 15 states that did not meet their targets had an average baseline of 43.6%, target of 55.4%, and actual data of 43.8%. For 2005–2006, the difference in targets between the two groups of states was at least 5%; for the 2006–2007 year it was 12%. In general, states that did not meet their targets for districts meeting AYP had a lower baseline, on average, but set a higher average target. Further examination of these data is warranted.

Regional Findings. Table 2 shows AYP data by RRC region for regular states. Variation in baseline data is evident, as is variation in whether regions, on average, met their targets. Overall, in four of the six regions, average actual data equaled or exceeded targets set for 2006–2007.

Table 2. By Region: Percentage of Districts Making AYP within Regular States that Provided Data across Baseline, Target, and Actual Data

RRC Region     N     Baseline    Target      Actual Data
                     (Mean %)    (Mean %)    (Mean %)

Region 1       4     27.8%       58.3%       60.0%
Region 2       5     33.8%       43.6%       34.8%
Region 3       3     55.7%       60.4%       82.1%
Region 4       5     64.6%       64.8%       68.6%
Region 5       5     42.6%       45.1%       52.1%
Region 6       5     39.4%       41.8%       41.2%

Note: No unique states were included in Region 2 and Region 6.

Challenges in Analyzing AYP Data. Many states did not provide AYP data in the manner recommended by the U.S. Department of Education. The major challenge for 2006–2007, which was also evident in past years, was that some states provided only disaggregated data (e.g., by content area or grade) rather than overall AYP data. For a district to meet AYP, it must meet AYP for all grade levels and content areas; because the determination depends on all of these together, an overall number for the district cannot be derived from numbers provided by grade or content area. Fourteen states provided data by grade or content area rather than overall. This suggests that state confusion about which data to report for AYP remains a major challenge to be addressed by technical assistance.

In contrast, states generally used the minimum “n” instruction in the correct manner for 2006–2007 data. Few states calculated an overall AYP using the incorrect denominator. Also, no states provided only the percent of districts for which AYP was not met. Generally states provided the AYP data in a table rather than embedding the data in text; using a table improves the clarity of the data.

Example of Well-presented AYP Data. Examples of well-presented AYP data are displayed in a table or list in a way that clarifies (a) the number of districts in the state overall, (b) the number of districts meeting the state designated minimum “n” for the disability subgroup, and (c) the number of those districts meeting the minimum “n” that met the state’s AYP objectives. States that provided reading and mathematics AYP information, or AYP information by grade, could be included in the analyses only if they provided the overall data requested by the data template.

A number of states provided very effective presentations of AYP data that had all the desired information. Table 3 is a template of an AYP table similar to what these states presented. Important characteristics reflected in the table are:

  • Number of districts overall
  • Number of districts meeting the minimum “n” designated by the state
  • Number of districts meeting AYP

The clear presentation of AYP data in Table 3 indicates whether actual data met the target for the year in question. It is important to note that if the table or text does not include overall AYP data (i.e., districts meeting AYP on both reading/English language arts and mathematics), it is not possible to calculate this critical information. Separate content area information cannot be added together or averaged to obtain an overall AYP number.

Table 3. Example of Potential AYP Table Listing All Important Elements

Total Number      Number of         Number of         Number of          Number of
of Districts      Districts/LEAs    Districts/LEAs    Districts/LEAs     Districts/LEAs
or LEAs in        Meeting "n"       Making AYP        Making AYP         Making AYP
State             Size              for Reading1      for Mathematics1   Overall

*                 *                 *                 *                  Not a sum

1It is not necessary for AYP purposes to provide information by academic content area. However, states may find this information useful.

 

Participation of Students with Disabilities in State Assessments
(Indicator 3, Section B)

The participation rate for students with IEPs includes children who participated in the regular assessment with no accommodations, in the regular assessment with accommodations, in the alternate assessment based on grade-level achievement standards, and in the alternate assessment based on alternate achievement standards. Component 3B (participation rates) was defined for states by first identifying the numbers required for calculations and then the calculations to be made:

a. # of children with IEPs in assessed grades;

b. # of children with IEPs in regular assessment with no accommodations (percent = [(b) divided by (a)] times 100);

c. # of children with IEPs in regular assessment with accommodations (percent = [(c) divided by (a)] times 100);

d. # of children with IEPs in alternate assessment against grade level achievement standards (percent = [(d) divided by (a)] times 100); and

e. # of children with IEPs in alternate assessment against alternate achievement standards (percent = [(e) divided by (a)] times 100).

In addition to providing the above numbers, states were asked to:

  • Account for any children included in ‘a,’ but not included in ‘b,’ ‘c,’ ‘d,’ or ‘e’
  • Provide an Overall Percent: (‘b’ + ‘c’ + ‘d’ + ‘e’) divided by ‘a’
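The participation calculations above can be sketched as follows. The counts are hypothetical and the variable names are illustrative only:

```python
# Hypothetical counts for subcomponents a-e as defined above.
a = 10_000  # children with IEPs enrolled in the assessed grades

participants = {
    "b_regular_no_accommodations": 5_500,
    "c_regular_with_accommodations": 3_200,
    "d_alternate_grade_level_standards": 300,
    "e_alternate_alternate_standards": 800,
}

# Percent for each assessment option: (component / a) * 100.
for option, count in participants.items():
    print(f"{option}: {count / a * 100:.1f}%")

# Overall percent: (b + c + d + e) / a * 100.
overall = sum(participants.values()) / a * 100
print(f"Overall participation: {overall:.1f}%")  # Overall participation: 98.0%

# Children included in 'a' but not in 'b'-'e' must be accounted for
# (e.g., absences, invalid assessments).
not_accounted = a - sum(participants.values())
print(f"Students to account for: {not_accounted}")  # Students to account for: 200
```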

Forty-nine regular states reported 2006–2007 assessment participation data in some way. Forty-four of these states either provided appropriate data by content area or provided adequate raw data to allow for content area calculations; this number is up from 43 a year ago. Five states provided data broken down by content area and grade level but did not provide raw numbers. One state did not provide participation data of any kind, down from three in 2005–2006. Nine of the 10 unique states reported 2006–2007 assessment participation data.

Table 4 shows the participation data for mathematics and reading, summarized for all states and for those states that met and did not meet their participation targets. A total of 42 regular states and eight unique states provided adequate baseline, target, and actual participation data (shown in the table as "actual data") for 2006–2007. These states provided appropriate overall data that were not broken down by grade for mathematics and reading, or raw numbers that allowed NCEO to derive an overall number for actual data. For participation, but not for performance, it was acceptable to provide one target participation rate for both the mathematics and reading content areas; a number of states took this approach. For both mathematics and reading, the average target for participation across all states was the same (96.3%), and average baseline data for all states were similar (96.6% for mathematics, 97.1% for reading). Actual data reported by these states were 97.8% for mathematics and 97.7% for reading, both of which were slightly above baseline. It should be noted that, on average, states established targets that were below baseline values.

Table 4. Average Participation Percentages in 2006–2007 for States that Provided Baseline, Target, and Actual Data

                                 Mathematics                       Reading
                           N     Baseline   Target    Actual      Baseline   Target    Actual
                                 (Mean %)   (Mean %)  (Mean %)    (Mean %)   (Mean %)  (Mean %)

DATA AVAILABLE
  Regular States          42     96.6%      96.3%     97.8%       97.1%      96.3%     97.7%
  Unique States            8     85.5%      90.5%     85.2%       85.4%      90.3%     83.9%

TARGET (Regular States)
  Met                     30     96.7%      95.7%     98.3%       96.9%      95.7%     98.3%
  Not Met                 12     96.5%      98.2%     96.2%       97.8%      98.0%     96.1%

TARGET (Unique States)
  Met                      2     88.5%      93.5%     101.7%      89.0%      93.5%     101.5%
  Not Met                  6     84.5%      89.5%     79.7%       84.2%      89.3%     78.1%

The eight unique states that provided all necessary data points saw slippage from an average baseline of 85.5% for mathematics and 85.4% for reading to a 2006–2007 average rate of 85.2% for mathematics and 83.9% for reading. Both rates fell below the average target participation rate of 90.5% for mathematics and 90.3% for reading.

An analysis of state data by target status, whether met or not met, was completed. States that met their targets in BOTH content areas were classified as "met." States that did not meet their target in either content area, and states that met their target in one content area but not the other, were classified as "not met." Thirty regular states and two unique states met their participation targets in both mathematics and reading in 2006–2007; 12 regular states and six unique states did not meet their targets for participation. The remaining states did not provide appropriate baseline data, did not provide target data, or did not provide actual data. These states were not classified for either the participation or performance subcomponents.

Across regular states that met their targets in both content areas, an average of 98.3% of students participated in mathematics and reading assessments. In states that did not meet their targets, 96.2% of students with disabilities participated in both content area tests. States that did not meet their targets had higher targets (98.2% for mathematics and 98.0% reading), on average, than states that met their targets (95.7% for both). This is the second consecutive year that this finding of different targets was identified. For both content areas, states that met their targets had a lower average value for baseline data.

Eight unique states provided adequate participation information for determination of whether they met targets. An average of 101.6% of students with disabilities participated in the state mathematics and reading assessments for the two unique states that met their targets in participation. A participation rate of more than 100% is possible if the denominator count was not performed on the day of testing, and there was a decrease in the number of students with IEPs by the time testing occurred, or if students were counted more than once for sitting for more than one assessment. In the six unique states that did not meet their targets, 79.7% of students with disabilities participated in the mathematics assessment and 78.1% in reading. The targets set by the two unique states that met their targets in 2006–2007 were more challenging than those for states that did not meet their targets for the year.

Regional Findings. Data presented by RRC region for regular states are shown in Table 5. For both mathematics and reading, the average 2006–2007 participation rates varied little, ranging from 96.1% to 99.5%. Regions 3 and 6 showed participation rates in the 96% range, slightly trailing averages seen in the other regions. Region 3 was the only region to show average actual data that were lower than the average target for the region; this was true for both mathematics and reading. For one of the six regions, the average 2006–2007 targets for the states within the region surpassed the average baseline data for those states.

Table 5. By Region: Average Participation Percentages in 2006–2007 for Regular States that Provided Baseline, Target, and Actual Data

                       Mathematics                       Reading
RRC Region       N     Baseline   Target    Actual      Baseline   Target    Actual
                       (Mean %)   (Mean %)  (Mean %)    (Mean %)   (Mean %)  (Mean %)

Region 1         5     92.4%      97.2%     98.0%       95.4%      97.2%     98.0%
Region 2         6     96.8%      95.8%     99.5%       97.0%      95.8%     99.3%
Region 3         7     97.7%      97.3%     96.1%       97.7%      97.3%     96.1%
Region 4         7     97.3%      95.5%     98.0%       97.1%      95.5%     97.7%
Region 5        10     96.5%      96.5%     98.5%       97.1%      96.4%     98.6%
Region 6         7     98.0%      95.8%     96.8%       97.9%      95.9%     96.5%

Note: No unique states were included in Region 2 and Region 6.

Challenges in Analyzing Participation Data. The data submitted by states for the Participation component were improved over those submitted for 2004–2005, and moderately improved over the data included in APR 2005–2006 submissions. It appears that states used the correct denominator in calculating participation rates—that is, number of children with IEPs who are enrolled in the assessed grades—and did not report participation rates of exactly 100% without information about invalid assessments, absences, and other reasons why students might not be assessed.

One continuing challenge that was noted in the 2005–2006 reports is the failure of some states to provide targets by content area. States should report targets by content area so that readers are not left to make assumptions about state intentions. Another challenge is to ensure that states report raw numbers as well as percentages derived from calculations. Only in this way are the numbers clear and understandable to others who read the report. Providing information this way also allows others to average across grades or content areas, if desired, by going back to the raw numbers.

Example of Well-presented Participation Data. Participation data that were presented in tables, with raw numbers, and that accounted for students who did not participate were considered exemplary. In this format and with this information it was easy to determine that the data had been cross-checked so that rows and columns added up appropriately, and it was easy to determine what the numerator and denominator were in various calculations.

Table 6 is an adaptation of a state table showing the desired information. Numbers are presented for one content area (reading, in this example) for each of the subcomponents (A–E) in each of the grade levels 3–8 and high school, with overall totals near the bottom and on the right. The table also presents, in a clear and usable manner, information about students who were not tested on the state assessment and the reasons for their non-participation.

Table 6. Example Presentation of Participation Data^a

Statewide Assessment, 2007–2008: Reading Assessment Participation

Columns (state data removed): Grade 3, Grade 4, Grade 5, Grade 6, Grade 7, Grade 8, Grade HS, Total No., Total %

Rows:
A. Children with IEPs
B. IEPs in regular assessment with no accommodations
C. IEPs in regular assessment with accommodations
D. IEPs in alternate assessment against grade-level standards (AA-GLAAS)
E. IEPs in alternate assessment against alternate standards (AA-AAAS)
Overall (b + c + d + e) Baseline
Overall Percent
Students included in IEP count but not included in assessments above:
  • Students whose assessment results were invalid
  • Students who took an out-of-level test
  • Parental exemptions
  • Absent
  • Did not take for other reasons

^a This table represents a combination of best practices seen within states; state data have been removed from this table.

Note: Shaded portions of this table are superfluous and would include unnecessary data, or data already shown elsewhere.

 

Performance of Students with Disabilities on State Assessments
(Indicator 3, Section C)

The performance of children with IEPs is based on the percentages of students achieving proficiency on the regular assessment with no accommodations, the regular assessment with accommodations, the alternate assessment based on grade-level achievement standards, and the alternate assessment based on alternate achievement standards. Component 3C (proficiency rates) was defined for states by first identifying the numbers required for calculations, and then the calculations to be made:

a. number of children with IEPs enrolled in assessed grades;

b. number of children with IEPs in assessed grades who are proficient or above as measured by the regular assessment with no accommodations (percent = [(b) divided by (a)] times 100);

c. number of children with IEPs in assessed grades who are proficient or above as measured by the regular assessment with accommodations (percent = [(c) divided by (a)] times 100);

d. number of children with IEPs in assessed grades who are proficient or above as measured by the alternate assessment against grade level achievement standards (percent = [(d) divided by (a)] times 100); and

e. number of children with IEPs in assessed grades who are proficient or above as measured against alternate achievement standards (percent = [(e) divided by (a)] times 100).

In addition to providing the above numbers, states were asked to:

  • Provide an Overall Percent = (‘b’ + ‘c’ + ‘d’ + ‘e’) divided by ‘a’
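
These calculations can be sketched as follows, using hypothetical counts (the variable names mirror the subcomponent letters 'a'–'e' above, but the numbers are invented for illustration):

```python
# Hypothetical subcomponent counts per the Indicator 3C instructions.
a = 5000  # children with IEPs enrolled in assessed grades (the denominator)
b = 900   # proficient on regular assessment, no accommodations
c = 1100  # proficient on regular assessment, with accommodations
d = 150   # proficient on alternate assessment, grade-level standards
e = 350   # proficient on alternate assessment, alternate standards

def pct(count: int, enrolled: int) -> float:
    """Percent = (count / enrolled) * 100, as specified for each subcomponent."""
    return 100.0 * count / enrolled

rates = {name: pct(n, a) for name, n in {"b": b, "c": c, "d": d, "e": e}.items()}
overall_percent = pct(b + c + d + e, a)  # Overall Percent = (b + c + d + e) / a * 100
```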

Forty-eight regular states reported 2006–2007 assessment proficiency data in some way. Two regular states did not provide performance data. Four of the states that reported data provided only overall performance percentages for the two content areas and did not provide raw numbers of any kind. Seven of the 10 unique states also reported performance data.

Table 7 shows proficiency data for mathematics and reading for the 33 regular states and four unique states that provided usable baseline, target, and actual 2006–2007 proficiency data. Data are disaggregated also for those states that met and those states that did not meet their performance targets. Average targets for mathematics and reading for the 33 regular states were 42.8% and 46.3% respectively. These targets were more than five percentage points higher for both mathematics and reading in 2006–2007 than they were one year earlier. The actual data that states reported were, on average, 38.8% for mathematics and 40.7% for reading.

Table 7. Average Proficiency Percentages for States that Provided Baseline, Target, and Actual Data

|                | N  | Math Baseline (Mean %) | Math Target (Mean %) | Math Actual (Mean %) | Reading Baseline (Mean %) | Reading Target (Mean %) | Reading Actual (Mean %) |
|----------------|----|------------------------|----------------------|----------------------|---------------------------|-------------------------|-------------------------|

DATA AVAILABLE
| Regular States | 33 | 34.7% | 42.8% | 38.8% | 36.5% | 46.3% | 40.7% |
| Unique States  | 4  | 13.3% | 28.3% | 5.5%  | 13.3% | 28.3% | 8.5%  |

TARGET (Regular States)
| Met            | 8  | 32.8% | 33.9% | 42.3% | 35.1% | 36.9% | 41.8% |
| Not Met        | 25 | 35.3% | 45.6% | 37.7% | 36.9% | 49.3% | 40.3% |

TARGET (Unique States)
| Met            | 0  | ---   | ---   | ---   | ---   | ---   | ---   |
| Not Met        | 4  | 13.3% | 28.3% | 5.5%  | 13.3% | 28.3% | 8.5%  |

Average targets were 28.3% for mathematics and 28.3% for reading across the four unique states that provided analyzable baseline, target, and actual data. The reported proficiency rates for these four unique states averaged 5.5% for mathematics and 8.5% for reading. None of these four states met its performance targets.

An analysis of state data by target status, whether met or not met, was also completed for the 33 regular states. States that met their targets for both content areas were classified as “met”; states that missed their targets in one or both content areas were classified as “not met.” Eight regular states met their 2006–2007 proficiency targets in both mathematics and reading; 25 regular states and 4 unique states did not meet their targets in one or both content areas. The remaining states either did not provide appropriate baseline data or did not provide actual data.

Across the eight regular states that met their targets in both content areas, an average of 42.3% of students scored proficient on mathematics assessments and 41.8% of students scored proficient on reading assessments. In states that did not meet their targets, 37.7% of students were proficient in mathematics, and 40.3% were proficient in reading. States meeting and states not meeting their targets appeared to be showing similar percentage increases in proficiency. Regular states that did not meet their targets had higher targets (45.6% for mathematics and 49.3% for reading), on average, than those that met their targets (33.9% for mathematics and 36.9% for reading). It appears that states with lower target values more frequently met their targets.

Regional Findings. Table 8 shows data summarized for regular states by RRC region for mathematics and reading. As is evident in this table, considerable variability existed in the average baselines and in the targets that were set for both content areas. The range in baseline and target data was about 20 percentage points across regions, and the actual data differed by 13 percentage points. The widely differing baseline data are likely responsible, at least in part, for the continued variance in targets. As for actual performance, the regional average met the 2006–2007 performance target in two of the six regions for mathematics and in none of the six regions for reading. This can occur even when a majority of the states in a region met their targets, because the values in the table are unweighted averages that give equal weight to each state. For all six regions, the average 2006–2007 targets for the states within the region surpassed the average baseline data for those states.
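
The effect of equal-weight averaging can be illustrated with hypothetical numbers: a region can miss its average target even though most of its states met theirs.

```python
# (target %, actual %) for four hypothetical states in one region;
# each state contributes equally to the regional mean, regardless of size.
states = [
    (40.0, 42.0),  # met its target
    (41.0, 43.0),  # met
    (39.0, 40.0),  # met
    (45.0, 25.0),  # missed by a wide margin
]

mean_target = sum(t for t, _ in states) / len(states)  # 41.25
mean_actual = sum(x for _, x in states) / len(states)  # 37.5

states_meeting = sum(1 for t, x in states if x >= t)   # 3 of 4 states met
region_met_on_average = mean_actual >= mean_target     # False: the mean misses
```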

Table 8. By Region: Average Proficiency Percentages in 2006–2007 for Regular States That Provided Baseline, Target, and Actual Data

| RRC Region | N | Math Baseline (Mean %) | Math Target (Mean %) | Math Actual (Mean %) | Reading Baseline (Mean %) | Reading Target (Mean %) | Reading Actual (Mean %) |
|------------|---|------------------------|----------------------|----------------------|---------------------------|-------------------------|-------------------------|
| Region 1   | 4 | 29.3%                  | 55.3%                | 36.3%                | 26.5%                     | 55.3%                   | 36.5%                   |
| Region 2   | 4 | 48.8%                  | 49.8%                | 43.2%                | 54.0%                     | 54.5%                   | 48.6%                   |
| Region 3   | 8 | 36.3%                  | 40.0%                | 41.1%                | 38.1%                     | 43.1%                   | 41.5%                   |
| Region 4   | 6 | 28.8%                  | 39.1%                | 39.2%                | 30.3%                     | 43.9%                   | 36.1%                   |
| Region 5   | 7 | 34.7%                  | 43.0%                | 39.9%                | 37.9%                     | 46.4%                   | 44.6%                   |
| Region 6   | 4 | 31.8%                  | 34.1%                | 30.1%                | 32.3%                     | 39.0%                   | 35.5%                   |

Note: No unique states were included in Region 2 and Region 6.

Challenges in Analyzing Assessment Performance Data. The data submitted by states for the performance component were greatly improved over data submitted for 2004–2005, and moderately improved over data reported in the 2005–2006 APR. Still, not all states used the correct denominator in calculating proficiency rates—that is, number of children with IEPs who are enrolled in the assessed grades. Several states made the mistake of using the number of students assessed as the denominator for proficiency rate calculation. The denominator used in all calculations performed by NCEO for these states was the number of enrolled students with IEPs.
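
The impact of the denominator choice can be sketched with hypothetical counts:

```python
# Hypothetical counts showing how the denominator choice changes the rate.
enrolled_with_ieps = 4000  # correct denominator: enrolled students with IEPs
assessed = 3600            # incorrect denominator: students actually assessed
proficient = 1440

wrong_rate = 100.0 * proficient / assessed              # 40.0% (inflated)
correct_rate = 100.0 * proficient / enrolled_with_ieps  # 36.0%
```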

A factor limiting our analysis was that several states presented only overall performance data for mathematics and reading, without data for the subcomponents ‘a’–‘e’ in the instructions, which covered the different types of assessments.

One challenge that remains for proficiency data, as for participation data, is the failure of some states to report overall targets and actual proficiency rates by content area as well as by grade. Targets cannot be averaged across grades to an overall number—unless the number of enrolled students with IEPs is identical in each grade level—because each grade level has a different denominator. Reporting proficiency rates for mathematics and reading for grades 3–8 and high school is needed to ensure that the numbers are clear and understandable, and it allows numbers to be added and averaged appropriately.
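
Why grade-level rates cannot simply be averaged can be shown with hypothetical numbers: the unweighted mean of grade-level percentages matches the true overall rate only when every grade has the same enrollment.

```python
# (enrolled students with IEPs, proficient students) per grade; hypothetical.
grades = {
    "grade 3": (1000, 500),      # 50% proficient
    "grade 4": (1000, 400),      # 40% proficient
    "high school": (4000, 800),  # 20% proficient
}

# Unweighted mean of the grade-level percentages (the shortcut that fails):
naive_mean = sum(100.0 * p / n for n, p in grades.values()) / len(grades)

# Correct overall rate, recomputed from the raw numbers:
total_enrolled = sum(n for n, _ in grades.values())
total_proficient = sum(p for _, p in grades.values())
overall = 100.0 * total_proficient / total_enrolled
# naive_mean is about 36.7%, while the true overall rate is about 28.3%,
# because high school enrolls four times as many students as each other grade.
```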

Example of Well-presented Proficiency Data. Well-presented proficiency data are those provided in tables, with both raw numbers and percentages, and that account for all students participating in assessments. Table 9 is an adaptation of a performance table showing all of the appropriate raw numbers and percentages for one content area. In this table, raw numbers and percentages for all performance indicators are presented by grade level, with totals on the right. Overall proficiency is clearly indicated in the bottom row.

Table 9. Example Presentation of Performance Data^a

Statewide Assessment, 2007–2008: Reading Assessment Performance

Columns (state data removed): Grade 3, Grade 4, Grade 5, Grade 6, Grade 7, Grade 8, Grade HS, Total No., Total %

Rows:
A. Children with IEPs
B. IEPs in regular assessment with no accommodations
C. IEPs in regular assessment with accommodations
D. IEPs in alternate assessment against grade-level standards (AA-GLAAS)
   IEPs in alternate assessment against modified standards (AA-MAAS)
E. IEPs in alternate assessment against alternate standards (AA-AAAS)
Overall (b + c + d + e) baseline
Overall Percent

^a This table represents a combination of best practices seen within states; state data have been removed from this table.

Note: Shaded portions of this table are superfluous and would include unnecessary data, or data already shown elsewhere.

 

Assessment Targets

States set targets for AYP, Participation, and Performance in their State Performance Plans, which were submitted in 2006. In each APR, states assess their progress toward meeting those targets.

AYP Targets. Information on AYP targets was provided by 45 regular states and two unique states. Table 10 shows the range in the targets set by states and the median average annual increase across these states. Some regular states actually targeted a decrease in the percentage of districts meeting AYP. The median average annual increase reflected in the targets was 2.0% for regular states and 5.5% for unique states.

Table 10. Target for Annual Increase in Percent of Districts Meeting AYP

| Provided Target Information | Range of Average Annual Change: Low | Range of Average Annual Change: High | Median Average Annual Increase |
|-----------------------------|-------------------------------------|--------------------------------------|--------------------------------|
| 45 Regular States           | Annual decrease of 9%               | Annual increase of 13%               | 2.0%                           |
| 2 Unique States             | Annual increase of 2%               | Annual increase of 9%                | 5.5%                           |

Participation Targets. Targets for participation are summarized in Table 11. This information was provided by 46 regular states and nine unique states. Nearly every state was aiming for at least 95 percent participation of students with disabilities by 2010–2011. However, for some states, this was actually a decrease from their current participation levels. Sixteen states indicated that they are aiming for 100 percent participation of students with IEPs in their statewide testing system by 2010–2011, and six unique states had 100 percent as their goal.

Table 11. Targets for Participation Rates

| Provided Target Information | Range of Targets: Low | Range of Targets: High | Median Average Annual Value |
|-----------------------------|-----------------------|------------------------|-----------------------------|
| 46 Regular States           | 90% participation     | 100% participation     | 99%                         |
| 9 Unique States             | 93% participation     | 100% participation     | 100%                        |

Performance Targets. States were asked to provide annual targets for proficiency. These data are provided in Table 12, converted to average annual targets for comparison. A total of 44 regular states and nine unique states provided these data.

Table 12. Targets for Proficiency Rates

| Provided Target Information | Range of Average Annual Change: Low | Range of Average Annual Change: High | Median Average Annual Increase |
|-----------------------------|-------------------------------------|--------------------------------------|--------------------------------|
| 44 Regular States           | Annual increase of 1%               | Annual increase of 10%               | 5%                             |
| 9 Unique States             | Annual increase of 1%               | Annual increase of 11%               | 5%                             |

 

Summary

In general, the data that states provided were not easily combined into a cross-state analysis. For all of the subcomponents of Indicator 3 (AYP, Participation, Performance), the numbers of regular and unique states included in our analysis have remained fairly static from year to year, with a few more states included in participation analyses than in AYP and performance analyses. Just more than 60% of regular states, and nearly 50% of unique states, have presented data that can be analyzed for performance. Just more than half of the regular states have presented data that can be analyzed for AYP (AYP does not apply to all unique states).

A more in-depth look at the data reveals no trends across time indicating that more states are meeting targets (see Table 13). In fact, for performance, eight fewer regular states met their targets for the 2006–2007 school year than the year before (8 versus 16, a 50% decrease). The trends for unique states are more difficult to analyze due to the small numbers in the analyses. Still, no unique state met its target for performance in 2006–2007. This information will be important to keep in mind as one reads further into this analysis of state targets from the SPP 2006, APR 2007, and APR 2008 reports.

Table 13. Target Analysis Overview: Number of States in Analysis

|                | 2005–2006 AYP | 2006–2007 AYP | 2005–2006 Participation | 2006–2007 Participation | 2005–2006 Performance | 2006–2007 Performance |
|----------------|---------------|---------------|-------------------------|-------------------------|-----------------------|-----------------------|

DATA AVAILABLE
| Regular States | 27  | 27  | 40 | 42 | 32 | 33 |
| Unique States  | --- | --- | 8  | 8  | 5  | 4  |

TARGET ANALYSIS (Regular States)
| Met            | 13  | 12  | 30 | 30 | 16 | 8  |
| Not Met        | 14  | 15  | 10 | 12 | 16 | 25 |

TARGET ANALYSIS (Unique States)
| Met            | --- | --- | 2  | 2  | 2  | 0  |
| Not Met        | --- | --- | 6  | 6  | 3  | 4  |

Table 14 shows AYP data. There has been a slight increase in the number of states providing these data over time. In addition, regular states have increased the percentage of districts meeting targets. Note that AYP does not apply to all unique states, which is why no unique state data are shown in Table 14.

Table 14. Percentage of Districts with Minimum “n” Meeting AYP

|                | Baseline | 2005–2006 Target | 2006–2007 Target | 2005–2006 Actual Data | 2006–2007 Actual Data |
|----------------|----------|------------------|------------------|-----------------------|-----------------------|

DATA AVAILABLE
| Regular States | 44%      | 54%              | 52%              | 53%                   | 54%                   |
| Unique States  | ---      | ---              | ---              | ---                   | ---                   |

TARGET ANALYSIS (Regular States)
| Met            | 44%      | 51%              | 47%              | 65%                   | 68%                   |
| Not Met        | 44%      | 56%              | 55%              | 42%                   | 44%                   |

TARGET ANALYSIS (Unique States)
| Met            | ---      | ---              | ---              | ---                   | ---                   |
| Not Met        | ---      | ---              | ---              | ---                   | ---                   |

AYP targets across time reveal an average lowering of targets from 2005–2006 to 2006–2007, especially for those states that met their target. It is also apparent that states meeting their AYP target set lower targets than those that did not: in 2006–2007, these targets averaged 47% versus 55%. However, the states meeting their targets also reported higher actual data than states that did not. In fact, states that did not meet their targets in 2008 reported the same or a lower percentage of districts meeting AYP in 2006–2007 as in the baseline year of 2004–2005. Since the baseline year, the average state that met its AYP target in 2006–2007 saw an increase of 24 percentage points in the percentage of districts meeting AYP.

It is important to keep in mind that 23 states were not included in this analysis in both 2005–2006 and 2006–2007 for a variety of reasons. For example, the state of Hawaii encompasses one district, so district-level AYP does not apply. Many other states provided data only for each content area and not overall data. Because a state must meet AYP in both mathematics and reading to meet AYP overall, NCEO could not compute an overall number from these content area averages. Similar analyses were completed for 2007–2008 target participation and performance data in reading and mathematics. Results of these analyses are available in Appendix A.
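
The reason content-area averages cannot be combined into an overall AYP figure can be illustrated with hypothetical districts: overall AYP depends on the joint outcome within each district, which per-subject percentages do not capture.

```python
# Four hypothetical districts with per-subject AYP outcomes.
districts = [
    {"math": True,  "reading": False},
    {"math": False, "reading": True},
    {"math": True,  "reading": True},
    {"math": True,  "reading": False},
]

n = len(districts)
pct_math = 100.0 * sum(d["math"] for d in districts) / n        # 75.0
pct_reading = 100.0 * sum(d["reading"] for d in districts) / n  # 50.0
# Overall AYP requires meeting BOTH subjects within the same district:
pct_overall = 100.0 * sum(d["math"] and d["reading"] for d in districts) / n  # 25.0
# No arithmetic on (75.0, 50.0) alone recovers the 25.0 overall figure.
```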

 

Improvement Activities

State reporting of improvement activities varies considerably in appropriateness, specificity, and length. In the most informative and useful reports, states make an effort to ensure that the activities reported include all those related to Indicator 3 and no activities unrelated to the indicator. In addition, each activity is associated with a clear timeframe, expressed as school years rather than calendar years. Finally, improvement activities are described in enough detail to allow the reader to understand key features of the activity, the audience for the activity, and the rationale for implementing it.

States identified improvement activities for Part B Indicator 3, revising them if needed from those that were listed in their previous SPPs and APRs. These were analyzed using OSEP-provided codes, along with additional codes within category J (Other). Although states generally listed their improvement activities in the appropriate section of their APRs, sometimes we found them elsewhere. When this was the case, we identified the activities in the other sections and coded them.

A summary of improvement activities is shown in Table 15. The data reflect the number of states that indicated they were undertaking at least one activity that would fall under a specific category. A state may have mentioned several specific activities under the category or merely mentioned one activity that fit into the category. Some activities fit into multiple categories.

Table 15. State Improvement Activities Identified in 2007–2008

| Description (Category Code) | Regular States (N = 50) | Unique States (N = 10) |
|-----------------------------|-------------------------|------------------------|
| Improve data collection and reporting—improve the accuracy of data collection and school district/service agency accountability via technical assistance, public reporting/dissemination, or collaboration across other data reporting systems. Develop or connect data systems. (A) | 17 | 6 |
| Improve systems administration and monitoring—refine/revise monitoring systems, including continuous improvement and focused monitoring. Improve systems administration. (B) | 21 | 4 |
| Provide training/professional development—provide training/professional development to State, local education agency and/or service agency staff, families, and/or other stakeholders. (C) | 42 | 9 |
| Provide technical assistance—provide technical assistance to LEAs and/or service agencies, families, and/or other stakeholders on effective practices and model programs. (D) | 37 | 5 |
| Clarify/examine/develop policies and procedures—clarify, examine, and/or develop policies or procedures related to the indicator. (E) | 19 | 3 |
| Program development—develop/fund new regional/statewide initiatives. (F) | 20 | 2 |
| Collaboration/coordination—collaborate/coordinate with families/agencies/initiatives. (G) | 15 | 6 |
| Evaluation—conduct internal/external evaluation of improvement processes and outcomes. (H) | 10 | 1 |
| Increase/Adjust FTE—add or re-assign staff hours at state level. Assist with the recruitment and retention of local education agency and service agency staff. (I) | 6 | 2 |
| Other; see J1–J12. (J) | 3 | 0 |
| Data analysis for decision making. (J1) | 19 | 1 |
| Scientifically-based or research-based practices. (J2) | 13 | 1 |
| Implementation/development of new/revised test (performance or diagnostic). (J3) | 20 | 5 |
| Pilot project. (J4) | 14 | 3 |
| Grants, state to local. (J5) | 13 | 0 |
| Document, video, or Web-based development/dissemination/framework. (J6) | 32 | 2 |
| Standards development/revision/dissemination. (J7) | 7 | 4 |
| Curriculum/instructional activities development/dissemination (e.g., promulgation of RtI, Reading First, UDL, etc.). (J8) | 31 | 3 |
| Data or best practices sharing, highlighting successful districts, conferences of practitioners, communities of practice, district-to-district linking or mentoring. (J9) | 16 | 1 |
| Participation in national/regional organizations, looking at other states’ approaches, participation in Technical Assistance Center workgroups. (J10) | 6 | 3 |
| State working with low-performing districts. (J11) | 28 | 0 |
| Implement required elements of NCLB accountability. (J12) | 21 | 3 |

Note: text in italics represents an addition to category descriptions for the current year.

The activities reported most often by a majority of regular states were training/professional development (C); technical assistance (D); document, video, or Web-based development/dissemination/framework (J6); curriculum/instructional activities development/dissemination (J8); and state working with low-performing districts (J11).

The activity reported most often by a majority of unique states was implementation/development of new/revised test (J3). This category included either performance-based or diagnostic assessments.

State-reported improvement activities that were coded as curriculum/instructional activities development/dissemination (J8) revealed that many states were identifying specific curricula and instructional approaches in an effort to improve student performance and meet AYP. In several instances, these were explicitly identified as scientifically-based practices. Among the more frequently reported curricula and instructional approaches were: Response to Intervention (RtI), Positive Behavioral Supports (PBS/PBIS), Reading First, Universal Design for Learning (UDL), the Strategic Instruction Model (SIM), Kansas Learning, and various state-developed interventions. Table 16 provides examples of improvement activities for each category.

Table 16. Examples of Improvement Activities

A. Improve data collection and reporting
Example: Implement new data warehousing capabilities so that Department of Special Education staff have the ability to continue publishing LEA profiles to disseminate educational data, increase the quality of educational progress, and help LEAs track changes over time.

B. Improve systems administration and monitoring
Example: The [state] Department of Education has instituted a review process for schools in need of improvement entitled Collaborative Assessment and Planning for Achievement (CAPA). This process has established performance standards for schools related to school leadership, instruction, analysis of state performance results, and use of assessment results to inform instruction for all students in the content standards.

C. Provide training/professional development
Example: Provide training to teachers on differentiating instruction and other strategies relative to standards.

D. Provide technical assistance
Example: Technical assistance at the local level about how to use the scoring rubric [for the alternate test].

E. Clarify/examine/develop policies and procedures
Example: Establish policy and procedures with Department of Education Research and Evaluation staff for the grading of alternate assessment portfolios.

F. Program development
Example: The [state] Department of Education has identified mathematics as an area of concern and has addressed that by implementing a program entitled “[State] Counts” to assist districts in improving mathematics proficiency rates. “Counts” is a three-year elementary mathematics initiative focused on implementing research-based instructional practices to improve student learning in mathematics.

G. Collaboration/coordination
Example: A cross-department team led by the Division of School Standards, Accountability, and Assistance from the [state] Department of Education, in collaboration with stakeholders (e.g., institutions of higher education, families), will plan for coherent dissemination, implementation, and sustainability of Response to Intervention.

H. Evaluation
Example: Seventeen [LEAs] that were monitored during the 2006–2007 school year were selected to complete root cause analyses in the area of reading achievement in an effort to determine what steps need to be taken to improve the performance of students with disabilities within their agency.

I. Increase/Adjust FTE
Example: Two teachers on assignment were funded by the Divisions. These teachers provided professional learning opportunities for district educators on a regional basis to assist them in aligning activities and instruction that students receive with the grade-level standards outlined in the state performance standards.

J1. Data analysis for decision making (at the state level)
Example: State analyzed aggregated (overall state SPED student) data on student participation and performance results in order to determine program improvement strategies focused on improving student learning outcomes.

J2. Data provision/verification, state to local
Example: The Department of Education maintains a Web site with updated state assessment information. The information is updated at least annually so the public as well as administrators and teachers have access to current accountability results.

J3. Implementation/development of new/revised test (performance or diagnostic)
Example: State Department of Education developed a new alternate assessment this year.

J4. Pilot project
Example: Training for three pilot districts that implemented a multi-tiered system of support was completed during FFY 2006. Information regarding the training was expanded at the secondary education level. Project SPOT conducted two meetings for initial secondary pilot schools with school district teams from six districts. Participants discussed the initial development of improvement plans.

J5. Grants, state to local
Example: Forty-seven [state program] incentive grants were awarded, representing 93 school districts and 271 elementary, middle, and high schools. Grants were awarded to schools with priorities in reading and mathematics achievement, social emotional and behavior factors, graduation gap, and disproportionate identification of minority students as students with disabilities.

J6. Document, video, or Web-based development/dissemination/framework
Example: The Web-based Literacy Intervention Modules to address the five essential elements of literacy developed for special education teachers statewide were completed.

J7. Standards development/revision/dissemination
Example: Align current grade-level standards with the alternate assessment portfolio process.

J8. Curriculum/instructional activities development/dissemination
Example: Provide information, resources, and support for the Response to Intervention model and implementation.

J9. Data or best practices sharing, highlighting successful districts, conferences of practitioners, communities of practice, district-to-district mentoring
Example: Content area learning communities were developed in SY 06–07 as a means to provide updates on [state/district] initiatives and school initiatives/workplans in relation to curriculum, instruction, assessment, and other topics.

J10. Participation in national/regional organizations, looking at other states’ approaches, participating in TA Center workgroups (e.g., unique state PB)
Example: The GSEG PAC6 regional institute provided technical support to all the jurisdictions in standard setting, rubric development, and scoring the alternate assessment based on alternate achievement standards. During the one-week intensive institute, [state] was able to score student portfolios gathered for the 2006–2007 pilot implementation, as reported in this year’s assessment data.

J11. State working with low-performing districts
Example: The Department of Education has developed and implemented the state Accountability and Learning Initiative to accelerate the learning of all students, with special emphasis placed on districts with Title I schools that have been identified as “in need of improvement.”

J12. Implement required elements of NCLB accountability
Example: Many strategies are continually being developed to promote inclusion and access to the general education curriculum.

Challenges in Analyzing Improvement Activities. Many states’ descriptions of improvement activities were vague. Summarizing them required a “best guess” about what the activity actually entailed. Sometimes descriptions of activities were too vague to categorize. In addition, in some cases it was difficult to determine whether an activity actually occurred in 2006–2007 or was in a planning phase for the future.

Many activities fell into two or more categories. These were coded and counted more than once. For example, a statewide program to provide professional development and school-level implementation support on the Strategic Instruction Model would be coded as professional development, technical assistance, and curriculum/instructional strategies dissemination. When there was doubt, data coders gave the state the benefit of the doubt about having accomplished an activity. As in previous examinations of improvement activities, counting states as having activities in a category did not allow for differentiation among those that had more or fewer activities in the category. For example, if one state completed five technical assistance activities and another had one, both states were simply identified as having technical assistance among their improvement activities. An analysis taking into account the frequency of each improvement activity might result in different conclusions about relationships between activities and meeting targets. As the level of detail provided in the reports varies widely, such differences in frequency would be difficult to ascertain with confidence. Some states seemed to refer to the same activity in multiple statements, and others noted details within activities that triggered coding in additional categories. Because of the wide range in level of detail and repetition, the coders did not have confidence that an analysis based on frequency of each improvement activity within a state would be more informative than the approach that was taken.
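
The state-level counting rule described above (a state counts once per category, no matter how many of its activities fall there) can be sketched with sets; the category codes follow Table 15, but the states and activity lists are hypothetical:

```python
# Each activity may carry several category codes; a state is counted at most
# once per category, however many of its activities land there.
state_activities = {
    "State X": [{"C", "D", "J8"}, {"C"}, {"C", "J6"}],  # three activities
    "State Y": [{"C"}],                                 # one activity
}

states_per_category: dict[str, set[str]] = {}
for state, activities in state_activities.items():
    for codes in activities:
        for code in codes:
            states_per_category.setdefault(code, set()).add(state)

counts = {code: len(states) for code, states in states_per_category.items()}
# Both states count once under "C", although State X reported it three times.
```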

 

State Use of Technical Assistance from RRFC Network/National Centers

Thirteen states reported on technical assistance they sought and received from various entities. Given the nature of assessment, many types of technical assistance (TA) could be expected to be helpful in improvement efforts. Indeed, states identified 21 technical assistance providers (see Table 17).

Table 17. Providers of Technical Assistance

TA Providers

  • Access Center
  • Arc1
  • Center for Applied Special Technology (CAST)
  • Center for Assessment (NCIEA)
  • Center on Assessment and Accountability2
  • Center for Data-Driven Reform in Education (CDDRE)
  • Inclusive Large Scale Standards and Assessment (ILSSA) center
  • IRIS Center for Faculty Enhancement
  • Mountain Plains Regional Resource Center
  • National Alternate Assessment Center (NAAC)
  • National Center for Culturally Responsive Educational Systems (NCCRESt)
  • National Center on Educational Outcomes (NCEO)
  • National Center on Response to Intervention
  • National Center on Student Progress Monitoring
  • National Early Childhood Technical Assistance Center (NECTAC)
  • National Instructional Materials Access Center (NIMAC)
  • National Research Center on Learning Disabilities (NRCLD)
  • North Central Regional Resource Center
  • PACER Center
  • Technical Assistance Center on Positive Behavioral Interventions and Supports (PBIS)
  • Western Regional Resource Center

1 This TA provider does not receive its primary funding from the federal Department of Education.
2 This TA provider (the Assessment and Accountability Comprehensive Center) receives its primary funding from the Office of Elementary and Secondary Education (OESE).

In almost all cases, these technical assistance providers were funded by the Office of Special Education Programs (OSEP). Typically, each state used technical assistance from a single provider. Figure 2 highlights states’ use of services from TA providers.

Figure 2. States’ Use of Technical Assistance Providers


1 This state identified a center receiving primary funding from OESE
2 This state identified a provider not receiving funding from the federal Department of Education

In addition, most of the unique states (five in all: American Samoa, the Commonwealth of the Northern Mariana Islands, the Federated States of Micronesia, the Republic of Palau, and the Republic of the Marshall Islands) reported on the technical assistance they sought and received from various entities. Altogether, these unique states used technical assistance from five different providers, and each used technical assistance from more than one provider. Figure 3 shows unique states’ use of services of TA providers.

Figure 3. Unique States’ Use of Technical Assistance Providers


 

State Use of Instructional or Curricular Models or Programs

In summarizing data from state reports, coders noted specific programs mentioned by states. This analysis identified six models or programs that were reported by more than one state. In some cases, coders inferred the use of a model or program from the specific language a state used to describe its improvement activities.

Overall, 34 of the 50 regular states noted use of one or more of these models or programs, while 16 did not mention any specific instructional or curricular model or program. Table 18 includes descriptions of models or programs used by more than one state as well as their Web sources. The most frequently reported model or program was Response to Intervention (RtI), which was used by 20 states. Use of Reading First was reported by 17 states. Nine states reported using either Positive Behavioral Interventions and Supports (PBIS) or Positive Behavioral Support (PBS), which coders combined into a single PBIS category. Five states reported using either Universal Design or Universal Design for Learning; these were combined into a single Universal Design category.

Four states reported using the Strategic Instruction Model, or SIM, and two reported using Kansas Learning. SIM is one of the Kansas Learning Strategies, but these two categories were kept separate by coders because of the wider scope implied by the Kansas Learning label. Two states reported using Differentiated Instruction. Several additional models or programs were noted by regular states, but each was noted only once. Among the unique states, the District of Columbia reported using PBIS and RtI, and both Palau and the Commonwealth of the Northern Mariana Islands reported using the 4-Step Process of Instruction.

Table 18. Models and Programs Used by More than One State

Response to Intervention (RtI), 20 regular states
  “Response to intervention integrates assessment and intervention within a multi-level prevention system to maximize student achievement and to reduce behavior problems. With RtI, schools identify students at risk for poor learning outcomes, monitor student progress, provide evidence-based interventions and adjust the intensity and nature of those interventions depending on a student’s responsiveness, and identify students with learning disabilities.”
  Source: National Center on Response to Intervention: http://www.rti4success.org/

Reading First, 17 regular states
  “This program focuses on putting proven methods of early reading instruction in classrooms. Through Reading First, states and districts receive support to apply scientifically based reading research—and the proven instructional and assessment tools consistent with this research—to ensure that all children learn to read well by the end of third grade. The program provides formula grants to states that submit an approved application.”
  Source: U.S. Department of Education: http://www.ed.gov/programs/readingfirst/index.html

Positive Behavioral Interventions and Supports (PBIS), 9 regular states
  The goal of PBIS is “to prevent the development and intensifying of problem behaviors and maximize academic success for all students.”
  Source: U.S. Department of Education, Office of Special Education Programs: http://www.pbis.org/main.htm

Universal Design (UD) or Universal Design for Learning (UDL), 5 regular states
  UD: “According to the Center for Universal Design: The intent of universal design (UD) is to simplify life for everyone by making products, communications, and the built environment more usable by as many people as possible at little or no extra cost. Universal design benefits people of all ages and abilities. (1997 NC State University)”
  UDL: “The Center for Universal Design is a national research, information, and technical assistance center that evaluates, develops, and promotes universal design in housing, public and commercial facilities, and related products.”
  Source: U.S. Department of Education, Office of Vocational and Adult Education: http://www.ed.gov/about/offices/list/ovae/pi/AdultEd/disaccess.html

Strategic Instruction Model (SIM), 4 regular states
  “SIM is about promoting effective teaching and learning of critical content in schools. SIM strives to help teachers make decisions about what is of greatest importance, what we can teach students to help them to learn, and how to teach them well.”
  Source: Center for Research on Learning: http://www.kucrl.org/sim/

Kansas Learning Strategies, 2 regular states
  “Educators at the University of Kansas, Center for Research on Learning, have validated an instructional sequence in which students learn each strategy following these teacher-directed steps: (a) pretest, (b) describe, (c) model, (d) verbal practice, (e) controlled practice, (f) grade-appropriate practice, (g) posttest, (h) generalization (Schumaker & Deshler, 1992).”
  Source: ERIC Clearinghouse on Disabilities and Gifted Education, Reston, VA: http://www.ericdigests.org/2000-2/learning.htm

Differentiated Instruction, 2 regular states
  “To differentiate instruction is to recognize students varying background knowledge, readiness, language, preferences in learning, interests, and to react responsively. Differentiated instruction is a process to approach teaching and learning for students of differing abilities in the same class. The intent of differentiating instruction is to maximize each student’s growth and individual success by meeting each student where he or she is, and assisting in the learning process.”
  Source: Center for Applied Special Technology: http://www.cast.org/publications/ncac/ncac_diffinstruc.html

4-Step Process of Instruction, 0 regular states
  The 4-Step Process links standards, instructional outcomes, instructional activities, and IEP objectives.
  Source: The Pacific Assessment Consortium: http://www.pac6.org/

 

Improvement Activities’ Interaction with AYP

An analysis of the relationship between the identified improvement activities and states’ meeting AYP was conducted using data from the 27 regular states that provided information on whether their targets were met. Using Fisher’s exact test, this analysis found little evidence of statistically significant relationships; only one activity category reached p < .05. Table 19 summarizes this information.
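The p-values in Table 19 come from Fisher’s exact test applied to each activity’s 2×2 table (states that did or did not report the activity, by whether they met their AYP targets). The report does not publish the per-activity cell counts, so the counts below are hypothetical; a minimal sketch of the two-sided test:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same row
    and column totals whose probability does not exceed that of the
    observed table.
    """
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def prob(x):
        # P(top-left cell = x) with all margins fixed (hypergeometric)
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

# Hypothetical counts: 10 of 12 states reporting an activity met their AYP
# targets, versus 5 of 15 states that did not report it.
print(round(fisher_exact_two_sided(10, 2, 5, 10), 3))
```

Applied to an activity’s actual counts, this computes the kind of p-value shown in Table 19 (exact values depend on software conventions for the two-sided test).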

Table 19. Association between Improvement Activities and Meeting AYP Targets

Activity    p-value    Odds Ratio
A             0.13        3.90
B             1.00        1.02
C             0.49          -
D             0.60        2.91
E             0.15        3.45
F             0.72        1.58
G             0.69        0.61
H             0.70        1.60
I             0.38        2.64
J1            0.47        0.53
J2            0.42        2.27
J3            0.48        2.05
J4            1.00        0.92
J5            1.00        0.92
J6            0.02       11.53
J7            0.66        0.51
J8            0.71        1.48
J9            0.70        1.60
J10           0.65        1.84
J11           0.31        2.19
J12           0.48        2.09

However, an odds ratio analysis, designed to measure the direction and magnitude of association between activities and meeting AYP goals, identified the following categories of activities as most strongly associated with states’ success in meeting their AYP goals:

  • Document, video, or Web-based development/dissemination/framework (J6)
  • Improve data collection and reporting (A)
  • Clarify/examine/develop policies and procedures (E)

Although a causal claim cannot be made, this analysis suggests that states engaging in these three categories of activities generally were more effective than other states in their efforts to establish and meet their targets.
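The odds ratios in Table 19 have a simple closed form: for a 2×2 table [[a, b], [c, d]], OR = (a·d)/(b·c). An OR of 11.53 for category J6, for example, means the odds of meeting AYP targets were roughly 11.5 times higher among states reporting that activity. A minimal sketch with hypothetical counts, since the report does not publish the underlying tables:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for the 2x2 table [[a, b], [c, d]]:
    (odds of meeting targets with the activity) / (odds without it)."""
    if b == 0 or c == 0:
        return float("inf")  # undefined/infinite when a cross cell is zero
    return (a * d) / (b * c)

# Hypothetical counts: 10 of 12 states with the activity met targets
# (odds 10/2 = 5.0) versus 5 of 15 without it (odds 5/10 = 0.5).
print(odds_ratio(10, 2, 5, 10))  # 10.0
```

A zero cell makes the ratio undefined, which may explain the missing odds ratio for activity C in Table 19 (an assumption; the report does not say).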

An unexpected finding was that states using improvement activities categorized as “C: providing training/professional development” were not more likely to meet their 2006–2007 targets, as they had been in 2005–2006. This finding could be related to the fact that most states used this improvement activity, leaving little variation among states.

 

Improvement Activities’ Interaction with Performance

An analysis of the relationship between the specific instructional or curricular models or programs identified in the improvement activities and states’ meeting their own identified reading and mathematics performance goals was conducted using data from the 49 regular states that provided information on whether their goals were met. The instructional programs specified in this analysis included (in order of frequency): Response to Intervention, Reading First, Positive Behavioral Interventions and Supports/Positive Behavioral Supports, Universal Design/Universal Design for Learning, the Strategic Instruction Model, and Kansas Learning. These specific instructional programs were the only programs explicitly identified as part of states’ improvement activities in the 2006–2007 APR.

The analysis used a simple chi-square test of association to ascertain whether states using any of these instructional programs reached their performance goals in reading or mathematics at a higher rate than states that did not identify any of these programs among their improvement activities. It is important to note that a majority of states, 34 in all, reported using at least one of these instructional programs; it is unclear, however, whether the 15 states not reporting use of a specific identifiable instructional program were actually not using one, or simply did not report implementing it. Two states did not provide 2006–2007 performance information.

For mathematics performance, the chi-square statistic was 0.163 (df = 1, p > .05). Thus, as exemplified by these six identifiable programs, there was no significant relationship between use of any instructional program and improvement in mathematics performance (see Table 20).

Table 20. Relation of States with at Least One Instructional Program and the Improvement of Mathematics Performance

                               Improvement on Mathematics Performance
Have at least one program          NO       YES      Total
  NO                                8        17         25
  YES                               9        15         24
  Total                            17        32         49

Note: Two states did not provide 2006–2007 performance information.

For reading performance, the chi-square statistic was 0.698 (df = 1, p > .05). Thus, as exemplified by these six identifiable programs, there was no significant relationship between use of any instructional program and improvement in reading performance (see Table 21).

Table 21. Relation of States with at Least One Instructional Program and the Improvement of Reading Performance

                               Improvement on Reading Performance
Have at least one program          NO       YES      Total
  NO                                9        16         25
  YES                               6        18         24
  Total                            15        34         49

Note: Two states did not provide 2006–2007 performance information.
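The two chi-square statistics reported above can be reproduced directly from the cell counts in Tables 20 and 21. A minimal Python sketch (Pearson chi-square with no continuity correction, which matches the reported values):

```python
def chi_square(table):
    """Pearson chi-square statistic for a 2x2 table (no continuity correction)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: states with no identified program / at least one program;
# columns: no improvement / improvement.
math_table = [[8, 17], [9, 15]]      # Table 20 (mathematics)
reading_table = [[9, 16], [6, 18]]   # Table 21 (reading)
print(round(chi_square(math_table), 3))     # 0.163, as reported
print(round(chi_square(reading_table), 3))  # 0.698, as reported
```

With df = 1, both statistics fall well below the .05 critical value of 3.84, consistent with the null results reported for both content areas.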

This analysis was an initial effort to gain insight into patterns across states in the use of specific instructional interventions and their relationship to performance gains. As data gathering becomes more standardized (that is, as all states report their instructionally oriented programmatic decisions more explicitly), it may become possible to detect such patterns more successfully.


Conclusion

States continue to improve in meeting reporting requirements for Part B Indicator 3. Still, there are indications that not all states understand the importance of clearly communicating information in their APRs, and that some states remain unclear about exactly how to prepare their data (e.g., what the appropriate denominator is) for inclusion in their APRs. Future research on the relationship between APR data and Table 6 of the 618 data may help pinpoint the sources of some of this confusion. It is also possible that some states still have difficulty obtaining the required information because it is collected and stored by different divisions within their education agencies.

For AYP data, only 27 regular states provided all the elements needed to examine the data. Unique states did not provide AYP data; this is consistent with the fact that most of these states, though not all, are not required to comply with AYP requirements. Of the 27 regular states that provided all elements, over half did not meet their AYP targets. The difference in baselines between these two groups of states was negligible; in terms of targets, however, states that did not meet their AYP targets had, on average, set considerably higher targets than states that did meet them.

As in the past, most states providing data are meeting their participation targets. On the whole, both regular states and unique states are providing the data needed to determine whether participation targets are being met. Unique states are not meeting their targets as often as regular states. This finding is based on only those states that had baseline, target, and actual data in their reports. This included 42 regular states and 8 unique states.

For performance data, many fewer states provided all the elements needed to examine the data. Only 33 regular states and 4 unique states provided baseline, target, and actual data in their reports for this component. The majority of states did not meet their performance targets in both content areas; more than 75% of regular states and all of the unique states that provided all data elements did not meet their targets.

The relationship between baselines and targets for states that met or did not meet their targets appeared to vary by component. For AYP, states that met their targets tended to have lower targets, though the average target value was above the average baseline value. For participation, states that met their targets tended to set targets below their baselines. For performance, states that met their targets tended to have lower average baselines and targets (with targets still above the average baseline). The findings are not as straightforward as for 2005–2006, when states that met their targets often had higher baselines and lower targets, yet exceeded those targets by a considerable amount, while states that did not meet their targets generally had set higher targets. Continued attention to these relationships in future APRs will be important, particularly exploration of the nature of the changes that states are making to their targets; this will support a better understanding of the relationships in findings.

In considering the relationships between improvement activities and whether targets were met, Fisher’s exact test and the odds ratio were used. These showed that three categories of activities were strongly associated with states’ success in meeting AYP goals: document, video, or Web-based development/dissemination/framework; improving data collection and reporting; and clarifying/examining/developing policies and procedures. For 2005–2006, different improvement activity categories were identified: training/professional development; regional/statewide program development; and increasing/adjusting FTE. It is not clear why the previously identified categories no longer emerge as associated with meeting targets, or why the currently identified categories have taken their place. Continued attention to the improvement activities that seem related to meeting targets is nevertheless important.

The data provided in 2006–2007 for the Annual Performance Reports were much more consistent and clear than those provided for 2005–2006, which in turn were clearer than those provided in the 2004–2005 State Performance Plans. With improved data, it is possible for NCEO to better summarize the data to provide a national picture of 2006–2007 AYP, participation, and performance indicators as well as states’ improvement activities.




Appendix A: Trends in Target Participation and Performance Data

Table A1. Target Participation in Mathematics

                           Baseline   2007 Target   2008 Target   2007 Actual   2008 Actual
Regular States                97%         96%           96%           97%           98%
Unique States                 86%         84%           91%           75%           85%
TARGET (Regular States)
  Met                         97%         95%           96%           98%           98%
  Not Met                     97%         99%           98%           95%           96%
TARGET (Unique States)
  Met                         89%         84%           94%           87%          102%
  Not Met                     85%         84%           90%           71%           80%

 

Table A2. Target Participation in Reading

                           Baseline   2007 Target   2008 Target   2007 Actual   2008 Actual
Regular States                97%         96%           96%           97%           98%
Unique States                 85%         84%           90%           74%           84%
TARGET (Regular States)
  Met                         97%         95%           96%           98%           98%
  Not Met                     98%         99%           98%           95%           96%
TARGET (Unique States)
  Met                         89%         84%           94%           87%          102%
  Not Met                     84%         84%           89%           70%           78%

 

Table A3. Target Performance in Mathematics

                           Baseline   2007 Target   2008 Target   2007 Actual   2008 Actual
Regular States                35%         37%           43%           36%           39%
Unique States                 13%         23%           28%           23%            6%
TARGET (Regular States)
  Met                         33%         32%           34%           37%           42%
  Not Met                     35%         41%           46%           34%           38%
TARGET (Unique States)
  Met                         ---         19%           ---           51%           ---
  Not Met                     13%         27%           28%            4%            6%

 

Table A4. Target Performance in Reading

                           Baseline   2007 Target   2008 Target   2007 Actual   2008 Actual
Regular States                37%         41%           46%           38%           41%
Unique States                 13%         23%           28%           22%            9%
TARGET (Regular States)
  Met                         35%         37%           37%           39%           42%
  Not Met                     37%         45%           49%           38%           40%
TARGET (Unique States)
  Met                         ---         19%           ---           50%           ---
  Not Met                     13%         27%           28%            4%            9%

© 2013 by the Regents of the University of Minnesota.
The University of Minnesota is an equal opportunity educator and employer.


NCEO is supported primarily through a Cooperative Agreement (#H326G050007, #H326G110002) with the Research to Practice Division, Office of Special Education Programs, U.S. Department of Education. Additional support for targeted projects, including those on LEP students, is provided by other federal and state agencies. Opinions expressed in this Web site do not necessarily reflect those of the U.S. Department of Education or Offices within it.