
2012 Survey of States: Successes and Challenges During a Time of Change

Rebekah Rieke, Sheryl Lazarus, Martha Thurlow & Lauren Dominguez

September 2013

All rights reserved. Any or all portions of this document may be reproduced and distributed without prior permission, provided the source is cited as:

Rieke, R., Lazarus, S. S., Thurlow, M. L., & Dominguez, L. M. (2013). 2012 survey of states: Successes and challenges during a time of change. Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.




The Mission of the National Center on Educational Outcomes

NCEO Staff

Deb Albus
Manuel Barrera
Laurene Christensen
Linda Goldstone
James Hatten
Christopher Johnstone
Jane Krentz
Sheryl Lazarus
Kristi Liu
Ross Moen
Michael Moore
Rachel Quenemoen
Rebekah Rieke
Christopher Rogers
Vitaliy Shyyan
Yi-Chen Wu
Miong Vang

Martha Thurlow,
Director

NCEO is a collaborative effort of the University of Minnesota, the National Association of State Directors of Special Education (NASDSE), and the Council of Chief State School Officers (CCSSO). NCEO provides national leadership in assisting state and local education agencies in their development of policies and practices that encourage and support the participation of students with disabilities in accountability systems and data collection efforts.

NCEO focuses its efforts in the following areas:

  • Knowledge Development on the participation and performance of students with disabilities in state and national assessments and other educational reform efforts.
  • Technical Assistance and Dissemination through publications, presentations, technical assistance, and other networking activities.
  • Leadership and Coordination to build on the expertise of others and to develop leaders who can conduct needed research and provide additional technical assistance.

The Center is supported through a Cooperative Agreement (#H326G110002) with the Research to Practice Division, Office of Special Education Programs, U.S. Department of Education. The Center is affiliated with the Institute on Community Integration at the College of Education and Human Development, University of Minnesota. The contents of this report were developed under the Cooperative Agreement from the U.S. Department of Education, but do not necessarily represent the policy or opinions of the U.S. Department of Education or any Office within it. Readers should not assume endorsement by the federal government.

Project Officer: David Egnor

National Center on Educational Outcomes
207 Pattee Hall
150 Pillsbury Dr. SE
Minneapolis, MN 55455
612/626-1530 • Fax: 612/624-0879 • http://www.nceo.info

The University of Minnesota is an equal opportunity educator and employer.


Acknowledgments

With the collective efforts of State Directors of Special Education and State Directors of Assessment, we are able to report on the activities of 49 regular states and 6 of the 11 federally funded entities (unique states). Because of the thoughtful and knowledgeable responses of the directors of special education, directors of assessment, and their designees who completed this survey, we are able to share new initiatives, trends, accomplishments, and emerging issues during this important period of education reform. The purpose of this report is to make public the trends and issues facing states, as well as the innovations states are using to meet the demands of changing federal legislation. We appreciate the time taken by respondents to gather information from other areas or departments, and we hope that this collaborative effort provided an opportunity to increase awareness within and across state programs and departments.

For their support, special thanks go to:

  • David Egnor, Office of Special Education Programs (OSEP) in the U.S. Department of Education;
  • Eileen Ahearn, National Association of State Directors of Special Education (NASDSE);
  • Laurene Christensen, National Center on Educational Outcomes;
  • Kristin Liu, National Center on Educational Outcomes;
  • Michael Moore, National Center on Educational Outcomes;
  • June De Leon, University of Guam, for her assistance in obtaining completed surveys from the Pacific unique states.

NCEO's 2012 Survey of States was prepared by Rebekah L. Rieke, Sheryl S. Lazarus, Martha L. Thurlow, and Lauren M. Dominguez.


State Directors of Special Education

ALABAMA
Crystal Richardson

ALASKA
Don Enoch

ARIZONA
Angela Denning

ARKANSAS
Martha Kay Asti

CALIFORNIA
Fred Balcom

COLORADO
Peg Brown-Clark

CONNECTICUT
Charlene Russell-Tucker

DELAWARE
Mary Ann Mieczkowski

FLORIDA
Monica Verra-Tirado

GEORGIA
Deborah Gay

HAWAII
Robert Campbell

IDAHO
Richard Henderson

ILLINOIS
Beth Hanselman

INDIANA
Nicole Norvell

IOWA
Martin Ikeda

KANSAS
Colleen Riley

KENTUCKY
Johnny Collett

LOUISIANA
Bernell Cook

MAINE
Janice Breton

MARYLAND
Marcella Franczkowski

MASSACHUSETTS
Marcia Mittnacht

MICHIGAN
Eleanor White

MINNESOTA
Barbara Troolin

MISSISSIPPI
Ann Moore

MISSOURI
Stephen Barr

MONTANA
Frank Podobnik

NEBRASKA
Gary Sherman

NEVADA
Marva Cleven

NEW HAMPSHIRE
Santina Thibedeau

NEW JERSEY
Peggy McDonald

NEW MEXICO
Amy Lane

NEW YORK
James Delorenzo

NORTH CAROLINA
Mary Watson

NORTH DAKOTA
Gerry Teevens

OHIO
Sue Zake

OKLAHOMA
Rene Axtell

OREGON
Nancy Latini

PENNSYLVANIA
John Tommasini

RHODE ISLAND
David Sienko

SOUTH CAROLINA
Cathy Boshamer

SOUTH DAKOTA
Ann Larsen

TENNESSEE
Bobbi Lussier

TEXAS
Gene Lenz

UTAH
Glenna Gallo

VERMONT
Alice Farrell

VIRGINIA
Jill Singer (acting)

WASHINGTON
Doug Gill

WEST VIRGINIA
Pat Homberg

WISCONSIN
Stephanie Petska

WYOMING
Tiffany Dobler

AMERICAN SAMOA
Jeanette Vasai

BUREAU OF INDIAN EDUCATION
Gloria Yepa

DEPARTMENT OF DEFENSE
David Cantrell

DISTRICT OF COLUMBIA
Amy Maisterra

GUAM
Yolanda Gabriel

NORTHERN MARIANA ISLANDS
Suzanne Lizama

MARSHALL ISLANDS
Ruthiran Lokeijak

MICRONESIA
Arthur Albert

PALAU
Helen Sengebau

PUERTO RICO
Dorita Zapata

U.S. VIRGIN ISLANDS
Jill Singer (acting)

These were the state directors of special education in September 2012, when NCEO administered the survey.


State Directors of Assessment

ALABAMA
Gloria Turner

ALASKA
Erik McCormick

ARIZONA
Roberta Alley

ARKANSAS
Melody Morgan

CALIFORNIA
Rachel Perry

COLORADO
Joyce Zurkowski

CONNECTICUT
Diane Ullman (acting)

DELAWARE
Michael Stetter

FLORIDA
Victoria Ash

GEORGIA
Melissa Fincher

HAWAII
Kent Hinton

IDAHO
Carissa Miller

ILLINOIS
James Palmer (acting)

INDIANA
Wes Bruce

IOWA
David Tilly

KANSAS
Tom Foster

KENTUCKY
Ken Draut

LOUISIANA
Scott Norton

MAINE
Dan Hupp

MARYLAND
Janet Bagsby

MASSACHUSETTS
Elizabeth Davis

MICHIGAN
Joseph Martineau

MINNESOTA
Jennifer Dugan

MISSISSIPPI
Jan Kirkland-Hogue

MISSOURI
Michael J. Muenks

MONTANA
Judy Snow

NEBRASKA
Pat Roschewski

NEVADA
Cindy Sharp (acting)

NEW HAMPSHIRE
Keith Burke (acting)

NEW JERSEY
Jeffrey Hauger

NEW MEXICO
Robert Romero

NEW YORK
Candace H. Shyer

NORTH CAROLINA
Tammy Howard

NORTH DAKOTA
Greg Gallagher

OHIO
Jim Wright

OKLAHOMA
Joyce DeFehr

OREGON
Doug Kosty

PENNSYLVANIA
Ray Young

RHODE ISLAND
Phyllis Lynch

SOUTH CAROLINA
Elizabeth Jones

SOUTH DAKOTA
Gay Pickner

TENNESSEE
Deb Malone Sauberer

TEXAS
Gloria Zyskowski

UTAH
John Jesse

VERMONT
Michael Hock

VIRGINIA
Shelley Loving-Ryder

WASHINGTON
Robin Munson

WEST VIRGINIA
Juan D'Brot

WISCONSIN
Laura Pinsonneault

WYOMING
Paul Williams

AMERICAN SAMOA
Robert Soliai

BUREAU OF INDIAN EDUCATION
Stanley Holder

DEPARTMENT OF DEFENSE
Steve Schrankel

DISTRICT OF COLUMBIA
Leroy Tompkins

GUAM
Joseph L.M. Sanchez

NORTHERN MARIANA ISLANDS
Maria Sablan Quitugua

MARSHALL ISLANDS
Stanley Heine

MICRONESIA
Miyai M. Keller

PALAU
Raynold Mechol

PUERTO RICO
Pura Cotto

U.S. VIRGIN ISLANDS
Alexandria Baltimore-Hoofkin

These were the state directors of assessment in September 2012, when NCEO administered the survey.


Executive Summary

This report summarizes the thirteenth survey of states by the National Center on Educational Outcomes (NCEO) at the University of Minnesota. Results are presented for 49 states and 6 of the 11 unique states. The purpose of this report is to provide a snapshot of the new initiatives, trends, accomplishments, and emerging issues during this important period of education reform as states documented the academic achievement of students with disabilities.

Key findings include:

  • Fewer than half of the states have defined what college-and-career-readiness means for students with disabilities participating in the alternate assessment based on alternate achievement standards (AA-AAS).
  • Fewer than half of the states offered their current general state assessments on computer-based platforms for math, reading, or science.
  • State technology staff contributed to technology decision making for the AA-AAS being developed by the consortia in eight states, and contributed to technology decision making for the general assessment being developed by the consortia in just over half of the states.
  • More than half of the states indicated a need for technology-related investments for the majority of districts in their state in order to improve the participation of students with disabilities in instructional activities and assessments. The types of investments most frequently cited as needed were additional computers and improved bandwidth for Internet connectivity.
  • Since 2009 there has been a 39% increase in states that disaggregate assessment results by primary disability category for the purpose of reporting assessment results for students with disabilities.
  • More than three-quarters of the states considered universal design during test conceptualization and construction.
  • Six states offered end-of-course alternate assessments based on alternate achievement standards (AA-AAS).
  • Fewer than half of the states included data for all students with disabilities in their teacher evaluation system for general education teachers.
  • Twenty states did not disaggregate assessment results for English language learners with disabilities.

States widely recognized the benefits of inclusive assessment and accountability systems, and continued to improve assessment design, participation and accommodations policies, monitoring practices, and data reporting. In addition, states identified key areas of need for technical assistance as they move toward implementing next generation assessments.


Overview of 2012 Survey of States

This report marks the 13th time over the past 20 years that the National Center on Educational Outcomes (NCEO) has collected information from states about the participation and performance of students with disabilities in assessments during standards-based reform.

Eleven Unique States

American Samoa
Bureau of Indian Education
Department of Defense
District of Columbia
Guam
Northern Mariana Islands
Marshall Islands
Micronesia
Palau
Puerto Rico
U.S. Virgin Islands

States are facing many new issues as they implement college- and career-ready (CCR) standards and move toward implementing next generation assessments. Two Race-to-the-Top Assessment (RTTA) consortia were awarded grants to develop new general assessments--Partnership for the Assessment of Readiness for College and Careers (PARCC) and Smarter Balanced Assessment Consortium (Smarter Balanced). In addition, two General Supervision Enhancement Grant (GSEG) consortia--Dynamic Learning Maps (DLM) and National Center and State Collaborative (NCSC)--are developing new alternate assessments based on alternate achievement standards (AA-AAS) for students with the most significant cognitive disabilities. There are also two Enhanced Assessment Grant (EAG) consortia--Assessment Services Supporting ELs Through Technology Systems (ASSETS) and English Language Proficiency Assessment for the 21st Century (ELPA-21)--that are developing new English Language Proficiency (ELP) assessments. Many states belong to one or more of these consortia.

As in 2009, state directors of special education and state directors of assessment were asked to provide the name and contact information of the person they thought had the best working knowledge of the state's thinking, policies, and practices for including students with disabilities in assessment systems and other aspects of educational reform. In many states, more than one contact was identified and the respondents were asked to work as a team to complete the survey.

Responses were gathered online. A hard copy of the survey was provided to a few states that preferred to respond by completing a written questionnaire. Once the responses were compiled, the data were verified with the states. Forty-nine regular states responded to the survey. In addition, representatives from 6 of the 11 unique states completed the survey.

Survey responses showed that states were examining a number of issues related to participation and accommodations policies on the general assessment. States also reported information about the AA-AAS, about new developments in assessment such as teacher evaluation, and about how they were contributing to the consortia. In the three years since the previous survey, states continued to make strong progress, though challenges remained and several new issues emerged.


College-and-Career-Ready (CCR) Standards

Most states are moving from individual state standards to the new Common Core State Standards, which are intended to be fewer in number, higher, and more rigorous than many current standards. States sought to address the needs of students with disabilities as the next generation of assessments was being developed.

States that adopted college-and-career-ready (CCR) standards were asked how they supported content teachers in helping students with disabilities achieve these standards (see Figure 1). Most states reported that they supported teachers by implementing CCR standards for students with disabilities, and many also provided professional development, as well as instructional resources. Many unique states implemented CCR standards for students with disabilities and provided professional development on CCR standards to educators.

Figure 1. Ways States Support Content Teachers to Help Students with Disabilities Achieve CCR Standards

Figure 1 Bar Chart

Note: Forty-nine regular and six unique states responded to this question. State respondents were able to select multiple responses.

State respondents were asked to indicate whether their state defined what CCR means, both for students with disabilities who participate in general assessments and those who participate in alternate assessments (see Figure 2). Thirty-six states defined CCR for students participating in the general assessment, but only 22 defined it for alternate assessments. Few unique states defined CCR for students with disabilities.

Figure 2. States Developing Definitions for College-and-Career Readiness (CCR) for Students with Disabilities Who Participate in Different Assessment Options

Figure 2 Bar Chart

Note: Thirty-seven regular states and four unique states responded to this question.

The most frequent challenge reported by states in implementing CCR standards for students with disabilities (see Figure 3) was difficulty providing adequate training to all teachers. Just over half of states indicated it was challenging to define what CCR means for students with disabilities, and to communicate with families about CCR standards. Nineteen states indicated that CCR standards were too difficult for some students with disabilities to access, and 11 states noted that some students with disabilities may receive an alternate certificate in lieu of a diploma as a result of CCR. More than half of the responding unique states reported that it was challenging to provide adequate training to all teachers and that the CCR standards were challenging for some students with disabilities to access.

Figure 3. Challenges in Implementing College-and-Career-Ready Standards

Figure 3 Bar Chart

Note: Forty-six regular and four unique states responded to this question. State respondents were able to select multiple responses.

A total of 17 states commented on contextual factors related to college-and-career-ready standards. These comments were most often related to changes in standards and working with the consortia.

  • Adopted CCR standards for all students, including students with disabilities: all students/all standards.
  • Partially implementing CCR standards which are not fully rolled out yet.
  • Conducting professional development to LEAs on CCR standards which includes resources and training on addressing the needs of diverse learners (e.g., ELL, students with disabilities, gifted).

Participation in Consortia to Develop Next Generation Assessments

Respondents were asked to report on which of their state staff were involved in contributing to the creation of consortium policies. At the time of this survey, nearly all states were members of one or both of the general assessment consortia. Just over 30 states were members of an alternate assessment consortium.

General Assessment

In most states, assessment staff were more involved than staff from other offices in the development of relevant policies for the general assessment for students with disabilities (see Table 1). For example, assessment staff in 32 states contributed to the development of accommodations guidelines, while special education staff contributed in 26 states; senior administration staff contributed in 20 states, and curriculum and instruction staff in 12 states. Technology staff contributed to technology decision making for the general assessment in just over half of the states. No unique states that responded to this question belonged to a consortium.

Table 1. State Contribution to General Assessment Consortia (PARCC, Smarter Balanced) Decision Making

Staff | Participation Guidelines | Assessment Claims | Technology Decision Making | Assessment Scoring Policy | Accommodations Guidelines | Item Development | Reporting Formats
Senior Administration Staff | 20 | 21 | 24 | 23 | 20 | 16 | 19
Curriculum and Instruction Staff | 8 | 23 | 10 | 8 | 12 | 26 | 7
Special Education Staff | 17 | 8 | 8 | 7 | 26 | 10 | 7
Assessment Staff | 25 | 27 | 27 | 30 | 32 | 31 | 29
Technology Staff | 0 | 0 | 26 | 1 | 4 | 5 | 9

Note: Forty-nine regular states responded to this question. State respondents were able to select multiple responses. States that were not members of a general assessment consortium were not included.

Alternate Assessment

Education staff in many states were actively involved in the development of guidelines and policies for the alternate assessment consortia (see Table 2). For example, special education staff in 25 states and assessment staff in 24 states contributed to the development of participation guidelines for the alternate assessment based on alternate achievement standards (AA-AAS). Technology staff contributed to technology decision making for the AA-AAS in only eight states.

Education staff in three unique states were actively involved in the development of guidelines and policies for the alternate assessment consortia.

Table 2. State Contribution to Alternate Assessment Consortia (DLM, NCSC) Decision Making

Staff | Participation Guidelines | Assessment Claims | Technology Decision Making | Assessment Scoring Policy | Accommodations Guidelines | Performance Level Descriptors | Item Development | Reporting Formats
Senior Administration Staff | 11 | 10 | 10 | 7 | 10 | 7 | 9 | 6
Curriculum and Instruction Staff | 4 | 5 | 3 | 5 | 5 | 6 | 7 | 3
Special Education Staff | 25 | 17 | 17 | 18 | 26 | 19 | 20 | 15
Assessment Staff | 24 | 20 | 17 | 17 | 23 | 17 | 16 | 17
Technology Staff | 0 | 0 | 8 | 1 | 1 | 0 | 1 | 2

Note: Forty-five states responded to this question. State respondents were able to select multiple responses. States that were not in an alternate assessment consortium were not included.


Participation and Accommodations

With the inclusion of students with disabilities in assessments and accountability systems, states paid increased attention to the reporting of participation and performance data. States also increasingly used these data to consider ways to improve the performance of low-performing students, including students with disabilities.

Participation Reporting Practices

States' participation reporting practices for students with disabilities varied, in both 2009 and 2012, depending on the nature of the students' participation (see Table 3). The number of states that neither counted nor scored students who did not participate in the state assessment in any way dropped from 42 in 2009 to 31 in 2012. Six unique states counted these students as nonparticipants in both 2009 and 2012. Fewer states in 2012 than in 2009 (9 versus 15) indicated that students who attended (sat for) the assessment but did not complete enough items to score were not counted as participants.

Table 3. Reporting Practices for Counting Students as Assessment Participants

Column key: A = Not counted as participant, received no score; B = Counted as participant, received no score, score of zero, or lowest proficiency level; C = Earned score counted as valid; D = Not counted as participant and earned score counted as valid.

Students who did not participate in state assessments in any way (e.g., absent on test day, parent refusal)
State Category, Survey Year | A | B | C | D
Regular States, 2012 | 31 | 8 | 1 | 1
Regular States, 2009 | 42 | 7 | 0 | N/A
Unique States, 2012 | 6 | 0 | 0 | 0
Unique States, 2009 | 6 | 0 | 1 | N/A

Students who attended (sat for) the assessment but did not complete enough items to score
State Category, Survey Year | A | B | C | D
Regular States, 2012 | 9 | 25 | 8 | 1
Regular States, 2009 | 15 | 29 | 4 | N/A
Unique States, 2012 | 0 | 3 | 3 | 0
Unique States, 2009 | 3 | 3 | 0 | N/A

Students who used accommodations resulting in invalid scores (e.g., non-standard, modifications)
State Category, Survey Year | A | B | C | D
Regular States, 2012 | 12 | 20 | 3 | 0
Regular States, 2009 | 19 | 17 | 4 | N/A
Unique States, 2012 | 1 | 4 | 1 | 0
Unique States, 2009 | 1 | 2 | 1 | N/A

Note: Forty-nine regular states responded in 2012; fifty responded in 2009. For unique states, six responded in 2012 and eight in 2009.

A total of 16 states commented on contextual factors related to student inclusion in federal accountability reports. These comments were most often related to changes in reporting methods, changes in calculations or procedures, or issues related to flexibility waivers.

  • A lack of student participation in state assessments in any way has a negative impact on adequate yearly progress (AYP).
  • Students who were eligible for testing and who were not tested were counted in the denominator of the participation rate calculation and not counted in the numerator. These students lowered the "percent tested" figures (a brief worked example follows this list).
  • Currently applying for a waiver and these areas are under review.
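
The participation-rate arithmetic described in the second comment above can be shown with a minimal sketch; the counts below are hypothetical and are not drawn from any state's data.

```python
# Hypothetical counts, for illustration only.
eligible_students = 200   # all students eligible for testing (the denominator)
tested_students = 188     # students who were actually tested (the numerator)

# Eligible-but-untested students remain in the denominator,
# so each one lowers the "percent tested" figure.
participation_rate = tested_students / eligible_students
print(f"Percent tested: {participation_rate:.1%}")  # Percent tested: 94.0%
```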

Reporting Practices for Students by Disability Category

States used a variety of practices to report assessment results for students with disabilities. Twenty-eight states disaggregated results by disability category (primary disability) in 2012. This was a large increase from the 10 states that disaggregated by primary disability in 2009 and 17 states in 2007. States disaggregated data by disability category for reasons that included examining trends, reporting purposes, and responding to requests (see Figure 4). The most frequently given reason was to examine trends.
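
As a rough sketch of what disaggregation by primary disability category can look like in practice, the snippet below groups a hypothetical results table and suppresses small cells, echoing the small-"n" concern raised in the state comments later in this section. The table, column names, and the n < 10 threshold are illustrative assumptions, not any state's actual rules.

```python
import pandas as pd

# Hypothetical student-level results; column names are assumptions for illustration.
results = pd.DataFrame({
    "primary_disability": ["SLD", "SLD", "Autism", "Autism", "ID"],
    "content_area": ["Math", "Reading", "Math", "Reading", "Math"],
    "proficient": [1, 0, 1, 1, 0],
})

# Disaggregate: count of students and percent proficient per category and content area.
summary = (
    results.groupby(["primary_disability", "content_area"])["proficient"]
    .agg(n="count", pct_proficient="mean")
    .reset_index()
)

# Suppress cells with small "n" counts before public reporting (threshold is hypothetical).
summary.loc[summary["n"] < 10, "pct_proficient"] = None
print(summary)
```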

Figure 4. Reasons for Reporting Assessment Results by Disability Category for Regular States

Figure 4 Bar Chart

Note: Forty-nine regular states responded to this question in 2012, forty-nine responded in 2009, and fifty responded in 2007. State respondents were able to select multiple responses.

Two unique states disaggregated results by disability category (primary disability) in 2012. This was an increase from the one unique state that disaggregated by primary disability in 2009. Unique states disaggregated data by disability category for reasons that included examining trends, responding to requests, and reporting purposes (see Figure 5).

Figure 5. Reasons for Reporting Assessment Results by Disability Category for Unique States

Figure 5 Bar Chart

Note: Two unique states indicated that they disaggregated assessment results by primary disability category in 2012, and one indicated that it disaggregated data by primary disability category in 2009. None of the unique states disaggregated by primary disability category in 2007.

Eleven states commented on contextual factors related to disaggregation by primary disability category. These comments were most often related to changes in reporting methods, or issues related to calculations or procedures.

  • "n" counts for most disability categories are fairly small when broken down by content area. State is only able to do limited analysis internally.
  • State does not disaggregate scores by disability unless a FOIA request is made. However, for alternate assessments, certain disabilities are counted in the one percent and this is done internally.
  • State's identification process is non-categorical.
  • Disaggregation by primary disability category is done for internal analysis only and not publicly reported.

Participation Practices Related to Accommodations

Ninety-five percent of states reported that they monitored accommodations use in 2012. Monitoring was typically achieved by interviewing students, teachers, and administrators about accommodations use, by directly observing test administrations, or by conducting desk audits (see Figure 6). Twelve states monitored on a scheduled basis, and ten states monitored on a random basis. In the unique states, accommodations monitoring was completed through a variety of modes.

Figure 6. States' Accommodations Monitoring Activities

Figure 6 Bar Chart

Note: Forty-seven regular and six unique states responded to this question. State respondents were able to select multiple responses.

States communicated information about accommodations to districts and schools via a variety of communication modes (see Figure 7). Most states provided accommodations policy information on a website. Many states also sent the information to each district or school in written form or conducted a webinar. A few states used an online interactive workshop format. Many unique states provided written information to each district or school. Unique states were less likely than regular states to make the information available on a website.

Figure 7. Modes of Communicating Accommodations Information to Districts/Schools

Figure 7 Bar Chart

Note: Forty-six regular and six unique states responded to this question. State respondents were able to select multiple responses.

Most states examined the validity of certain accommodations for students with disabilities. More than half of the states collected data and reviewed research literature (see Figure 8). Compared to previous years, states increasingly collected data, convened stakeholders, or completed internal statistical analysis to examine validity. Unique states reported that they collected data, convened stakeholders, or had not examined the validity of accommodations.

Figure 8. Ways that Regular States Examined Validity of Accommodations

Figure 8 Bar Chart

Note: Forty-six regular states responded to this question in 2012; fifty responded in 2009. State respondents were able to select multiple responses.

Most unique states (see Figure 9) have not examined the validity of accommodations, though a few have collected data, and one has convened stakeholders.

Figure 9. Ways that Unique States Examined Validity of Accommodations

Figure 9 Bar Chart

Note: Six unique states responded to this question in 2012; five responded in 2009. State respondents were able to select multiple responses.

Difficulties Related to Accommodations

Eighty-eight percent of states identified one or more difficulties in ensuring that accommodations specified for students with Individualized Education Programs (IEPs) were carried out on test day. The most frequently reported difficulties were (a) arranging for trained readers, scribes, and interpreters, (b) ensuring test administrators and proctors knew which students they were supposed to supervise and which students should receive specific accommodations, (c) training proctors in providing accommodations, (d) ordering special test editions, and (e) recording accommodations (see Figure 10). Four unique states also identified arranging for trained readers, scribes, and interpreters as the greatest difficulty.

Figure 10. Identified Difficulties in Carrying Out Specified Accommodations on Test Day

Figure 10 Bar Chart

Note: Forty-seven regular and six unique states responded to this question. State respondents were able to select multiple responses.

A total of 24 states commented on contextual factors related to accommodations. These comments were most often related to monitoring of accommodations, difficulties in implementation of accommodations on "test day," and validity of accommodations.

  • Accommodations are maintained in a state-level database, and the state monitors the numbers of students using specific accommodations through this method, but this does not include a check to determine whether the accommodations are also delivered during instruction or specified in the IEP.
  • The alternate assessment monitoring system monitors accommodations through randomly selected observation of videotaped performance, IEPs, and completed assessment documents.
  • Most schools have mastered testing logistics for providing accommodations. Concerns expressed commonly are amount of time and number of staff required for test administration accommodations.
  • "n" counts for any accommodation are fairly small and it is very difficult to do any analysis.
  • The difficulty varies by school districts; it would be difficult to generalize at the state level.

Current and Emerging Issues

Over the past several years, states made many changes to their assessment policies and practices in response to changes in regulations and guidance for the Elementary and Secondary Education Act (ESEA) and the Individuals with Disabilities Education Act (IDEA). Further, federal peer-review guidance and flexibility waiver applications prompted changes. Additionally, several issues emerged as states moved toward the implementation of next generation assessments.

Technology and Computer-based Testing

Thirty states indicated a need for technology-related investments for the majority of districts in their state in order to improve the participation of students with disabilities in instructional activities and assessments. Sixteen states did not indicate a need for technology-related investments. All six of the unique states that responded to this survey considered technology an important investment.

Nearly half of the regular states and nearly all of the unique states indicated that additional computers and improved bandwidth or capacity for Internet connectivity were the most needed types of technology-related investments for better participation of students with disabilities (see Figure 11). Other frequently mentioned technology-related investments were additional adaptive technology, specialized software for accommodations, an increased number of technology specialists, and specialized software to administer the AA-AAS.

Figure 11. Needed Technology-Related Investments for Better Participation of Students with Disabilities

Figure 11 Bar Chart

Note: Thirty regular and six unique states responded to this question. State respondents were able to select multiple responses.

In 2012, almost half of the states offered their general state assessments on computer-based platforms for math, reading, or science (see Table 4). Fewer than 10 states had a computer-based platform for their alternate assessment based on modified achievement standards (AA-MAS) or AA-AAS. None of the states with an alternate assessment based on grade-level achievement standards (AA-GLAS) offered a computer version of the test. No unique states offered a computerized version of any test.

Table 4. Content Areas and Specific Assessments Offered on Computer-based Platforms

Assessment | Math | Reading | Science
General Assessment | 24 | 24 | 22
AA-AAS | 7 | 7 | 7
AA-MAS | 7 | 6 | 4
AA-GLAS | 0 | 0 | 0
ELP Assessment | 4 | 7 | 2
Non-Summative Assessments | 13 | 13 | 11

Note: As of December 2012, 15 states had an AA-MAS and 2 states had an AA-GLAS.

Growth Models

Twenty-seven states used a growth model for accountability purposes, while 23 used one for reporting purposes in 2012 (see Figure 12). Thirteen states were developing a growth model at the time of the survey. None of the unique states had a growth model, though two were developing one.

Figure 12. States' Use of Growth Models

Figure 12 Bar Chart

Note: Forty-five regular and four unique states responded to this question. State respondents were able to select multiple responses.

Seventeen states reported that students on IEPs taking the general assessment were included in their growth models, but students taking the AA-AAS were not included (see Figure 13). Twelve states included students on IEPs in the same way as students in other groups.

Figure 13. Inclusion of Students on IEPs in State Growth Model

Figure 13 Bar Chart

Note: Forty-nine regular states responded to this question.

Teacher Evaluation

In 2012, most states either had, or were in the process of developing, guidelines or a model for teacher evaluation (see Figure 14). Twenty-four regular states had guidelines in 2012, and 19 were in the process of developing guidelines. Only two regular states had no plans to develop guidelines. The unique states were evenly divided in their responses on teacher evaluation guidelines: two had guidelines, two were in the process of developing guidelines, and two had no plans to develop guidelines.

Figure 14. Number of States with Developed Guidelines or a Model for Teacher Evaluation

Figure 14 Bar Chart

Note: Forty-five regular and six unique states responded to this question.

Twenty states included data from all students with IEPs in the evaluation process for general education teachers in 2012 (see Figure 15). Fourteen states were developing an evaluation system and were uncertain how students with IEPs would be included. Eight states did not include data for students who participated in the AA-AAS in the evaluation system. Six states used Student Learning Objectives (SLOs) to measure the performance of students with IEPs. Most unique states either included data for all students with disabilities or were still developing an evaluation system and were uncertain how students with IEPs would be included.

Figure 15. Method for Including Data from Students with IEPs in State Guidelines/Models of Evaluation for General Education Teachers

Figure 15 Bar Chart

Note: Forty-four regular and four unique states responded to this question. State respondents were able to select multiple responses.

In 2012, most states did not use a value-added model to adjust for selected student characteristics in their teacher evaluation model (see Figure 16). Seven states indicated that they used a value-added model, and nine states indicated that they were developing a system that would include a value-added model. Two unique states did not use a value-added model for teacher evaluation. The rest of the unique states were unsure whether a value-added model was used for teacher evaluation in their state.

Figure 16. State Use of Value-Added Model for Teacher Evaluation

Figure 16 Bar Chart

Note: Forty-four regular and six unique states responded to this question.

Of the 16 states that either used a value-added model or were developing a system that will include a value-added model, 6 states indicated that the model adjusts or will adjust for students with disabilities and will provide differentiated accountability for students with disabilities. The unique states were not asked this question because none used a value-added model.
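
The survey did not ask how states specified their value-added models, so the sketch below only illustrates the general idea under stated assumptions: predict each student's current score from prior achievement and selected characteristics (here, disability status), then treat a teacher's average residual as that teacher's estimated contribution. The simulated data and variable names are hypothetical, not any state's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated student records: prior score, disability indicator, and assigned teacher.
prior = rng.normal(50, 10, n)
swd = rng.integers(0, 2, n)        # 1 = student with a disability
teacher = rng.integers(0, 20, n)
current = 0.8 * prior - 3.0 * swd + rng.normal(0, 5, n)

# Ordinary least squares: predict current scores from prior score and disability status.
X = np.column_stack([np.ones(n), prior, swd])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)
residuals = current - X @ beta

# A teacher's estimated "value added" is the mean residual of that teacher's students.
value_added = {t: residuals[teacher == t].mean() for t in np.unique(teacher)}
print(sorted(value_added.items())[:3])
```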

In 2012, nearly half of all states included special education teachers in a teacher evaluation system by applying the same multiple valid measures used for other teachers in the local education agency (LEA) (see Figure 17). Approximately 30 percent of states used individual student growth data if the special education teacher taught or co-taught a core subject area (i.e., reading/ELA, math) in a tested grade, and nearly 25 percent of states were still developing an evaluation system and were uncertain how special education teachers would be included. Two unique states included special education teachers by applying the same multiple valid measures used for other teachers in the LEA, while two other unique states were developing an evaluation system and were unsure how special education teachers would be included.

A total of 24 states commented on contextual factors related to teacher evaluation policies. These comments were most often related to inclusion of students with disabilities in teacher evaluation policies and inclusion of special education teachers in teacher evaluation policies.

  • Special education teachers are included if their students take the general assessment.
  • A committee is being convened to review the guidelines for special education teachers and other non-core curriculum teachers.
  • All teachers are required to demonstrate student learning over time. The measures used will likely vary from district to district and possibly from classroom to classroom. The regulations allow for maximum LEA flexibility in regard to multiple measures. Piloting new statewide teacher evaluation guidelines this year. Best practices for the inclusion of alternate assessment data are still being evaluated.

Figure 17. Inclusion of Special Education Teachers in Teacher Evaluation System

Figure 17 Bar Chart

Note: Forty-three regular and six unique states responded to this question. State respondents were able to select multiple responses.

Universal Design

In 2012, more than three-quarters of the states indicated that they considered universal design during test conceptualization and construction (see Figure 18). Many states also considered universal design during the final review, during the expert review process, and in the test development request for proposals (RFP). Fewer states considered it as part of statistical analysis processes or through think-aloud methods during field testing. Few unique states addressed elements of universal design in the test development process.

Figure 18. Points at which Universal Design is Addressed in the General Assessment Development Process

Figure 18 Bar Chart

Note: Forty-six regular and five unique states responded to this question. State respondents were able to select multiple responses.

A total of six states commented on contextual factors related to universal design considerations for the general assessment. These comments most often were related to changes in assessment or achievement levels, or other assessment issues or topics.

  • Will be useful in future assessments.
  • Will be using think-aloud methods/cognitive labs as state moves forward with new assessments.
  • State used to consider universal design. Since working with consortia, state assumes it is being considered and addressed.

Non-summative Assessments

In addition to summative assessments used for accountability purposes, some states require or recommend the use of non-summative assessments (i.e., interim/benchmark assessments, formative assessments). In 2012, 16 states did not have a policy on the use of interim/benchmark assessments by districts (see Figure 19). Nine states indicated that a policy was under discussion and a small number of states required or recommended that districts use specific types of assessments for interim or benchmark purposes. Only six states indicated that they did not have a policy on the use of formative assessments. Nine states indicated that their policies for formative assessments were under discussion. A small percentage of states either required or recommended that districts use formative assessments.

Figure 19. State Policies and Viewpoints on Use of Non-Summative Assessments

Figure 19 Pie Charts

Note: Forty-nine regular states responded to the question about interim/benchmark assessments, while twenty-four responded to the question about formative assessments.

Fourteen states commented on contextual factors related to non-summative assessment policies. These comments were most often related to changes in assessment or achievement levels, changes in reporting methods, or issues related to calculations or procedures.

  • Interim and formative assessments are part of the state's balanced assessment approach; however, we do not have a policy that requires a specific type of assessment to be used by districts. SBAC and DLM will have these assessments/tools as part of their systems.
  • Districts are required to have a comprehensive assessment system including interim and formative assessments.
  • The use of formative and interim/benchmark assessments is a local decision.

End-of-Course Tests

Of the 24 states that reported using end-of-course (EOC) tests in 2012, 19 reported that EOC tests were used for accountability purposes. Only one unique state had an EOC test that was used for accountability purposes. Six states reported using end-of-course alternate assessments based on alternate achievement standards (AA-AAS) for students who needed them, with two states providing this option for only some courses.


Successful Practices and Recurring Challenges

For several assessment topics, state respondents were asked to indicate whether they had developed successful practices or faced recurring challenges. The respondents rated each topic as very challenging, challenging, successful, or very successful (see Figure 20 for regular states' responses). Most states reported that assessment validity, test design and content, and reporting and monitoring were areas of success. Issues related to assistive technology were considered more challenging. About as many respondents considered the assessment of English language learners (ELLs) with disabilities to be an area of success as considered it to be an area of challenge, although the English Language Proficiency Assessment was noted as an area of success. Both instructional accommodations and assessment accommodations were generally reported as successful.

Figure 20. Successes and Challenges Reported by Regular States

Figure 20 Pie Charts

Figure 20 Pie Charts, continued

Unique states reported assessment of ELLs with disabilities for accountability purposes as particularly challenging. Some unique states reported that instructional accommodations and assessment validity were areas of success--but others found them challenging. Figures for the unique states are in the appendix.


Alternate Assessments

Some students with disabilities participate in alternate assessments. All states administer an alternate assessment based on alternate achievement standards (AA-AAS) for students with the most significant cognitive disabilities. Some states also offer an optional alternate assessment based on modified achievement standards (AA-MAS) for some low-performing students with disabilities.

Alternate Assessments Based on Alternate Achievement Standards (AA-AAS)

Since the 2009 survey, thirteen regular states and one unique state have made major revisions to an AA-AAS. Thirty-two regular states and five unique states have not made major revisions to an AA-AAS since 2009. (Forty-five regular and six unique states responded to this question on the 2012 survey.)

Alternate Assessments Based on Modified Achievement Standards (AA-MAS)

Fourteen regular states and one unique state had an alternate assessment based on modified achievement standards (AA-MAS) in 2012. Thirty-two regular states and five unique states did not have an AA-MAS in 2012.

Of the states with an AA-MAS, ten regular states and one unique state planned to phase out and discontinue use of the AA-MAS when consortium assessments were launched (see Figure 21). Four states planned to phase out the AA-MAS and transition students to a general assessment, and one state had already discontinued use of an AA-MAS.

Figure 21. State Plans to Transition from AA-MAS to General Assessment

Figure 21 Bar Chart

Note: State respondents were able to select multiple responses. Fourteen regular states and one unique state responded to the question.


English Language Learners (ELLs) with Disabilities

Reporting Practices for English Language Learners (ELLs) with Disabilities

In 2012, 25 states reported that they disaggregated assessment results for ELLs with disabilities (see Figure 22). Seventeen states disaggregated results for ELLs with disabilities on the general assessment, and eleven states disaggregated results on the English Language Proficiency (ELP) assessment. States disaggregated the results to examine trends, respond to requests, or for reporting purposes. Most unique states did not disaggregate results for ELLs with disabilities.

Figure 22. Reporting Practices for ELLs with Disabilities

Figure 22 Bar Chart

Note: Twenty-five regular and six unique states reported that they disaggregate assessment results for ELLs with disabilities. State respondents were able to select multiple responses.

States were asked how they included ELLs with disabilities in state ELP assessment reports. Of the 25 states that disaggregated data for ELLs with disabilities, 22 states counted ELLs with disabilities as participants on ELP assessment reports, and also counted the earned score as valid (see Figure 23). Only one state did not count ELLs with disabilities as participants and gave them no score. One unique state reported that it counted ELLs with disabilities as participants and counted the earned score as valid.

Figure 23. How States Included ELLs with Disabilities on ELP Assessment Results

Figure 23 Bar Chart

Note: Twenty-five regular states and one unique state responded to the question.

English Language Learners (ELLs) with Disabilities and Accommodations

In 2012, just over half of the regular states reported that they offered accommodations to ELLs with disabilities on all sections of the ELP assessment while 15 regular states did so only on some sections of the test (e.g., listening/speaking) (see Figure 24). One regular state did not offer accommodations to ELLs with disabilities on the ELP assessment. Two unique states offered accommodations on all sections of the ELP assessment.

Figure 24. Accommodations Use on ELP Assessment for ELLs with Disabilities

Figure 24 Bar Chart

Note: Forty-four regular and four unique states responded to this question.

Twenty-five regular states made information on accommodations use on the ELP assessment available to the public. Three unique states also made this information available to the public. Seventeen regular states and one unique state did not make information on accommodations use on the ELP assessment available to the public.

English Language Learners (ELLs) with Significant Cognitive Disabilities

In 2012, 10 states did not require ELLs with significant cognitive disabilities to take an ELP assessment. For those states that did require participation of these students in the state ELP assessment, how they participated varied (see Figure 25). The most common approaches were for ELLs with significant cognitive disabilities either to take an alternate ELP assessment or to take the same ELP assessment as all other ELLs. Five states required that ELLs with significant cognitive disabilities take some sections of the same ELP assessment as all other ELLs. Unique states indicated an even mix of all participation approaches.

Figure 25. ELLs with Significant Cognitive Disabilities Participation on ELP Assessment

Figure 25 Bar Chart

Note: Forty-five regular and four unique states responded to this question.

A total of seven states commented on contextual factors related to ELLs with disabilities. These comments were most often related to disaggregation for ELLs with disabilities and inclusion of ELLs with disabilities in ELP federal accountability reports.

  • Federal law does not require reporting of disaggregated data for ELLs with disabilities. We are able to provide disaggregated data for ELLs with disabilities by request.
  • Disaggregation for ELLs with disabilities appears on the population summary reports supplied by test contractor.
  • Disaggregation for ELLs with disabilities is reviewed internally.
  • Administered WIDA's Alternate ACCESS assessment for ELLs for the first time this year. Will include students who took this assessment in federal accountability reports next year.

Preferred Forms of Technical Assistance

In 2012, states were asked to rank forms of technical assistance for addressing challenges in their states. Thirty states indicated that conference calls on hot topics were one of their most preferred forms of technical assistance (see Figure 26). Twenty-four states indicated that online "how to" documents on assessment topics were among their most preferred forms, and twenty-three states indicated that webinars on assessment-related topics were most preferred.

Figure 26. Technical Assistance Preferences of Regular States

Figure 26 Bar Chart

Note: Forty-one regular states responded to this question. States were asked to rank order 13 technical assistance materials or strategies in order of most helpful to least helpful. The top five choices for each state were coded as their "most preferred" forms of technical assistance. The next four choices were coded as "preferred" with the bottom four choices per state coded as "least preferred."
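
The coding scheme described in the note can be expressed as a short sketch; the option labels below are placeholders rather than the survey's actual wording, and the 5/4/4 split follows the note.

```python
def code_preferences(ranked_options):
    """Map one state's ranked list (position 1 = most helpful) to preference categories."""
    coded = {}
    for rank, option in enumerate(ranked_options, start=1):
        if rank <= 5:
            coded[option] = "most preferred"
        elif rank <= 9:
            coded[option] = "preferred"
        else:
            coded[option] = "least preferred"
    return coded

# Example with 13 placeholder technical assistance options.
example_ranking = [f"TA option {i}" for i in range(1, 14)]
print(code_preferences(example_ranking)["TA option 6"])  # preferred
```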

Six unique states indicated that podcasts and awareness materials were two of their most preferred forms of technical assistance (see Figure 27). Four unique states indicated that individual consultation in the state and assistance with data analysis were most preferred.

Figure 27. Technical Assistance Preferences of Unique States

Figure 27 Bar Chart

Note: Six unique states responded to this question. States were asked to rank order 13 technical assistance materials or strategies in order of most helpful to least helpful. The top five choices for each state were coded as their "most preferred" forms of technical assistance. The next four choices were coded as "preferred" with the bottom four choices per state coded as "least preferred."


Appendix

Successes and Challenges Reported by Unique States

Unique states also provided commentary on successes and challenges. Included in this appendix are depictions of the issues that were most prevalent in unique states. The most frequently mentioned issues sometimes were different from those frequently cited as important to regular states. Test design/content, the use of assistive technology, and assessment accommodations were most often identified as areas of success by the unique states. Assessment validity, assessment of English language learners with disabilities, English language proficiency (ELP) assessment, and instructional accommodations were most frequently identified as challenges. Responses were mixed for reporting and monitoring.

Appendix Pie Charts

Appendix Pie Charts, continued

© 2013 by the Regents of the University of Minnesota.