
Technical Report 65

A Summary of the Research on the Effects of Test Accommodations: 2009-2010

Christopher M. Rogers • Elizabeth M. Christian • Martha L. Thurlow

November 2012

All rights reserved. Any or all portions of this document may be reproduced and distributed without prior permission, provided the source is cited as:

Rogers, C. M., Christian, E. M., & Thurlow, M. L. (2012). A summary of the research on the effects of test accommodations: 2009-2010 (Technical Report 65). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.



Executive Summary

The use of accommodations in instruction and assessments continues to be of great importance for students with disabilities. This importance is reflected in an emphasis on research to investigate the effects of accommodations. Key issues under investigation include how accommodations affect test scores, how educators and students perceive accommodations, and how accommodations are selected and implemented.

The purpose of this report is to provide an update on the state of the research on testing accommodations as well as to identify promising future areas of research. Previous reports by the National Center on Educational Outcomes (NCEO) have covered research published since 1999. We summarize the research to review current research trends and enhance understanding of the implications of accommodations use in the development of future policy directions, implementation of current and new accommodations, and valid and reliable interpretations when accommodations are used in testing situations. In 2009 and 2010, 48 published research studies on the topic of testing accommodations were found. Among the main points of the 2009-10 research are:

Purpose: The majority of the research included in this review sought to evaluate the comparability of test scores when assessments were administered with and without accommodations. The second most common purpose for research was to report on perceptions and preferences about accommodations use. The majority of studies addressed multiple purposes.

Research design: Over 60% of the studies reported primary data collection on the part of the researchers, rather than drawing on existing archival data sets. Over half of the studies involved quasi-experimental designs. Researchers also drew on survey techniques and carried out literature meta-analyses.

Types of assessments, content areas: A wide variety of instrument types were used in these studies. Tests and descriptive surveys were the most common data collection methods used in the studies reviewed, as developed by the researchers for the purpose of the study. A large number of the studies involved academic content items drawn from specified sources outside of the researchers' work. Other studies used state criterion-referenced test data, norm-referenced measures, or multiple types of data in various combinations. Mathematics and reading were the most common content areas included in the 2009-2010 research. Other content areas were writing, other language arts, science, social studies, and psychology. Approximately one-quarter of all studies addressed more than one content area in the assessments used.

Participants: Participants were most frequently students, spanning a range of grade levels from K-12 to college/university students, although several studies included educators and parents as participants, in various combinations, as well. Studies varied in the number of participants; some studies included fewer than 20 participants, whereas other studies involved tens of thousands of participants.

Disability categories: Learning disabilities were the most common disabilities among participants in the research, accounting for over half of the studies. Attention problems and emotional-behavioral disabilities were the next most commonly studied. Low-incidence disabilities were included in about 40% of the studies.

Accommodations: Presentation accommodations were the most frequently studied category, with the Read Aloud accommodation being the most studied within this category (and across categories). Other commonly studied accommodations included Computerized Administration and Extended-time. A small number of studies, about one-eighth, analyzed relatively uncommon or unique accommodations from various categories.

Findings: Most of the oral presentation and computer administration accommodations empirically tested showed positive effects on test scores. In addition, the read-aloud accommodation did not alter the construct being tested. Among studies of perceptions of different accommodations, students often indicated a preference for one accommodation over others, whereas educator preferences about accommodations use were mixed. The most broadly supported research finding was that accommodations provided during testing did not alter the academic constructs tested--including mathematics, reading, science, and writing.

Limitations: Researchers often cited small sample size as well as a general lack of representativeness on age, grade level, and race as primary limitations of their research. Methodological issues such as the use of bundled (vs. individually administered) accommodations and non-random sampling of participants were also mentioned as limitations.

Directions for future research: A number of promising suggestions were noted, particularly for varying or improving on research methods to test the effects of specific accommodations, improving the representativeness of samples, and improving test development practices to reduce the need for accommodations. In many cases, researchers also found that the results of their studies generated many suggestions for further investigation, such as concurrent validity studies using other measures.

The studies in 2009-2010 showed several similarities to previous research, especially the 2007-2008 studies examined in the previous accommodations research review. However, there were also several differences, or shifts. Research on science assessment accommodations increased, while research on reading assessment accommodations decreased. Multi-purpose study designs rose, and accordingly, more studies employed multiple data collection methods and instruments. The test performance of elementary and middle school students received increased attention. The number of accommodations receiving focused examination--including common and unique accommodations--expanded to 10 in the current review. Studies measuring the impact of the extended-time accommodation decreased, and there was a small increase in studies examining response accommodations. Further, students with disabilities were reported to have benefited from extended-time in half of the relevant studies, and not to have benefited in the other half. Research provided more support for the benefits of computerized administration, along with demonstrated score equivalency with and without this accommodation, indicating no threat to academic construct validity. In fact, attention to the effects of accommodations on construct validity has increased in general, and only 2 of 21 separate findings indicated that academic constructs differed between the accommodated and non-accommodated testing conditions.


Overview

Federal legislation has spurred states to include all students in statewide assessment, and vast improvements in inclusion have taken place over the past decade. For many students with disabilities, access to tests necessitates the provision of assessment accommodations. As the use of accommodations has increased, there has been a concurrent need to attend to the implementation of accommodations and to ensure the validity of results when accommodations are used. States look to educational research for answers about which accommodations have proven successful in increasing the validity of results for students with disabilities. Often this effort means looking for increased scores for students with disabilities, along with evidence that the constructs measured or the validity of inferences that can be drawn from results are not changed.

To synthesize research efforts, NCEO has provided reports on accommodations research completed over time. The time periods included 1999-2001 (Thompson, Blount, & Thurlow, 2002), 2002-2004 (Johnstone, Altman, Thurlow, & Thompson, 2006), 2005-2006 (Zenisky & Sireci, 2007), and 2007-2008 (Cormier, Altman, Shyyan, & Thurlow, 2010).

The purpose of this document is to provide a synthesis of the research on test accommodations published in 2009 and 2010. The research described here encompasses empirical studies of score comparability and validity studies as well as investigations into accommodations use and perceptions of their effectiveness. Taken together, the current research casts a wide net in exploring the issues surrounding test accommodations practices, with a number of efforts made on key accommodations. Reporting the findings of current research studies was a primary goal of this analysis; a second goal was to identify areas requiring continued investigation in the future.

Review Process

Similar to the process used in past accommodations research syntheses (Cormier, Altman, Shyyan, & Thurlow, 2010; Johnstone et al., 2006; Thompson, Blount, & Thurlow, 2002; Zenisky & Sireci, 2007), a number of sources were accessed to complete the review of the accommodations research published in 2009 and 2010. Specifically, five research databases were consulted, including Educational Resources Information Center (ERIC), PsycINFO, Academic Search Premier, Digital Dissertations, and Educational Abstracts. To confirm the thoroughness of our searches, we used the Web search engine Google Scholar to search for additional research. In addition, a hand-search of 30 journals was completed to ensure that no qualifying study was missed. A list of hand-searched journals is available on the National Center on Educational Outcomes website (www.nceo.info/OnlinePubs/AccommBibliography/AccomStudMethods.htm).

Online archives of several organizations were also searched for relevant publications. These organizations include Behavioral Research and Teaching (BRT) at the University of Oregon (http://brt.uoregon.edu/), the National Center for Research on Evaluation, Standards, and Student Testing (CRESST; http://www.cse.ucla.edu/), and the Wisconsin Center for Educational Research (WCER; http://www.wcer.wisc.edu/testacc).

The initial search was completed in December 2010. A second search was completed in April 2011 to ensure that all articles published in 2009 and 2010 were found and included in this review. Within each of these research databases and publication archives, we used a sequence of search terms. The terms searched for this review were:

  • standardized (also large-scale, state, standards-based) test (also testing) changes
  • standardized (also large-scale, state, standards-based) test (also testing) modification(s)
  • standardized (also large-scale, state, standards-based) test (also testing)
  • accommodation(s)
  • test changes
  • test modifications
  • test accommodations

Many of these search terms were used as delimiters when searches yielded large pools of documents irrelevant to the review.

The research documents from these searches were then considered for inclusion in this review with respect to several criteria. First, the decision was made to focus only on research published or defended in doctoral dissertations in 2009 and 2010. Second, the scope of the research was limited to investigations of accommodations for regular assessment (hence, articles specific to alternate assessments, accommodations for instruction or learning, and universal design in general were not part of this review). Third, research involving English language learners (ELLs) was included only if the target population was ELLs with disabilities. Fourth, presentations from professional conferences were not searched or included in this review, based on the researchers' criterion of including only research that would be accessible to readers and that had gone through the level of peer review typically required for publication in professional journals or by a doctoral committee. (This criterion was first implemented in the 2007-2008 review.) Finally, to be included in the online bibliography and summarized in this report, studies needed to involve (1) experimental manipulation of an accommodation, (2) investigation of the comparability of test scores across accommodated and non-accommodated conditions, or (3) examination of survey results on teachers' knowledge and/or perceptions of accommodations.


Results

The results of our analyses of the 48 studies published from January 2009 through December 2010 are presented in substantial detail. We report the studies' publication types, the range of research purposes, the types of research approaches, the primary and secondary sources of data collection, and the data collection methods and instruments. We also identify the academic content areas covered in the research. We describe research participants in terms of whether they were students, educators, or parents; their ages or grade levels; sample sizes; disability status; and disability categories. We report the types of accommodations studied, and explicate the research findings in terms of the impact of accommodations as well as perceptions about accommodations, incidence of accommodations use, and implementation. Additional sections offer perspectives on accommodations in postsecondary education, the accommodations decision-making process, and the association of accommodations with academic discipline. Finally, limitations and future research directions in the assembled body of research literature are presented as reported by the researchers.

Publication Type

The results of the review process showed a total of 48 studies about accommodations were published during the period from January 2009 through December 2010. As shown in Figure 1, of these 48 studies, 36 were journal articles, 10 were dissertations, and 2 were published professional reports released by research organizations (e.g., National Center on Educational Outcomes, University of Oregon Behavioral Research and Teaching).

Figure 1. Percentage of Accommodations Studies by Publication Type


The total number of studies published on accommodations in 2009-2010 (n=48) increased since the previous report examining accommodations research published in 2007-2008 (n=40). There was also an increase in the number of journal articles (n=25 in 2007-2008; n=36 in 2009-2010), and a slight decrease in the number of dissertations published on accommodations (n=13 in 2007-2008; n=10 in 2009-2010). The increase in journal articles included in this report may be due, in part, to an increased number of journals that published research on accommodations in 2009-2010. The report on accommodations research in 2007-2008 included articles from 19 journals; the articles described in the current report were found in 24 journals.

Purposes of the Research

A number of purposes were identified in the accommodations research published in 2009 and 2010. Table 1 provides a view of the predominant focus of each of these 48 studies. In some cases, a work had only one expressed purpose; this describes 11 of the studies (see Appendix A-1). The majority of studies sought to accomplish multiple purposes. In those cases, we identified the "primary purpose" according to the title of the work or the first-mentioned purpose in the text of the work.

Table 1. Primary Purpose of Reviewed Research

Purpose                                                        Number of Studies

Compare scores                                                        15
    only students with disabilities (7 studies)
    only students without disabilities (0 studies)
    both students with and without disabilities (8 studies)
Study/compare perceptions and preferences about use                   11
Compare test items                                                     6
Evaluate test structure                                                5
Summarize research on test accommodations                              4
Report on implementation practices and accommodations use              3
Investigate test validity                                              3
Identify predictors of the need for test accommodations                1
Discuss issues                                                         0

Total                                                                 48

The most common primary purpose for research published during 2009-2010 was to report on the effect of accommodations on test scores (31%), through comparing scores of students who received accommodations to those who did not. The next most common primary purpose was studying perceptions of the accommodations and preferences between or among a small number of accommodations of a certain type (23%). Other primary purposes included comparing test items, which refers to whether item difficulty or other item-specific content validity issues changed when test format changed from print-based to electronic (e.g., Kim & Huynh, 2010), or to audio presentation (e.g., Cook et al., 2009), among others. The purpose of evaluating test structure focused on the effects of accommodations on academic constructs. Factor structure was examined by comparing the tests with and without accommodations.

We identified the primary purpose of summarizing research in works that were expressly written as literature reviews; for example, Lindstrom (2010) inquired about the impact of different types of accommodations on the mathematics test scores of students with high-incidence disabilities. The purpose of reporting on implementation practices and accommodations use was fairly uncommon as a primary study purpose, yet an example was when Johnstone and his colleagues (2009) inquired about factors that may have affected use of assistive technology.

The investigation of test validity was the primary purpose of only three studies (Elliott et al., 2010; Laitusis, 2010; Lovett et al., 2010). For example, Laitusis (2010) used an external validation measure--teacher ratings of comprehension abilities--in correlational and regression analyses to examine possible connections with comprehension as measured by a standardized test administered both with and without a form of the oral presentation accommodation. The least common primary purpose was to identify predictors of the need for test accommodations, which was the primary focus of one study (Cawthon, 2009) in which relationships across instructional factors and the effect of accommodations use were explored (see Appendix A-2).

Table 2 provides a more detailed view of the body of literature showing the multiple purposes of many studies. For example, some efforts included analyses of score comparisons between students with disabilities and students without disabilities when using accommodations, yet also sought students' comments through survey or interview about their test-taking experience.

Table 2. All Purposes of Reviewed Research

Purpose                                                        Proportion of Studies^a

Compare scores                                                        52%
    only students with disabilities (19%)
    only students without disabilities (2%)
    both students with and without disabilities (31%)
Study/compare perceptions and preferences about use                   40%
Discuss issues                                                        38%
Report on implementation practices and accommodations use             21%
Compare test items                                                    19%
Summarize research on test accommodations                             17%
Evaluate test structure                                               10%
Investigate test validity                                              6%
Identify predictors of the need for test accommodations                2%

a The total of these percentages is >100% due to the multiple purposes identified in most (37) of the studies; 23 of the studies had 2 identified purposes, and 14 of the studies had 3 identified purposes.

The most common single purpose of the 2009-2010 published studies was to demonstrate the effect of accommodations on test scores; this was included in over half of the works (52%). Study approaches either compared test scores of students with disabilities and students without disabilities when using accommodations, or compared test scores of students with disabilities when using and not using accommodations. The former approach was the most common, comprising fully two-thirds of this category of research. An additional study (Lovett et al., 2010) considered the impact of using supports commonly implemented as accommodations--word-processing and extra time--on the quality of essay-based college-level course examinations completed only by students without disabilities. Another purpose we identified in over one-third of the studies was a focus on discussing issues, usually noted when the researchers offered a detailed consideration of a central issue related to accommodations. For instance, Bayles (2009) presented discussion related to instructional and curricular access for students with disabilities, Lazarus and her colleagues (2009) discussed the trend line of accommodations policy development, Freeland and her colleagues (2010) considered training and experience with technology as a possible intervening variable, and Lovett (2010) structured his literature review around answering questions about the extended-time accommodation.

The purpose of reporting on implementation practices and accommodations use was present in about one-fifth of all studies. For instance, in the course of summarizing research about accommodations in technology-supported assessments, Salend (2009) also reported about related accommodations practices. The purpose of comparing test items co-occurred in many studies on comparing scores between accommodated and non-accommodated tests, yet added the focus on analyzing differential item functioning (DIF). For instance, Stone and her colleagues (2010) examined differential benefits of standard print, large-print, and braille formats for students with and without blindness or visual impairments. We made a judgment call as to which of these purposes was predominant for these types of studies, tending to note that comparing scores came first in the study text or encompassed more of the results reporting than comparing items. The purpose of summarizing accommodations research was identified when the researcher included a comprehensive review of literature; other than those studies that were written as literature reviews, examples of the level of comprehensiveness we sought occurred in dissertations where another purpose predominated but a substantive research summary was also completed.

Research Type and Data Collection Source

Just over half of the accommodations research reviewed here used a quasi-experimental research design to gather data on the research purposes. As seen in Table 3, the number of descriptive quantitative research studies decreased slightly in 2010 compared to 2009, while the number of studies using a quasi-experimental design remained about the same. Though few studies were reported to use experimental, longitudinal, or meta-analytic designs, these categories also were rarely included in past reports. The data reported here may reflect an increase in the use of these designs in accommodations research. Furthermore, there appeared to be a large difference between data collection sources, with about twice as many studies using primary versus secondary sources of data overall and within each year. This is a change from the previous report, in which approximately equal numbers of studies used primary and secondary data sources. Primary data sources included actual data collection procedures that researchers undertook to obtain their data. Secondary data collection included the use of archival or extant data.

Table 3. Research Type and Data Collection Source by Year

Research Design                     Data Collection Source              Research Type
                                    Primary          Secondary          Totals
                                    2009    2010     2009    2010

Quasi-experimental                    8       8        4       5           25
Descriptive quantitative              5       4        1       0           10
Descriptive qualitative               3       2        0       2            7
Correlation/prediction                0       1        1       0            2
Experimental                          1       1        0       1            3
Longitudinal                          0       0        0       0            0
Meta-Analysis                         0       0        1       0            1

Year Totals                          17      16        7       8           48

Source Totals Across Years               33               15               48

Data Collection Methods and Instruments

Researchers collected study data through primary or secondary procedures using various methods and tools, as seen in Figure 2. The majority of the research included in this synthesis for 2009-2010 used data acquired through academic content testing. Just over half of the studies employed surveys to gather data. Interviews, observations, and focus groups were used much less frequently. For this analysis, we considered "articles" the method or source for studies that reviewed research, including one study that employed formal meta-analysis. One study used state policies as the data source for its descriptive analyses. Fewer than half of the studies reported using more than one method or tool to gather data.

Figure 2. Data Collection Methods Used in 2009-2010 Research


Note: Of the 48 studies reviewed for this report, 12 reported using two data collection methods, and 5 reported using three data collection methods.

Nearly all of the studies used data collection instruments of one form or another; only four studies did not employ any instruments. Table 4 presents the types of data collection instruments used in studies. Surveys presented items of an attitudinal or self-report nature. Tests were course- or classroom-based. Assessments were statewide or large-scale in scope. Protocols referred to non-academic sets of questions, usually presented in an interview or focus group format. Measures referred to norm-referenced academic or cognitive instruments. All of these instruments were placed into five categories: protocols or surveys developed by study authors, norm-referenced cognitive ability measures, norm-referenced academic achievement measures, state criterion-referenced academic assessments, and surveys or academic tests developed by education professionals or drawn by researchers from other sources. Non-test protocols developed by the author or authors of the studies--the most commonly used instrument type--included performance tasks, questionnaires or surveys, and interview or focus-group protocols, among others. Surveys or academic tests developed by education professionals or researchers used sources outside of the current studies, and were exemplified by attitudinal surveys such as the Attitudes Toward Requesting Accommodations (ATRA) scale, by subsets of items drawn from released or otherwise-available pools such as the National Assessment of Educational Progress, and by course-content exams. State criterion-referenced assessments included those of Georgia, South Carolina, Texas, and Wisconsin, as well as some from states that remained unidentified in the research. Norm-referenced academic achievement measures included the Gates-MacGinitie Reading Test (GMRT). Norm-referenced cognitive ability measures included the Test of Silent Word Reading Fluency (TOSWRF), among others. A substantial minority--10 studies in all--used instrumentation of more than one kind.
Additionally, a few studies each used multiple instruments, often of the same kind (Laitusis, 2010; Logan, 2009; Lovett et al., 2010; Parks, 2009). Five instruments were used in more than one study: the Attitudes Toward Requesting Accommodations (ATRA) survey, the Principles and NCTM Standards for School Mathematics test, the Gates-MacGinitie Reading Test (GMRT), the Woodcock-Johnson III Tests of Academic Achievement, and the South Carolina Palmetto Achievement Challenge Test (SC PACT). We present a complete listing of the instruments used in each of the studies in Appendix C, including the related studies that served as sources for these instruments, when available.

Table 4. Data Collection Instrument Types

Instrument Type                                                          Number of Studies

Non-academic protocols or surveys developed by study author/s                  19
Surveys or academic tests developed by professionals or researchers
using sources outside of current study                                         17
State criterion-referenced assessments                                         11
Norm-referenced academic achievement measures                                   8
Norm-referenced cognitive ability measures                                      2
None                                                                            4
Multiple (types)                                                               10

Content Area Assessed

A number of studies published during 2009-2010 focused on accommodations used in certain academic content areas. As shown in Table 5, math and reading were the two most commonly assessed content areas. Table 5 also provides a comparison to content areas in NCEO's previous reports on accommodations (Cormier et al., 2010; Zenisky & Sireci, 2007). In general, the emphasis on reading and math is consistent across reviews. The number of studies on writing, social studies, and psychology has remained fairly consistent since 2005. An increase in science studies is apparent across years. There were no studies citing Civics/US History as a content area in the 2007-2008 and 2009-2010 reports. All studies published in 2009-2010 specified a content area. This is a change from past reports, in which at least one study did not cite the content area studied.

Table 5. Academic Content Area Assessed Across Three Reports

Content Area Assessed      2005-2006^a    2007-2008^b    2009-2010^c

Mathematics                     17             15             20
Reading                         14             18             16
Writing                          4              4              3
Other Language Arts^d            9              4              4
Science                          1              3              7
Social Studies                   1              1              2
Civics/US History                1              0              0
Psychology                       1              1              1
Not Specific                     7              1              0
Multiple Content                14             10             13

a Studies in 2005-2006 including examinations of more than one content area ranged in number of areas assessed from 2 to 6.

b Studies in 2007-2008 including examinations of more than one content area ranged in number of areas assessed from 2 to 4.

c Studies in 2009-2010 including examinations of more than one content area ranged in number of areas assessed from 2 to 5.

d Detailed descriptions of what constituted 'Other Language Arts' for each of the four studies from 2009-2010 can be found in Appendix C, Table C-2.

Research Participants

Researchers drew participants from differing roles in education (see Figure 3 and Appendix D, Table D-1). A large majority studied only students--32 of the 48 studies from 2009-2010. The next largest participant group studied was 'educators only,' describing or analyzing the educator perspective on accommodations. Additional data are reported about combinations of participant groups, as well as noting that some studies did not specify participants; these were usually the topical literature review documents.

Figure 3. Types of Research Participants


Table 6 shows details about the size and composition of the participant groups in the research studies published during 2009 and 2010; this information is displayed in more detail by study in Appendix D. Sample sizes varied from 12 (Mastergeorge & Martinez, 2010) to 61,270 (Anjorin, 2009). In 2009-2010, there were more studies in which at least 50% of the participants were people with disabilities (n=17) than studies in which at least 50% of the participants were people without disabilities (n=15). Eleven studies examined participant groups composed primarily of people with disabilities, reported in the 75-100% column; in fact, 10 of these 11 studies focused only on students with disabilities. Most studies involving participants with disabilities had samples of 25-299, and only 2 such studies had 1,000 or more participants. In contrast, studies in which 24% or fewer of the participants had disabilities tended to have 1,000 participants or more. The studies composed mostly of participants without disabilities also included studies focused on educator input and perspectives.

Table 6. Participant Sample Sizes and Ratio of Individuals with Disabilities

Number of Studies by Proportion of Sample Comprising Individuals with Disabilities

| Number of Research Participants by Study | 0-24% | 25-49% | 50-74% | 75-100% | Unavail.¹ | Total |
|---|---|---|---|---|---|---|
| 1-9 | 0 | 0 | 0 | 0 | 0 | 0 |
| 10-24 | 0 | 1 | 1 | 0 | 1 | 3 |
| 25-49 | 0 | 1 | 1 | 3 | 3 | 8 |
| 50-99 | 0 | 0 | 0 | 3 | 0 | 3 |
| 100-299 | 3 | 1 | 0 | 3 | 3 | 10 |
| 300-499 | 0 | 1 | 0 | 0 | 2 | 3 |
| 500-999 | 0 | 0 | 1 | 0 | 1 | 2 |
| 1000 or more | 5 | 3 | 3 | 2 | 1 | 14 |
| Total | 8 | 7 | 6 | 11 | 11 | 43² |

¹ Eleven of the studies did not specify the proportion of participants who had disabilities.

² Five of the studies did not specify the number of participants.

Analyzing the proportions more closely, a finer distinction appears in the two center columns: studies whose samples had somewhat more participants without disabilities (25-49%) and studies whose samples had somewhat more participants with disabilities (50-74%). These two columns have nearly equivalent totals, with 7 and 6 studies respectively. These studies, with relatively similar ratios of people with and without disabilities, tended to examine data from at least 500 participants (n=7), compared with 2 studies with 100-499 participants and 4 studies with 10-49 participants. Finally, about one-fourth of the studies that reported participant numbers did not specify the proportions of participants with or without disabilities; 7 of these 11 studies collected data only from educator participants.

School Level

Research on accommodations published during 2009 and 2010 involved kindergarten through college-aged participants (see Table 7). Previous reports also included research with participants in kindergarten through postsecondary settings (see Appendix D for more detail); however, the separate postsecondary/college category represents a change from past reports.

As seen in Table 7, a plurality of the studies published in 2009 and 2010 focused on middle school students (n=18). Thirteen studies involved elementary school students, and ten involved high school students. About one quarter of the studies (n=12) drew samples from more than one grade-level cluster; most of these studies (about 67%) included relatively large groups of 50 or more participants and relied on secondary data sources (see Appendices B and D). Put another way, these multiple grade-level studies were primarily analyses of extant large-scale assessment data sets, often drawn at the state level. Although not more common than K-12 studies, a noteworthy number of studies examined accommodations use and implementation at the postsecondary/college level. Twelve studies did not involve students as participants.

Table 7. Grade Level of Research Participants

| Education Level of Participants in Studies | Number of Studies |
|---|---|
| Elementary school (K-5) | 13 |
| Middle school (6-8) | 18 |
| High school (9-12) | 10 |
| Postsecondary | 7 |
| Multiple grade-level clusters | 12 |
| Not applicable (no age) | 12 |

Disability Categories

A broad range of disability categories was included in the samples of the 2009-2010 research (see Appendix D for details). As shown in Table 8, seven studies did not specify disability categories of participants, and eight studies did not include students in the sample. Of the remaining 33 studies, the most commonly studied disability category was learning disabilities (n=26); nine of these studies had only participants with learning disabilities. In comparison to the previous reporting period, 2007-2008, the proportion of studies with participants with learning disabilities rose from about three-eighths of the studies to over half. Approximately one third of the 33 studies included participants with an attention problem, an emotional behavioral disability, blindness/visual impairment, or deafness/hearing impairment. The least common disability category was autism, and all of the studies specifying that category also included participants with other categories. Sixteen studies included participant groups with various disabilities, rather than a single specific category. Only eight studies reported participants with "multiple disabilities"; that is, they included participants who each had more than one identified disability.

Table 8. Disabilities Reported for Research Participants

| Disabilities of Research Participants | Number of Studies |
|---|---|
| Learning disabilities | 26 |
| Attention problem | 11 |
| Emotional behavioral disability | 11 |
| Blindness/Visual impairment | 10 |
| Deafness/Hearing impairment | 9 |
| Physical disabilityᵃ | 9 |
| Speech/Language | 7 |
| Intellectual disabilitiesᵇ | 8 |
| Autism | 5 |
| Multiple disabilitiesᶜ | 8 |
| No disability | 11 |
| Not specifiedᵈ | 7 |
| Not applicableᵉ | 8 |

ᵃ Physical disability = mobility impairment and/or impairment with arm use.

ᵇ Intellectual disabilities = students who were referred to as having "mental retardation" in previous reports; this number also includes one European study (Peltenburg et al., 2009) that applied the term "learning disability" to participants who were reportedly ages 8-12 but were identified as appropriate for assessment items at the educational level of grade 2.

ᶜ Multiple disabilities = individual students who were specifically categorized as having more than one disability.

ᵈ Not specified = studies or reviews of studies (3) that did not report or provide detail about the participants' disabilities.

ᵉ Not applicable = documents that had only non-students as participants; this includes an NCEO policy review.

Types of Accommodations

The number of times specific categories of accommodations were included in 2009-2010 published research is summarized in Table 8. Presentation accommodations were the most frequently studied category (n=28); within this category the most common accommodations were read-aloud (n=20) and computer administration (n=9). The next most frequently studied category was response accommodations, in which computer administration (n=9) was again the most common accommodation. It should be noted that the computer administration accommodation fits into three categories: presentation, equipment/materials, and response. Several studies (n=15) analyzed accommodations from more than one category. Three studies--Bayles (2009), Bublitz (2009), and Mastergeorge and Martinez (2010)--examined accommodations naturalistically, as identified in students' IEPs, without the researchers specifying the accommodations. One study (Altman et al., 2010) also examined accommodations naturalistically identified in students' IEPs, but these were too numerous to list, and their specific effects on score data were not the central focus of the study. A complete listing of accommodations studied is provided in Appendix E.

Table 8. Accommodations in Reviewed Research

| Accommodation Category | Number of Studies |
|---|---|
| Presentation | 28 |
| Equipment/Materials | 10 |
| Response | 19 |
| Timing/Scheduling | 16 |
| Setting | 9 |
| Multiple accommodations | 15 |

Research Findings

The findings of the body of research literature on accommodations published in 2009-2010 are summarized in Tables 9-19. We present information according to the nature of the studies, in keeping with their varying purposes and focuses. The findings include perceptions about accommodations, including those of student test-takers as well as educators and other stakeholders, primarily parents. We summarize the findings of the research on specific accommodations, including read-aloud, computerized administration, extended-time, calculator, and aggregated sets of accommodations commonly called "bundles." We also summarize the findings on unique accommodations--those examined in only one study each--including scribing, word-processing, a virtual manipulative tool, a resource guide modification, American Sign Language (ASL) via avatar, and braille and large-print. Separate summaries of findings address varying implementation conditions as well as the incidence of use of various accommodations across large data sets. The findings from studies in postsecondary educational contexts, which numbered about 6-7 in each of the 2005-2006, 2007-2008, and 2009-2010 periods, receive separate attention. We also report separately on accommodations decision making, addressed by five studies. This report also presents findings by academic content area: math, reading, writing, other language arts, science, and social studies. In Appendix F, we provide substantial detail at the individual study level.

Impact of Accommodations

Research examining the effects of accommodations on the assessment performance of students with disabilities comprised 34 of the studies published in 2009 and 2010 (see Table 9; see also Appendix F, Tables F-1 to F-6 for details about each study of this type). In a continuing trend, oral administration, or the "read-aloud" accommodation, was the single most investigated accommodation in 2009-2010, appearing in nearly one-third of the accommodation-specific studies (n=11). Five of the studies found that the academic construct was not altered by the inclusion of read-aloud to support test-takers. Three studies indicated that read-aloud provided a differential boost for students with disabilities in comparison with students without disabilities, two studies showed that read-aloud helped improve performance for all students, and one study showed that it helped improve scores of students with disabilities in comparison to their scores without read-aloud. One of the studies finding differential boost reported the effects of read-aloud alone, although it actually implemented the read-aloud accommodation in a bundle that included 150% extended-time and recording answers in the test booklet. (See Appendix F, Table F-1.)

Computerized administration was another frequently examined accommodation in the 2009-2010 published literature, with seven studies. The findings were somewhat mixed: some studies affirmed this test mode as supporting construct validity and the needs of students with disabilities, and others found the opposite. Three studies indicated that this accommodation helped improve the performance of students with disabilities, yet one study found no difference in test results for students with learning disabilities. One study indicated that test mode had no effect on the test construct, and one study--a meta-analysis of 81 studies--indicated that computerized presentation of tests was comparable to paper-based assessment in science, but not in reading, other language arts, social studies, or mathematics. Finally, one study (Russell et al., 2009b) examined the relative impact of two different ways of providing ASL--a recording of a human signing and an avatar signing--and found that neither had more impact than the other on the test scores of students with hearing impairments or deafness. (See Appendix F, Table F-2.)

The extended-time accommodation was examined primarily for its impact on the assessment scores of students with disabilities. According to three studies, students with disabilities did not score differently when given extended time to complete testing than when given no additional time. Conversely, two studies indicated that, in comparison with students without disabilities, students with disabilities benefited differentially from extended-time--that is, extended-time provided a "differential boost" for students with disabilities. (See Appendix F, Table F-3.)
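To make the recurring term concrete: "differential boost" is essentially an interaction effect, in which the score gain produced by an accommodation is larger for students with disabilities than for students without disabilities. The sketch below uses entirely hypothetical scores (not data from any reviewed study) to illustrate the comparison.

```python
# Hypothetical illustration of "differential boost" (invented scores,
# not drawn from any study reviewed in this report).

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical mean test scores, without and with extended time.
swd_standard, swd_accommodated = [52, 48, 55], [63, 60, 66]    # students with disabilities
swod_standard, swod_accommodated = [70, 74, 72], [73, 76, 75]  # students without disabilities

# Gain from the accommodation for each group.
gain_swd = mean(swd_accommodated) - mean(swd_standard)
gain_swod = mean(swod_accommodated) - mean(swod_standard)

# A positive difference of gains indicates a differential boost for
# students with disabilities (here both groups improve, but the gain
# is larger for students with disabilities).
print(gain_swd - gain_swod > 0)
```

In actual studies this comparison is typically tested as a group-by-condition interaction in an analysis of variance or a regression model, rather than as a simple difference of means.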

Effects of the calculator accommodation were explored in three studies. One study found that a graphing calculator provided a differential boost for students without disabilities in comparison with students with disabilities. Another study compared the performance of students with learning disabilities and students with attention deficit/hyperactivity disorder who did or did not receive the calculator accommodation; it found no improvement in performance and no overall decrease in math anxiety for either group--in fact, some individual students with disabilities experienced higher math anxiety when using the calculator accommodation. The third study found no difference in scores between using a graphing calculator and using a four-function calculator, for either students with disabilities or students without disabilities. (See Appendix F, Table F-4.)

Two studies scrutinized the effects of different aggregated sets of accommodations--also called accommodations bundles. One study combined unique extended-time and unique read-aloud approaches, comparing accommodated and standard administrations for students with and without disabilities, and reported that the accommodations package helped improve the scores of all students--both those with and those without disabilities--though not consistently across both accommodations. Another study compared performance under individually recommended accommodations--IEP-specified accommodations for students with disabilities and teacher-recommended accommodations for students without disabilities--with performance under a standard package of accommodations (read-aloud directions, paraphrased directions, verbal encouragement, and extended-time) for both groups. Most students with disabilities (78%) benefited from accommodations in comparison to the no-accommodations condition, and about half of the scores of students without disabilities (55%) improved, although about the same proportion of the scores of students without disabilities improved under the teacher-specified accommodations as under the standard package. (See Appendix F, Table F-5.)

We categorized six studies as examining the effects of unique accommodations--that is, accommodations included in only one study each. Most of these studies considered accommodations that were novel or otherwise not typical in their design or implementation. For instance, a virtual manipulative tool offered through a computer-based test platform assisted with basic operations for most students with learning disabilities (Peltenburg et al., 2009), as defined in the Netherlands context, in which students ages 8-12 were performing at the level of "end Grade 2" (p. 276). (See Table 9 for detail about the findings of each study; also see Appendix F, Table F-6.)

Table 9. Summary of Research Findings by Specific Accommodation

| Accommodation Studied (total) | Finding | Number of Studies |
|---|---|---|
| Read-aloud (11) | Did not alter the construct being tested | 5 |
| | Provided a differential boost for scores of students with disabilities compared to those of students without disabilities | 3 |
| | Improved performance of all students | 2 |
| | Improved performance of students with disabilities | 1 |
| Computerized administration (7) | Improved performance of students with disabilities | 3 |
| | Did not improve performance of students with disabilities | 1 |
| | Changed the construct being tested | 1 |
| | Did not change the construct being tested | 1 |
| | Two different types did not benefit students with disabilities more in comparison to one another | 1 |
| Extended-time (5) | Did not improve performance of students with disabilities | 3 |
| | Provided a differential boost for scores of students with disabilities compared to those of students without disabilities | 2 |
| Calculator (3) | Provided a differential boost for scores of students without disabilities compared to those of students with disabilities | 1 |
| | Did not improve performance of students with disabilities | 1 |
| | Two different types did not benefit students with disabilities more in comparison to one another | 1 |
| Aggregated set (2) | Had mixed effects on performance of students with disabilities | 1 |
| | Had positive effect on scores for students without disabilities | 1 |
| Partial scribing (1) | Perceptions of students, parents, and teachers varied about familiarity with implementation practices, including the "partial scribing" method, during state English and mathematics assessments | 1 |
| Word-processing (1) | During course examinations, college students typed more words in the essay and speed tasks than they handwrote, but there were no differences in quality measures; in combination with extended-time, word-processed essays increased in length and improved in quality in comparison to handwritten essays | 1 |
| Virtual manipulatives (1) | Improved performance of most students with LD (as defined in Netherlands context) even when this "100 board" was not fully used for every mathematics item | 1 |
| Resource guide (modification) (1) | Had negative effect on scores for grade 4 and grade 7 students with disabilities, and mixed effects on performance for students without disabilities, on state reading assessment | 1 |
| ASL via avatar (1) | Had no different effect on scores for students who were deaf or had hearing impairments, at varying performance levels, in comparison with the ASL accommodation through recorded human interpreters; about 2/3 of test-takers expressed preference for the human interpreter, and 1/3 for the avatar interpreter | 1 |
| Braille and large-print (1) | Had mixed effects on students who were blind or had visual impairments, varying by grade level (grades 4 and 8) and by ELA area (reading and writing), in comparison with students without disabilities | 1 |

Perceptions about Accommodations

Table 10 shows the results of research on perceptions about accommodations. More than half of the studies (n=9) reported on student perceptions, with most of those studies (n=5) finding that students preferred one accommodation of some kind over another--for instance, some students preferred the human ASL interpreter over an avatar (Russell et al., 2009b). Student preferences tended to support computerized test administration over a paper-and-pencil format, according to three studies (Arce-Ferrer & Guzman, 2009; Kingston, 2009; Russell et al., 2009a), although one study (Lee et al., 2010) found the opposite. Students also indicated a complicated view of the modifications in one study (Roach et al., 2010) and of the accommodations in another (Logan, 2009). The perceptions of educators about accommodations were mixed in three studies and primarily positive in one study. The mixed nature of educator perceptions was related in one study to a concern about altering the exam itself (Byrd, 2010); in another, many educators affirmed the inherent value of supporting students through test accommodations, but with reservations (Zhang et al., 2010). One study noted that educators tended to support IEP-specified accommodations, but not accommodations that were not planned in advance (Elliott et al., 2009). One study indicated that educators had a primarily positive view of accommodations, due in part to the fairness in test results that they established (Mastergeorge & Martinez, 2010). One study presented the varying understandings and frames of reference on accommodations of research participants, including students, teachers, and parents (Jordan, 2009). (See Appendix F, Table F-7 for a more detailed explanation of the findings of each study.)

Table 10. Summary of Research Findings on Perceptions about Accommodations

| Study Findings | Number of Studies |
|---|---|
| Student perceptions indicated a preference for one accommodation over others | 5 |
| Educator perceptions were mixed regarding use of accommodations | 3 |
| Participant groups had differing perspectives about accommodations provided | 1 |
| Student perceptions were mixed about the accommodations studied | 2 |
| Student perceptions were mixed about the modifications studied | 1 |
| Findings were inconclusive about students' perceptions of accommodations | 1 |
| Educator perceptions were mostly positive about use of accommodations, and supportive of equal treatment of test results for tests using accommodations | 1 |

Implementation and Use of Accommodations

Table 11 summarizes the findings of the studies (n=10) that reported on the incidence of accommodations use and on implementation-related matters. Most of these findings (n=7) concerned common accommodations in use in various settings and with specific disability categories; for instance, five studies indicated that the state assessments examined most commonly offered small group administration as an accommodation. Findings also focused on the manner in which some accommodations are implemented, with one study reporting on the computer as a medium for different accommodations practices (Salend, 2009), and another on implementation of read-aloud (Lazarus et al., 2009). Finally, one study described the variety of factors associated with the implementation of accommodations, including educator training and knowledge (Bayles, 2009). (See Appendix F, Table F-8 for a more detailed explanation of the findings of each study.)

Table 11. Summary of Research Findings on the Implementation of Accommodations

| Study Findings | Number of Studies |
|---|---|
| The most common accommodation involved small group administration | 5 |
| Common accommodations for students with visual impairments on reading assessments included audio recordings, enlarged print or page, read-aloud by teacher, and magnification tools, as well as tactile graphics on mathematics assessments | 2 |
| Accommodations presented through computer-based platforms have had variations in their implementation | 1 |
| The read-aloud accommodation has had variations in its implementation | 1 |
| Educators have had varying degrees of familiarity with accommodations, depending in part on school grade level | 1 |

Accommodations in Postsecondary Education

Table 12 presents research findings for the nine studies that focused specifically on accommodations in educational settings beyond K-12; this report marks the first time we have separated these findings from the findings for other groups. The studies investigated the effects of accommodations on test performance, test-takers' experiences using accommodations, and stakeholder groups' perceptions of accommodations, along with implementation and decision-making issues. Four studies on the perceptions that postsecondary students with disabilities held about accommodations provided insights into factors related to students' decisions to seek accommodations support in coursework and course examinations--including university size and type (relative enrollment numbers and public or private status), the learning environment (in-person or online), and the nature of the disabilities (visible or invisible to peers or others). Another group of findings pertained to accommodation effects: computerized administration compared favorably to paper-and-pencil format, with extended-time adding complexity to the effects (Lee et al., 2010); word-processed essays composed with extended-time were scored highly, though mitigating elements limited this pattern (Lovett et al., 2010); and students with disabilities completing selected-response course exams performed equivalently to their peers without disabilities (Ricketts et al., 2010). (See Appendix F, Table F-9 for a more detailed explanation of the findings of each study.)

Table 12. Summary of Research Findings on Accommodations at the Postsecondary Level

| Study Findings | Number of Studies |
|---|---|
| Perceptions of university students with disabilities about whether they sought accommodations varied, and were affected by university characteristics, the learning environment, and the relative visibility or invisibility of their disabilities, among other factors | 4 |
| University student performance on course-related exams improved with various accommodations under specified conditions, and students preferred some accommodations over others | 3 |
| University faculty perceptions about accommodations were primarily positive, and students had a sense that their professors were supportive in providing accommodations; faculty perceptions may have been specific to types of accommodations, and were related to personal beliefs about education of students with disabilities, knowledge about legal responsibilities, and institutional support, among other factors | 2 |

Accommodations Decision-making Process

A small number of studies (n=5) provided insight into the nature of, and factors related to, the process of selecting accommodations (i.e., accommodations decision making); the research findings are presented in Table 13. Two of these studies (Bublitz, 2009; Mariano et al., 2009) focused only on accommodations decision making, whereas the other three (Altman et al., 2010; Cawthon, 2010; Lovett, 2010) also reported findings beyond those pertaining to decision making. Three studies specifically examined factors that influence decision making. For instance, one study (Mariano et al., 2009) compared educator training on two different decision-making models and found that educators trained with one model recommended significantly more presentation accommodations than educators trained with the other. Two studies relayed educators' conscious considerations in selecting accommodations. For instance, one study (Cawthon, 2010) identified the pieces of evidence that educators of students who are deaf or hard-of-hearing used in decision making. (See Appendix F, Table F-10 for a more detailed explanation of the findings of each study.)

Table 13. Summary of Research Findings on Accommodations Decision-making Processes

| Study Findings | Number of Studies |
|---|---|
| Researchers identified the considerations that most influenced, and those that did not influence, accommodations selection decisions | 3 |
| Educators reported on their considerations in making accommodations selection decisions | 2 |

Accommodations by Academic Content Assessments

For the first time in this report, we analyzed findings according to academic content area. This focus reflects a recognition that many accommodations are associated with specific academic content: for example, calculators for math and science assessments, and word-processing for writing assessments or for constructed responses on reading, other English language arts, and science assessments. Some accommodations, such as oral administration, may be presented differently depending on the academic construct being assessed.

We present findings for each content area in order of the frequency with which the content areas were identified in the set of 48 research studies reviewed: 27 findings from 20 studies in mathematics, 20 findings from 16 studies in reading, 7 findings from 7 studies in science, 4 findings from 4 studies in other language arts, 3 findings from 3 studies in writing, and 2 findings from 2 studies in social studies (see Figure 4). The analyses of findings for each content area are the same as those we employed earlier in this report, covering the impact of accommodations on assessment performance, perceptions about accommodations, construct validity of accommodated assessments, and matters of implementation and instances of accommodations use.

Figure 4. Research Findings by Content Area

[bar chart]

Note: The number of findings does not equate with the number of studies, because many studies reported more than one finding.

Table 14 displays the 27 research findings for accommodations in the 20 studies of mathematics assessments, sorted by frequency according to the nature of the findings. The most common individual finding, noted in five studies, was that accommodations did not change the mathematics construct or constructs assessed. These studies focused on over 10 accommodations, including calculator, read-aloud directions, read-aloud questions, alternate test setting, extended-time, computerized administration, small group administration, and checking comprehension of directions. Eleven of the math findings were unique; we present them individually in Table 14 and Table F-11, signifying that only one study produced each of these findings.

Twelve studies of mathematics provided insights on the performance of students using accommodations, including one study examining the impact of modifications (Elliott et al., 2010). Half of the performance findings (n=6) resulted from comparisons of scores between students with disabilities and students without disabilities, and these findings diverged widely from one another. Four findings pertained to the differential score increases that accommodations brought to some students in comparison to others; these findings were for teacher-recommended accommodations and a standard accommodations package (Elliott et al., 2009), computerized administration (Russell et al., 2009a), a virtual manipulative tool (Peltenburg et al., 2009), and various modifications (Roach et al., 2010). However, three findings indicated that both students with disabilities and students without disabilities improved when provided supports--such as four-function and graphing calculators (Bouck, 2010) and some specific modifications (Elliott et al., 2010)--and one study using the graphing calculator found that not all students with disabilities improved in scores (Bouck, 2009). Further, two studies found that accommodations did not help students with disabilities improve more than students without disabilities, and one study (Lindstrom, 2010) found that students without disabilities improved more when using read-aloud accommodations than did students with disabilities. Of the remaining two studies, one, a literature review (Lindstrom, 2010), reported complex findings, and the other (Parks, 2009) found that calculator use did not improve test results.

In the other six studies, which compared accommodations use to non-use for students with disabilities, there was more concurrence in the findings. Three studies found that these students' scores were higher when using supports than when not; the supports involved were a virtual manipulative tool (Peltenburg et al., 2009), a set of accommodations offered through an online platform (Russell et al., 2009a), and a set of modifications (Roach et al., 2010). Two studies compared the relative benefit of two accommodations of the same type and found that neither supported students with disabilities more than the other; the accommodations were four-function and graphing calculators (Bouck, 2010), and American Sign Language (ASL) through a recording of a human or avatar signer (Russell et al., 2009b). Finally, only one study found that students with disabilities scored essentially the same whether using accommodations or not (Freeland et al., 2010).

Approximately one-fourth of the findings (n=7) pertained to perceptions about accommodations offered in math testing, and most of the perspectives reported (n=6) were those of the test-takers themselves. In two studies, students with disabilities offered information about their accommodations preferences: accommodations offered through a computer-based test administration platform were preferable to previous testing experiences in which accommodations were offered in non-digital formats (Russell et al., 2009a; Russell et al., 2009b), and a specific version of the American Sign Language (ASL) accommodation--humans signing rather than avatars--was preferred (Russell et al., 2009b). When comparing perceptions about accommodations between students with disabilities and others, two studies found apparent contradictions: students with ADHD and LD experienced higher anxiety during the test than students without disabilities, and sustained that anxiety whether or not they received a calculator accommodation (Parks, 2009), yet students with a typical variety of disabilities had preferences for using calculators similar to those of students without disabilities (Bouck, 2010). Additionally, one study (Jordan, 2009) reported on the differing views of students with disabilities and their educators and parents. Two other studies reported unique findings about the effects of accommodations on students' perspectives (Roach et al., 2010) and educators' perspectives (Mastergeorge & Martinez, 2010).

Finally, two studies reported patterns of use of specific accommodations--one comparing students with disabilities and students without disabilities (Bouck, 2010), and the other comparing students with disabilities and their educators (Schoch, 2010). A single study (Cawthon, 2010) reported findings related to educators' accommodations decision-making processes and accommodations practices. (See Appendix F, Table F-11 for more detailed explanation of findings of each study.)

Table 14. Summary of Research Findings on Accommodations in Mathematics Assessments (from 20 studies)

Study Findings (number of studies in parentheses)

PERFORMANCE (12)

    All Students (6)
        The accommodations and modifications DID NOT provide a differential boost for scores of students with disabilities as compared to those of students without disabilities; all students benefited from the accommodations (2)
        The accommodations provided a differential boost for scores of students without disabilities as compared to those of students with disabilities as a whole; NOT all students with disabilities benefited from the accommodations (1)
        The accommodations provided a differential boost for scores of students with disabilities as compared to those of students without disabilities; all students benefited from the accommodations (1)
        Students with disabilities and students without disabilities who used the accommodations experienced mixed results in comparison to one another and in comparison to those who did not use accommodations (1)
        Students with disabilities and students without disabilities who used accommodations did NOT perform significantly better than those who did not use accommodations (1)

    Students with Disabilities (6)
        Students with disabilities who used the accommodations performed significantly better than those who did not use the accommodations (3)
        Students with disabilities using two different accommodations benefited from neither accommodation more in comparison to the other (2)
        Students with disabilities who used the accommodations DID NOT perform significantly better than those who did not use accommodations (1)

PERCEPTIONS (7)
    Students' and other participant groups' perceptions differed or were mixed regarding the accommodations studied (2)
    Students with disabilities expressed preference for one version of an accommodation over another (2)
    Students with disabilities and students without disabilities indicated similar benefits when using accommodations (1)
    Educators evidenced no bias in rating scores of students with disabilities in comparison with scores of students without disabilities (1)
    Students with disabilities indicated benefits when using accommodations (1)

VALIDITY (5)
    The accommodations DID NOT change the construct/s (5)

INCIDENCE OF USE (2)
    Students with and without disabilities reported similar accommodations use patterns (1)
    Educators and students reported about their accommodations use patterns (1)

DECISION MAKING (1)
    Educators reported about their decision-making processes and accommodations practices for assessments (1)

Note: Some of these 20 studies reported support for more than one category of findings.

Table 15 details the 18 findings for accommodations in reading assessments. Similar to the math findings, the point of greatest agreement among researchers, supported by six studies, was that the accommodations on the reading assessments did not change the academic construct or constructs being tested. The accommodations examined included read-aloud (Cook et al., 2009; Cook et al., 2010; Snyder, 2010), computerized administration (Kingston, 2009), various state-allowed accommodations (Roxbury, 2010), and braille (Stone et al., 2010).

Seven studies reported findings about the performance of students using test supports, including three studies examining the impact of modifications. Most of the performance findings (n=5) resulted from comparisons of scores between students with disabilities and students without disabilities, and these findings largely converged on the point that accommodations like read-aloud (Cook et al., 2009) or modifications (Elliott et al., 2010; Randall & Engelhard, 2010; Roach et al., 2010) supported students both with and without disabilities. Across the data of these four studies, there was only one instance of differential benefit when comparing students with disabilities and students without disabilities: the grade 3 students with disabilities using the read-aloud modification on the reading assessment improved more than their peers without disabilities, although this differential benefit was not present for the grade 7 students in the same study (Randall & Engelhard, 2010). In contrast, two studies found that accommodations benefited students with disabilities more than students without disabilities; these studies examined a bundled set of accommodations (Fletcher et al., 2009) and read-aloud for students with learning disabilities (Laitusis, 2010). The remaining performance study, comparing accommodations use and non-use for students with disabilities, found that students with visual impairments scored essentially the same whether or not they used various unspecified access technologies (Freeland et al., 2010).

A small number of the findings (n=3) pertained to perceptions about accommodations offered on reading assessments. These studies generally indicated that perceptions were mixed; that is, use of accommodations was not met with only positive attitudes and feelings, and each of the three studies demonstrated more complex results. One study (Jordan, 2009) reported on the views of students with disabilities and their educators and parents. In another study (Logan, 2009), the researchers found unexpected results: students whose questionnaire responses reflected a set of motivations or attitudes termed "achievement goals" did not have positive experiences using accommodations on the reading assessment. The last study of this type (Roach et al., 2010) yielded study-specific findings about students' preferences for or against the available accommodations.

Finally, one study (Roxbury, 2010) reported patterns of use of accommodations, comparing students with disabilities and students without disabilities. Another study (Cawthon, 2010) reported findings related to educators' accommodations decision-making processes and accommodations practices. (See Appendix F, Table F-12 for more detailed explanation of findings of each study.)

Table 15. Summary of Research Findings on Accommodations in Reading Assessments (from 16 studies)

Study Findings (number of studies in parentheses)

PERFORMANCE (7)

    All Students (6)
        The modifications and accommodations assisted students with disabilities and students without disabilities in improving assessment performance (4)
        The accommodations provided a differential boost for scores of students with disabilities as compared to those of students without disabilities; all students benefited from the accommodations (1)
        Students with disabilities using two different accommodations benefited from one accommodation more in comparison to the other, and the same accommodation provided a differential boost for scores of students with disabilities as compared to those of students without disabilities (1)

    Students with Disabilities (1)
        Students with disabilities who used accommodations did NOT perform significantly better than those who did not use accommodations (1)

VALIDITY (6)
    The accommodations DID NOT change the construct/s* (6)

PERCEPTIONS (3)
    Students and other participant groups differed or were mixed about perceptions regarding the accommodations studied (3)

INCIDENCE OF USE (1)
    Students not provided accommodations (without disabilities) performed better than students provided accommodations (with disabilities) (1)

DECISION MAKING (1)
    Educators reported about their decision-making processes and accommodations practices for assessments (1)

Note: Some of these 16 studies reported support for more than one category of findings.

* This finding indicates that read-aloud served not as a modification on the reading test, but rather was an accommodation.

Table 16 presents the findings for the science assessment accommodations. The most common individual finding, supported by four studies, was that accommodations did not change the science construct or constructs assessed. The accommodations examined included read-aloud (Kim et al., 2009a; Kim et al., 2009b), computerized administration (Kingston, 2009), and various state-allowed accommodations (Roxbury, 2010). The remaining three studies each reported unique findings that did not corroborate one another. Two of these findings pertained to the effects of accommodations on performance. In a comparison of the science scores of students with disabilities and students without disabilities, the standard accommodations package helped both groups improve their scores, yet students with disabilities benefited differentially more than their peers without disabilities (Elliott et al., 2009). In a comparison of the assessment results of students with disabilities using accommodations with those not using accommodations, both groups had similar results, indicating no benefit of access technologies on a computer-based test (Freeland et al., 2010). The last study (Cawthon, 2010) provided insights into the accommodations decision-making process for special educators. (See Appendix F, Table F-13 for more detailed explanation of findings of each study.)

Table 16. Summary of Research Findings on Accommodations in Science Assessments (from 7 studies)

Study Findings (number of studies in parentheses)

VALIDITY (4)
    The accommodations DID NOT change the construct/s (4)

PERFORMANCE (2)

    All Students (1)
        The accommodations provided a differential boost for scores of students with disabilities as compared to those of students without disabilities; all students benefited from the accommodations (1)

    Students with Disabilities (1)
        Students with disabilities who used accommodations did NOT perform significantly better than those who did not use accommodations (1)

DECISION MAKING (1)
    Educators reported about their decision-making processes and accommodations practices for assessments (1)

Table 17 shows findings of four studies on accommodations in assessments of "other language arts," an academic construct that explicitly excludes reading and writing. This narrow body of literature yielded five separate findings, most of which addressed construct validity, though with divergent results. Two studies (Finch et al., 2009; Kim & Huynh, 2010) indicated that the accommodated tests did not change the academic constructs tested by the non-accommodated assessment, while one study (Kingston, 2009) indicated that accommodations changed the construct. One study (Kim & Huynh, 2010) examined the impact of accommodations on performance, comparing the scores of students with disabilities and students without disabilities under both accommodated and non-accommodated conditions. This study found that students with disabilities did not benefit from using accommodations, while students without disabilities benefited to a minimal yet statistically significant degree. Finally, one study (Mastergeorge & Martinez, 2010) demonstrated that educators held a primarily positive view of accommodations, due in part to the fairness they brought to test results. (See Appendix F, Table F-14 for more detailed explanation of findings of each study.)

Table 17. Summary of Research Findings on Accommodations in Other Language Arts Assessments (from 4 studies)

Study Findings (number of studies in parentheses)

VALIDITY (3)
    The accommodations DID NOT change the construct/s (2)
    The accommodations changed the construct/s (1)

PERFORMANCE (1)

    All Students (1)
        Students with disabilities who used accommodations DID NOT perform significantly better than those who did not use accommodations; students without disabilities who used accommodations performed significantly better yet at a minimal increase over those who did not use accommodations (1)

PERCEPTIONS (1)
    Educators evidenced no bias in rating scores of students with disabilities in comparison with scores of students without disabilities (1)

Note: Some of these 4 studies reported support for more than one category of findings.

Table 18 shows findings of three studies on accommodations in writing assessments. The majority of the findings (n=2) pertain to construct validity, converging to indicate that accommodations did not change the writing constructs assessed (Cook et al., 2010; Stone et al., 2010). One study (Lovett et al., 2010) compared the scores of students with disabilities who tested with accommodations to those of students who did not. This study indicated that students with disabilities showed no improvement when using accommodations. (See Appendix F, Table F-15 for more detailed explanation of findings of each study.)

Table 18. Summary of Research Findings on Accommodations in Writing Assessments (from 3 studies)

Study Findings (number of studies in parentheses)

VALIDITY (2)
    The accommodations DID NOT change the construct/s (2)

PERFORMANCE (1)

    Students with Disabilities (1)
        Students with disabilities who used accommodations did NOT perform significantly better than those who did not use accommodations (1)

The fewest findings were reported about accommodations used in social studies assessments (see Table 19). Both of the studies reporting these findings (Freeland et al., 2010; Kingston, 2009) were not solely focused on analyzing data from social studies tests; rather, they included this content area along with assessment scores from tests in math, reading, and science, among others. Accommodations did not help students with disabilities improve their scores over those they earned without accommodations--in fact, students with visual impairments and students with total blindness scored higher without access technologies than with them (Freeland et al., 2010). Pertaining to construct validity, the other study--a meta-analysis of two studies with social studies scores--found that test-takers scored higher on computer-administered tests than on tests presented in a standard administration, indicating that these assessments were testing qualitatively different academic constructs, although with a low effect size (Kingston, 2009). (See Appendix F, Table F-16 for more detailed explanation of findings of each study.)

Table 19. Summary of Research Findings on Accommodations in Social Studies Assessments (from 2 studies)

Study Findings (number of studies in parentheses)

PERFORMANCE (1)

    Students with Disabilities (1)
        Students with disabilities who used accommodations did NOT perform significantly better than those who did not use accommodations (1)

VALIDITY (1)
    The accommodations changed the construct/s (1)

Across the academic content areas, accommodations research from 2009 through 2010 supported a few consistent findings. Regarding construct validity, the literature indicated that accommodated tests did not differ from non-accommodated tests in the nature of the content being tested, as supported by 19 of 21 findings. Regarding the impact of accommodations on assessment outcomes, the areas of convergence in the findings did not seem to cross academic content areas, at least beyond mathematics and reading. An exception to this pattern was that students with disabilities did not perform significantly better when provided accommodations than when not provided them on assessments of other language arts, writing, and social studies; however, these findings were reported by relatively few studies (n=3), so they are not necessarily strong conclusions. Instead, most of the findings were narrowed to content areas and were affected by limited numbers of studies. Findings about perceptions of accommodations varied, and only three academic content areas were studied--mathematics, reading, and other language arts. Research areas that had limited findings included accommodations decision making by educators--represented by only one study (Cawthon, 2010)--and incidence of accommodations use, which addressed only mathematics (n=2) and reading (n=1).

Limitations and Future Research

As is often the case in research, many of the studies reviewed (n=38) discussed limitations in order to provide context for their results. As seen in Table 20, limitations were summarized under five broad categories. A study was counted for a given category when it provided at least one limitation under that category. A more comprehensive description of the limitations of each individual study is available in Appendix G.

The most commonly cited category of limitations in the research was methodology, where frequently the use of bundled (vs. individually administered) accommodations and non-random sampling of participants were referenced. Many authors also identified sample characteristics as a limitation to the research. Specifically, common limitations were sample size and the representativeness of the samples obtained on variables such as age, grade level, and race. More detailed information regarding specific limitations of each study is also available in Appendix G-1.

Table 20. Categorized Limitations Identified by Authors

Limitation Category (number of studies in parentheses)
    Methodology (29)
    Sample Characteristics (22)
    Results (12)
    No Limitations Listed (10)
    Test/Test Context (8)
    Other (6)

Note: Twenty-six studies included more than one category of limitations, represented in 2 to 4 limitations categories.

As would be expected, methodology and sample characteristics were also often highlighted as areas needing to be addressed in future research, as seen in Table 21. However, researchers more often pointed to the test or test context used in the study as having implications for future research than they cited it as a limitation. More detailed information about suggestions for future research is available in Appendix G-2.

Table 21. Categorized Areas of Future Research Identified by Authors

Future Research Category (number of studies in parentheses)
    Methodology (26)
    Sample Characteristics (15)
    Test/Test Context (15)
    No Future Directions Listed (9)
    Other (6)
    Results (5)

Note: Twenty studies listed directions for future research that fit into multiple categories.


Discussion

Several themes are evident in the research studies published in 2009 and 2010, especially in relation to the research studies from 2007 and 2008, which were reported in the previous NCEO accommodations research review (Cormier et al., 2010). We address here themes in terms of purposes, research designs, assessment types, study participant characteristics, accommodations, academic content areas and research findings associated with them, and study limitations and future research directions. We conclude with several comments on promising trends overall.

Research Purposes

The nature of the research literature on accommodations has continued to change. Many of the studies in 2009-2010 combined examination of the effect of accommodations on performance with examination of their effect on assessment constructs. Many also combined quantitative and qualitative research on the impact of accommodations on students with disabilities, examining accommodations' effects on test scores as well as their effects on the perceptions of test-takers. There were several differences between the purposes identified in the 2007-2008 studies and those in the 2009-2010 studies. First, a much lower proportion of the current set of studies focused on comparing scores: 31% in 2009-2010 versus 63% in 2007-2008. About one-fourth (23%) of the current set of studies focused on examining perceptions and preferences about accommodations use, a much larger proportion than the 13% of the 2007-2008 studies. The proportion of studies that described implementation practices and accommodations use was 20% of 2007-2008 studies, but a much lower 6% of 2009-2010 studies. Test validity was the purpose for a similarly low proportion in both reports: 6% for 2009-2010 and 3% for 2007-2008.

Research Types and Data Collection Sources

The research studies in 2009-2010 were experimental (6%) or quasi-experimental (52%) in design, together a larger proportion than in 2007-2008. On the other hand, a much smaller proportion of studies used a descriptive quantitative design in 2009-2010 (21%) compared with 2007-2008 (55%). Further, just over one-half of the studies in 2007-2008 reported using primary data sources--that is, data collected by researchers rather than drawn from extant data--whereas in 2009-2010, the data came from primary sources for over two-thirds of the studies.

Data Collection Methods

Data collection methods generally were quite different between studies published in 2007-2008 and 2009-2010. With multiple purposes, there often was more than one data collection method and more than one instrument used. Over one-third of the 2009-2010 studies used more than one data collection method. Although the most common data collection method was content testing in both 2007-2008 and 2009-2010, there was a large difference in the use of surveys, from about one-fifth of 2007-2008 studies to over half of 2009-2010 studies. This shift seemed to be related to researchers' efforts to uncover students' and educators' experiences during the implementation of accommodations. Other methods used often in 2009-2010 included interviews and observations.

Participants

Grade Level

Research on accommodations has varied in its focus on different grade-level clusters--elementary, middle school, and high school--and the 2009-2010 studies differ from the 2007-2008 studies on this variable as well. First, a larger proportion of the research published in 2009-2010 analyzed accommodations across more than one grade-level cluster than in 2007-2008. Second, although the proportions of high school and postsecondary participants were each about the same across the two periods, larger proportions of the studies published in 2009-2010 included elementary participants and middle school participants than of the research published in 2007-2008.

Disability Categories

The disability categories of study participants with disabilities were also somewhat different in 2009-2010 compared to 2007-2008. The overall proportion of participants in many disability categories increased. The proportion of studies with participants with learning disabilities increased from 38% in 2007-2008 to 54% in 2009-2010. Similar increases occurred for participants with attention problems: from 8% in 2007-2008 to 23% in 2009-2010. Additional increases, reported here in descending order of difference, were for blindness/visual impairment, from 6% in 2007-2008 to 21% in 2009-2010; for deafness/hearing impairment, from 5% to 19%; for emotional behavioral disability, from 10% to 23%; and for intellectual disabilities, from 5% to 17%. Some of these increases might be due to more researchers providing data about their participants' disability categories; a smaller proportion of studies in the current review failed to specify participant disability categories. Another possible source of the increase is that a larger proportion of individual participants were identified as having more than one disability: 12% in 2007-2008 and 17% in 2009-2010. The increases in the proportion of participants' disabilities did not seem to be due to changes in the proportion of studies using large secondary data sets, because fewer studies (31%) in 2009-2010 used this type of data, in comparison with 45% of the 2007-2008 studies. More studies both collected disability data and utilized comparative procedures to measure the impact of accommodations use by participants with various disabilities, yielding more findings about the effects of specific accommodations for students with specific disabilities.

Accommodations

The 2009-2010 studies included 10 specific accommodations, in four of the five accommodation categories. In the presentation category, read-aloud, braille, and large-print were represented. In the equipment/materials category, computerized administration, calculator, and sign-language recording were included. In the response category, partial-scribe, word-processing, and virtual manipulative were represented. In the timing/scheduling category, extended-time was the focus of research. In comparison, in 2007-2008 studies, four specific accommodations were examined: read-aloud and segmenting text (presentation), computerized administration (equipment/materials), and extended-time (timing/scheduling).

The specific accommodations were examined through a higher number of studies using primary data sources in 2009-2010 (33 of 48 studies--69%), compared to 2007-2008 (22 of 40 studies--55%). There seemed to be some shifts in attention to specific accommodations. The read-aloud accommodation and computerized administration both maintained the same proportion of studies from 2007-2008 to 2009-2010, at 23% and 15% respectively. Examination of extended-time decreased from 25% of the 40 studies in 2007-2008 to 10% of the 48 studies in 2009-2010. Aggregated or bundled accommodations were studied less frequently, decreasing from 5 studies (13%) in 2007-2008 to 2 studies (4%) in 2009-2010.

Content Areas and Associated Research Findings

Accommodations for mathematics and reading continued to be the most commonly examined in the 2009-2010 studies, yet attention to accommodations for science assessments seemed to be increasing. The researchers of the 2009-2010 studies showed more interest in investigating accommodations used during science tests, and somewhat more interest in mathematics accommodations, than those involved with the 2007-2008 studies. This difference might be related to the increase in attention to the performance of students with disabilities on statewide science assessments during the 2007-2008 school year--the federally required deadline for implementing science assessments--and to the reporting of data on this performance (Thurlow, Rogers, & Christensen, 2010). On the other hand, slightly fewer of the studies reported on accommodations for reading assessments. Another difference in the studies published in 2009-2010 was that a larger number, and a slightly higher proportion, of them examined accommodations used in more than one content area.

When examining the findings by specific accommodation, some interesting comparisons and contrasts can be observed for the 2009-2010 studies compared to the 2007-2008 studies. In 2009-2010, three findings indicated that read-aloud provided a differential benefit for students with disabilities, and two findings indicated that all students benefited when taking tests using read-aloud. An identical number of study findings in 2007-2008 reported these results. A chief difference between these sets of findings was that only one study (Temple, 2007) involved reading assessments in 2007-2008, whereas most of the 2009-2010 studies (all except Lindstrom, 2010) involved reading.

The impact of computerized administration received new attention in studies published in 2009-2010, with 3 of the 4 relevant studies finding that this accommodation benefited students with disabilities; in 2007-2008, no studies examined the comparative impact of computerized administration. The 2007-2008 studies focused primarily on analyses of potential effects of computerized administration on construct validity, with 5 of 6 relevant studies indicating no problematic effects. The 2009-2010 published studies included a meta-analysis that found computerized delivery changed the academic constructs involved (Kingston, 2009), and another study found that this accommodation changed the constructs of an intelligence test (Arce-Ferrer & Guzman, 2009).

The impact of the calculator accommodation was studied more often in 2009-2010 (three studies) than in 2007-2008 (one study--Sharoni & Vogel, 2007). The findings tended to be more negative in 2009-2010: none of the three studies indicated differential benefits for students with disabilities, and only one study indicated any benefit for students with disabilities (Bouck, 2010).

The 2009-2010 studies of extended-time yielded contested benefits for students with disabilities. Three studies measuring the impact of extended-time--on introductory psychology course exams (Lee et al., 2010), across the many academic content areas of a literature review (Lovett, 2010), and on a writing assessment (Lovett et al., 2010)--found that students with disabilities did not benefit from extended time. Two studies indicated that students with disabilities did differentially benefit from extended-time: on a math assessment (Lindstrom, 2010), and on undergraduate medical course examinations (Ricketts et al., 2010). These studies generally contrasted with those published in 2007-2008, in which effects were generally minimal but more often found on K-12 assessments.

In 2009-2010, there seemed to be increased research attention to the potential for problematic effects of accommodated tests on construct validity. In 2007-2008, there were 8 study findings relevant to construct validity, and in 2009-2010, there were 21 relevant findings. In 2007-2008, at least four findings (about half) indicated that accommodations--including computer administration and read-aloud--offered on math and reading tests affected the construct being tested. In 2009-2010, the studies' results showed that various accommodations did not affect the academic constructs being assessed; that is, the 2009-2010 studies indicated that accommodations were not associated with construct validity concerns. This was the case across academic content areas, including mathematics, reading, science, and writing, as well as for a majority of studies in the other content areas. In fact, accommodations were found to change the constructs in only 2 of the 21 findings examining construct validity. These two findings focused on the same accommodation--computerized administration--in the same study (Kingston, 2009), for the academic constructs of other language arts and social studies. Another study (Arce-Ferrer & Guzman, 2009) found that computerized administration changed the constructs measured on an intelligence test. Further, the read-aloud accommodation was found not to alter the construct being tested in mathematics, reading, and science.

A growing set of studies in academic content areas examined perceptions about using accommodations during testing. In comparison with only 12.5% of all studies published in 2007-2008, perceptions were investigated in 25% of the mathematics studies, 17% of the reading studies, and 20% of the other language arts studies in 2009-2010. Further, all of the 2007-2008 studies examined educators' perceptions of accommodations, whereas about half of the 2009-2010 studies identified student test-takers' perceptions.

Limitations

The most noticeable difference between the accommodations research published in 2007-2008 and that published in 2009-2010 is the increase in researchers identifying methodological issues as limitations. Although the frequency of identifying sample characteristics remained about the same between the two periods, method choices such as having no control group or engaging in non-random sampling were increasingly noted as limitations. Researchers pointed out that the unit of analysis was often the classroom rather than the individual student. Random assignment of research participants to differing conditions--such as testing with accommodations or testing without accommodations--did not occur by participant, but rather was sometimes implemented at the classroom or school level. Further, researchers noted that accommodations were implemented differently across conditions. These decisions meant that the studies were not true experiments but rather used quasi-experimental designs.

Other limitations noted more often in the 2009-2010 studies involved the test itself and the testing context. In some studies, researchers commented that the test used differed in some way from the tests typically administered in participants' schools or districts, or that the test segment presented to participants was not administered under typical conditions. Test context issues also included inconsistency of tests across grade levels, different test forms being used in different accommodated conditions, participants running out of time during test administration sessions, higher than typical rates of missing data, and, in some cases, suspicions that participants did not respond honestly to survey questions.

Researchers also reported more results-oriented limitations. For example, some studies used results that were not truly independent, such as when participants' scores from two academic years were linked to one another. Another concern pertained to analyses in which the effects of accommodations were difficult to distinguish from the effects of students' disabilities. Several limitations from the 2009-2010 studies did not fit the categories used in previous reports. This "other" category included the fact that students' prior knowledge was not documented through alternative data sources, which could have helped clarify any issues with the test and accommodations used, and the fact that researchers could not pinpoint all potential sources of differential item functioning (DIF). Finally, more than twice as many studies in the current report (n=26) cited limitations fitting more than one limitations category, compared with 10 such studies published in 2007-2008.

Future Research

There were a few differences in the future research directions identified in the 2009-2010 studies compared with the 2007-2008 studies. First, a larger proportion of the 2009-2010 studies indicated a need for research with improved methodology, such as investigating the impact and functionality of accommodations through single-subject designs and inquiring about practitioners' knowledge and perceptions of accommodations. Second, a smaller proportion of the studies indicated a need for research aimed at improving results; for example, fewer studies called for replication of results to demonstrate their validity or generalizability. This change may be related to the larger proportion of 2009-2010 studies focused on the validity of accommodations. Finally, more than twice as many studies in 2009-2010 (n=20) identified future research directions, compared with nine studies in 2007-2008.

Trends

Several themes emerged from the literature: a steady rise in research on accommodations for science assessments; increased collection of data simultaneously across grade-level clusters--elementary, middle school, and high school; and increased examination of secondary large-scale data sets at the district and state levels. Another trend was that researchers crafted multi-purpose study designs--that is, test data were collected to measure the impact of accommodations, while survey and interview data were collected about students' experiences in using accommodations. The literature in the current review also continued to attend to students with low-incidence disabilities, including visual and hearing impairments. Additionally, we observed fewer studies on the extended-time accommodation, along with a small increase in studies examining response accommodations.


References

Report References

Barnard, L., Lan, W. Y., & Lechtenberger, D. (2008, March). How student attitudes toward requesting accommodations are related to academic achievement in postsecondary education. Paper presented at the annual meeting of the American Educational Research Association, New York.

Bishop-Temple, C. (2007). The effects of interactive read-alouds on the reading achievement of middle grade reading students in a core remedial program. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 68(10), 4179.

Brown, J. I., Fishco, V. V., & Hanna, G. (1993). Nelson-Denny Reading Test, Form H. Itasca, IL: Riverside.

Cawthon, S., & The Online Research Lab. (2006). Findings from the National Survey on Accommodations and Alternate Assessments for Students who are Deaf or Hard of Hearing. Journal of Deaf Studies and Deaf Education, 11(3), 337–359.

Cormier, D. C., Altman, J. R., Shyyan, V., & Thurlow, M. L. (2010). A summary of the research on the effects of test accommodations: 2007-2008 (Technical Report 56). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

CTB/McGraw-Hill. (1997). TerraNova Multiple Assessment Battery. Monterey, CA: Author.

Cury, F., Elliot, A. J., DaFonseca, D., & Moller, A. C. (2006). The social-cognitive model of achievement motivation and the 2 x 2 achievement goal framework. Journal of Personality and Social Psychology, 90(4), 666-679.

Elliot, A. J., & McGregor, H. A. (2001). A 2 x 2 achievement goal framework. Journal of Personality and Social Psychology, 80(3), 501-519.

Haladyna, T. M., Downing, S. M., & Rodriguez, M. C. (2002). A review of multiple-choice item-writing guidelines for classroom assessment. Applied Measurement in Education, 15(3), 309-334.

Hammill, D. D., & Larsen, S. C. (1996). Test of Written Language (3rd ed.). Austin, TX: Pro-Ed.

Hopko, D. R., Mahadevan, R., Bare, R. L., & Hunt, M. K. (2003). The abbreviated math anxiety scale (AMAS): Construction, validity, and reliability. Assessment, 10(2), 178-182.

Janssen, J., Scheltens, F., & Kraemer, J. (2005). Leerling- en onderwijsvolgsysteem. Rekenen-wiskunde groep 4. Handleiding [Student and education monitoring system. Mathematics grade 2. Teachers guide]. Arnhem, The Netherlands: CITO.

Johnstone, C. J., Altman, J., Thurlow, M. L., & Thompson, S. J. (2006). A summary of research on the effects of test accommodations: 2002 through 2004 (Technical Report 45). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Kaufman, A. S., & Kaufman, N. L. (2004). Kaufman Test of Educational Achievement-Second Edition (KTEA-II) administration and scoring manual. Circle Pines, MN: American Guidance Service.

Lewandowski, L. J., Lovett, B. J., Parolin, R., Gordon, M., & Codding, R. S. (2007). Extended time accommodations and the mathematics performance of students with and without ADHD. Journal of Psychoeducational Assessment, 25(1), 17-28.

Lewandowski, L. J., Lovett, B. J., & Rogers, C. L. (2008). Extended time as a testing accommodation for students with reading disabilities: Does a rising tide lift all ships? Journal of Psychoeducational Assessment, 26(4), 315-324.

MacGinitie, W. H., MacGinitie, R. K., Maria, K., & Dreyer, L. G. (2000). Gates-MacGinitie Reading Tests–Manual for scoring and interpretation. Itasca, IL: Riverside.

Mastergeorge, A. M., & Martinez, J. F. (2010). Rating performance assessments of students with disabilities: A study of reliability and bias. Journal of Psychoeducational Assessment, 28(6), 536-550.

Mather, N., Hammill, D. D., Allen, E. A., & Roberts, R. (2004). TOSWRF: Test of Silent Word Reading Fluency: Examiner's manual. Austin, TX: Pro-Ed.

National Council of Teachers of Mathematics. (2000). Principles and standards for school mathematics. Reston, VA: Author.

Northwest Evaluation Association. (2009). NWEA technical manual for Measures of Academic Progress and Measures of Academic Progress for Primary Grades. Portland, OR: Author.

Pekrun, R., Goetz, T., Perry, R. P., Kramer, K., Hochstadt, M., & Molfenter, S. (2004). Beyond test anxiety: Development and validation of the test emotions questionnaire. Anxiety, Stress, and Coping, 17(3), 287-316.

Pintrich, P. R., Smith, D. A. F., Garcia, T., & McKeachie, W. J. (1991). A manual for the use of the Motivated Strategies for Learning Questionnaire (MSLQ). Ann Arbor: University of Michigan, National Center for Research to Improve Postsecondary Teaching and Learning.

Pintrich, P. R., Smith, D. A. F., Garcia, T., & McKeachie, W. J. (1993). Reliability and predictive validity of the Motivated Strategies for Learning Questionnaire (MSLQ). Educational and Psychological Measurement, 53(3), 803-813.

Raven, J. (2000). The Raven's Progressive Matrices: Change and stability over culture and time. Cognitive Psychology, 41, 1-48.

Raven, J., Raven, J. C., & Court, J. (1993). Test de matrices progresivas: Manual para la aplicación [Standard progressive matrices test: Directions for administration manual]. Buenos Aires: Paidós.

Rodriguez, M. C. (2005). Three options are optimal for multiple-choice items: A meta-analysis of 80 years of research. Educational Measurement: Issues and Practice, 24(2), 3-13.

Sharoni, V., & Vogel, G. (2007). Entrance test accommodations, admission and enrollment of students with learning disabilities in teacher training colleges in Israel. Assessment & Evaluation in Higher Education, 32(3), 255-270.

Thompson, S., Blount, A., & Thurlow, M. (2002). A summary of research on the effects of test accommodations: 1999 through 2001 (Technical Report 34). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Thurlow, M., Rogers, C., & Christensen, L. (2010). Science assessments for students with disabilities in school year 2006-2007: What we know about participation, performance, and accommodations (Synthesis Report 77). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Woodcock, R. W., McGrew, K. S., & Mather, N. (2001). Woodcock-Johnson Tests of Achievement (3rd ed.). Itasca, IL: Riverside Publishing.

Woodcock, R., Mather, N., & Schrank, F. A. (2004). Woodcock-Johnson III: Diagnostic Reading Battery. Itasca, IL: Riverside Publishing.

Zenisky, A. L., & Sireci, S. G. (2007). A summary of the research on the effects of test accommodations: 2005-2006 (Technical Report 47). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

2009 and 2010 Accommodation References

Altman, J. R., Cormier, D. C., Lazarus, S. S., Thurlow, M. L., Holbrook, M., Byers, M., Chambers, D., Moore, M., & Pence, N. (2010). Accommodations: Results of a survey of Alabama special education teachers. Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Anjorin, I. (2009). High-stakes tests for students with specific learning disabilities: Disability-based differential item functioning. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 71(02).

Arce-Ferrer, A. J., & Guzman, E. M. (2009). Studying the equivalence of computer-delivered and paper-based administrations of the Raven standard progressive matrices test. Educational and Psychological Measurement, 69(5), 855-867. doi:10.1177/0013164409332219

Barnard-Brak, L., & Sulak, T. (2010). Online versus face-to-face accommodations among college students with disabilities. The American Journal of Distance Education, 24(2), 81-91. doi:10.1080/08923641003604251

Barnard-Brak, L., Davis, T., Tate, A., & Sulak, T. (2009). Attitudes as a predictor of college students requesting accommodations. Journal of Vocational Rehabilitation, 31(3), 189-198. doi:10.3233/JVR-2009-0488

Barnard-Brak, L., Sulak, T., Tate, A., & Lechtenberger, D. (2010). Measuring college students' attitudes toward requesting accommodations: A national multi-institutional study. Assessment for Effective Intervention, 35(3), 141-147. doi:10.1177/1534508409358900

Bayles, M. (2009). Perceptions of educators and parents of the California High School Exit Examination (CAHSEE) requirement for students with disabilities. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 71(04).

Bouck, E. C. (2009). Calculating the value of graphing calculators for seventh-grade students with and without disabilities: A pilot study. Remedial and Special Education, 30(4), 207-215. doi:10.1177/0741932508321010

Bouck, E. (2010). Does type matter: Evaluating the effectiveness of four-function and graphing calculators. Journal of Computers in Mathematics and Science Teaching, 29(1), 5-17.

Bublitz, D. F. (2009). Special education teachers' attitudes, knowledge, and decision-making about high-stakes testing accommodations for students with disabilities. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 70(04).

Byrd, T. D. M. (2010). East Tennessee State University faculty attitudes and student perceptions in providing accommodations to students with disabilities. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 71(10).

Cawthon, S. W. (2009). Accommodations for students who are deaf or hard of hearing in large-scale, standardized assessments: Surveying the landscape and charting a new direction. Educational Measurement: Issues and Practice, 28(2), 41-49. doi:10.1111/j.1745-3992.2009.00147.x

Cawthon, S. W. (2010). Science and evidence of success: Two emerging issues in assessment accommodations for students who are deaf or hard of hearing. Journal of Deaf Studies and Deaf Education, 15(2), 185-203. doi:10.1093/deafed/enq002

Cook, L., Eignor, D., Steinberg, J., Sawaki, Y., & Cline, F. (2009). Using factor analysis to investigate the impact of accommodations on the scores of students with disabilities on a reading comprehension assessment. Journal of Applied Testing Technology, 10(2). doi:10.1080/08957341003673831

Cook, L., Eignor, D., Sawaki, Y., Steinberg, J., & Cline, F. (2010). Using factor analysis to investigate accommodations used by students with disabilities on an English-language arts assessment. Applied Measurement in Education, 23(2), 187-208.

Elliott, S. N., Kratochwill, T. R., McKevitt, B. C., & Malecki, C. K. (2009). The effects and perceived consequences of testing accommodations on math and science performance assessments. School Psychology Quarterly, 24(4), 224-239. doi:10.1037/a0018000

Elliott, S. N., Kettler, R. J., Beddow, P. A., Kurz, A., Compton, E., McGrath, D., Bruen, C., Hinton, K., Palmer, P., Rodriguez, M. C., Bolt, D., & Roach, A. T. (2010). Effects of using modified items to test students with persistent academic difficulties. Exceptional Children, 76(4), 475-495.

Finch, H., Barton, K., & Meyer, P. (2009). Differential item functioning analysis for accommodated versus non-accommodated students. Educational Assessment, 14(1), 38-56. doi:10.1080/10627190902816264

Fletcher, J. M., Francis, D. J., O'Malley, K., Copeland, K., Mehta, P., Caldwell, C. J., Kalinowski, S., Young, V., & Vaughn, S. (2009). Effects of a bundled accommodations package on high-stakes testing for middle school students with reading disabilities. Exceptional Children, 75(4), 447-463.

Freeland, A. L., Emerson, R. W., Curtis, A. B., & Fogarty, K. (2010). Exploring the relationship between access technology and standardized test scores for youths with visual impairments: Secondary analysis of the National Longitudinal Transition Study 2. Journal of Visual Impairment & Blindness, 104(3), 170-182.

Johnstone, C., Thurlow, M., Altman, J., Timmons, J., & Kato, K. (2009). Assistive technology approaches for large-scale assessment: Perceptions of teachers of students with visual impairments. Exceptionality, 17(2), 66-75. doi:10.1080/09362830902805756

Jordan, A. S. (2009). Appropriate accommodations for individual needs allowable by state guidelines. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 70(10).

Kim, D. H., & Huynh, H. (2010). Equivalence of paper-and-pencil and online administration modes of the statewide English test for students with and without disabilities. Educational Assessment, 15(2), 107-121. doi:10.1080/10627197.2010.491066

Kim, D. H., Schneider, C., & Siskind, T. (2009). Examining the underlying factor structure of a statewide science test under oral and standard administrations. Journal of Psychoeducational Assessment, 27(4), 323-333. doi:10.1177/0734282908328632

Kim, D., Schneider, C., & Siskind, T. (2009). Examining equivalence of accommodations on a statewide elementary-level science test. Applied Measurement in Education, 22(2), 144-163. doi:10.1080/08957340902754619

Kingston, N. M. (2009). Comparability of computer- and paper-administered multiple-choice tests for K-12 populations: A synthesis. Applied Measurement in Education, 22(1), 22-37. doi:10.1080/08957340802558326

Laitusis, C. C. (2010). Examining the impact of audio presentation on tests of reading comprehension. Applied Measurement in Education, 23(2), 153-167. doi:10.1080/08957341003673815

Lazarus, S. S., Thurlow, M. L., Lail, K. E., & Christensen, L. (2009). A longitudinal analysis of state accommodations policies: Twelve years of change, 1993-2005. The Journal of Special Education, 43(2), 67-80. doi:10.1177/0022466907313524

Lee, K. S., Osborne, R. E., & Carpenter, D. N. (2010). Testing accommodations for university students with AD/HD: Computerized vs. paper-pencil/regular vs. extended time. Journal of Educational Computing Research, 42(4), 443-458. doi:10.2190/EC.42.4.e

Lindstrom, J. H. (2010). Mathematics assessment accommodations: Implications of differential boost for students with learning disabilities. Intervention in School and Clinic, 46(1), 5-12. doi:10.1177/1053451210369517

Logan, J. P. (2009). The affective and motivational impact of the test accommodation extended time based on students' performance goal orientations. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 70(06).

Lovett, B. J. (2010). Extended time testing accommodations for students with disabilities: Answers to five fundamental questions. Review of Educational Research, 80(4), 611-638.

Lovett, B. J., Lewandowski, L. J., Berger, C., & Gathje, R. A. (2010). Effects of response mode and time allotment on college students' writing. Journal of College Reading and Learning, 40(2), 64-79.

Mariano, G., Tindal, G., Carrizales, D., & Lenhardt, B. (2009). Analysis of teacher accommodation recommendations for a large-scale test. Eugene, OR: University of Oregon, Behavioral Research and Teaching.

Mastergeorge, A. M., & Martinez, J. F. (2010). Rating performance assessments of students with disabilities: A study of reliability and bias. Journal of Psychoeducational Assessment, 28(6), 536-550. doi:10.1177/0734282909351022

Parks, M. Q. (2009). Possible effects of calculators on the problem solving abilities and mathematical anxiety of students with learning disabilities or attention deficit hyperactivity disorder. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 70(07).

Peltenburg, M., van den Heuvel-Panhuizen, M., & Doig, B. (2009). Mathematical power of special-needs pupils: An ICT-based dynamic assessment format to reveal weak pupils' learning potential. British Journal of Educational Technology, 40(2), 273-284. doi:10.1111/j.1467-8535.2008.00917.x

Randall, J., & Engelhard, G., Jr. (2010). Performance of students with and without disabilities under modified conditions: Using resource guides and read-aloud test modifications on a high-stakes reading test. The Journal of Special Education, 44(2), 79-93. doi:10.1177/0022466908331045

Ricketts, C., Brice, J., & Coombes, L. (2010). Are multiple choice tests fair to medical students with specific learning disabilities? Advances in Health Sciences Education, 15(2), 265-275. doi:10.1007/s10459-009-9197-8

Roach, A. T., Beddow, P. A., Kurz, A., Kettler, R. J., & Elliott, S. N. (2010). Incorporating student input in developing alternate assessments based on modified academic achievement standards. Exceptional Children, 77(1), 61-80.

Roxbury, T. L. (2010). A psychometric evaluation of a state testing program: Accommodated versus non-accommodated students. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 71(06).

Russell, M., Hoffmann, T., & Higgins, J. (2009). A universally designed test delivery system. TEACHING Exceptional Children, 42(2), 6-12.

Russell, M., Kavanaugh, M., Masters, J., Higgins, J., & Hoffmann, T. (2009). Computer-based signing accommodations: Comparing a recorded human with an avatar. Journal of Applied Testing Technology, 10(3).

Salend, S. (2009). Using technology to create and administer accessible tests. Teaching Exceptional Children, 41(3), 40-51.

Schoch, C. S. (2010). Teacher variations when administering math graphics items to students with visual impairments. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 72(02).

Snyder, J. (2010). Audio adapted assessment data: Does the addition of audio to written items modify the item calibration? Dissertation Abstracts International: Section A. Humanities and Social Sciences, 71(05).

Stone, E., Cook L., Cahalan Laitusis, C., & Cline, F. (2010). Using differential item functioning to investigate the impact of testing accommodations on an English-language arts assessment for students who are blind or visually impaired. Applied Measurement in Education, 23(2), 132-152. doi:10.1080/08957341003673773

Zhang, D., Landmark, L., Reber, A., Hsu, H. Y., Kwok, O., & Benz, M. (2010). University faculty knowledge, beliefs, and practices in providing reasonable accommodations to students with disabilities. Remedial and Special Education, 31(4), 276-286. doi:10.1177/0741932509338348


Appendices

The appendices are available in the PDF version of this document.

Appendix A: Research Purposes
Appendix B: Research Characteristics
Appendix C: Instrument Characteristics
Appendix D: Participant and Sample Characteristics
Appendix E: Accommodations Studied
Appendix F: Research Findings
Appendix G: Study Limitations and Future Research


NCEO is supported primarily through a Cooperative Agreement (#H326G050007, #H326G110002) with the Research to Practice Division, Office of Special Education Programs, U.S. Department of Education. Additional support for targeted projects, including those on LEP students, is provided by other federal and state agencies. Opinions expressed in this Web site do not necessarily reflect those of the U.S. Department of Education or Offices within it.