Access to Computer-Based Testing for Students with Disabilities


NCEO Synthesis Report 45

Published by the National Center on Educational Outcomes

Prepared by:

Sandra J. Thompson  • Martha L. Thurlow  • Rachel F. Quenemoen  • Camilla A. Lehr

June 2002


Any or all portions of this document may be reproduced and distributed without prior permission, provided the source is cited as:

Thompson, S. J., Thurlow, M. L., Quenemoen, R. F., & Lehr, C. A. (2002). Access to computer-based testing for students with disabilities (Synthesis Report 45). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved [today's date], from the World Wide Web: http://education.umn.edu/NCEO/OnlinePubs/Synthesis45.html


Executive Summary

Called the “next frontier in testing,” computer-based testing is being promoted as the solution to many of the testing problems states face. Under pressure to find more cost-effective and less labor-intensive approaches to testing, states see computer-based testing as a way to address the increasingly challenging prospect of assessing all students in a state at nearly all grades. Computer-based testing is viewed with optimism as an approach that will make testing less expensive in the long run, and that will produce better assessments of the wide range of students who must now be included in state and district assessments.

Unfortunately, most states and testing companies have not specifically considered the needs of students with disabilities as they pursue computer-based testing. Often, the approach has simply been to take the paper and pencil test and put it onto a computer. This is not enough. Poor design elements on the paper test will transfer to the screen, and there will be additional challenges created by the move as well, challenges that may reduce the validity of the assessment results and possibly exclude some groups from participation in the assessment.

This paper recognizes the opportunities created by the new frontier of computer-based testing and also identifies its challenges. Research findings and accommodations considerations are addressed as well, with the end result being a process and considerations for the initial transformation of paper/pencil assessments into inclusive computer-based testing.

The recommended process for a good transformation of a paper and pencil test to computer-based testing assumes first that the principles of universally designed assessments have been followed. Then, the five steps that are recommended (and discussed in the paper) are:

Step 1. Assemble a group of experts to guide the transformation.

Step 2. Decide how each accommodation will be incorporated into the computer-based test.

Step 3. Consider each accommodation or assessment feature in light of the constructs being tested.

Step 4. Consider the feasibility of incorporating the accommodation into the computer-based test.

Step 5. Consider training implications for staff and students.

The paper also presents initial considerations for common accommodations within the categories of timing/scheduling, presentation, response, and setting.


Overview

On January 8, 2002, President Bush signed the reauthorization of the Elementary and Secondary Education Act into law as the “No Child Left Behind Act of 2001.” This Act requires states to have annual assessments in place in reading and mathematics for all students in grades three through eight by the end of the 2005-2006 school year, with science assessments added by the beginning of the 2007-2008 school year. Only nine states currently administer standards-based tests in both subjects across grades three through eight (Quality Counts, 2002), creating an unprecedented opportunity for states to enhance the participation of all students as they build and improve their assessment systems. Increased requirements within the law for itemized score analyses and for disaggregation of results within each school and district by gender, racial and ethnic group, migrant status, English proficiency, disability, and income will challenge states to create new and more efficient ways to administer, score, and report assessment results.

Computer-based testing has been called the “next frontier in testing” as educators, testing companies, and state departments work quickly to transform paper/pencil tests into technology-based formats (Trotter, 2001). These efforts have occurred in a variety of ways and for a variety of tests. For example, some educators have transferred all of their classroom quizzes and tests into a computer-based format. The paper/pencil version of the Graduate Record Exam™ has been replaced with a computerized version that is administered across a variety of locations. NCS Pearson has developed eMeasurement™ Services—a suite of tools that delivers tests and their results electronically.1 As a result of these advances, states are facing pressure to create computer-based large-scale assessments (Russell, 2002). Some states are investigating the possibility of computerized adaptive testing for their statewide assessments, in which the difficulty level of the questions presented is adjusted based on whether students’ responses are correct. According to Bennett (1998), “Whereas there is certainly a concerted move toward technology-based large-scale tests, there is no question that this assessment mode is still in its infancy. Like many innovations in their early stages, today’s computerized tests automate an existing process without reconceptualizing it to realize the dramatic improvements that the innovation could allow. Thus, these tests are substantively the same as those administered on paper” (p. 3).
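The adaptive logic just described can be sketched in a few lines. The following is purely illustrative (the item bank, the 1 to 10 difficulty scale, and the one-step adjustment rule are invented for this sketch, not drawn from any state's program):

```python
import random

def adaptive_test(item_bank, answer_fn, num_items=5):
    """Minimal computerized-adaptive-testing loop (illustrative only).

    item_bank: dict mapping a difficulty level (1 to 10) to a list of items.
    answer_fn: callable(item) -> True if the student answers correctly.
    """
    difficulty = 5  # start at medium difficulty
    administered = []
    for _ in range(num_items):
        item = random.choice(item_bank[difficulty])
        correct = answer_fn(item)
        administered.append((item, difficulty, correct))
        # Raise difficulty after a correct response, lower it after an
        # incorrect one, staying within the bounds of the bank.
        difficulty = min(10, difficulty + 1) if correct else max(1, difficulty - 1)
    return administered
```

Operational adaptive tests use far more sophisticated item-selection and scoring models; the point here is only that the test moves toward items near the student's ability level as responses come in.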

With the dramatic increase in the use of the Internet over the past few years, and with it, the considerable potential of online learning (Kerrey & Isakson, 2002), assessment will need to undergo a complete transformation to keep pace. According to the Web-based Education Commission, “Perhaps the greatest barrier to innovative teaching is assessment that measures yesterday’s learning goals…Too often today’s tests measure yesterday’s skills with yesterday’s testing technologies—paper and pencil” (p. 3).

Experts suggest that the Internet will be used to develop tests and present items through dynamic and interactive stimuli such as audio, video, and animation (Lewis, 2001). Given this momentum, it is not surprising that there is a trend toward investigating and incorporating the Internet as the testing medium for statewide assessments. Bennett (2001) stated, “The trend is clear: the infrastructure is quickly falling into place for Internet delivery of assessment to schools, perhaps first in survey programs like NAEP (National Assessment of Educational Progress) that require only a small participant sample from each school, but eventually for inclusive assessments delivered directly to the desktop” (p. 10).

As the trend toward computer-based testing moves forward, it is important to focus carefully on the requirements of the newly enacted No Child Left Behind Act of 2001, and on the assessment participation requirements in the 1997 reauthorization of the Individuals with Disabilities Education Act. In addition, a 1996 Department of Justice policy ruling states that Titles II and III of the Americans with Disabilities Act require State and local governments to provide effective communication whenever they communicate through the Internet. The Office for Civil Rights discussed the provision of effective communication:

The issue is not whether the student with the disability is merely provided access, but the issue is rather the extent to which the communication is actually as effective as that provided to others. Title II [of the Americans with Disabilities Act of 1990] also strongly affirms the important role that computer technology is expected to play as an auxiliary aid by which communication is made effective for persons with disabilities (Pages 1-2, 1996 Letter; 28 C.F.R. 35.160 (a)).

In further clarification, the Office for Civil Rights lists three basic components of effective communication: “timeliness of delivery, accuracy of the translation, and provision in a manner and medium appropriate to the significance of the message and the abilities of the individual with the disability” (Page 1, 1997 Letter). This clarification presents a significant and timely responsibility in the design of computer-based testing.

For the full benefits of computer-based testing to be realized, a thoughtful and systematic process to examine the transfer of existing paper/pencil assessments must occur. It is not enough to simply transfer test items from paper to screen. Not only will poor design elements on the paper test transfer to the screen, but additional challenges may also arise that reduce the validity of the assessment results and possibly exclude some groups of students from assessment participation.

This paper presents factors to consider in the design of computer-based testing for all students, including students with disabilities and students with limited English proficiency. We begin with the opportunities and challenges presented by this “new frontier” in testing, then explore research about effective universally designed assessments and technology-based accommodations, and relate this knowledge to computer-based testing design features. Finally, we present a process and considerations for the initial transformation of paper/pencil assessments to inclusive computer-based testing.


Opportunities

Several advocates have articulated the positive merits of computer-based testing. Some of the advantages over paper/pencil tests that have been cited include: efficient administration, preferred by students, self-selection options for students, improved writing performance, built-in accommodations, immediate results, efficient item development, increased authenticity, and the potential to shift focus from assessment to instruction. This section describes each of these prospective opportunities.

 

Efficient Administration

Computer-based tests can be administered to individuals or small groups of students in classrooms or computer labs, eliminating timing issues caused by the need to administer paper/pencil tests in large groups in single sittings. Different students can take different tests simultaneously in the same room.

 

Preferred by Students        

In an evaluation of testing experience, students overwhelmingly preferred computerized testing to paper/pencil testing (Brown & Augustine, 2001). Most students, regardless of group or ability, believed that the computer was easier, faster, and more fun. Students also responded that using a computer helped concentration by presenting only one question at a time. A recent survey on computer use by students with disabilities in Germany (Ommerborn & Schuemer, 2001) found that students reported more advantages than disadvantages to computer use.

Brown-Chidsey and Boscardin (1999) interviewed students with learning disabilities and found that the computer helped them deal with limitations that often interfered with the completion of their work. The researchers concluded, “Students’ beliefs about computers are likely to shape the extent to which instructional technology enhances their achievement” (Brown-Chidsey, Boscardin, & Sireci, 1999, p. 4). A study at the Boston College Center for the Study of Testing, Evaluation, and Assessment (Trotter, 2001) found, “Students who are accustomed to writing on computers tend to do better on computerized tests than on paper exams. Conversely, students who don’t use computers often to write tend to do better when they complete their tests on paper” (p. 3).

 

Self-Selection Options for Students

Students have the option to choose features on computer-based tests, including format features and built-in accommodations. For example, Calhoon, Fuchs, and Hamlett (2000) found that “teachers are unlikely to provide a reader to meet student needs because teachers prefer test accommodations that require little individualization and do not require curricular or environmental modifications” (p. 272). Other recent work on accommodations for English Language Learners (Anderson, Liu, Swierzbin, Thurlow, & Bielinski, 2000; Liu, Anderson, Swierzbin, & Thurlow, 1999) has shown that students may not want to use certain accommodations (e.g., headphones to have instructions read in English, bilingual dictionaries) unless they are provided in specific ways. Teachers have reported that students with learning disabilities may opt not to use certain accommodations at certain times because they are not seen as helpful. Having the ability to self-select a technology-based reader or other tool may provide students access to a necessary accommodation that may not be offered currently, due to issues of convenience.

 

Improved Writing Performance

As computers become more common in schools, many of today’s students are accustomed to using computers in their daily work. Students write and calculate on computers as easily as, and often with more speed and efficiency than, previous generations could on paper. Research has shown that writing on computers leads students to write more and revise more than writing with paper/pencil (Daiute, 1985; Morocco & Neuman, 1986). Paper/pencil tests that require writing may underestimate the writing ability of students who have grown accustomed to writing on computers (Russell & Haney, 1997). In a survey of computer use by students with disabilities in Germany, Ommerborn and Schuemer (2001) found that the greatest advantage to students was the ease with which computers allowed them to write essays. Several of the students surveyed said that it was very difficult for them to write by hand.

 

Built-in Accommodations

Computer technology has been touted as a tool that can be used to empower students with disabilities (Goldberg & O’Neill, 2000). Specifically, computer-based testing has been viewed as a vehicle to increase the participation of students with disabilities in assessment programs. For example, the Windows operating system supports a great variety of adaptive devices (e.g., screen readers, Braille displays, screen magnification, self-voicing Web browsers). According to Greenwood and Rieth (1994), the primary strength of computer-based testing is its “potential for removing traditional barriers to the inclusion of persons with disabilities in the assessment process through adaptations and accommodations as well as through new forms” (p. 110).

Computer-based testing can provide flexibility in administration for students with various learning styles. For example, the National Research Council (NRC, 2001) found computer-based testing to be effective for students who perform better visually than with text, are not native English speakers, or are insecure about their capabilities. According to NRC, “Technology is already being used to assess students with physical disabilities and other learners whose special needs preclude representative performance using traditional media for measurement” (p. 286).

Standardization of accommodated assessment administrations can be facilitated by computer-based testing. According to Brown-Chidsey and Boscardin (1999), “Using a computer to present a test orally controls for standardization of administration and allows each student to complete the assessment at his/her own pace” (p. 2). Brown and Augustine (2001) cited educator appreciation of a computer’s ability to present items over and over, in both written and verbal form, without the need for a non-standard (and sometimes impatient) human reader. Several studies have shown the positive effects of providing a reader for math tests (see Calhoon, Fuchs & Hamlett, 2000; Fuchs, Fuchs, Eaton, Hamlett, & Karns, 2000; Tindal, Heath, Hollenbeck, Almond, & Harniss, 1998).

With the use of audio and video built into computer-based tests, specialized testing equipment such as audiocassette recorders and VCRs could become obsolete (Bennett, Goodman, Hessinger, Ligget, Marshall, Kahn, & Zack, 1999). According to Bennett (1995), “Test directions and help functions would be redundantly encoded as text, audio, video, and Braille, with the choice of representation left to the examinee. The digital audio would allow for spoken directions, whereas the video could present instruction in sign language or speech-readable form. Among other things, these standardized presentations should reduce the noncomparability associated with the uneven quality of human readers and sign-language interpreters” (p. 10).
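Bennett's notion of redundantly encoded directions can be pictured as a simple data structure in which each representation is stored alongside the item, and the examinee's preference selects among them. This sketch is hypothetical; the field names and file formats are invented for illustration:

```python
# A hypothetical item record carrying redundant representations of the
# test directions, with the choice left to the examinee.
ITEM_DIRECTIONS = {
    "text": "Read the passage, then answer questions 1 through 5.",
    "audio": "directions_q1.wav",          # spoken directions
    "video_asl": "directions_q1_asl.mov",  # sign-language video
    "braille": "directions_q1.brf",        # refreshable Braille file
}

def select_representation(item, preference):
    """Return the examinee's preferred representation, falling back to text."""
    return item.get(preference, item["text"])
```

Because every representation is produced once and delivered identically to every examinee who selects it, the unevenness of human readers and interpreters that Bennett describes drops out of the administration.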

Finally, just as the use of accommodations on paper/pencil tests has increased awareness and use of accommodations in the classroom, so can opportunities to use the built-in accommodation features of computer-based tests encourage and increase the use of those features in classroom and other environments. For example, Williams (2002) believes, “It is possible that new developments in speech recognition technology could increase opportunities for individual reading practice with feedback, as well as collecting assessment data to inform instructional decision making” (p. 41). In addition, most computer-based tests have built-in tutorials and practice tests. These tutorials provide students with both opportunities for familiarizing themselves with the software and immediate feedback (Association of Test Publishers, 2000).

 

Immediate Results

One of the major drawbacks of state testing on paper has been the long wait for results because of the need to distribute, collect, and then scan test booklets/answer forms and hand score open-response items and essays. Students tested in the spring often do not receive their results until fall—nor do their teachers or schools. The results of computer-based tests can be available immediately, providing schools with diagnostic tools to use for improved instruction, and states with information to guide policy. Even open-ended items can be scored automatically, greatly reducing cost and scoring time (Thompson, 1999). According to a report by the National Governors Association (2002), cost savings can result from “the elimination of printing and shipping activities when paper testing ceases” (p. 7).

 

Efficient Item Development

As computer-based testing becomes more developed, item development will be more efficient, higher quality, and less expensive (National Governors Association, 2002). Bennett (1998) believes that at some point items might be generated electronically, with items matched to particular specifications at the moment of administration. “Test design will also be the focal point for responding to diversity. The effects of different test designs on minority group members, females, …will be routinely simulated in deciding what skills and which task formats to use in large-scale assessments” (Bennett, 1998, p. 9). According to Russell (2002), “already, some testing programs are experimenting with ways to generate large banks of test items via computer algorithms with the hope of saving the time and money currently required to produce test items manually” (p. 65). Baker (2002) cited several research efforts that have significantly advanced the progress of schema or template-based, multiple-choice development and test management systems (see Bejar, 1995; Bennett, 2002; Chung, Baker, & Cheak, 2001; Chung, Klein, Herl, & Bewley, 2001; Gitomer, Steinberg, & Mislevy, 1995; Mislevy, Steinberg, & Almond, 1999).

 

Increased Authenticity

Computers allow for increased use of “authentic assessments”—responses can be open-ended rather than relying solely on multiple choice. According to Bennett (1998), the next generation of computer-based tests will be “qualitatively different from those of the first generation. This difference will be evident in the test questions (and, in some cases, the characteristics they measure), as well as in development, scoring, and administrative processes” (p. 4, see Table 1). Bennett notes that many Americans now receive their news from TV and the World Wide Web, with the expectation that students will increasingly be able to process information from a variety of sources, not just from print. Bennett also suggests that response formats will shift dramatically, perhaps including problems in which a student is not expected to find the best answer, but a reasonable one within certain constraints.

Table 1. Three Generations of Large-Scale Educational Assessment

First-Generation Computer-Based Tests (Infrastructure Building)
  • Primarily serve institutional needs
  • Measure traditional skills and use test designs and item formats closely resembling paper-based tests
  • Take limited advantage of technology

Next-Generation Computer-Based Tests (Qualitative Change)
  • Primarily serve institutional needs
  • Use new item formats (including multimedia and constructed response), automatic item generation, automatic scoring, and electronic networks to make performance assessment an integral program component; measure some new constructs
  • Allow customers to interact with testing companies entirely electronically

Generation “R” Tests (Reinvention)
  • Serve both institutional and individual purposes
  • Integrated with instruction via electronic tools so that performance is sampled repeatedly over time; designed according to cognitive principles
  • Use complex simulations, including virtual reality, that model real environments and allow more natural interaction with computers

Adapted from: Bennett, R.E. (1998). Reinventing assessment: Speculations on the future of large-scale educational testing. Princeton, NJ: Policy Information Center, Educational Testing Service.

 

Shifts Focus from Assessment to Instruction

Bennett (1998) believes that eventually large-scale assessment will join with instruction. “Decisions like certification of course mastery, graduation eligibility, and school effectiveness will no longer be based largely on one examination given at a single time but will also incorporate information from a series of measurements” (p. 11). “By virtue of moving assessment into the curriculum, the locus of the debate over performance differences must logically shift from the accuracy of assessment to the adequacy of instruction” (p. 12). Bennett continues this line of thought in a 2001 article, “When well-constructed tests closely reflect the curriculum, group differences should become more an issue of instructional inadequacy than test inaccuracy. As attention shifts to the adequacy of instruction, the ability to derive meaningful information from test performance becomes more critical” (p. 2).


Challenges

Despite the potential advantages offered by computer-based testing, there remain several challenges, especially in the transition from paper/pencil assessments. First of all, the use of technology cannot take the place of content mastery. No matter how well a test is designed, or what media are used for administration, students who have not had an opportunity to learn the material tested will perform poorly. Students need access to the information tested in order to have a fair chance at performing well. Hollenbeck, Tindal, Harniss, and Almond (1999) strongly caution that the use of a computer, in and of itself, does not improve the overall quality of student writing. They, and other researchers, continue to find significantly lower mean test scores for students with disabilities than for their peers without disabilities. Other challenges that must be overcome in order for computer-based testing to be effective include: issues of equity and skill in computer use, added challenges for some students, technological challenges, security of online data, lack of expertise in designing accessible Web pages, and prohibitive development cost.

 

Issues of Equity and Skill in Computer Use

Concerns continue to exist in the area of equity, where questions are raised about whether the required use of computers for important tests puts some students at a disadvantage because of lack of access, use, or familiarity (Trotter, 2001). Concerns include unfamiliarity with answering standardized test questions on a computer screen, using buttons to search for specific items, and indecision about whether to use traditional tools (e.g., hand-held calculator) or computer-based tools. According to Wissick and Gardner (2000), “Students will not take advantage of help options or use navigation guides if they require more personal processing energy than they can evoke” (p. 38).

A survey on computer use by students with disabilities in Germany (Ommerborn & Schuemer, 2001) found that the cost of acquiring and using a computer was the greatest barrier, followed by a lack of training opportunities. Students who needed assistive technology cited high cost and lack of information as barriers to increased computer use.

The gap in access to technology—sometimes referred to as the “Digital Divide”—is continuing to grow. According to Bolt and Crawford, authors of Digital Divide (2000, p. 98):

While over 80 percent of families with incomes of $100,000 or more have computers at home, only about 25 percent of those households with annual incomes under $30,000 have home access to computers. Demographically, this means that the digital revolution is in full swing in America’s wealthy suburbs and affluent sections of cities and towns, while in some of our poorest areas, it is a phenomenon that is at best heard about on television. The gap has widened considerably for computer ownership among racial minorities when compared with European-Americans. In the context of the overall racial digital divide, a low-income European-American child is three times more likely to have internet access than his or her African-American counterpart, and four times as likely as a Latino family in the same socioeconomic category.

 

Added Challenges for Some Students

Some research questions whether the medium of test presentation affects the comparability of the tasks students are being asked to complete. Here are some findings that show added difficulty for some students.

  • Computer-based testing places more demands on certain skills such as typing, using multiple screens to recall a passage, mouse navigation, and the use of key combinations (Bennett, 1999; Ommerborn & Schuemer, 2001).
  • Some people become more fatigued when reading text on a computer screen than on paper (Allan et al., 2001; Mourant, Lakshmanan, & Chantadisai, 1981).
  • Long passages may be more difficult to read on computer screen (Haas & Hayes, 1986).
  • The inability to see an entire problem on screen at one time is challenging because some items require scrolling horizontally and vertically to get an entire graphic on the page (Hollenbeck, Tindal, Harniss, & Almond, 1999).
  • Few teachers use computers or spreadsheets in math instruction, so students do not know how to “think on the monitor” (Trotter, 2001).
  • Graphical user interfaces present considerable obstacles to students with visual impairments (Ommerborn & Schuemer, 2001).

 

Technological Challenges

Computers and the Internet do not always work the way we want them to. The word “crash” has taken on a whole new meaning in our technology-oriented world. An issue brief of the National Governors Association (2002) listed some of the problems: “testing sessions may be interrupted, proceed so slowly as to interfere with student performance, or encounter difficulties in machine operation or telecommunications that cause data to be lost entirely. Unlike a paper-and-pencil testing system, keeping a computerized system functioning requires significant technical expertise, which many schools lack” (p. 7). Burk (1999) argued, “Computerized testing for students with disabilities is viable but only with appropriate equipment, staff preparation, and student preparation” (p. 6). Some researchers, like Hamilton, Klein, and Lorie (2001), question whether an infrastructure currently exists that can support the use of computers by large numbers of students. They also question the quality of the hardware, especially given the constant evolution of technology, and whether there is sufficient training for staff who must help with administration and with the technological difficulties that may be encountered. The test program may also be device-dependent; for example, monitors may differ in contrast and computers in processing speed. A test presented online may default to the computer’s font, print size, and background color. Graphics may become distorted on small screens, reducing standardization of the assessment presentation. According to a report by the National Governors Association (2002, p. 7):

The reality of statewide computerized testing is that equipment will vary from one school to the next and, sometimes, from one machine to the next within the same school. Similarly, the speed of the Internet connection may differ across schools or within the same school by time of day. The result of these variations is that one student may take a test on a small-screen monitor running at low resolution, thereby requiring repeated scrolling to read comprehension passages. Because of the Internet connection, that student may have to wait five seconds before the next passage is displayed. In contrast, another student may be able to see not only the entire passage but also the questions on the same single screen, with no wait between passages. It is known that such variations can affect performance, but it is not known how to adjust for them in test results.

A constant challenge is the ongoing introduction of new Web browsers and new versions of existing browsers. In addition, HTML and document converters are constantly being developed and modified. Unfortunately, several features may not be universally accessible, and advancements in assistive technology usually lag several steps behind new Internet components and tools. For example, using an eye pointing device may increase the time needed to position each eye pointing frame, leading to increased fatigue, boredom, and inattention by the test-taker (Haaf, Duncan, Skarakis-Doyle, Carew, & Kapitan, 1999). As computer-based testing becomes a reality across states and districts, it is important to ensure that the new technology either improves accessibility or is compatible with existing assistive computer technology.

 

Security of Online Data

Critics question whether online data are secure. In a report by the National Governors Association (2002), security issues related to protecting test questions and ensuring the confidentiality of student data in a computerized system were compared to those encountered with conventional tests and were found to be conceptually similar. Differences were found in mechanisms to accomplish breaches and protect against them. For example, test questions and student data could be stolen from central servers or from local computers. This can be minimized through technical design that encrypts questions and student records and through the careful use of passwords.
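As a concrete illustration of the password protections mentioned above, a testing system might store only salted hashes of proctor or student passwords rather than the passwords themselves. This is a sketch using the Python standard library, not a description of any vendor's actual system:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted hash so raw passwords are never stored."""
    salt = salt if salt is not None else os.urandom(16)  # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Compare in constant time to guard against timing attacks."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)
```

Encrypting stored test questions and student records follows the same general principle: sensitive material is never kept in a directly readable form, so a stolen file alone does not expose its contents.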

 

Lack of Expertise in Designing Accessible Web Pages

According to WebAIM (Web Accessibility in Mind, an initiative of the Center for Persons with Disabilities at Utah State University, 2001), there are 27.3 million people with disabilities who are limited in the ways they can use the Internet: “The saddest aspect of this fact is that the know-how and the technology to overcome these limitations already exist, but they are greatly under-utilized, mostly because Web developers simply do not know enough about the issue to design pages that are accessible to people with disabilities. Unfortunately, even some of the more informed Web developers minimize the importance of the issue, or even ignore the problem altogether” (p. 1).
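One of the simplest accessibility failures WebAIM describes is an image with no text alternative, which leaves a screen-reader user with nothing to read. Checks for this can be automated; the following sketch uses Python's standard HTML parser, and the page content shown is invented:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collect <img> tags that lack the alt attribute screen readers need."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.missing.append(attributes.get("src", "<no src>"))

def find_missing_alt(html):
    checker = AltTextChecker()
    checker.feed(html)
    return checker.missing
```

For example, `find_missing_alt('<img src="graph.gif">')` would flag `graph.gif`, while an image carrying `alt="Bar graph of scores"` would pass.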

 

Prohibitive Development Cost

Development expenses listed in a report by the National Governors Association (2002) include: “central hardware to deliver the test over the Internet, local telecommunications hardware, machines in schools for students to take the tests on, and test authoring and delivery software. Labor expenses include costs for entering questions into the testing software, assuring quality in the test’s operation, extracting student records from the test database and translating the information into a form suitable for analysis, and servicing the technology that runs the system. There are also ongoing connection charges” (p. 7). The National Governors Association recommends that states form consortia, cooperative agreements, or buying pools in order to reduce the costs of “test questions, telecommunications equipment, computer hardware, testing software, and equipment maintenance” (p. 9).


Universally Designed Computer-based Tests

Universal design is defined by the Center for Universal Design (1997) as “the design of products and environments to be usable by all people, to the greatest extent possible, without the need for adaptation or specialized design.” The Assistive Technology Act of 1998 (PL 105-394) addresses universal design through this definition:

The term ‘universal design’ means a concept or philosophy for designing and delivering products and services that are usable by people with the widest possible range of functional capabilities, which include products and services that are directly usable (without requiring assistive technologies) and products and services that are made usable with assistive technologies.

A recent report on the application of universal design to large-scale assessments (Thompson, Johnstone, & Thurlow, 2002) found that good basic design, whether on paper or technology-based, increases access for everyone, and poor design can have detrimental effects for nearly everyone. Many accessibility issues relate to content and design features, with content defined as subject matter on the page while design is defined as the organization or arrangement of objects and information on the page.

 

Content

An important function of well-designed assessments is that they actually measure what they are intended to measure. Test developers need to carefully examine what is to be tested and design items that offer the greatest opportunity for success within those constructs. Just as universally designed architecture removes physical, sensory, and cognitive barriers to all types of people in public and private structures, universally designed assessments need to remove all non-construct-oriented cognitive, sensory, emotional, and physical barriers.

Assessment instructions need to be easy to understand, regardless of a student’s experience, knowledge, language skills, or current concentration level. Directions and questions need to be in simple, clear, and understandable language. It is important for designers of computer-based tests to strive for content that is understandable and navigable. According to WebAIM (2001), “this includes not only making the language clear and simple, but also providing understandable mechanisms for navigating within and between pages” (p. 8).

 

Design Features

Legibility is the physical appearance of text; the way shapes of letters and numbers enable people to read text “quickly, effortlessly, and with understanding” (Schriver, 1997, p. 252). Though a great deal of research has been conducted in this area, the personal opinions of editors often prevail (Bloodsworth, 1993; Tinker, 1963). Bias results from items that contain physical features that interfere with a student’s focus on or understanding of the construct an item is intended to assess. Format dimensions can include contrast, type size, spacing, typeface, leading, justification, line length/width, blank space, graphs and tables, illustrations, and response formats (see Table 2).

Table 2. Characteristics of Maximum Legibility

Contrast: Black type on matte pastel or off-white paper is most favorable for both legibility and eye strain.

Type Size: Large type sizes are most effective for young students who are learning to read, students with visual difficulties, and individuals with eye fatigue issues.

Spacing: The amount of space between each character can affect legibility. Spacing needs to be wide between both letters and words. Fixed-space fonts seem to be more legible for some readers than proportional-spaced fonts.

Leading: Leading, the amount of vertical space between lines of type, must be enough to avoid type that looks blurry and has a muddy look. The amount needed varies with type size (for example, 14-point type needs 3-6 points of leading).

Typeface: Standard typeface, using upper and lower case, is more readable than italic, slanted, small caps, or all caps.

Justification: Unjustified text (with staggered right margin) is easier to see and scan than justified text – especially for poor readers.

Line Length: Optimal length is about 4 inches or 8 to 10 words per line. This length avoids reader fatigue and difficulty locating the beginning of the next line, which causes readers to lose their place.

Blank Space: A general rule is to allow text to occupy only about half of a page. Blank space anchors text on the paper and increases legibility.

Graphs and Tables: Symbols used on graphs need to be highly discriminable. Labels should be placed directly next to plot lines so that information can be found quickly and not require short-term memory.

Illustrations: When used, an illustration should be directly next to the question for which it is needed. Because illustrations create numerous visual and distraction challenges, and may interfere with the use of some accommodations (such as magnifiers), they should be used only when they contain information being assessed.

Response Formats: Response options should include larger circles (for bubble response tests), as well as multiple other forms of response.

From Thompson, Johnstone, & Thurlow, 2002.

 

It is important to maintain these aspects of universal design when converting paper/pencil tests to computer-based tests. Poor design on paper will result in poor design on a screen. In addition to the universal design elements described above, computer-based testing can offer several additional features that can increase the accessibility of assessments for all students, including students with disabilities and English language learners. According to WebAIM (2001), “Everyone benefits from well-designed Web sites, regardless of cognitive capabilities. In this context, ‘well-designed’ can be defined as having a simple and intuitive interface, clearly worded text, and a consistent navigational scheme between pages” (p. 8). These features also need to take into account variations in technology available in schools across a district or state, and the other challenges described in the previous section.

The provision of navigation tools and orientation information in pages can maximize access for all users. However, there are users who cannot access visual cues such as image maps, scroll bars, side-by-side frames, or graphics. Some users lose contextual information because they are accessing a page one word at a time through speech synthesis or braille. Ommerborn and Schuemer (2001, p. 21) conducted a survey of German students with disabilities and found that:

Being able to use various ways of sending commands within a programme not only helps people with specific handicaps, but also renders working with a computer much more comfortable for all users with their different preferences and skills…Multimedia products addressing several senses or allowing the user to choose between visual and acoustic information not only makes access easier for people with impaired senses but also makes the product altogether more attractive.


Assistive Technology

Even though items on universally designed assessments will be accessible for most students, there will still be some students who continue to need accommodations, including assistive technology. According to Bowe (2000), “One big advantage of universal design is that it minimizes the need, on the part of people with disabilities, for assistive technology devices and services” (p. 25). Items are biased when they do not allow for adaptation for use with assistive technology that is needed to facilitate use of the student’s primary means of communication. Computer-based tests need to be accessible for a variety of forms of assistive technology (e.g., key guards, specialized keyboards, trackballs, screen readers, screen enlargers) for students with physical or sensory disabilities. Bowe (2000) stated, “If a product or service is not usable by some individual, it is the responsibility of its developers to find ways to make it usable, or, at minimum, to arrange for it to be used together with assistive technologies of the user’s choice” (p. 27). Appendix A describes several resources to assist assessment developers in increasing access to assistive technology.

It is important to note that making computer-based testing amenable to assistive technology does not mean that students will automatically know what to do. Educators, especially special educators, need to be competent in technology knowledge and use. According to Lahm and Nickels (1999), “Educators must become proactive in their technology-related professional development because teacher education programs have only recently begun addressing the technology skills of their students” (p. 56). The Knowledge and Skills Subcommittee of the Council for Exceptional Children’s (CEC) Professional Standards and Practice Standing Committee has developed a set of 51 competencies for assistive technology that cross 8 categories, along with knowledge and skills statements for each category (see Lahm & Nickels, 1999).

 

Laws Governing Assistive Technology

Assistive technology is defined in the Individuals with Disabilities Education Act (IDEA 97) and the Rehabilitation Act, and is implied in the Americans with Disabilities Act (ADA). IDEA 97 defines assistive technology as “any item, piece of equipment, or product system…that is used to improve the functional capabilities of individuals with disabilities; and any service that directly assists an individual in the selection, acquisition, or use of an assistive technology device.” An “assistive technology device” is further defined as “any item, piece of equipment, or product system, whether acquired commercially off the shelf, modified, or customized, that is used to increase, maintain, or improve the functional capabilities of a child with a disability” (20 U.S.C. 1401(1)).

The Rehabilitation Act (as amended in 1998) requires institutions receiving federal funds to have accessible Web sites. Similarly, the Americans with Disabilities Act (ADA) requires covered entities to furnish appropriate auxiliary aids and services where necessary to ensure effective communication with individuals with disabilities, unless doing so would result in a fundamental alteration to the program or service or in an undue burden (See 28 C.F.R. 36.303; 28 C.F.R. 35.160). Auxiliary aids include taped texts, Brailled materials, large print materials, captioning, and other methods of making audio and visual media available to people with disabilities. Titles II and III of the ADA require State and local governments and the business sector to provide effective communication whenever they communicate through the Internet. In order to specifically address the needs of people with visual disabilities, an ADA policy ruling determined that a text format rather than a graphical format assures accessibility to the Internet for individuals using screen readers. Without special coding, a text browser will only display the word “image” when it reads a graphic image. If the graphic is essential to navigating the site (e.g., a navigational button or arrow) or contains important information (e.g., a table or image map), the user can get stuck, unable to move on or to understand the information provided.

 

Assistive Technology Resources

There are several resources available to increase the accessibility of computer-based testing for students with disabilities. These resources are found primarily in the area of general Web content. Chisholm, Vanderheiden, and Jacobs (1999) offer guidelines on how to make Web content accessible to people with disabilities. They are quick to point out that following these guidelines can also make Web content more available to all users, including those who use voice browsers, mobile phones, automobile-based personal computers, and other technology. The guidelines, found in Table 3, explain how to make multimedia content more accessible to a wide audience. For more information about Web accessibility, visit http://www.webaim.org, the official Web site of Web Accessibility in Mind (WebAIM). Several additional resources can be found in Appendix A.

 

Table 3. Web Content Accessibility Guidelines


Guideline 1. Provide equivalent alternatives to auditory and visual content.
Provide content that, when presented to the user, conveys essentially the same function or purpose as auditory or visual content.

Guideline 2. Don't rely on color alone.
Ensure that text and graphics are understandable when viewed without color.

Guideline 3. Use markup and style sheets and do so properly.
Mark up documents with the proper structural elements. Control presentation with style sheets rather than with presentation elements and attributes.

Guideline 4. Clarify natural language usage.
Use markup that facilitates pronunciation or interpretation of abbreviated or foreign text.

Guideline 5. Create tables that transform gracefully.
Ensure that tables have necessary markup to be transformed by accessible browsers and other user agents.

Guideline 6. Ensure that pages featuring new technologies transform gracefully.
Ensure that pages are accessible even when newer technologies are not supported or are turned off.

Guideline 7. Ensure user control of time-sensitive content changes.
Ensure that moving, blinking, scrolling, or auto-updating objects or pages may be paused or stopped.

Guideline 8. Ensure direct accessibility of embedded user interfaces.
Ensure that the user interface follows principles of accessible design: device-independent access to functionality, keyboard operability, self-voicing, etc.

Guideline 9. Design for device-independence.
Use features that enable activation of page elements via a variety of input devices.

Guideline 10. Use interim solutions.
Use interim accessibility solutions so that assistive technologies and older browsers will operate correctly.

Guideline 11. Use W3C technologies and guidelines.
Use W3C technologies (according to specification) and follow accessibility guidelines. Where it is not possible to use a W3C technology, or doing so results in material that does not transform gracefully, provide an alternative version of the content that is accessible.

Guideline 12. Provide context and orientation information.
Provide context and orientation information to help users understand complex pages or elements.

Guideline 13. Provide clear navigation mechanisms.
Provide clear and consistent navigation mechanisms -- orientation information, navigation bars, a site map, etc. -- to increase the likelihood that a person will find what they are looking for at a site.

Guideline 14. Ensure that documents are clear and simple.
Ensure that documents are clear and simple so they may be more easily understood.


Computerized Adaptive Testing

In computerized adaptive testing, a student responds to an item, which is followed by more difficult items if the student responded correctly, or easier items if the student responded incorrectly (Hamilton, Klein, & Lorié, 2001). Through this process, a student’s performance level is determined. According to Hamilton, Klein and Lorié (2001), “each response leads to a revised estimate of the student’s proficiency and a decision either to stop testing or to administer an additional item that is harder or easier than the previous one” (p. 12).

The advantages cited for computerized adaptive testing include short and efficient administration time, with the computer selecting the next item immediately after an item is completed. A proficiency level is determined through the completion of fewer items than a test in which students respond to every item on the test. According to McBride (1985), “A well-constructed adaptive test attains a specified level of measurement precision in about half the length of time a conventional test would require to reach the same level. This is attributable to the adaptive feature; by tailoring the choice of questions to match the examinee’s ability, the test bypasses most questions that are inappropriate in difficulty level and contribute little to the accurate estimation of the test-taker’s ability” (p. 26).
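The adaptive loop described above can be sketched as follows. This is a toy illustration under the one-parameter (Rasch) model with a hypothetical fixed-step proficiency update; operational adaptive tests use maximum-likelihood estimation, stopping rules, and content balancing.

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response for proficiency theta and
    item difficulty b under the one-parameter (Rasch) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of an item; greatest when the item's
    difficulty matches the current proficiency estimate."""
    p = rasch_p(theta, b)
    return p * (1.0 - p)

def next_item(theta, bank, used):
    """Select the most informative unused item from the bank."""
    return max((i for i in bank if i not in used),
               key=lambda i: item_information(theta, bank[i]))

def adaptive_test(answers, bank, max_items=5, step=0.5):
    """Toy adaptive loop: raise the estimate after a correct answer,
    lower it after an incorrect one (fixed-step update, assumed here
    for simplicity), then pick the next best-matched item."""
    theta, used = 0.0, []
    for _ in range(max_items):
        item = next_item(theta, bank, used)
        used.append(item)
        theta += step if answers[item] else -step
    return theta, used
```

With a small bank, a correct first answer routes the student to a harder item and an incorrect one to an easier item, which is exactly the tailoring McBride credits for the shorter test length.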

However, Stone and Lunz (1994) found that the inability of students taking computerized adaptive tests to review items and alter their responses may affect the quality of measurement. Students cannot select the order in which they respond to items, or leave some items blank.

Some research suggests that students who change earlier answers may improve their scores by a small margin (Gerson & Bergstrom, 1995; Stocking, 1996). There is also concern that some students may deliberately answer early items incorrectly in order to receive easier questions (Wainer, 1993).

The use of computerized adaptive tests for large-scale assessments has come under scrutiny by federal officials who question whether “levels” testing meets the accountability requirements of Title I (Olson, 2002, in Education Week). Levels testing, which has been defined as testing at a student’s instructional level rather than at his or her grade level, relies on overlapping levels within a single grade level, and common items among the levels. Computerized adaptive testing goes beyond the need for separate booklets by using a variety of complex algorithms that allow the student to move among different “levels” more freely, based on performance (Quenemoen, Thurlow, & Bielinski, in press).

 

Process for Developing Inclusive Computer-based Tests

The transformation of traditional paper/pencil tests to inclusive computer-based tests takes careful and thorough work that draws on the collaborative expertise of many people. As discussed earlier in this paper, for the full benefits of computer-based testing to be realized, a thoughtful and systematic process must guide the transfer of existing paper/pencil assessments. It is not enough to simply transfer test items from paper to screen. Not only will poor design elements on the paper test transfer to the screen, but the move may also create additional challenges that reduce the validity of assessment results. Some of the challenges traditionally present with accommodations could be minimized through universally designed computer-based tests, while others might remain or present even greater challenges. Here are some steps to follow in addressing these transformation issues.

Step 1. Assemble a group of experts to guide the transformation. This group needs to include experts on assessment design, accessible Web design, universal design, and assistive technology, along with state and local assessment and special education personnel. Table 4 contains a worksheet to use when gathering this group.

Table 4. Assemble a Group of Experts to Guide the Development of Computer-based Tests.

For each type of expert, list the names/positions of the people who will serve:

  • Assessment design experts
  • Accessible Web design experts
  • Universal design experts
  • Assistive technology experts
  • State assessment personnel
  • State special education personnel
  • Local assessment personnel
  • Local special education personnel


Step 2. Decide how each accommodation will be incorporated into the computer-based test. Examine each possible accommodation in light of computer-based administration. Some of the traditional paper/pencil accommodations will no longer be needed (e.g., marking responses on test form rather than on answer sheet), while others will become built-in features that are available to every test-taker. Some accommodations will be more difficult to incorporate than others, requiring careful work by test designers and technology specialists. The standards and guidelines for accessible Web design found in Appendices B, C, and D should be used when building in these features.
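Step 2’s distinction between built-in, self-selectable features and separately arranged accommodations can be modeled directly in the test-delivery software. The following is a hypothetical sketch (the feature names and the `TestSession` class are illustrative, not part of any cited standard):

```python
from dataclasses import dataclass, field

# Hypothetical set of accommodations that become built-in features,
# available for self-selection by any test-taker (illustrative names).
BUILT_IN = {"large_print", "screen_reader", "highlighter", "color_scheme"}

@dataclass
class TestSession:
    """One student's sitting, tracking which built-in features are on."""
    student_id: str
    enabled: set = field(default_factory=set)

    def enable(self, feature):
        """Any student may turn on a built-in feature; anything else
        must be arranged as a separate accommodation."""
        if feature in BUILT_IN:
            self.enabled.add(feature)
            return True
        return False
```

Keeping this distinction explicit in the software makes it easy to audit which needs are met universally and which still require case-by-case planning.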

Step 3. Consider each accommodation or assessment feature in light of the constructs being tested. For example, what are the implications of the use of a screen reader when the construct being measured is reading, or the use of a spellchecker when achievement in spelling is being measured as part of the writing process? As the use of speech recognition technology permeates the corporate world, constructs that focus on writing on paper without the use of a dictionary or spellchecker may become obsolete and need to be reconsidered.

Step 4. Consider the feasibility of incorporating the accommodation into computer-based tests. Questions about the feasibility of an accommodation may require review by technical advisors or members of a policy/budget committee, or may require short-term solutions along with long-term planning. According to the Technology Act of 1998 (§ 1194.2 Application):

(a) When developing, procuring, maintaining, or using electronic and information technology, each agency shall ensure that the products comply with the applicable provisions of this part, unless an undue burden would be imposed on the agency.

(1) When compliance with the provisions of this part imposes an undue burden, agencies shall provide individuals with disabilities with the information and data involved by an alternative means of access that allows the individual to use the information and data.

Construct a specific plan for building in features that are not immediately available, in order to keep them in the purview of test developers. Extensive pilot testing needs to be conducted with a variety of equipment scenarios and accessibility features.

Step 5. Consider training implications for staff and students. The best technology will be useless if students or staff do not know how to use it. Careful design of local training and implementation needs to be part of the planning process. Special consideration needs to be given to the computer literacy of students and their experience using features like screen readers. Information about the features available on computer-based tests needs to be marketed to schools and made available to IEP teams for use in planning a student’s instruction and in preparing for the most accessible assessments possible. Practice tests that include these features need to be available to all schools year-round. This availability presents an excellent opportunity for students whose schools have previously been unaware of, or have balked at, the use of assistive technology.


Considerations

Most states have a list of possible or common accommodations for students with disabilities within the categories of timing/scheduling, presentation, response, and setting (Thurlow, Lazarus, & Thompson, 2002). Some states also list accommodations specifically designed for students with limited English proficiency (Rivera, Stansfield, Scialdone, & Sharkey, 2000).

 

Presentation Accommodations

The list of accommodations in Table 5 is an expanded list of presentation accommodations generated to address the needs of students with a variety of accommodation needs—including students with disabilities, students with limited English proficiency, students with both disabilities and limited English proficiency, and students who do not receive special services, but have a variety of unique learning and response styles and needs. For each accommodation, relevant considerations are provided in the table. The three columns to the right of the Considerations Column represent:

  • A built-in feature of universally designed computer-based tests (available for self-selection by any student)
  • The need for this accommodation is not affected by computer-based testing
  • A new or different accommodation may be needed for computer-based testing

 Following Table 5 is a summary of considerations for each of the presentation accommodations.

Table 5. Presentation Accommodations

Key: 1 = built-in feature of universally designed computer-based test (available for self-selection by any student); 2 = need for accommodation not affected by computer-based test; 3 = new or different accommodation may be needed because of computer-based test.

Large print and magnification (1, 3)
  • Capacity for any student to self-select print size or magnification
  • Graphics and text-based user interfaces have different challenges
  • Scrolling issues
  • Determination of optimal print size (e.g., default set at 14 pt) and size of graphics to reduce need for large print or magnification
  • Variations in screen size
  • Effects of magnification on graphics and tables

Instructions simplified/clarified (1)
  • Instructions designed for maximum simplicity/clarity
  • Capacity for student to self-select alternate versions of instructions in written or audio format
  • Capacity to have instructions repeated as often as student chooses
  • Variable audio speed

Audio presentation of instructions and test items (1)
  • Capacity for any student to self-select audio (screen reader) presentation of instructions (all students wear ear/headphones)
  • Graphics and text-based user interfaces have different challenges
  • Capacity to repeat instructions and items as often as student chooses
  • Variable audio speed
  • Audio presentation must be high quality

Instructions and test items presented in sign language (1)
  • Capacity for student to self-select alternate versions of instructions in written format
  • Capacity for student to self-select signed versions of instructions and test items (Note: some words may not be easily translated into sign language)
  • Graphics and text-based user interfaces have different challenges
  • Not feasible to read lips on video

Instructions and test items presented in a language other than English (1, 2)
  • Capacity for student to self-select alternate language versions of test items in written or audio format
  • Beware that some languages may take more space or time than English
  • Machine translation capability
  • Graphics and text-based user interfaces have different challenges
  • Capacity for pop-up translation
  • Variable audio speed

Braille (2, 3)
  • Use of screen reader that converts text into synthesized speech or Braille
  • Alternative tags for images
  • Graphics and text-based user interfaces have different challenges
  • Students using Braille may require extra time

Highlighter and place holding templates (1)
  • Capacity for any student to self-select highlighter
  • Graphics and text-based user interfaces have different challenges
  • Clear instructions for use of highlighter

Graphics or images that supplement text (3)
  • Careful selection of images
  • Alternative text or "alt tags" for images
  • Graphics and text-based user interfaces have different challenges
  • Avoidance of complex backgrounds or wallpaper that may interfere with the readability of overlying text
  • Tactile graphics or three-dimensional models may be needed for images

Paper/pencil test format (3)
  • Students who are not computer literate
  • Students who need accommodations that are not available on computer-based assessments

Use of color (1)
  • Capacity for multiple choices of screen and text color
  • Graphics and text-based user interfaces have different challenges

Flashing or blinking text or objects (1)
  • Flash or blink frequency

Multiple column layout (1, 3)
  • Linear presentation order needs to be logical
  • Access to screen reader
  • Graphics and text-based user interfaces have different challenges

Captioning (1, 3)
  • Capacity to provide captioning for video content

 

Large print and magnification. When type is enlarged on a screen, students may need to scroll back and forth, or up and down, to read an entire test item. Text that re-wraps to fit the screen when magnified is more useful than text that requires horizontal scrolling to be accessible. Some students use a large-screen monitor to enlarge pages proportionally. Graphics, when enlarged, may become very pixelated and difficult to view. Students who use handheld magnifiers or monocular devices when working on paper may not be able to use these devices on a screen because of the distortion of computer images. If a graphical user interface is used (versus a text-based one), students will not have the option of altering print size on the screen. However, a text-based user interface may default to a small print size or font on some computers.
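The preferred re-wrapping behavior can be illustrated with a short sketch, using character count as a stand-in for rendered line width (a simplification; real rendering depends on font metrics):

```python
import textwrap

def reflow(text, chars_per_line):
    """Re-wrap item text to a narrower measure, as an interface should
    when type is enlarged, instead of forcing horizontal scrolling."""
    return textwrap.wrap(text, width=chars_per_line)
```

Narrowing the measure produces more, shorter lines, but every word stays visible without sideways scrolling and no content is lost.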

Instructions simplified/clarified. Instructions for all students need clearly worded text that can be followed simply and intuitively, with a consistent navigational scheme between pages/items. Students need an option to self-select alternate forms of instructions in written or audio format.

Audio presentation of instructions and test items. Screen readers can present text as synthesized speech. Screen readers need to be operable at variable speeds and need to allow students the option of repeating instructions or items as often as desired. The use of text-to-speech for test items may not be a viable option if the construct tested is the ability to read print. Be aware that screen readers will attempt to pronounce acronyms (e.g., CRT) and abbreviations that contain vowels (e.g., AZ). It is important to avoid these both in the text of test items and in the alternative text or “alt tags” that are used. A text-based user interface is required for the use of screen readers.
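The caution about acronyms can be enforced with a simple authoring check. As a sketch (the pattern is a heuristic and will not catch every problematic abbreviation), the following flags all-caps tokens so item writers can rephrase them or supply spoken equivalents:

```python
import re

# All-caps tokens of two or more letters, which a screen reader may
# try to pronounce as words (e.g., "CRT", "AZ").
ACRONYM = re.compile(r"\b[A-Z]{2,}\b")

def flag_acronyms(text):
    """Return the acronym-like tokens found in item text, sorted and
    de-duplicated, for review by item writers."""
    return sorted(set(ACRONYM.findall(text)))
```

Run over item text and alt tags alike, this gives reviewers a concrete list of strings to rewrite before audio administration.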

Instructions and test items presented in sign language. Since most students who read sign language also read print, this accommodation would apply mostly to the use of multimedia item presentation (e.g., items that use audio or video). Students need to be able to self-select signed versions of audio or video instructions and test items. If sign language is used, it needs to be large enough on the screen and have good resolution for students to be able to determine subtle signs. Students also need to have the option to repeat instructions or items. Reading the speech of a person on a Web video is not feasible. Captioning in addition to signing may be the most feasible option for audio or video presentations.

Instructions and test items presented in a language other than English. Translated items in some languages may significantly increase the length of a test, especially if the language requires phrases or explanations of English words. Some students need English and native language versions of items available at the same time. Computer-based testing may provide an advantage in both of these situations for students who are computer literate and able to scroll across and down long pages, and who can move between two versions of items. For students using screen readers, it is important for the screen reading software to recognize non-Latin based languages (e.g., Chinese, Korean, Hmong). Audio versions in native languages need to be in a dialect familiar to the student (e.g., a student from Mexico may have difficulty understanding a translation from Spain).

The use of machine translation is increasing. Yet, at this time, the translation may not be good enough to produce valid test items. Tests developed in multiple languages, with human rather than machine translation, continue to be the most valid. Machine translators may be useful as a dictionary or glossary for specific words or phrases. The disadvantage of a human translator is the lack of standardized translation. For example, an interpreter may change the difficulty of items through word choice, by explaining vocabulary for which there is no direct translation, or by otherwise coaching students.

Braille. Tests that do not require students to read printed text (e.g., math tests) can be read by a student with a screen reader that converts text into synthesized speech. Tests that do require students to read printed text (e.g., reading tests) could be read by a student with a screen reader that converts text into Braille through a refreshable Braille device attached to the computer. For students who are deaf and blind, all of the content must be in a text format so that it can be converted to Braille. Images must also be accessible. The Technology Act requires that “when an image represents a program element, the information conveyed by the image must also be available in text.” Strategies for this are described in the section on images and graphics below.

Highlighter and place holding templates. Students should be able to self-select the use of a highlighting feature to mark words or phrases within test items, just as they might on paper/pencil tests.

Graphics or images that supplement text. Graphics and images on an assessment should aid understanding of an item, not serve purely decorative purposes. Images can aid greatly in the understanding of content, especially for students with learning disabilities and students whose native language is not English. However, pictures and other graphics cannot be directly accessed by users of screen readers or foreign language translation applications. The Assistive Technology Act requires that “When an image represents a program element, the information conveyed by the image must also be available in text.” The Act goes on to state, “A text equivalent for every non-text element shall be provided (e.g., via “alt,” “longdesc,” or in element content).” Images need to be selected carefully, with a concise yet complete description in an alt tag.

Tactile graphics or three-dimensional models may be needed for images. It is also important to avoid the use of complex backgrounds or wallpaper that may interfere with the readability of overlying text. Simpler versions of any screens with complex backgrounds need to be available.
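Where tests are delivered in HTML, the text-equivalent requirement quoted above can be screened automatically. The sketch below is illustrative only; it uses Python's standard html.parser, and the item markup shown is hypothetical, not taken from any actual assessment.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collect <img> tags that lack a text equivalent (alt attribute)."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            # An empty alt="" is acceptable for a purely decorative image;
            # a missing alt attribute is not.
            if "alt" not in attrs:
                self.missing.append(attrs.get("src", "(no src)"))

# Hypothetical test item markup: one labeled image, one unlabeled.
item_html = (
    '<p>Item 12</p>'
    '<img src="triangle.gif" alt="Right triangle with legs 3 cm and 4 cm">'
    '<img src="border.gif">'
)
checker = AltTextChecker()
checker.feed(item_html)
print(checker.missing)  # images still needing an alt or longdesc equivalent
```

A check like this only finds missing descriptions; whether an existing description is concise yet complete still requires human review.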

Paper/pencil test format. Some students will continue to need paper and pencil versions of tests. There are still many students who are not computer literate. These students may, for example, be recent immigrants from countries where computers are not used in instruction, or they may have had little formal schooling in their home country. Other students may have had insufficient opportunities to become computer literate in U.S. schools for a variety of reasons. Some students need accommodations that have not been made available on computer-based tests, especially if the assessments are graphics based rather than text based.

Use of color. Students need to be able to choose a variety of contrasting colors for background and text. According to the Assistive Technology Act, computer “applications shall not override user selected contrast and color selections and other individual display attributes.” In addition, for the assistance of students who are color blind or who are using monochrome monitors, the Assistive Technology Act states, “Color coding shall not be used as the only means of conveying information, indicating an action, prompting a response, or distinguishing a visual element…Web pages shall be designed so that all information conveyed with color is also available without color, for example from context or markup.” If color-coding is used to distinguish information, some other distinguishing feature should also be present (such as an asterisk or other textual indication).

Flashing or blinking text or objects. It is important to avoid text or objects that flash or flicker at rates that may induce seizures in people who are susceptible to them. The Assistive Technology Act requires that “software shall not use flashing or blinking text, objects, or other elements having a flash or blink frequency greater than 2 Hz and lower than 55 Hz.”
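The Act's forbidden band can be expressed as a simple screening rule. Below is a minimal sketch (the function name is hypothetical) that rejects flash rates greater than 2 Hz and lower than 55 Hz:

```python
def blink_rate_allowed(frequency_hz):
    """Screen a candidate flash rate against the Act's forbidden band:
    rates greater than 2 Hz and lower than 55 Hz are disallowed."""
    return not (2 < frequency_hz < 55)

# Hypothetical candidate rates for a cursor or alert element.
rates = [0.5, 3, 30, 60]
print([f for f in rates if blink_rate_allowed(f)])
```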

 Multiple column layout. Items that use columns or tables need to be analyzed carefully to make sure that their linear presentation order is logical, enabling screen readers to access the information.
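Because screen readers encounter content in source order, a two-column item coded as visual rows will be read with the columns interleaved. This small sketch, using hypothetical item text, contrasts the two reading orders:

```python
# Two logical columns of a hypothetical test item: a passage and its questions.
passage = ["The water cycle begins", "when the sun heats the ocean."]
questions = ["1. What heats the ocean?", "2. What begins the cycle?"]

# If the layout is coded as visual rows, a screen reader encounters the
# cells in source (row-major) order, interleaving the two columns:
rows = list(zip(passage, questions))
source_order = [cell for row in rows for cell in row]

# Coding each column as a complete block preserves the logical order:
logical_order = passage + questions

print(source_order)
print(logical_order)
```

The fix is structural: mark up each column as a complete unit so that linear reading order matches the intended reading order.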

Captioning. As multimedia begins to be used for assessment presentation, it will be important to provide synchronized captions or transcripts for the audio portion of the content. Closed or open captioning for Web-based multimedia can be provided in the same way as for television shows or movies.

 

Response Accommodations

Table 6 presents an expanded list of response accommodations. For each accommodation, several considerations are listed. The columns to the right indicate whether the accommodation is a built-in feature, whether the need for it is not affected by computer-based testing, or whether another (new or different) accommodation may be needed.

Table 6. Response Accommodations

Accommodation | Considerations for computer-based tests | 1* Built in | 2* Not affected | 3* Other accom. needed
Write in test booklet | Capacity for multiple options for selecting response – mouse click, keyboard, touch screen, speech recognition, assistive devices to access the keyboard (e.g., mouth stick or head wand); speech recognition capability | X |   | X
Scribe | Capacity for multiple options for selecting response (see above); speech recognition** capability | X |   | X
Brailler | Speech recognition capability |   | X | X
Tape recorder | Speech recognition capability |   |   | X
Paper/pencil response | Student computer literacy; option for paper/pencil in addition to computer (e.g., scratch paper for solving problems, drafting ideas); option for paper/pencil in place of computer (e.g., extended response items); speech recognition capability |   | X | X
Spell check | Capacity for any student to self-select spell check option; capacity to disable spell check option when spelling achievement is being measured; spelling implications when using speech recognition software | X |   |  
Calculator | Capacity for any student to select calculator option | X |   |  
English or bilingual dictionary/glossary | Capacity for any student to self-select dictionary option; capacity for pop-up definitions of key words/phrases (built into assessment); capacity for use of multiple languages | X | X | X

*1 Built-in feature of universally designed computer-based tests (available for self-selection by any student)
*2 Need for accommodation not affected by computer-based test
*3 New or different accommodation may be needed because of computer-based test
** According to Williams (2002), “The terms ‘speech recognition’ and ‘voice recognition’ are sometimes used interchangeably; however, voice recognition is primarily the task of determining the identity of a speaker, rather than the content of his or her speech” (p. 43).

Write in test booklet. There are many options for marking responses on computer-based tests that are not available on paper. It would still be possible for a student to dictate responses to a teacher, who would then mark them on the computer. The option of speech recognition software is also becoming more available. Speech recognition technology enables computers to translate human speech into a written format. Students who use speech recognition need to be tested in individual settings so as not to distract other test takers. Currently, speech recognition only works for some people, while others, especially those who are not native English speakers or those with speech impairments, can be frustrated by the software’s lack of ability to differentiate many of the sounds that they make. Some second language learners have accents that do not work well with speech recognition software (e.g., speakers of tonal languages tend to carry those tones into English and the software often does not recognize them). However, this technology is improving rapidly to recognize speakers with a wider variety of regional and second language accents (Williams, 2002). For example, according to Williams (p. 44):

Until recently, research on speech recognition for children used standard acoustic models based on a blend of adult voices. In order to improve the accuracy of recognition for children, researchers at IBM’s T.J. Watson Research Center have created a children’s acoustic model based on data collected from 800 children interviewed at multiple locations across the United States.

Research is also underway to allow students to speak naturally, rather than the current practice of pausing slightly between words. High-quality microphones improve recognition. Students who have tests presented in their native language may have a difficult time responding using an English alphabet keyboard if they are responding in a non-alphabet language. For example, in Chinese, adults need to know thousands of individual characters to read a text like a newspaper. Each character equals a word. So, Chinese computer keyboards may have keys that represent pieces of characters (strokes) that have to be combined together in a precise way to form a specific word.

Additional options that can enable students to select responses independently include simple mouse clicks, using the keyboard, touching the screen and assistive devices to access the keyboard (e.g., mouth stick or head wand).

Scribe. Many of the comments and cautions described in the previous paragraphs also apply here. Students who are able to use speech recognition software may be able to dictate written responses without the aid of a human scribe. Other assistive technology may enable students to compose extended responses, such as communication devices, a mouth stick or head wand.

Brailler. Some students may be able to use speech recognition software (with the cautions described above) in place of a Brailler. Others will continue to require or prefer the use of a Brailler.

Tape recorder. Speech recognition software can take the place of a tape recorder for many students, with the cautions described above.

Paper/pencil response. Some students will not have enough experience or confidence using computers to be able to produce valid assessment responses and may need to use paper/pencil test forms until they become computer literate. Some students will only need paper for solving problems and drafting ideas, while others will need to respond completely using a paper/pencil format, with responses transferred to an electronic test form by a test administrator. Speech recognition, with the cautions described above, may be a viable option for some of these students.

Spell check. The use of a spell check has been controversial on writing tests. It is usually allowed in situations where spelling achievement is not measured, and not allowed when spelling achievement is being measured. Spelling implications need to be considered for students who use speech recognition software.

Calculator. As with the spell check, an online calculator option has been controversial on mathematics tests. Calculator use is often allowed on paper/pencil tests when arithmetic is not the construct being measured (Russell, 2002). However, standardization of the type of calculator used has been very difficult and would be much easier if all students had the same online calculator to use. Use of an online calculator is challenging for some students, especially if they have not had practice with this tool in their daily work. Currently, few teachers use computers in math instruction, so students are not used to working on screens.

English or bilingual dictionary/glossary. Students can self-select a dictionary option, or simply click on key words for definitions in English or other languages. Print copies of dictionaries could continue to be used if this option is not available. And, as with the spell check option, it would need to be disabled when finding the definition of a word is being tested.
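As a sketch of how such a self-selectable glossary might behave, the following illustrative code (the term list and function name are hypothetical, not from any actual test system) returns a pop-up definition in English or another language, and disables the feature when vocabulary is the construct being measured:

```python
# Hypothetical glossary entries; a real system would draw these from the
# assessment's built-in dictionary of key words and phrases.
GLOSSARY = {
    "evaporate": {"en": "to change from liquid to gas",
                  "es": "evaporarse; pasar de líquido a gas"},
}

def define(word, language="en", construct_measures_vocabulary=False):
    """Return a pop-up definition, unless vocabulary is the tested construct."""
    if construct_measures_vocabulary:
        return None  # feature disabled, as with spell check on spelling tests
    entry = GLOSSARY.get(word.lower())
    return entry.get(language) if entry else None

print(define("evaporate"))                                      # English definition
print(define("evaporate", language="es"))                       # bilingual option
print(define("evaporate", construct_measures_vocabulary=True))  # disabled
```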

 

Timing/Scheduling Accommodations

Timing accommodations reflect changes in the amount of time a student has to complete an assessment, while scheduling accommodations are changes in the time of day at which a student is tested. Table 7 presents an expanded list of timing and scheduling accommodations, with considerations and implications.

Table 7. Timing/Scheduling Accommodations

Accommodation | Considerations for computer-based tests | 1* Built in | 2* Not affected | 3* Other accom. needed
Extended time | Availability/location of computers and peripherals; flexible, individualized timing | X | X |  
Time of day beneficial to student | Capacity of network system; availability/location of computers and peripherals; scheduling | X | X |  
Breaks during test | Maintaining place and saving completed responses; capacity to turn off monitor/blank screen temporarily; test security |   | X |  
Multiple test sessions, possibly over multiple days | Maintaining place and saving completed responses; test security |   | X |  
Order of sub-test administration | Capacity of technology for self-selection of subtest order; test security; scheduling | X | X |  

*1 Built-in feature of universally designed technology-based test (available for self-selection by any student)
*2 Need for accommodation not affected by computer-based test
*3 New or different accommodation may be needed because of computer-based test

 

Extended time. Well-designed assessments—those designed for maximum legibility and readability—take less time to complete than poorly designed assessments. Still, it may require more time for students who are not computer literate to take computer-based tests than it does for them to take paper/pencil assessments. Allowing all students time to complete an assessment presents scheduling challenges that need to be considered when planning test administration. For example, groups of students cannot be scheduled for testing in a computer lab every two hours if there are students who cannot finish in that amount of time. It may be difficult for a student to log off one computer and then log back on at another location to complete an assessment. However, with the advent of wireless computers, it may be possible for a computer to be used in any location.

Timing is no longer an issue for most criterion-referenced tests, which tend to be untimed. Computerized adaptive tests, where items are presented based on a student’s previous responses, tend to be shorter in length than traditional large-scale tests, and usually take less time to complete.

Time of day beneficial to student. Currently, it is common for all test takers within a building, district, or even state to be tested at the same time on the same day. With computer-based testing, test times probably need to vary because of the availability of computers and network capacity. This variability may increase opportunities for individual students to be scheduled at test times that are most beneficial for them. For example, a student who is more alert in the morning because of medication could be tested during a morning session.

Breaks and multiple test sessions. Technology is required for multiple test sessions that would allow individual students to submit their completed responses and be able to log out and back on again at another time, starting at the place where they previously left off. For short breaks, it may be possible to simply turn off the monitor or create a blank screen rather than logging out. Careful scheduling is needed for multiple test sessions to make sure that computers are available. Test security becomes an issue if students who have responded to the same test items have opportunities to interact with each other between test sessions. This can be alleviated through the use of item banks large enough to make it unlikely that students would be exposed to the same items. It might also be possible to block access to items completed during a previous session. However, it is important for students to be able to return to items that they skipped or did not complete, just as they can with paper/pencil tests.
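The save-and-resume behavior described above can be sketched as follows. The field names and JSON format are illustrative assumptions, not a description of any actual testing system:

```python
import json

def save_session(responses, current_item):
    """Serialize a student's progress; None marks an item that was skipped."""
    return json.dumps({"responses": responses, "current_item": current_item})

def resume_session(saved):
    """Restore progress and list the skipped items still open for review."""
    state = json.loads(saved)
    skipped = [i for i, r in enumerate(state["responses"]) if r is None]
    return state["current_item"], skipped

# A student answers items 0 and 2, skips item 1, then breaks at item 3.
saved = save_session(["B", None, "D"], current_item=3)
resume_at, open_items = resume_session(saved)
print(resume_at, open_items)
```

Keeping skipped items addressable in the saved state is what preserves the paper/pencil ability to return to unanswered questions, while completed sessions can still be locked for security.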

Order of subtest administration. Tests can be set up to allow students to self-select the order in which they take each subtest. The security issues described above also apply here. If students within a room are not all working on the same subtest, directions or other guidance from the test administrator would need to be provided individually.

 

Setting Accommodations

Table 8 provides a list of commonly used setting accommodations. For each accommodation, we list considerations and indicate whether it is a built-in feature, whether the need for it is unaffected by computer-based testing, or whether a new or different accommodation may be needed.

 

Table 8. Setting Accommodations

Accommodation | Considerations for computer-based tests | 1* Built in | 2* Not affected | 3* Other accom. needed
Individual or small group administration | Availability/location of computers and peripherals; grouping arrangements; use of earphones or headphones; use of individual setting if response method distracts other students (e.g., speech recognition) | X | X |  
Preferential seating | Availability/location of computers and peripherals | X | X |  
Special lighting | Minimize glare from windows or overhead lights; contrast and color in item design | X | X |  
Adaptive or special furniture | Types of adaptive furniture needed and need for changes in computer location/peripherals |   | X | X
Hospital/home/non-school administration | Availability/comparability/location of computers and peripherals; test security |   | X |  

*1 Built-in feature of universally designed computer-based test (available for self-selection by any student)
*2 Need for accommodation not affected by computer-based test
*3 New or different accommodation may be needed because of computer-based test

 

Individual or small group administration. Computer-based tests create increased individualization for every student. Each student can be seated at a separate computer station wearing ear/headphones for audio instructions or items. Keyboard noise may be distracting for students not wearing headphones. Students using speech recognition systems or other distracting response methods need to be tested in individual settings.

Preferential seating. This becomes a non-issue when students are seated at individual computer stations and do not need to focus on activity in a certain part of the room. Configuration of the computer lab may influence seating arrangements. For example, some students will need space around their computer for assistive technology; others may need special lighting.

Special lighting. Computer labs are usually set up to minimize glare from windows or overhead lights. Many also contain incandescent lighting, which is less distracting for students with attention deficits and produces better light for students with visual impairments. In designing computer-based tests, it is important to maximize contrast between the print and background and to ensure that text and graphics are understandable when viewed without color, for students who are color-blind or using monochrome monitors. Students should be able to self-select text and background colors and shading that maximizes their ability to read print on the screen.

Adaptive or special furniture. Students need comfortable access to a computer screen and any peripheral presentation or response technology. These arrangements need to be made on an individual basis with sufficient preparation time.

Home/hospital/non-school administration. Computer-based tests present new challenges for students who are tested in non-school locations. Students need access to a laptop computer and a network connection (possibly wireless), along with any individualized accommodations. It is important to make sure that the equipment is comparable to that used by students assessed in school buildings.


Summary

With the reauthorization of Title I, nearly all states are in the process of designing new assessments. As part of this process, several states are considering the use of computer-based testing, since this is the mode in which many students are already learning. Several states have already begun designing and implementing computer-based testing. According to a report to the National Governors Association (2002), “Testing by computer presents an unprecedented opportunity to customize assessment and instruction to more effectively meet students’ needs” (p. 8). The potential opportunities presented by computer-based testing include efficient administration, student preference, self-selection options for students, improved writing performance, built-in accommodations, immediate results, efficient item development, increased authenticity, and the potential to shift the focus from assessment to instruction. Of course, many challenges must be overcome for computer-based testing to be effective for large-scale state assessments, including equity and skill in computer use, added challenges for some students, technological hurdles, security of online data, lack of expertise in designing accessible Web pages, and prohibitive development costs.

Because many accessibility features can be built into computer-based tests, the validity of test results can be increased for many students, including students with disabilities and English language learners, without the addition of special accommodations. However, even though items on universally designed assessments are accessible for most students, some specialized accommodations will still be needed, and computer-based testing must be amenable to them. Students with disabilities will be at a great disadvantage if paper/pencil tests are simply copied on screen without any flexibility. Until the implications of graphics-based versus text-based user interfaces are considered and resolved, a large number of students will need to continue to use paper/pencil tests. This may reduce the comparability of results and increase administrative time and the potential for errors when a test administrator transfers paper/pencil responses to a computer for scoring.

There are many resources for building accessible computer-based tests in order to keep from reinventing systems from state to state. These are described throughout this report and listed in Appendix A.

Several steps were described to assist groups in the thoughtful development of computer-based tests. These include:

Step 1. Assemble a group of experts to guide the transformation.

Step 2. Decide how each accommodation will be incorporated into the computer-based test.

Step 3. Consider each accommodation or assessment feature in light of the constructs being tested.

Step 4. Consider the feasibility of incorporating the accommodation into the computer-based test.

Step 5. Consider training implications for staff and students.

Skipping any of these steps may result in the design of assessments that exclude large numbers of students.

In conclusion, a report to the National Governors Association (2002, p. 9) sums up what we need to remember as computer-based testing grows across the United States and throughout the world:

Do not forget why electronic assessment is desired. Electronic assessment will enable states to get test results to schools faster and, eventually, cheaper. It will help ensure assessment keeps pace with the tools that students are using for learning and with the ones that adults are increasingly using at work. The technology will also help schools improve and better prepare students for the next grade, for postsecondary learning, and for the workforce.


References

Anderson, M., Liu, K., Swierzbin, B., Thurlow, M., & Bielinski, J. (2000). Bilingual accommodations for limited English proficient students on statewide reading tests: Phase 2 (Minnesota Report 31). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Baker, E.L. (1999). Technology: Something’s coming—something good. CRESST Policy Brief 2. Los Angeles, CA: UCLA, National Center for Research on Evaluation, Standards, and Student Testing.

Baker, E.L. (2002). Design of automated authoring systems for tests. In National Research Council, Technology and assessment: Thinking ahead: Proceedings of a workshop. Board on Testing and Assessment, Center for Education, Division of Behavioral and Social Sciences and Education. Washington, DC: National Academy Press.

Bejar, I.I. (1995). From adaptive testing to automated scoring of architectural simulations. In E.L. Mancall & P.G. Bashook (Eds.), Assessing clinical reasoning: The oral examination and alternative methods. Evanston, IL: American Board of Medical Specialties.

Bennett, R.E. (1995). Computer-based testing for examinees with disabilities: On the road to generalized accommodation. In S. Messick (Ed.), Assessment in higher education: Issues of access, student development, and public policy. Hillsdale, NJ: Erlbaum.

Bennett, R.E. (1998). Reinventing assessment: Speculations on the future of large-scale educational testing. Princeton, NJ: Policy Information Center, Educational Testing Service. Retrieved March, 2002, from the World Wide Web: www.ets.org/research/pic/bennett.html

Bennett, R.E. (1999). Using new technology to improve assessment. Educational Measurement Issues and Practice, 18 (3), 5-12.

Bennett, R.E. (2001). How the Internet will help large-scale assessment reinvent itself. Education Policy Analysis Archives, 9 (5). Retrieved March, 2002, from the World Wide Web: http://epaa.asu.edu/epaa/v9n5.html

Bennett, R.E. (2002). An electronic infrastructure for a future generation of tests. In H.F. O’Neil, Jr. & R. Perez (Eds.), Technology applications in education: A learning view. Mahwah, NJ: Erlbaum.

Bennett, R.E., Goodman, J., Hessinger, J., Ligget, J., Marshall, G., Kahn, H., & Zack, J. (1999). Using multimedia in large-scale computer-based testing programs. Computers in Human Behavior, 15, 283-294.

Bloodsworth, J.G. (1993). Legibility of print. Columbia, SC: ERIC Accession No: ED 355497.

Bolt, D. & Crawford, R. (2000). Digital divide: Computers and our children’s future. New York: TV Books.

Bowe, F. (2000). Universal design in education: Teaching nontraditional students. Westport, CT: Bergin & Garvey.

Brown, P.J., & Augustine, A. (2001). Screen reading software as an assessment accommodation: Implications for instruction and student performance. Paper presented at the American Educational Research Association Annual Meeting, Seattle, WA, April, 2001.

Brown-Chidsey, R., & Boscardin, M.L. (1999). Computers as accessibility tools for students with and without learning disabilities. Amherst, MA: University of Massachusetts.

Brown-Chidsey, R., Boscardin, M.L., & Sireci, S.G. (1999). Computer attitudes and opinions of students with and without learning disabilities. Amherst, MA: University of Massachusetts.

Burk, M. (1999). Computerized test accommodations: A new approach for inclusion and success for students with disabilities. Washington, D.C.: A.U. Software.

Bushweller, K. (2000, June). Electronic exams: Throw away the No. 2 pencils—here comes computerized testing. Electronic School, 20-24.

Calhoon, M.B., Fuchs, L.S., & Hamlett, C.L. (2000). Effects of computer-based test accommodations on mathematics performance assessments for secondary students with learning disabilities. Learning Disability Quarterly, 23, 271-282.

Campbell, L.M. & Waddell, C.D. (1997). Technology-based curbcuts: How to build an accessible Web site. CAPED Communiqué, California Association on Postsecondary Education and Disability.

Center for Universal Design. (1997). What is Universal Design? North Carolina State University: Center for Universal Design. Retrieved March, 2002, from the World Wide Web: www.design.ncsu.edu

Chisholm, W., Vanderheiden, G., & Jacobs, I. (1999). Web content accessibility guidelines. Madison, WI: University of Wisconsin, Trace R & D Center. Retrieved March, 2002, from the World Wide Web: http://www.w3.org/TR/1999/WAI-WEBCONTENT-19990505

Chung, W.K., Baker, E.L., & Cheak, A.M. (2001). Knowledge mapper authoring system prototype. (Final deliverable to OERI). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing.

Chung, W.K., Klein, D.C.D., Herl, H.E., & Bewley, W. (2001). Requirements specification for a knowledge mapping authoring system. (Final deliverable to OERI). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing.

Computer Science and Telecommunications Board. (1997). More than screen deep: toward every-citizen interfaces to the nation’s information infrastructure. Washington DC: Commission on Physical Sciences, Mathematics, and Applications, National Research Council, National Academy Press. Retrieved March, 2002, from the World Wide Web: http://www.nap.edu/readingroom/books/screen

Daiute, C. (1985). Writing and computers. Reading, MA: Addison-Wesley.

Dolan, R.P., & Hall, T.E. (2001). Universal design for learning: Implications for large-scale assessment. IDA Perspectives 27(4), 22-25. Retrieved March, 2002, from the World Wide Web: http://www.cast.org/udl/index.cfm?i=2518

Fuchs, L.S., Fuchs, D., Eaton, S., Hamlett, C.L., & Karns, K. (2000). Supplementing teacher judgments of mathematics test accommodations with objective data sources. School Psychology Review, 29, 65-85.

Gershon, R., & Bergstrom, B. (1995). Does cheating on CAT pay: NOT! ERIC ED392844.

Gitomer, D.H., Steinberg, L.L., & Mislevy, R.J. (1995). Diagnostic assessment of troubleshooting skills in an intelligent system. Princeton, NJ: Educational Testing Service.

Greenwood, C.R., & Rieth, H.J. (1994). Current dimensions of technology-based assessment in special education. Exceptional Children, 61(2), 105-113.

Goldberg, L., & O’Neill, L.M. (2000, July). Computer technology can empower students with learning disabilities. Exceptional Parent Magazine, 72-74.

Haaf, R., Duncan, B., Skarakis-Doyle, E., Carew, M., & Kapitan, P. (1999). Computer-based language assessment software: The effects of presentation and response format. Language, Speech, and Hearing Services in Schools, 30, 68-74.

Haas, C., & Hayes, J.R. (1986). What did I just say? Reading problems in writing with the machine. Research in the Teaching of English, 20 (1), 22-35.

Hamilton, L. S., Klein, S. P., & Lorie, W. (2001). Using Web-based testing for large-scale assessment. Santa Monica: RAND. Retrieved March, 2002, from the World Wide Web: www.rand.org/publications/IP/IP196/IP196.pdf

Hollenbeck, K., Tindal, G., Harniss, M., & Almond, P. (1999). Reliability and decision consistency: An analysis of writing mode at two times on a statewide test. Educational Assessment, 6 (1), 23-40.

Joint Committee on Standards for Educational and Psychological Testing. (1999). Standards for educational and psychological testing. Washington, DC: Author.

Kerrey, B. & Isakson, J. (2002). The power of the internet for learning: moving from promise to practice—Report of the Web-based Education Commission. Washington, DC: Web-based Education Commission. Retrieved March, 2002, from the World Wide Web: http://interact.hpcnet.org/webcommission/index.htm.

Lahm, E.A., & Nickels, B.L. (1999). Assistive technology competencies for special educators. Teaching Exceptional Children, 32(1), 56-63.

Lewis, A. (2001). New directions in student testing and technology. APEC 2000 International Assessment Conference, Los Angeles.

Liu, K., Anderson, M., Swierzbin, B., & Thurlow, M. (1999). Bilingual accommodations for limited English proficient students on statewide reading tests: Phase I (Minnesota Report 20). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Lunz, M.E., & Bergstrom, B.A. (1994). An empirical study of computerized adaptive test administration conditions. Journal of Educational Measurement, 31 (3), 251-263.

McBride, J.R. (1985). Computerized adaptive testing. Educational Leadership, 43 (2), 25-28.

Menlove, M., & Hammond, M. (1998). Meeting the demands of ADA, IDEA, and other disability legislation in the design, development, and delivery of instruction. Journal of Technology and Teacher Education, 6 (1), 75-85.

Mislevy, R.J., Steinberg, L.L., & Almond, R.G. (1999). Evidence-centered assessment design. Princeton, NJ: Educational Testing Service.

Morocco, C.C., & Neuman, S.B. (1986). Word processors and the acquisition of writing strategies. Journal of Learning Disabilities, 19(4), 243-248.

Mourant, R.R., Lakshmanan, R. & Chantadisai, R. (1981). Visual fatigue and cathode ray tube display factors. Human Factors, 23 (5), 529-546.

National Governors Association. (2002). Using electronic assessment to measure student performance. Washington, DC: National Governors Association, Education Policy Studies Division. Retrieved March, 2002, from the World Wide Web: http://www.nga.org/cda/files/ELECTRONICASSESSMENT.pdf

National Research Council. (2001). Knowing what students know: The science and design of educational assessments. Washington, DC: Board on Testing and Assessment, Center for Education. Division of Behavioral and Social Sciences and Education, National Academy Press.

National Research Council. (2002). Technology and assessment: Thinking ahead: Proceedings of a workshop. Washington, DC: Board on Testing and Assessment, Center for Education. Division of Behavioral and Social Sciences and Education, National Academy Press.

Newman, F. & Scurry, J. (2001). Online technology pushes pedagogy to the forefront. The Chronicle of Higher Education, 47 (44). Retrieved March, 2002, from the World Wide Web: http://chronicle.com/weekly/v47/i44/44b00701.htm

Olson, L. (2002). Ed. dept. hints Idaho’s novel testing plan unacceptable. Education Week, 21 (21), 18, 21. Retrieved March, 2002, from the World Wide Web: http://edweek.com/ew/newstory.cfm?slug=21Idaho.h21&keywords=Idaho

Ommerborn, R., & Schuemer, R. (2001). Using computers in distance study: Results of a survey amongst disabled distance students. FernUniversität-Gesamthochschule in Hagen. Retrieved March, 2002, from the World Wide Web: http://www.fernuni-hagen.de/ZIFF

Peters-Walters, S. (1998). Accessible Web site design. Teaching Exceptional Children, 30 (5), 42-47.

Quality Counts (2002). Building blocks for success. Retrieved March, 2002, from the World Wide Web: www.educationweek.org.

Quenemoen, R., Thurlow, M., & Bielinski, J. (2002). Rethinking design and levels approaches to federal inclusive assessment and accountability requirements (Working Paper). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Rivera, C., Stansfield, C.W., Scialdone, L., & Sharkey, M. (2000). An analysis of state policies for the inclusion and accommodation of English language learners in state assessment programs during 1998-1999. Arlington, VA: George Washington University Center for Equity and Excellence in Education.

Rose, D. (2000). Universal design for learning. Journal of Special Education Technology, 15 (4). Retrieved March, 2002, from the World Wide Web: http://jset.unlv.edu/15.4/issuemenu.html

Russell, M. (2002). How computer-based technology can disrupt the technology of testing and assessment. In National Research Council, Technology and assessment: Thinking ahead: Proceedings of a workshop. Washington, DC: Board on Testing and Assessment, Center for Education. Division of Behavioral and Social Sciences and Education, National Academy Press.

Russell, M., & Haney, W. (1997). Testing writing on computers: An experiment comparing student performance on tests conducted via computers and via paper-and-pencil. Education Policy Analysis Archives, 5 (3). Retrieved March, 2002, from the World Wide Web: http://epaa.asu.edu/epaa/v5n3.html

Russell, M., & Haney, W. (2000). Bridging the gap between testing and technology in schools. Education Policy Analysis Archives, 8 (19). Retrieved March, 2002, from the World Wide Web: http://epaa.asu.edu/epaa/v8n19.html

Russell, M. & Plati, T. (2001). Effects of computer versus paper administration of a state-mandated writing assessment. Teachers College Record. Retrieved March, 2002, from the World Wide Web: http://www.tcrecord.org

Schriver, K. (1997). Dynamics of document design. New York: John Wiley & Sons.

Stocking, M. (1996). Revising answers to items in computerized adaptive testing: A comparison of three models. ETS Report Number ETS-RR-96-12. Princeton, NJ: Educational Testing Service.

Thompson, C. (1999). New word order: The attack of the incredible grading machine. Lingua Franca, 9 (5). Retrieved March, 2002, from the World Wide Web: http://www.linguafranca.com/9907/nwo.html

Thompson, S.J., Johnstone, C.J., & Thurlow, M.L. (2002). Universal design applied to large-scale assessments (Synthesis Report 44). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Thurlow, M.L., Lazarus, S., & Thompson, S.J. (2002). 2001 state policies on assessment participation and accommodations. Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Tindal, G. & Fuchs, L.S. (1999). A summary of research on test changes: An empirical basis for defining accommodations. Lexington, KY: University of Kentucky, Mid-South Regional Resource Center.

Tindal, G., Heath, B., Hollenbeck, K., Almond, P., & Harniss, M. (1998). Accommodating students with disabilities on large-scale tests: An experimental study. Exceptional Children, 64, 439-450.

Tinker, M.A. (1963). Legibility of print. Ames, IA: Iowa State University Press.

Trotter, A. (2001). Testing computerized exams. Education Week, 20 (37), 30-35. Retrieved March, 2002, from the World Wide Web: www.edweek.org/ew/ewstory.cfm?slug=37online.h20

Vanderheiden, G. (2000). Fundamental principles and priority setting for universal usability. Trace Research & Development Center, Madison, WI. Retrieved March, 2002, from the World Wide Web: http://trace.wisc.edu/docs/fundamental_princ_and_priority_acmcuu2000/index.htm

Waddell, C.D. (1997). Technology-based curbcuts for government Web sites: Making your Web site accessible. ADA Update, National League of Cities.

Wainer, H. (1993). Some practical considerations when converting a linearly administered test to an adaptive format. Educational Measurement, Issues and Practice, 12, 15-20.

Web Accessibility Initiative, World Wide Web Consortium. Retrieved March, 2002, from the World Wide Web: http://www.w3.org/WAI/

WebAIM (2001). Introduction to Web accessibility. Retrieved March, 2002, from the World Wide Web: www.webaim.org/intro/

Williams, S.M. (2002). Speech recognition technology and the assessment of beginning readers. In National Research Council, Technology and assessment: Thinking ahead: Proceedings of a workshop. Washington, DC: Board on Testing and Assessment, Center for Education. Division of Behavioral and Social Sciences and Education, National Academy Press.

Wissick, C.A., & Gardner, J.E. (2000). Multimedia or not to multimedia? That is the question for students with learning disabilities. Teaching Exceptional Children, 32 (4), 34-43.


Appendix A

Assistive Technology and Electronic Testing Resources


A-Prompt
Checks Web pages for barriers to accessibility and makes repairs to correct any problems, helping ensure that a site reaches the widest possible audience. http://aprompt.snow.utoronto.ca/

Ability Hub
Site designed for people with disabilities who find operating a computer difficult or impossible. Directs users to adaptive equipment and alternative methods available for accessing computers. www.abilityhub.com

AbleData
Comprehensive directory of assistive technology products and vendors. Searchable database of approximately 25,000 assistive devices. Fact sheets and consumer guides. www.abledata.com

Adaptive Environments
In addition to numerous other services, Adaptive Environments’ New England ADA & Accessible IT Center provides free technical assistance on the Americans with Disabilities Act (ADA), Sections 504 and 508 of the Rehabilitation Act, Section 255 of the Telecommunications Act, and accessibility of education-based information technology (IT). www.adaptenv.org

Adaptive Solutions
Provides services and technology for people who have blindness, visual impairments, or physical disabilities. www.adaptsol.com

Adaptive Technology Resource Centre
Provides information, support, and training that allow individuals to make informed decisions and build the skills required to both access and employ technical tools. www.utoronto.ca/atrc

AI Squared
Providers of Zoom Text Xtra 7.1, Big Shot Magnifier, Zoom Text for DOS, and VisAbility screen magnification programs. www.aisquared.com

Alliance for Technology Access
Network of community-based Resource Centers, Developers and Vendors, Affiliates, and Associates dedicated to providing information and support services to children and adults with disabilities, and increasing their use of standard, assistive, and information technologies. www.ataccess.org

AlphaSmart, Inc.
Producers of AlphaSmart and Co:Writer, devices that support writing through word prediction and spelling correction. www.alphasmart.com

American Educational Research Association (AERA)
A professional organization comprised of scholars in all of the social sciences related to educational research. Division D is concerned with measurement and methodology and Division H addresses evaluation concerns. www.aera.net

American Statistical Association
Homepage for this professional organization has a keyword search that links users to related articles in the Journal of Statistics Education. www.amstat.org

Apple Computer, Inc.
Provides application software guidelines that include information useful for designing accessible software applications. http://developer.apple.com/techpubs/mac/HIGuidelines/HIGuidelines-2.html

Arizona State University College of Education
Homepage covers all aspects of education, but see especially the directory entitled “Scholarly Resources for Educational Research and Technology in Education.”
www.ed.asu.edu/coe

Assistive Media
Provides free-of-charge, copyright-approved, high caliber audio literary works to the world-wide disability community via the Internet. www.assistivemedia.org

Assistive Technology Data Collection Project
Research project that compiles reports on the use of and access to assistive technology. www.infouse.com/atdata/activities.html

Assistive Technology, Inc.
Provides innovative solutions to help people with learning, communication, and access difficulties lead more independent and productive lives. www.assistivetech.com

Assistive Technology Industry Association
Serves as the collective voice of the Assistive Technology industry and represents the interests of its members to business, government, education, and the many agencies that serve people with disabilities. www.atia.org

Association for the Advancement of Assistive Technology in Europe
Organization that focuses on creating awareness of assistive technology, promoting research and development of assistive technology, contributing to knowledge exchange within the field (e.g., by arranging conferences), and promoting information dissemination. www.fernuni-hagen.de/FTB/AAATE.html#resources

Bartimaeus Group Adaptive Technology
Provides access solutions for people who are blind or visually impaired and for agencies and companies that must meet Section 501, 504, and 508 requirements through on-site adaptive technology training, JAWS scripting for inaccessible applications, technical support and troubleshooting, and Web page and usability testing. www.bartsite.com

BrightEye Technology
Provides Scan-A-Word and Scan-A-Page products that read aloud any typed text, such as books, magazines, newspapers, letters, and forms, as well as any text shown on the computer screen. www.brighteye.com

BrookesTalk
Developers of a Web browser for people with visual impairments called BrookesTalk (in four languages), and currently developing interaction modes for people with severe disabilities. www.brookes.ac.uk/schools/cms/research/speech/btalk.htm

California State University, Northridge Center on Disabilities
Offers training programs in assistive technology and sponsors an annual conference entitled “Technology and Persons with Disabilities.” www.csun.edu/cod/

CAP (Computer/Electronic Accommodations Program)
Provides assistive technology accommodations and services to persons with disabilities at the Department of Defense (DoD) and other Federal agencies at no cost to the requesting organization. www.tricare.osd.mil/cap/

Center for Advanced Research on Language Acquisition (CARLA)
Language acquisition research center that has a page on computer-adapted testing. http://carla.acad.umn.edu/

Center for Applied Special Technology (CAST)
Creators of “Bobby” and other tools to help Web page authors identify and repair significant barriers to access by individuals with disabilities. www.cast.org

Center for Computer Assistance for the Disabled
Provides evaluation, information, referral, and training where adaptive access is concerned. www.c-cad.org

Center for Research on Evaluation, Standards, and Student Testing (National) (CRESST)
National research center dedicated to K-12 research on student testing. Provides technical reports on testing, testing accommodations, and technology. www.cresst96.cse.ucla.edu/index.htm

Closing The Gap
Product descriptions, prices and contact information. Searchable database of computer-related products and services to assist persons with disabilities. www.closingthegap.com

disABILITY Information and Resources
Resource directory of assistive technology products and vendors. www.makoa.org

DREAMMS for Kids, Inc.
A non-profit parent and professional agency that specializes in assistive technology related research, development and information dissemination. www.dreamms.org

Dyslexic.com
Provides software, gadgets, and other products for people with dyslexia, visual impairments, and other disabilities. www.dyslexic.com

Educational Testing Service
Site has a variety of files including a set on computer-based testing (CBT). The ETS Presidential Files contain extensive information on “What Every School Should Know About Testing Students with Disabilities.” www.ets.org

Equal Access to Software and Innovation (EASI)
Provides information and guidance on access to information technologies by individuals with disabilities through on-line courses. www.isc.rit.edu/~easi/easi/alleasi.htm

ERIC Clearinghouse on Information & Technology (ERIC/IT)
ERIC InfoGuides are available at this site, with titles such as Authentic Assessment, Outcome-Based Education, Technology Plans, and Testing. www.ericit.org

ESL Café.com
On-line site with discussion of computer-adapted tests. www.eslcafe.com/discussion/db/index.cgi?read=2248

Frank Audiodata
Provides Blindows and other screen and text reading technologies. www.audiodata.de

Freedom of Speech
Providers of assistive technologies such as Naturally Speaking and other products aimed at assisting people with blindness, low vision, learning disabilities, a need for augmentative communication, or mobility impairments. www.freedomofspeech.com

Freedom Scientific
Provider of computer-based technology for people with low vision and blindness. Products include JAWS (Job Access with Speech) screen readers and WYNN (What You Need to kNow). www.freedomscientific.com

GW Micro, Inc.
Producers of Vocal-Eyes and Window-Eyes screen readers and screen Brailler programs. www.gwmicro.com

Helen A. Keller Institute for Human disAbilities, George Mason University
Conducts training and research and offers graduate degree programs in assistive technology. http://condor.gmu.edu/proto/gse/proto6/pages/keller.html

Humanware, Inc.
Specializes in assistive technology for persons who have difficulties reading print due to blindness, low vision, or learning and/or reading disabilities. Programs provided include: Braille Note, Voice Note, JAWS for Windows, Window-Eyes, Dolphin, outspoken, Mountbatten Pro, Kurzweil, textHELP and SmartView2. www.humanware.com

IBM Accessibility Center
Provides accessibility guidelines for Web sites and applications. www.ibm.com/able/accessweb.htm

Infinitec, Inc.
A non-profit corporation formed to help people with disabilities and their families access life-enhancing technology. www.infinitec.org

Institute for Matching Person & Technology
Works to better match users of technologies with the most appropriate devices for their use. The Institute works to enhance the situation of technology users through research, assessment, training and consultation. http://members.aol.com/IMPT97/MPT.html

Kurzweil Educational Products
Producers of Kurzweil 1000 and 3000, which have both screen reader and voice recognition capabilities, as well as the Magna Reader, which scans and enlarges printed information onto a computer screen. www.kurzweiledu.com

Lernout & Hauspie
Provider of advanced translation technologies such as: speech recognition, voice synthesis, sound compression, and language-to-language instant translation. www.lhsl.com

Matias, Inc.
Inventors of the “half keyboard” for one-handed typing. www.halfkeyboard.com

Microsoft Accessibility
Resource for finding assistive technology solutions on Windows-based computers. www.microsoft.com/enable

MultiWeb (Deakin University, Australia)
Free, downloadable software for sighted users to access the World Wide Web. www.austehc.unimelb.edu.au/asaw/exhib/awvs/multimedia/deakin.htm#top

National Center for Accessible Media
Researches and promotes the development of technologies that create access to public mass media and media policies. www.ncam.wgbh.org

On the Internet Magazine
On-line magazine with a special feature on guidelines for computer-based testing. www.isoc.org/oti/articles/0500/olsen.html

Open Group
Provides application software guidelines that include information useful for designing accessible software applications. www.opengroup.org/publications/catalog/mo.htm

Question Mark Computing
A site devoted to Computer Aided Assessment. Features QM Web, a system for delivering tests, exams, tutorials and surveys on the World Wide Web. www.qmark.com

Rehabilitation Engineering and Assistive Technology Society of America (RESNA)
An interdisciplinary association of people with a common interest in technology and disability. Its purpose is to improve the potential of people with disabilities to achieve their goals through the use of technology. www.resna.org

RehabTool.com
Assistive and adaptive technology information, products and services for children and adults with disabilities. www.rehabtool.com

Society for Technical Communication’s “Usability” Special Interest Group
A forum to share information and experiences on issues related to usability and user-centered design. www.stcsig.org/usability/index.html

SoundLinks
Provides training, consultancy, installation, and support for a range of speech-based Internet products such as Home Page Reader and pwWebSpeak, as well as other alternative access methods. www.soundlinks.com

Synapse Adaptive
Creators of Synapse Adaptive Workstations that provide universal computer access to users regardless of their disability, as well as other language translation, speech recognition, and screen reading programs. www.synapseadaptive.com

Telesensory
Provider of video magnifiers and scanners (“reading machines”) for assisting people with visual impairments and blindness with reading. www.telesensory.com

TESOL Testing and Evaluation Special Interest Group
Web site with pages dedicated to best practices for evaluating English as a Second or Other Language using computers. www.taesig.8m.com/createx.html

TOEFL.org
Provides information on the computer-adapted version of the Test of English as a Foreign Language (TOEFL). www.toefl.org/educator/edcomptest.html

Trace Center
Non-profit research center focused on making standard computer technologies and systems more accessible for people with disabilities. www.trace.wisc.edu

Washington Assistive Technology Alliance
Consumer advocacy network for information and referrals, consultation and training on selected AT devices. www.wata.org

Web Accessibility Initiative
Pursues accessibility of the Web through five primary areas of work: technology, guidelines, tools, education and outreach, and research and development. www.w3.org/WAI/

WebABLE
Provides accessibility technology and services to corporate, government, educational, and non-profit clients. www.webable.com

Web AIM (Accessibility in Mind)
Provides background information, training courses and information about products related to making the World Wide Web accessible to people with varying disabilities. www.webaim.org

World Wide Web Consortium
The World Wide Web Consortium (W3C) develops interoperable technologies (specifications, guidelines, software, and tools) for the World Wide Web. www.w3.org


Appendix B

Section 508 of the Rehabilitation Act of 1973, as amended (29 U.S.C. 794d). PART 1194 -- ELECTRONIC AND INFORMATION TECHNOLOGY ACCESSIBILITY STANDARDS

Subpart A -- General

§ 1194.1 Purpose.

The purpose of this part is to implement section 508 of the Rehabilitation Act of 1973, as amended (29 U.S.C. 794d). Section 508 requires that when Federal agencies develop, procure, maintain, or use electronic and information technology, Federal employees with disabilities have access to and use of information and data that is comparable to the access and use by Federal employees who are not individuals with disabilities, unless an undue burden would be imposed on the agency. Section 508 also requires that individuals with disabilities, who are members of the public seeking information or services from a Federal agency, have access to and use of information and data that is comparable to that provided to the public who are not individuals with disabilities, unless an undue burden would be imposed on the agency.

§ 1194.2 Application.

(a) Products covered by this part shall comply with all applicable provisions of this part. When developing, procuring, maintaining, or using electronic and information technology, each agency shall ensure that the products comply with the applicable provisions of this part, unless an undue burden would be imposed on the agency.

(1) When compliance with the provisions of this part imposes an undue burden, agencies shall provide individuals with disabilities with the information and data involved by an alternative means of access that allows the individual to use the information and data.

(2) When procuring a product, if an agency determines that compliance with any provision of this part imposes an undue burden, the documentation by the agency supporting the procurement shall explain why, and to what extent, compliance with each such provision creates an undue burden.

(b) When procuring a product, each agency shall procure products which comply with the provisions in this part when such products are available in the commercial marketplace or when such products are developed in response to a Government solicitation. Agencies cannot claim a product as a whole is not commercially available because no product in the marketplace meets all the standards. If products are commercially available that meet some but not all of the standards, the agency must procure the product that best meets the standards.

(c) Except as provided by §1194.3(b), this part applies to electronic and information technology developed, procured, maintained, or used by agencies directly or used by a contractor under a contract with an agency which requires the use of such product, or requires the use, to a significant extent, of such product in the performance of a service or the furnishing of a product.

§ 1194.3 General exceptions.

(a) This part does not apply to any electronic and information technology operated by agencies, the function, operation, or use of which involves intelligence activities, cryptologic activities related to national security, command and control of military forces, equipment that is an integral part of a weapon or weapons system, or systems which are critical to the direct fulfillment of military or intelligence missions. Systems which are critical to the direct fulfillment of military or intelligence missions do not include a system that is to be used for routine administrative and business applications (including payroll, finance, logistics, and personnel management applications).

(b) This part does not apply to electronic and information technology that is acquired by a contractor incidental to a contract.

(c) Except as required to comply with the provisions in this part, this part does not require the installation of specific accessibility-related software or the attachment of an assistive technology device at a workstation of a Federal employee who is not an individual with a disability.

(d) When agencies provide access to the public to information or data through electronic and information technology, agencies are not required to make products owned by the agency available for access and use by individuals with disabilities at a location other than that where the electronic and information technology is provided to the public, or to purchase products for access and use by individuals with disabilities at a location other than that where the electronic and information technology is provided to the public.

(e) This part shall not be construed to require a fundamental alteration in the nature of a product or its components.

(f) Products located in spaces frequented only by service personnel for maintenance, repair, or occasional monitoring of equipment are not required to comply with this part.


§ 1194.4 Definitions.

The following definitions apply to this part:

Agency. Any Federal department or agency, including the United States Postal Service.

Alternate formats. Alternate formats usable by people with disabilities may include, but are not limited to, Braille, ASCII text, large print, recorded audio, and electronic formats that comply with this part.

Alternate methods. Different means of providing information, including product documentation, to people with disabilities. Alternate methods may include, but are not limited to, voice, fax, relay service, TTY, Internet posting, captioning, text-to-speech synthesis, and audio description.

Assistive technology. Any item, piece of equipment, or system, whether acquired commercially, modified, or customized, that is commonly used to increase, maintain, or improve functional capabilities of individuals with disabilities.

Electronic and information technology. Includes information technology and any equipment or interconnected system or subsystem of equipment, that is used in the creation, conversion, or duplication of data or information. The term electronic and information technology includes, but is not limited to, telecommunications products (such as telephones), information kiosks and transaction machines, World Wide Web sites, multimedia, and office equipment such as copiers and fax machines. The term does not include any equipment that contains embedded information technology that is used as an integral part of the product, but the principal function of which is not the acquisition, storage, manipulation, management, movement, control, display, switching, interchange, transmission, or reception of data or information. For example, HVAC (heating, ventilation, and air conditioning) equipment such as thermostats or temperature control devices, and medical equipment where information technology is integral to its operation, are not information technology.

Information technology. Any equipment or interconnected system or subsystem of equipment, that is used in the automatic acquisition, storage, manipulation, management, movement, control, display, switching, interchange, transmission, or reception of data or information. The term information technology includes computers, ancillary equipment, software, firmware and similar procedures, services (including support services), and related resources.

Operable controls. A component of a product that requires physical contact for normal operation. Operable controls include, but are not limited to, mechanically operated controls, input and output trays, card slots, keyboards, or keypads.

Product. Electronic and information technology.

Self Contained, Closed Products. Products that generally have embedded software and are commonly designed in such a fashion that a user cannot easily attach or install assistive technology. These products include, but are not limited to, information kiosks and information transaction machines, copiers, printers, calculators, fax machines, and other similar types of products.

Telecommunications. The transmission, between or among points specified by the user, of information of the user's choosing, without change in the form or content of the information as sent and received.

TTY. An abbreviation for teletypewriter. Machinery or equipment that employs interactive text based communications through the transmission of coded signals across the telephone network. TTYs may include, for example, devices known as TDDs (telecommunication display devices or telecommunication devices for deaf persons) or computers with special modems. TTYs are also called text telephones.

Undue burden. Undue burden means significant difficulty or expense. In determining whether an action would result in an undue burden, an agency shall consider all agency resources available to the program or component for which the product is being developed, procured, maintained, or used.

§ 1194.5 Equivalent facilitation.

Nothing in this part is intended to prevent the use of designs or technologies as alternatives to those prescribed in this part provided they result in substantially equivalent or greater access to and use of a product for people with disabilities.


Subpart B -- Technical Standards

§ 1194.21 Software applications and operating systems.

(a) When software is designed to run on a system that has a keyboard, product functions shall be executable from a keyboard where the function itself or the result of performing a function can be discerned textually.

(b) Applications shall not disrupt or disable activated features of other products that are identified as accessibility features, where those features are developed and documented according to industry standards. Applications also shall not disrupt or disable activated features of any operating system that are identified as accessibility features where the application programming interface for those accessibility features has been documented by the manufacturer of the operating system and is available to the product developer.

(c) A well-defined on-screen indication of the current focus shall be provided that moves among interactive interface elements as the input focus changes. The focus shall be programmatically exposed so that assistive technology can track focus and focus changes.

(d) Sufficient information about a user interface element including the identity, operation and state of the element shall be available to assistive technology. When an image represents a program element, the information conveyed by the image must also be available in text.

(e) When bitmap images are used to identify controls, status indicators, or other programmatic elements, the meaning assigned to those images shall be consistent throughout an application's performance.

(f) Textual information shall be provided through operating system functions for displaying text. The minimum information that shall be made available is text content, text input caret location, and text attributes.

(g) Applications shall not override user selected contrast and color selections and other individual display attributes.

(h) When animation is displayed, the information shall be displayable in at least one non-animated presentation mode at the option of the user.

(i) Color coding shall not be used as the only means of conveying information, indicating an action, prompting a response, or distinguishing a visual element.

(j) When a product permits a user to adjust color and contrast settings, a variety of color selections capable of producing a range of contrast levels shall be provided.

(k) Software shall not use flashing or blinking text, objects, or other elements having a flash or blink frequency greater than 2 Hz and lower than 55 Hz.

(l) When electronic forms are used, the form shall allow people using assistive technology to access the information, field elements, and functionality required for completion and submission of the form, including all directions and cues.

 

§ 1194.22 Web-based intranet and internet information and applications.

(a) A text equivalent for every non-text element shall be provided (e.g., via "alt", "longdesc", or in element content).

(b) Equivalent alternatives for any multimedia presentation shall be synchronized with the presentation.

(c) Web pages shall be designed so that all information conveyed with color is also available without color, for example from context or markup.

(d) Documents shall be organized so they are readable without requiring an associated style sheet.

(e) Redundant text links shall be provided for each active region of a server-side image map.

(f) Client-side image maps shall be provided instead of server-side image maps except where the regions cannot be defined with an available geometric shape.

(g) Row and column headers shall be identified for data tables.

(h) Markup shall be used to associate data cells and header cells for data tables that have two or more logical levels of row or column headers.

(i) Frames shall be titled with text that facilitates frame identification and navigation.

(j) Pages shall be designed to avoid causing the screen to flicker with a frequency greater than 2 Hz and lower than 55 Hz.

(k) A text-only page, with equivalent information or functionality, shall be provided to make a web site comply with the provisions of this part, when compliance cannot be accomplished in any other way. The content of the text-only page shall be updated whenever the primary page changes.

(l) When pages utilize scripting languages to display content, or to create interface elements, the information provided by the script shall be identified with functional text that can be read by assistive technology.

(m) When a web page requires that an applet, plug-in or other application be present on the client system to interpret page content, the page must provide a link to a plug-in or applet that complies with §1194.21(a) through (l).

(n) When electronic forms are designed to be completed on-line, the form shall allow people using assistive technology to access the information, field elements, and functionality required for completion and submission of the form, including all directions and cues.

(o) A method shall be provided that permits users to skip repetitive navigation links.

(p) When a timed response is required, the user shall be alerted and given sufficient time to indicate more time is required.

 

Note to §1194.22: 1. The Board interprets paragraphs (a) through (k) of this section as consistent with the following priority 1 Checkpoints of the Web Content Accessibility Guidelines 1.0 (WCAG 1.0) (May 5, 1999) published by the Web Accessibility Initiative of the World Wide Web Consortium:


Appendix C

Section 508 Web Accessibility Checklist

(Updated March 29, 2001)

WebAIM (Web Accessibility in Mind) educates and trains web developers, university faculty and administrators on Web Accessibility issues. WebAIM is an initiative of the Center for Persons with Disabilities at Utah State University and is funded through the U.S. Department of Education Fund for the Improvement of Post-Secondary Education (FIPSE) Learning Anytime Anywhere Partnerships (LAAP). No official endorsement is inferred. Copyright 2000-2001 WebAIM. All Rights Reserved.

Part 1: for HTML

The following standards are excerpted from Section 508 of the Rehabilitation Act, §1194.22. Each standard is quoted directly from Section 508; the accompanying PASS and FAIL criteria are only meant to serve as helpful guidelines for complying with Section 508. These guidelines are suggestions only, and are not part of the official Section 508 document. For the full text of Section 508, please see http://www.access-board.gov/news/508-final.htm.

 

SEC. 508 STANDARD (Section 1194.22), with PASS and FAIL criteria

(a) A text equivalent for every non-text element shall be provided (e.g., via "alt", "longdesc", or in element content). [See Note 1]

PASS:
Every image, Java applet, Flash file, video file, audio file, plug-in, etc. has an alt description.
Complex graphics (graphs, charts, etc.) are accompanied by detailed text descriptions.
The alt descriptions succinctly describe the purpose of the objects, without being too verbose (for simple objects) or too vague (for complex objects).
Alt descriptions for images used as links are descriptive of the link destination.
Decorative graphics with no other function have empty alt descriptions (alt=""), but they never have missing alt descriptions.

FAIL:
A non-text element has no alt description.
Complex graphics have no alternative text, or the alternative does not fully convey the meaning of the graphic.
Alt descriptions are verbose, vague, misleading, inaccurate or redundant to the context (e.g., the alt text is the same as the text immediately preceding or following it in the document).
Alt descriptions for images used as links are not descriptive of the link destination.
Purely decorative graphics have alt descriptions that say "spacer," "decorative graphic," or other titles that only increase the time that it takes to listen to a page when using a screen reader.

(b) Equivalent alternatives for any multimedia presentation shall be synchronized with the presentation.

PASS:
Multimedia files have synchronized captions.

FAIL:
Multimedia files do not have captions, or the captions are not synchronized.
(c) Web pages shall be designed so that all information conveyed with color is also available without color, for example from context or markup.

PASS:
If color is used to convey important information, an alternative indicator is used, such as an asterisk (*) or other symbol.
Contrast is good.

FAIL:
The use of a color monitor is required.
Contrast is poor.

 

(d) Documents shall be organized so they are readable without requiring an associated style sheet.

PASS:
Style sheets may be used for color, indentation and other presentation effects, but the document is still understandable (even if less visually appealing) when the style sheet is turned off.

FAIL:
The document is confusing or information is missing when the style sheet is turned off.

 

(e) Redundant text links shall be provided for each active region of a server-side image map.

PASS:
Separate text links are provided outside of the server-side image map to access the same content that the image map hot spots access.

FAIL:
The only way to access the links of a server-side image map is through the image map hot spots, which usually means that a mouse is required and that the links are unavailable to assistive technology.
(f) Client-side image maps shall be provided instead of server-side image maps except where the regions cannot be defined with an available geometric shape.

PASS:
Standard HTML client-side image maps are used, and appropriate alt tags are provided for the image as well as the hot spots.

FAIL:
Server-side image maps are used when a client-side image map would suffice.
(g) Row and column headers shall be identified for data tables.

PASS:
Data tables have the column and row headers appropriately identified (using the <th> tag).
Tables used strictly for layout purposes do NOT have header rows or columns.

FAIL:
Data tables have no header rows or columns.
Tables used for layout use the header attribute when there is no true header.

(h) Markup shall be used to associate data cells and header cells for data tables that have two or more logical levels of row or column headers.

PASS:
Table cells are associated with the appropriate headers (e.g., with the id, headers, scope and/or axis HTML attributes).

FAIL:
Columns and rows are not associated with column and row headers, or they are associated incorrectly.
(i) Frames shall be titled with text that facilitates frame identification and navigation.

PASS:
Each frame is given a title that helps the user understand the frame's purpose.

FAIL:
Frames have no titles, or titles that are not descriptive of the frame's purpose.
(j) Pages shall be designed to avoid causing the screen to flicker with a frequency greater than 2 Hz and lower than 55 Hz.

PASS:
No elements on the page flicker at a rate of 2 to 55 cycles per second, thus reducing the risk of optically-induced seizures.

FAIL:
One or more elements on the page flicker at a rate of 2 to 55 cycles per second, increasing the risk of optically-induced seizures.
(k) A text-only page, with equivalent information or functionality, shall be provided to make a web site comply with the provisions of this part, when compliance cannot be accomplished in any other way. The content of the text-only page shall be updated whenever the primary page changes. [See Note 2]

PASS:
A text-only version is created only when there is no other way to make the content accessible, or when it offers significant advantages over the "main" version for certain disability types.
The text-only version is up-to-date with the "main" version.
The text-only version provides functionality equivalent to that of the "main" version.
An alternative is provided for components (e.g., plug-ins, scripts) that are not directly accessible.

FAIL:
A text-only version is provided only as an excuse not to make the "main" version fully accessible.
The text-only version is not up-to-date with the "main" version.
The text-only version is an unequal, lesser version of the "main" version.
No alternative is provided for components that are not directly accessible.

(l) When pages utilize scripting languages to display content, or to create interface elements, the information provided by the script shall be identified with functional text that can be read by assistive technology. [See Note 3]

PASS:
Information within the scripts is text-based, or a text alternative is provided within the script itself, in accordance with (a) in these standards.
All scripts (e.g., Javascript pop-up menus) are either directly accessible to assistive technologies (keyboard accessibility is a good measure of this), or an alternative method of accessing equivalent functionality is provided (e.g., a standard HTML link).

FAIL:
Scripts include graphics-as-text with no true text alternative.
Scripts only work with a mouse, and there is no keyboard-accessible alternative either within or outside of the script.

(m) When a web page requires that an applet, plug-in or other application be present on the client system to interpret page content, the page must provide a link to a plug-in or applet that complies with §1194.21(a) through (l). [See Notes 4-6]

PASS:
A link is provided to a disability-accessible page where the plug-in can be downloaded.
All Java applets, scripts and plug-ins (including Acrobat PDF files and PowerPoint files, etc.) and the content within them are accessible to assistive technologies, or else an alternative means of accessing equivalent content is provided.

FAIL:
No link is provided to a page where the plug-in can be downloaded, and/or the download page is not disability-accessible.
Plug-ins, scripts and other elements are used indiscriminately, without alternatives for those who cannot access them.

(n) When electronic forms are designed to be completed on-line, the form shall allow people using assistive technology to access the information, field elements, and functionality required for completion and submission of the form, including all directions and cues.

PASS:
All form controls have text labels adjacent to them.
Form elements have labels associated with them in the markup (i.e., the for and id HTML attributes).
Dynamic HTML scripting of the form does not interfere with assistive technologies.

FAIL:
Form controls have no labels, or the labels are not adjacent to the controls.
There is no linking of the form element and its label in the HTML.
Dynamic HTML scripting makes parts of the form unavailable to assistive technologies.
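A minimal sketch of the labeled-form markup these criteria describe (the field names and the form's action URL are illustrative, not from the standard): the label's for attribute matches the control's id, so assistive technology can announce the label together with the field.

```html
<form action="/submit" method="post">
  <!-- "for" on the label matches "id" on the input, linking the two in the markup -->
  <label for="fname">First name:</label>
  <input type="text" id="fname" name="fname">
  <input type="submit" value="Submit">
</form>
```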

(o) A method shall be provided that permits users to skip repetitive navigation links.

PASS:
A link is provided to skip over lists of navigational menus or other lengthy lists of links.

FAIL:
There is no way to skip over lists of links.
(p) When a timed response is required, the user shall be alerted and given sufficient time to indicate more time is required.

PASS:
The user has control over the timing of content changes.

FAIL:
The user is required to react quickly, within limited time constraints.

Note 1:  Until the longdesc tag is better supported, it is impractical to use.

Note 2:  "Text-only" and "accessible" are NOT synonymous. Text-only sites may help people with certain types of visual disabilities, but are not always helpful to those with cognitive, motor or hearing disabilities.

Note 3:  At this time, many elements of Dynamic HTML (client-side scripted HTML, which is usually accomplished with Javascript) cannot be made directly accessible to assistive technologies and keyboards, especially when the onMouseover command is used. If an onMouseover (or similar) element does not contain any important information (e.g. the script causes a button to "glow"), then there is no consequence for accessibility. If this scripted event reveals important information, then a keyboard-accessible alternative is required.

Note 4:  When embedded into web pages, few plug-ins are currently directly accessible. Some of them (e.g., RealPlayer) are more accessible as standalone products. It may be better to invoke the whole program rather than embed movies into pages at this point, although this may change in the future.

Note 5:  Acrobat Reader 5.0 allows screen readers to access PDF documents. However, not all users have this version installed, and not all PDF documents are text-based (some are scanned in as graphics), which renders them useless to many assistive technologies. It is recommended that an accessible HTML version be made available as an alternative to PDF.

Note 6:  PowerPoint files are currently not directly accessible unless the user has a full version of the PowerPoint program on the client computer (and not just the PowerPoint viewer). It is recommended that an accessible HTML version be provided as well.

 

Part 2: for Scripts, Plug-ins, Java, etc.

The following standards are excerpted from Section 508 of the Rehabilitation Act, §1194.21. For the full text of Section 508, please see http://www.access-board.gov/news/508-final.htm.

SEC. 508 STANDARD (Section 1194.21)

(a) When software is designed to run on a system that has a keyboard, product functions shall be executable from a keyboard where the function itself or the result of performing a function can be discerned textually.
(b) Applications shall not disrupt or disable activated features of other products that are identified as accessibility features, where those features are developed and documented according to industry standards. Applications also shall not disrupt or disable activated features of any operating system that are identified as accessibility features where the application programming interface for those accessibility features has been documented by the manufacturer of the operating system and is available to the product developer.
(c) A well-defined on-screen indication of the current focus shall be provided that moves among interactive interface elements as the input focus changes. The focus shall be programmatically exposed so that assistive technology can track focus and focus changes.
(d) Sufficient information about a user interface element including the identity, operation and state of the element shall be available to assistive technology. When an image represents a program element, the information conveyed by the image must also be available in text.
(e) When bitmap images are used to identify controls, status indicators, or other programmatic elements, the meaning assigned to those images shall be consistent throughout an application's performance.
(f) Textual information shall be provided through operating system functions for displaying text. The minimum information that shall be made available is text content, text input caret location, and text attributes.
(g) Applications shall not override user selected contrast and color selections and other individual display attributes.
(h) When animation is displayed, the information shall be displayable in at least one non-animated presentation mode at the option of the user.
(i) Color coding shall not be used as the only means of conveying information, indicating an action, prompting a response, or distinguishing a visual element.
(j) When a product permits a user to adjust color and contrast settings, a variety of color selections capable of producing a range of contrast levels shall be provided.
(k) Software shall not use flashing or blinking text, objects, or other elements having a flash or blink frequency greater than 2 Hz and lower than 55 Hz.
(l) When electronic forms are used, the form shall allow people using assistive technology to access the information, field elements, and functionality required for completion and submission of the form, including all directions and cues.

Appendix D. Guidelines for Accessible Web Page Design

Computer Accommodations Program at the University of Minnesota.

Design Considerations

Introductory Screens
A first screen that contains only graphics and no introductory text provides little, if any, information about the site for users of some screen-readers. A lack of introductory text may also be problematic for individuals who are using a text-only browser, a browser with picture loading disabled, or a portable wireless device such as a cellular phone or Personal Digital Assistant (PDA). The first screen should contain at least some text.

Design Consistency
The use of consistent design strategies for all related documents will make navigation easier for everyone. A consistent look and feel, across all pages of a site, aids visitors in identifying ownership of a page. Provide a method for bypassing navigational controls at the top of each page, allowing users of adaptive technology to jump directly to the content of the page. Once you have designed an accessible and effective page, use it as a template for all other pages of the site. 
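A minimal sketch of such a bypass link, using the HTML 4 named-anchor technique (the anchor name "content" and link text are illustrative):

```html
<!-- Skip link placed before the navigational controls at the top of the page -->
<a href="#content">Skip to main content</a>

<!-- ... navigational controls ... -->

<!-- Named anchor marking where the main content begins; the skip link's
     href must match this name -->
<a name="content"></a>
```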

Document Length
Different users will access documents differently. One user may access the site more easily in smaller sections, while another may find a larger document easier to manage. Present larger documents in smaller sub-units and offer complete text-only versions for download.

Frames
Frames used to divide a browser screen into smaller and separate sub-units can be very inaccessible to persons using screen-readers or screen magnification applications. Some browsers may not be able to handle frames. Avoid the use of frames; if frames must be used, use the HTML <noframes> element or include a clear alternative method (e.g., a link to a no-frames page) that provides the user with all of the information presented in the frames-based version.
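For sites that do use frames, a sketch along these lines (the file names are illustrative) combines descriptive frame titles with a <noframes> alternative:

```html
<frameset cols="25%,75%">
  <!-- Each frame has a title describing its purpose -->
  <frame src="nav.html" title="Navigation menu">
  <frame src="main.html" title="Main content">
  <noframes>
    <!-- Shown by browsers that cannot display frames -->
    <body>
      <p>This site uses frames.
         <a href="main.html">View the no-frames version.</a></p>
    </body>
  </noframes>
</frameset>
```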

Browser-Specific HTML Tags
Some HTML tags are specific to a particular browser. Use of such browser-specific tags may cause page elements to display incorrectly or not at all. Remember, your site is for conveying information to visitors with a variety of skills, interests, equipment and abilities. Do not use HTML constructs (tags) that are specific to (and only supported by) one Web Browser. Test your Web pages with a variety of Web browsers. You might be surprised to see how the page you designed for one browser looks when using another.

Cascading Style Sheets (CSS)
Cascading Style Sheets allow Web site designers to produce Web pages with a consistent look that can be easily updated. Because style sheets can be used to affect the appearance of an entire page, they can be used to enhance accessibility. However, Web pages that use CSS should degrade gracefully in order that the information will be accessible to browsers that do not support CSS and browsers in which CSS support has been disabled.

Scripts
Web page authors have a responsibility to provide script information in a fashion that can be read by assistive technology. When functional text that accurately describes what the script displays is not included, screen-readers will often read the content of a script as a meaningless jumble of numbers and letters.

Example:

If the function of a script is to fill the contents of an HTML form with basic default values, the text inserted into the form by the script should be accessible to a screen-reader. In contrast, if a script is used to display menu choices when the user moves the pointer over an icon, functional text for each menu choice cannot be specified and a redundant text link must be provided for each menu item.
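A sketch of the second case (the script name showMenu() and the menu destinations are hypothetical): the mouse-driven menu is supplemented by ordinary HTML links to the same destinations.

```html
<!-- Hypothetical roll-over menu; the pop-up itself is mouse-driven -->
<img src="menu-icon.gif" alt="Site menu" onmouseover="showMenu()">

<!-- Redundant text links repeating each menu choice, for keyboard and
     screen-reader users -->
<p>
  <a href="about.html">About</a> |
  <a href="contact.html">Contact</a> |
  <a href="search.html">Search</a>
</p>
```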

 

Automated Functions

Roll-over Controls (onmouseover)
Roll-over controls that move the user from the current location can make navigation difficult or impossible for visitors using a screen-reader, those who have trouble controlling a mouse and those whose equipment does not support a mouse or similar pointing device. Do not use roll-overs in a drop-down list. Instead, use a separate button to initiate a drop-down menu selection.

Roll-overs that change the appearance of a control or cause additional information to be displayed do not cause a problem for screen-reader users and may provide useful feedback for users with learning disabilities or mobility impairments. However, screen-reader users will not be able to access pop-up information or menus. Be sure to include the text of pop-up information in the ALT tag for the graphic and provide redundant links for pop-up menu items.

Screen Refresh
Automatic refreshing of a page may prevent access to the information for users of screen-readers, screen magnification applications and individuals with learning or cognitive impairments. A method for disabling the automatic refreshing of a page or site must be provided.
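Automatic refreshing is usually produced by markup like the following (the 30-second interval is only an example); a page that uses it should also offer a control to turn the refresh off.

```html
<!-- Reloads the page every 30 seconds; disruptive for screen-reader users
     unless the site provides a way to disable it -->
<meta http-equiv="refresh" content="30">
```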

Timed Responses
When a timed response is required, the user must be alerted and given an opportunity to indicate that more time is necessary.

 

Text Presentation

Font (Face, Size and Color)
Whether it is merely personal preference or necessitated by a visual impairment, individuals may view pages using font sizes or color schemes other than those originally intended. Do not use font face alone to convey information. Be sure that information on a page remains clear and accessible when viewed in different font sizes. It is a good practice to review pages using a variety of font sizes, from the largest available to the smallest.

Visitors must be able to vary the size of the display font. Specify font sizes as relative values rather than absolute. CSS allows font-size to be defined in a number of ways. Specifying font size in ems — rather than pixels — is the preferred method for web accessibility, as it is relative to the user's default font size.

CORRECT: font-size: 1.5em
INCORRECT: font-size: 12px
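In a style sheet, the two approaches look like this (the element choices are illustrative):

```html
<style type="text/css">
  /* Correct: relative sizes scale with the user's chosen default size */
  body { font-size: 1em; }
  h1   { font-size: 1.5em; }

  /* Incorrect: a fixed pixel size may ignore the user's preference */
  /* p { font-size: 12px; } */
</style>
```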

Color alone should not be used to convey information — this information may be inaccessible to individuals who are color-blind, screen-reader users, individuals with low-vision, users of some hand-held devices, and individuals using a monochrome display. When using colored text and/or a colored background, be sure that the contrast between the text and the background is significantly high at all color depths. Some optimal text and background combinations for those with color vision anomalies include black on white, white on black, yellow on black and black on yellow.

Backgrounds and Wallpaper
Graphical backgrounds and wallpaper should not be used to convey information. Highly detailed or "busy" backgrounds and wallpaper should be avoided, as they may make it difficult or impossible to discern the overlying text. Check the readability of text against a background by reviewing the page in black and white and by using a variety of font sizes, color depths, screen resolutions, platforms and browsers.

Blinking Text and Marquees
Blinking text and marquees (text that scrolls automatically on the screen) may be troublesome for persons with visual or cognitive impairments. Blinking text may trigger a seizure in people with photosensitive epilepsy. Do not use the blink or marquee elements. Screen elements that flicker or change must do so at a frequency of less than twice a second (2 Hz) or greater than 55 times a second (55 Hz).

Acronyms and Abbreviations
Acronyms and abbreviations may not be clear to all individuals visiting your site. Screen-readers will attempt to pronounce acronyms and abbreviations that contain vowels — these pronunciations may be misleading or unintelligible to the screen-reader user. The first occurrence of an acronym in the body of a document should be preceded by the full title to which the acronym refers — Computer Accommodations Program (CAP).

When used as part of a link, the <ACRONYM> and <ABBR> elements should be used to denote and expand acronyms and abbreviations. The <ACRONYM> tag will cause the full text to which the acronym refers to be read by a screen-reader and visibly displayed when a mouse pointer is placed on the link containing the acronym. The <ABBR> tag does not visibly display any text — the expanded text is read by screen-readers only.

Examples:

<ABBR title="Minnesota">MN</ABBR>
<ACRONYM title="University of Minnesota">UMN</ACRONYM>

Although it is mostly a matter of personal preference and common sense, the following guidelines may help to determine when to use the <ABBR> tag and when to use the <ACRONYM> tag:

Use the <ABBR> tag for familiar abbreviations and acronyms (e.g., FYI, ASAP, CST/CDT, lbs. and the like).

Use the <ACRONYM> tag any time the acronym refers to a place, organization or other proper noun. This will aid sighted visitors in identifying the acronym.

Note: The <ABBR> and <ACRONYM> elements are part of the HTML 4.0 specifications and may not be interpreted by some browsers — they will probably not be recognized by most text-only browsers, such as Lynx.

Bullets
Use an asterisk (*), a single letter (A) or a single number (1) as the alternative text for graphical bullets.

List Tags
Some screen-readers may not automatically detect bullets and numbers created using an HTML list tag — unordered list <UL> and ordered list <OL>. Therefore, avoid using the HTML <OL> tag to create a numbered list when the numbers are to be referenced elsewhere in the document; number the list manually instead.

Punctuation
The use of punctuation may aid the understanding of Web page information for users of screen-readers. Although section headings and individual list items may be visually distinct, it is often beneficial to screen-reader users to have headings, list items and similar elements end with or be separated by suitable punctuation. The text color for punctuation symbols used in this manner may be the same as the background color on which they appear, when the use of such punctuation is found to be visually distracting.

 

Multiple Column Layout

Tables: static
When tables are coded inaccurately or table codes are used for non-tabular material (e.g., newspaper style columns), some screen-reader users may find it difficult or impossible to access the information. The presentation of materials in a tabular or multicolumn format may be difficult to access for visitors with low-vision, cognitive impairments, visual tracking impairments and for users of some hand-held devices. When tables are used to present information, be sure appropriate coding is used and a de-columnized version or other means of acquiring the information is available.
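A sketch of appropriately coded data-table markup (the data are invented for illustration): <th> cells with the scope attribute tell a screen-reader which header applies to each data cell.

```html
<table summary="Course grades for two students (illustrative data)">
  <tr>
    <th scope="col">Student</th>
    <th scope="col">Grade</th>
  </tr>
  <tr>
    <!-- scope="row" marks this cell as the header for its row -->
    <th scope="row">Student A</th>
    <td>B+</td>
  </tr>
  <tr>
    <th scope="row">Student B</th>
    <td>A-</td>
  </tr>
</table>
```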

Tables: dynamic
When the information in a table is created dynamically (generated based on user input/responses), use appropriate coding and provide a de-columnized version or other means of acquiring the information. It may be impractical or technically difficult to provide a de-columnized alternative when a table itself is created dynamically. Where it is not reasonable to accommodate a de-columnized version, alternative options (e.g., telephone, E-mail, postal mail or in-person) for obtaining the information should be made available and noted on the page.

 

Graphics

Alt Tags
Pictures and other graphics cannot be directly accessed by users of screen-readers, foreign language translation applications or some hand-held devices. Similarly, some users choose to turn picture loading off — especially those users with slower dial-in connections. An ALT tag is used to specify alternative text for an image. For example, the tag <IMG SRC="UpArrow.gif" ALT="Up Arrow"> (where UpArrow.gif is the picture of an upward-pointing arrow) will result in the image of an upward-pointing arrow being displayed by graphical browsers with image-loading enabled. The text “Up Arrow” will be spoken by a screen-reader and visibly displayed in place of the image by a text-only browser or a graphics-capable browser with image-loading disabled.

In the absence of an ALT tag, screen-readers will speak the path and file name for the graphic — this rarely provides any useful information. Graphical browsers with picture loading disabled will display an empty gray rectangle. ALT tags are limited to 256 characters.

 

Non-Link Graphics
Images can be a tremendous aid in the understanding of page content for visitors with learning disabilities, cognitive impairments and those whose native language is not that in which the page is presented. Select images carefully and provide a clear, complete and concise description in the ALT tag.

 

INCORRECT: ALT="U of M Wordmark"
CORRECT: ALT="University of Minnesota"

INCORRECT: ALT="Picture of two adults."
CORRECT: ALT="Picture of a college student and professor working at a computer."

INCORRECT: ALT="Picture of a lake. The lake appears to be frozen, with small piles of snow scattered about its surface. The land and trees in the foreground are also covered by snow."
CORRECT: ALT="Picture of a lake in Winter."

 

Tables and Charts
ALT tags may not be adequate for describing graphical tables and charts (e.g., pie charts, line and bar graphs, or tabular information presented as a graphic). There are several methods for conveying the information represented by these types of images:

Convey all of the information in the text body of the document.

Use the graphic as a link to a complete text description of the information being conveyed.

Provide a separate text link to a complete text description of the information being conveyed. These links may be hidden by making the text color the same as the background color on which they appear. However, the additional information may be useful to visitors with learning disabilities and other cognitive impairments.

Animations
Animations cannot be directly accessed by users of screen-readers, language translation applications, some hand-held devices and browsers that do not support animation or have the feature disabled. As with static images, an ALT tag must be included for each animation. ALT tags may not be adequate for animations used to convey information. There are several methods for conveying the information:

Convey all of the information in the text body of the document.

Use the animation as a link to a complete text description of the information being conveyed.

Provide a separate text link to a complete text description of the information being conveyed. These links may be hidden by making the text color the same as the background color on which they appear. However, the additional information may be useful to visitors with learning disabilities and other cognitive impairments.

If the animation contains meaningful audio, a separate, text description of the audio portion must be provided for persons who are deaf or hard of hearing.

 

Links

Text Links
Links must be clear, descriptive and able to stand alone. Do not use single-word links — they do not provide an adequate description of the information to be retrieved, nor do they provide an adequate-sized target for persons who have difficulty controlling a pointing device.

Placing long lists of text-based links close together in rows or columns increases the probability of mouse errors for persons with mobility impairments. Use vertical lists of well-spaced links whenever possible. Links listed horizontally or in multiple columns must be visually distinct and separated by vertical bars (|) or by graphics with appropriate alternative text (e.g., | or *). Avoid enclosing links in brackets, braces, parentheses, or other punctuation.
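These guidelines can be illustrated with a short HTML sketch; the link targets and labels are hypothetical:

```html
<!-- Avoid: a single-word link gives no context and a small target -->
<a href="results.html">here</a>

<!-- Better: descriptive link text that stands on its own -->
<a href="results.html">2002 statewide assessment results</a>

<!-- Horizontal link lists: keep links visually distinct,
     separated by vertical bars rather than enclosing punctuation -->
<a href="home.html">Home</a> |
<a href="practice.html">Practice tests</a> |
<a href="help.html">Help</a>
```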

Imagemaps
An imagemap is a single picture with multiple active regions, each of which takes the user to a different page or location depending on where the mouse click occurred within the image. There are two basic types of imagemaps: "client-side imagemaps" and "server-side imagemaps."

Client-side imagemaps allow both mouse and keyboard navigation. By specifying an appropriate ALT tag for each active region, a client-side imagemap functions like a series of links for users of adaptive technology, some hand-held devices, text-only browsers or browsers with picture loading disabled.

In contrast, server-side imagemaps do not allow keyboard navigation or the specification of ALT tags for active regions. Include redundant text links for each active region of a server-side imagemap in order to ensure access for visitors using adaptive technology, some hand-held devices, text-only browsers, or a browser with picture loading disabled.
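A client-side imagemap with per-region ALT text, plus the redundant text links recommended above, might be sketched as follows (the menu image, coordinates, and file names are hypothetical):

```html
<!-- Client-side imagemap: each active region carries its own ALT
     text, so adaptive technology presents it as a series of links. -->
<img src="test-menu.gif" alt="Test section menu" usemap="#sections">
<map name="sections">
  <area shape="rect" coords="0,0,150,40"   href="reading.html" alt="Reading test">
  <area shape="rect" coords="0,40,150,80"  href="math.html"    alt="Mathematics test">
</map>

<!-- Redundant text links: optional here, but required whenever a
     server-side imagemap is used instead -->
<a href="reading.html">Reading test</a> |
<a href="math.html">Mathematics test</a>
```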

Multimedia
A text and/or audio description of the visual elements of a multimedia presentation must be available for users with visual impairments. Audio presentations must be accompanied by text captioning in order to provide access for people who are deaf or hard of hearing. Text alternatives for a multimedia presentation must be synchronized with the presentation. Providing captioning does not preclude posting a transcript of the presentation that can be searched and/or downloaded.
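In practice, a page might offer both the captioned presentation and a plain transcript; this sketch assumes hypothetical file names and does not prescribe any particular multimedia format:

```html
<!-- Offer the synchronized, captioned presentation alongside a
     transcript that can be searched and downloaded.
     File names are illustrative only. -->
<a href="presentation-captioned.smil">View the presentation with synchronized captions</a>
<a href="presentation-transcript.html">Read or download the transcript</a>
```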

Remember: individuals, with or without disabilities, may not have the equipment or software necessary to access multimedia presentations.



NCEO is supported primarily through a Cooperative Agreement (#H326G050007) with the Research to Practice Division, Office of Special Education Programs, U.S. Department of Education. Additional support for targeted projects, including those on LEP students, is provided by other federal and state agencies. Opinions expressed in this Web site do not necessarily reflect those of the U.S. Department of Education or Offices within it.