NCEO Synthesis Report 45
Published by the National Center on Educational Outcomes
Sandra J. Thompson • Martha L. Thurlow • Rachel F. Quenemoen • Camilla A. Lehr
Any or all portions of this document may be reproduced and distributed without prior permission, provided the source is cited as:
Thompson, S. J., Thurlow, M. L., Quenemoen, R. F., & Lehr, C. A. (2002). Access to computer-based testing for students with disabilities (Synthesis Report 45). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved [today's date], from the World Wide Web: http://education.umn.edu/NCEO/OnlinePubs/Synthesis45.html
Called the “next frontier in testing,” computer-based testing is being promoted as the solution to many of the testing problems that states face. Under pressure to find more cost-effective and less labor-intensive approaches to testing, states see computer-based testing as a way to address the increasingly challenging prospect of assessing all students in a state at nearly all grades. Computer-based testing is viewed with optimism as an approach that will make testing less expensive in the long run, and that will produce better assessments of the wide range of students who must now be included in state and district assessments.
Unfortunately, most states and testing companies have not specifically considered the needs of students with disabilities as they pursue computer-based testing. Often, the approach has simply been to take the paper and pencil test and put it onto a computer. This is not enough. Poor design elements on the paper test will transfer to the screen, and there will be additional challenges created by the move as well, challenges that may reduce the validity of the assessment results and possibly exclude some groups from participation in the assessment.
This paper recognizes the opportunities created by the new frontier of computer-based testing and also identifies its challenges. Research findings and accommodations considerations are addressed as well, with the end result being a process and considerations for the initial transformation of paper/pencil assessments into inclusive computer-based testing.
The recommended process for a good transformation of a paper and pencil test to computer-based testing assumes first that the principles of universally designed assessments have been followed. The five steps that are then recommended (and discussed in the paper) are:
Step 1. Assemble a group of experts to guide the transformation.
Step 2. Decide how each accommodation will be incorporated into the computer-based test.
Step 3. Consider each accommodation or assessment feature in light of the constructs being tested.
Step 4. Consider the feasibility of incorporating the accommodation into the computer-based test.
Step 5. Consider training implications for staff and students.
The paper also presents initial considerations for common accommodations within the categories of timing/scheduling, presentation, response, and setting.
On January 8, 2002, President Bush signed the reauthorization of the Elementary and Secondary Education Act into law as the “No Child Left Behind Act of 2001.” This Act requires states to have annual assessments in place in reading and mathematics for all students in grades three through eight by the end of the 2005-2006 school year, with science assessments added by the beginning of the 2007-2008 school year. Only nine states currently administer standards-based tests in both subjects across grades three through eight (Quality Counts, 2002), creating an unprecedented opportunity for states to enhance the participation of all students as they build and improve their assessment systems. Increased requirements within the law for itemized score analyses and for disaggregation within each school and district by gender, racial and ethnic group, migrant status, English proficiency, disability, and income will challenge states to create new and more efficient ways to administer, score, and report assessment results.
Computer-based testing has been called the “next frontier in testing” as educators, testing companies, and state departments quickly work to transform paper/pencil tests into technology-based formats (Trotter, 2001). These efforts have occurred in a variety of ways and for a variety of tests. For example, some educators have transferred all of their classroom quizzes and tests into a computer-based format. The paper/pencil version of the Graduate Record Exam™ has been replaced with a computerized version that is administered across a variety of locations. NCS Pearson™ has developed eMeasurement™ Services—a suite of tools that delivers tests and their results electronically.1 As a result of these advances, states are facing pressure to create computer-based large-scale assessments (Russell, 2002). Some states are investigating the possibility of computerized adaptive testing for their statewide assessments, in which the difficulty level of the questions presented is adjusted based on whether students’ responses are correct. According to Bennett (1998), “Whereas there is certainly a concerted move toward technology-based large-scale tests, there is no question that this assessment mode is still in its infancy. Like many innovations in their early stages, today’s computerized tests automate an existing process without reconceptualizing it to realize the dramatic improvements that the innovation could allow. Thus, these tests are substantively the same as those administered on paper” (p. 3).
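The adaptive principle described above can be illustrated with a minimal sketch (in Python; the item bank, the 1–5 difficulty scale, and all function names here are hypothetical, and operational adaptive tests use far more sophisticated item response theory models, not this simple rule):

```python
def adaptive_test(item_bank, start_difficulty=3, num_items=5, answer_fn=None):
    """Minimal adaptive-testing loop: difficulty (on a hypothetical 1-5
    scale) steps up after a correct response and down after an incorrect
    one, so the test converges toward the student's level."""
    difficulty = start_difficulty
    administered = []
    for _ in range(num_items):
        # Pick any unused item at the current difficulty level.
        candidates = [item for item in item_bank
                      if item["difficulty"] == difficulty
                      and item not in administered]
        if not candidates:
            break
        item = candidates[0]
        administered.append(item)
        correct = answer_fn(item)  # stand-in for the student's response
        # Core adaptive rule: adjust difficulty based on correctness.
        difficulty = min(5, difficulty + 1) if correct else max(1, difficulty - 1)
    return administered
```

A student who keeps answering correctly is routed to progressively harder items, while one who answers incorrectly is routed to easier ones, which is the behavior states are investigating.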
With the dramatic increase in the use of the Internet over the past few years, and with it, the considerable potential of online learning (Kerrey & Isakson, 2002), assessment will need to undergo a complete transformation to keep pace. According to the Web-based Education Commission, “Perhaps the greatest barrier to innovative teaching is assessment that measures yesterday’s learning goals…Too often today’s tests measure yesterday’s skills with yesterday’s testing technologies—paper and pencil” (p. 3).
Experts suggest that the Internet will be used to develop tests and present items through dynamic and interactive stimuli such as audio, video, and animation (Lewis, 2001). Given this momentum, it is not surprising that there is a trend toward investigating and incorporating the Internet as the testing medium for statewide assessments. Bennett (2001) stated, “The trend is clear: the infrastructure is quickly falling into place for Internet delivery of assessment to schools, perhaps first in survey programs like NAEP (National Assessment of Educational Progress) that require only a small participant sample from each school, but eventually for inclusive assessments delivered directly to the desktop” (p. 10).
As the trend toward computer-based testing moves forward, it is important to focus carefully on the requirements of the newly enacted No Child Left Behind Act of 2001, and on the assessment participation requirements in the 1997 reauthorization of the Individuals with Disabilities Education Act. In addition, a 1996 Department of Justice Policy Ruling states that Titles II and III of the Americans with Disabilities Act require State and local governments to provide effective communication whenever they communicate through the Internet. The Office for Civil Rights discussed the provision of effective communication:
In further clarification, the Office for Civil Rights lists three basic components of effective communication: “timeliness of delivery, accuracy of the translation, and provision in a manner and medium appropriate to the significance of the message and the abilities of the individual with the disability” (Page 1, 1997 Letter). This clarification presents a significant and timely responsibility in the design of computer-based testing.
For the full benefits of computer-based testing to be realized, a thoughtful and systematic process to examine the transfer of existing paper/pencil assessments must occur. It is not enough to simply transfer test items from paper to screen. Not only will poor design elements on the paper test transfer to the screen, additional challenges may result that reduce the validity of the assessment results and possibly exclude some groups of students from assessment participation.
This paper presents factors to consider in the design of computer-based testing for all students, including students with disabilities and students with limited English proficiency. We begin with the opportunities and challenges presented by this “new frontier” in testing, and then explore research about effective universally designed assessments and technology-based accommodations, and relate this knowledge to computer-based testing design features. Finally, we present a process and considerations for the initial transformation of paper/pencil assessments to inclusive computer-based testing.
Several advocates have articulated the positive merits of computer-based testing. Some of the advantages over paper/pencil tests that have been cited include: efficient administration, preferred by students, self-selection options for students, improved writing performance, built-in accommodations, immediate results, efficient item development, increased authenticity, and the potential to shift focus from assessment to instruction. This section describes each of these prospective opportunities.
Computer-based tests can be administered to individuals or small groups of students in classrooms or computer labs, eliminating timing issues caused by the need to administer paper/pencil tests in large groups in single sittings. Different students can take different tests simultaneously in the same room.
Preferred by Students
In an evaluation of testing experience, students overwhelmingly preferred computerized testing to paper/pencil testing (Brown & Augustine, 2001). Most students, regardless of group or ability, believed that the computer was easier, faster, and more fun. Students also responded that using a computer helped concentration by presenting only one question at a time. A recent survey on computer use by students with disabilities in Germany (Ommerborn & Schuemer, 2001) found considerably more advantages than disadvantages to computer use.
Brown-Chidsey and Boscardin (1999) interviewed students with learning disabilities and found that the computer helped them deal with limitations that often interfered with the completion of their work. The researchers concluded, “Students’ beliefs about computers are likely to shape the extent to which instructional technology enhances their achievement” (Brown-Chidsey, Boscardin, & Sireci, 1999, p. 4). A study at the Boston College Center for the Study of Testing, Evaluation, and Assessment (Trotter, 2001) found, “Students who are accustomed to writing on computers tend to do better on computerized tests than on paper exams. Conversely, students who don’t use computers often to write tend to do better when they complete their tests on paper” (p. 3).
Self-Selection Options for Students
Students have the option to choose features on computer-based tests, including format features and built-in accommodations. This matters because accommodations are not always offered otherwise; Calhoon, Fuchs, and Hamlett (2000) found that “teachers are unlikely to provide a reader to meet student needs because teachers prefer test accommodations that require little individualization and do not require curricular or environmental modifications” (p. 272). Other recent work on accommodations for English Language Learners (Anderson, Liu, Swierzbin, Thurlow, & Bielinski, 2000; Liu, Anderson, Swierzbin, & Thurlow, 1999) has shown that students may not want to use certain accommodations (e.g., headphones to have instructions read in English, bilingual dictionaries) unless they are provided in specific ways. Teachers have reported that students with learning disabilities may opt not to use certain accommodations at certain times because they are not seen as helpful. Having the ability to self-select a technology-based reader or other tool may provide students access to a necessary accommodation that may not be offered currently, due to issues of convenience.
Improved Writing Performance
As computers become more common in schools, many of today’s students are accustomed to using computers in their daily work. Students write and calculate on computers as easily and with more speed and efficiency than previous generations could on paper. Research has shown that writing on computers leads students to write more and revise more than writing with paper/pencil (Daiute, 1985; Morocco & Neuman, 1986). Paper/pencil tests that require writing may underestimate the writing ability of students who have grown accustomed to writing on computers (Russell & Haney, 1997). In a survey of computer use by students with disabilities in Germany, Ommerborn and Schuemer (2001) found that the greatest advantage to students was the ease in which computers allowed them to write essays. Several of the students surveyed said that it was very difficult for them to write by hand.
Computer technology has been touted as a tool that can be used to empower students with disabilities (Goldberg & O’Neill, 2000). Specifically, computer-based testing has been viewed as a vehicle to increase the participation of students with disabilities in assessment programs. For example, the Windows operating system supports a great variety of adaptive devices (e.g., screen readers, Braille displays, screen magnification, self-voicing Web browsers). According to Greenwood and Rieth (1994), the primary strength of computer-based testing is its “potential for removing traditional barriers to the inclusion of persons with disabilities in the assessment process through adaptations and accommodations as well as through new forms” (p. 110).
Computer-based testing can provide flexibility in administration for students with various learning styles. For example, the National Research Council (NRC, 2001) found computer-based testing to be effective for students who perform better visually than with text, are not native English speakers, or are insecure about their capabilities. According to NRC, “Technology is already being used to assess students with physical disabilities and other learners whose special needs preclude representative performance using traditional media for measurement” (p. 286).
Standardization of accommodated assessment administrations can be facilitated by computer-based testing. According to Brown-Chidsey and Boscardin (1999), “Using a computer to present a test orally controls for standardization of administration and allows each student to complete the assessment at his/her own pace” (p. 2). Brown and Augustine (2001) cited educator appreciation of a computer’s ability to present items over and over, in both written and verbal form, without the need for a non-standard (and sometimes impatient) human reader. Several studies have shown the positive effects of providing a reader for math tests (see Calhoon, Fuchs & Hamlett, 2000; Fuchs, Fuchs, Eaton, Hamlett, & Karns, 2000; Tindal, Heath, Hollenbeck, Almond, & Harniss, 1998).
With the use of audio and video built into computer-based tests, specialized testing equipment such as audiocassette recorders and VCRs could become obsolete (Bennett, Goodman, Hessinger, Ligget, Marshall, Kahn, & Zack, 1999). According to Bennett (1995), “Test directions and help functions would be redundantly encoded as text, audio, video, and Braille, with the choice of representation left to the examinee. The digital audio would allow for spoken directions, whereas the video could present instruction in sign language or speech-readable form. Among other things, these standardized presentations should reduce the noncomparability associated with the uneven quality of human readers and sign-language interpreters” (p. 10).
Finally, just as the use of accommodations on paper/pencil tests has increased awareness and use of accommodations in the classroom, so can opportunities to use the built-in accommodation features of computer-based tests encourage and increase the use of those features in classroom and other environments. For example, Williams (2002) believes, “It is possible that new developments in speech recognition technology could increase opportunities for individual reading practice with feedback, as well as collecting assessment data to inform instructional decision making” (p. 41). In addition, most computer-based tests have built-in tutorials and practice tests. These tutorials provide students with both opportunities for familiarizing themselves with the software and immediate feedback (Association of Test Producers, 2000).
One of the major drawbacks of state testing on paper has been the long wait for results because of the need to distribute, collect, and then scan test booklets/answer forms and hand score open-response items and essays. Students tested in the spring often do not receive their results until fall—nor do their teachers or schools. The results of computer-based tests can be available immediately, providing schools with diagnostic tools to use for improved instruction, and states with information to guide policy. Even open-ended items can be scored automatically, greatly reducing cost and scoring time (Thompson, 1999). According to a report by the National Governors Association (2002), cost savings can result from “the elimination of printing and shipping activities when paper testing ceases” (p. 7).
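The idea of scoring open-ended items automatically can be shown at toy scale (a Python sketch; the rubric format and the one-point-per-concept rule are invented for illustration and bear no resemblance to the statistical and linguistic models that operational scoring engines actually use):

```python
def score_open_response(response, rubric_concepts, max_score=4):
    """Toy scorer: award one point per rubric concept present in the
    student's response, where each concept is a tuple of acceptable terms."""
    words = set(response.lower().split())
    hits = sum(1 for concept in rubric_concepts
               if any(term in words for term in concept))
    return min(max_score, hits)

# Hypothetical rubric for a science item: each tuple lists terms that
# count as evidence of one scored concept.
rubric = [("photosynthesis",), ("sunlight", "light"),
          ("chlorophyll",), ("oxygen",)]
```

Because scoring is instantaneous, results of this kind can be returned to schools as soon as the student finishes, rather than months later.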
Efficient Item Development
As computer-based testing becomes more developed, item development will be more efficient, higher quality, and less expensive (National Governors Association, 2002). Bennett (1998) believes that at some point items might be generated electronically, with items matched to particular specifications at the moment of administration. “Test design will also be the focal point for responding to diversity. The effects of different test designs on minority group members, females, …will be routinely simulated in deciding what skills and which task formats to use in large-scale assessments” (Bennett, 1998, p. 9). According to Russell (2002), “already, some testing programs are experimenting with ways to generate large banks of test items via computer algorithms with the hope of saving the time and money currently required to produce test items manually” (p. 65). Baker (2002) cited several research efforts that have significantly advanced the progress of schema or template-based, multiple-choice development and test management systems (see Bejar, 1995; Bennett, 2002; Chung, Baker, & Cheak, 2001; Chung Klein, Herl & Bewley, 2001; Gitomer, Steinbert, & Mislevy, 1995; Mislevy, Steinbert, & Almond, 1999).
Computers allow for increased use of “authentic assessments”—responses can be open-ended rather than just relying on multiple choice. According to Bennett (1998), the next generation of computer-based tests will be “qualitatively different from those of the first generation. This difference will be evident in the test questions (and, in some cases, the characteristics they measure), as well as in development, scoring, and administrative processes” (p. 4, see Table 1). Bennett notes that many Americans are now receiving their news from TV and the World Wide Web, with the expectation that students will increasingly be able to process information from a variety of sources, not just from print. Bennett also suggests that response formats will shift dramatically, perhaps including problems in which a student is not expected to find the best answer, but a reasonable one within certain constraints.

Table 1. Three Generations of Large-Scale Educational Assessment
Adapted from: Bennett, R.E. (1998). Reinventing assessment: Speculations on the future of large-scale educational testing. Princeton, NJ: Policy Information Center, Educational Testing Service.
Shifts Focus from Assessment to Instruction
Bennett (1998) believes that eventually large-scale assessment will join with instruction. “Decisions like certification of course mastery, graduation eligibility, and school effectiveness will no longer be based largely on one examination given at a single time but will also incorporate information from a series of measurements” (p. 11). “By virtue of moving assessment into the curriculum, the locus of the debate over performance differences must logically shift from the accuracy of assessment to the adequacy of instruction” (p. 12). Bennett continues this line of thought in a 2001 article, “When well-constructed tests closely reflect the curriculum, group differences should become more an issue of instructional inadequacy than test inaccuracy. As attention shifts to the adequacy of instruction, the ability to derive meaningful information from test performance becomes more critical” (p. 2).
Despite the potential advantages offered by computer-based testing, there remain several challenges, especially in the transition from paper/pencil assessments. First of all, the use of technology cannot take the place of content mastery. No matter how well a test is designed, or what media are used for administration, students who have not had an opportunity to learn the material tested will perform poorly. Students need access to the information tested in order to have a fair chance at performing well. Hollenbeck, Tindal, Harniss, and Almond (1999) strongly caution that the use of a computer, in and of itself, does not improve the overall quality of student writing. They, and other researchers, continue to find significantly lower mean test scores for students with disabilities than for their peers without disabilities. Other challenges that must be overcome in order for computer-based testing to be effective include: issues of equity and skill in computer use, added challenges for some students, technological challenges, security of online data, lack of expertise in designing accessible Web pages, and prohibitive development cost.
Issues of Equity and Skill in Computer Use
Concerns continue to exist in the area of equity, where questions are asked about whether the required use of computers for important tests puts some students at a disadvantage because of lack of access, use, or familiarity (Trotter, 2001). Concerns include unfamiliarity with answering standardized test questions on a computer screen, using buttons to search for specific items, and indecision about whether to use traditional tools (e.g., hand-held calculator) vs. computer-based tools. According to Wissick and Gardner (2000), “Students will not take advantage of help options or use navigation guides if they require more personal processing energy than they can evoke” (p. 38).
A survey on computer use by students with disabilities in Germany (Ommerborn & Schuemer, 2001) found the cost of acquiring and using a computer to be the greatest barrier, with a lack of training opportunities second. Students who needed assistive technology cited high cost and lack of information as barriers to increased computer use.
The gap in access to technology—sometimes referred to as the “Digital Divide”—is continuing to grow. According to Bolt and Crawford, authors of Digital Divide (2000, p. 98):
Added Challenges for Some Students
Some research questions whether the medium of test presentation affects the comparability of the tasks students are being asked to complete. Here are some findings that show added difficulty for some students.
Computers and the Internet do not always work the way we want them to. The word “crash” has taken on a whole new meaning in our technology-oriented world. An issue brief of the National Governors Association listed some of the problems: “testing sessions may be interrupted, proceed so slowly as to interfere with student performance, or encounter difficulties in machine operation or telecommunications that cause data to be lost entirely. Unlike a paper-and-pencil testing system, keeping a computerized system functioning requires significant technical expertise, which many schools lack” (p. 7). Burk (1999) argued, “Computerized testing for students with disabilities is viable but only with appropriate equipment, staff preparation, and student preparation” (p. 6). Some researchers, like Hamilton, Klein, and Lorie (2001), question whether an infrastructure currently exists that can support the use of computers by large numbers of students. They also question the quality of the hardware, especially given the constant evolution of technology, and whether there is sufficient training for staff who must help with administration and with technological difficulties that may be encountered. The test program may also be device-dependent; for example, contrast may differ between monitors, and computer speed may vary. A test presented online may default to the computer’s font, print size, and background color. Graphics may become distorted on small screens, reducing standardization of the assessment presentation. According to a report by the National Governors Association (2002, p. 7):
A constant challenge is the ongoing entry of new Web browsers and new versions of existing browsers. In addition, HTML and document converters are constantly being developed and modified. Unfortunately, several features may not be universally accessible, and advancements in assistive technology are usually several steps behind new Internet components and tools. For example, using an eye pointing device may increase the time needed to position each eye pointing frame, leading to increased fatigue, boredom, and inattention by the test-taker (Haaf, Duncan, Skarakis-Doyle, Carew, & Kapitan, 1999). As computer-based testing becomes a reality across states and districts, it is important to ensure that the new technology either improves accessibility or is compatible with existing assistive computer technology.
Security of Online Data
Critics question whether online data are secure. In a report by the National Governors Association (2002), security issues related to protecting test questions and ensuring the confidentiality of student data in a computerized system were compared to those encountered with conventional tests and were found to be conceptually similar. Differences were found in mechanisms to accomplish breaches and protect against them. For example, test questions and student data could be stolen from central servers or from local computers. This can be minimized through technical design that encrypts questions and student records and through the careful use of passwords.
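The “careful use of passwords” mentioned above can be sketched with standard techniques: a test-delivery system stores only a salted, iterated hash of each administration password, so a stolen database does not reveal the passwords themselves. (This Python sketch uses the standard library’s PBKDF2 implementation; the parameter choices are illustrative only and are not drawn from the report.)

```python
import hashlib
import hmac
import secrets

def hash_password(password, iterations=200_000):
    """Derive a salted hash for storage; the password itself is never kept."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=200_000):
    """Recompute the hash from the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

Encrypting the test questions and student records themselves would use symmetric encryption on top of this kind of credential handling.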
Lack of Ability to Design Accessible Web Pages
According to WebAIM (Web Accessibility in Mind, an initiative of the Center for Persons with Disabilities at Utah State University, 2001), there are 27.3 million people with disabilities who are limited in the ways they can use the Internet: “The saddest aspect of this fact is that the know-how and the technology to overcome these limitations already exist, but they are greatly under-utilized, mostly because Web developers simply do not know enough about the issue to design pages that are accessible to people with disabilities. Unfortunately, even some of the more informed Web developers minimize the importance of the issue, or even ignore the problem altogether” (p. 1).
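Some of that existing know-how is mechanical enough to automate. As one illustration (a Python sketch using the standard library’s HTML parser; real accessibility audits check far more than this single rule), a test page can be scanned for images that lack the alternative text screen readers depend on:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collect <img> tags that lack the alt attribute screen readers need."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            # Record the image source (if any) so a developer can fix it.
            self.missing_alt.append(attributes.get("src", "(no src)"))

def check_page(html):
    checker = AltTextChecker()
    checker.feed(html)
    return checker.missing_alt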
Prohibitive Development Cost
Development expenses listed in a report by the National Governors Association (2002) include: “central hardware to deliver the test over the Internet, local telecommunications hardware, machines in schools for students to take the tests on, and test authoring and delivery software. Labor expenses include costs for entering questions into the testing software, assuring quality in the test’s operation, extracting student records from the test database and translating the information into a form suitable for analysis, and servicing the technology that runs the system. There are also ongoing connection charges” (p. 7). The National Governors Association recommends that states form consortia, cooperative agreements, or buying pools in order to reduce the costs of “test questions, telecommunications equipment, computer hardware, testing software, and equipment maintenance” (p. 9).
Universally Designed Computer-based Tests
Universal design is defined by the Center for Universal Design (1997) as “the design of products and environments to be usable by all people, to the greatest extent possible, without the need for adaptation or specialized design.” The Assistive Technology Act of 1998 (PL 105-394) addresses universal design through this definition:
A recent report on the application of universal design to large-scale assessments (Thompson, Johnstone, & Thurlow, 2002) found that good basic design, whether on paper or technology-based, increases access for everyone, and poor design can have detrimental effects for nearly everyone. Many accessibility issues relate to content and design features, with content defined as subject matter on the page while design is defined as the organization or arrangement of objects and information on the page.
An important function of well-designed assessments is that they actually measure what they are intended to measure. Test developers need to carefully examine what is to be tested and design items that offer the greatest opportunity for success within those constructs. Just as universally designed architecture removes physical, sensory, and cognitive barriers to all types of people in public and private structures, universally designed assessments need to remove all non-construct-oriented cognitive, sensory, emotional, and physical barriers.
Assessment instructions need to be easy to understand, regardless of a student’s experience, knowledge, language skills, or current concentration level. Directions and questions need to be in simple, clear, and understandable language. It is important for designers of computer-based tests to strive for content that is understandable and navigable. According to WebAIM (2001), “this includes not only making the language clear and simple, but also providing understandable mechanisms for navigating within and between pages” (p. 8).
Legibility is the physical appearance of text; the way shapes of letters and numbers enable people to read text “quickly, effortlessly, and with understanding” (Schriver, 1997, p. 252). Though a great deal of research has been conducted in this area, the personal opinions of editors often prevail (Bloodsworth, 1993; Tinker, 1963). Bias results from items that contain physical features that interfere with a student’s focus on or understanding of the construct an item is intended to assess. Format dimensions can include contrast, type size, spacing, typeface, leading, justification, line length/width, blank space, graphs and tables, illustrations, and response formats (see Table 2).
Table 2. Characteristics of Maximum Legibility
From Thompson, Johnstone, & Thurlow, 2002.
It is important to maintain these aspects of universal design when converting paper/pencil tests to computer-based tests. Poor design on paper will result in poor design on a screen. In addition to the universal design elements described above, computer-based testing can offer several additional features that can increase the accessibility of assessments for all students, including students with disabilities and English language learners. According to WebAIM (2001), “Everyone benefits from well-designed Web sites, regardless of cognitive capabilities. In this context, ‘well-designed’ can be defined as having a simple and intuitive interface, clearly worded text, and a consistent navigational scheme between pages” (p. 8). These features also need to take into account variations in technology available in schools across a district or state, and the other challenges described in the previous section.
The provision of navigation tools and orientation information in pages can maximize access for all users. However, there are users who cannot access visual clues such as image maps, scroll bars, side-by-side frames, or graphics. Some users lose contextual information because they are accessing a page one word at a time through speech synthesis or braille. Ommerborn and Schuemer (2001, p. 21) conducted a survey of German students with disabilities and found that:
Even though items on universally designed assessments will be accessible for most students, there will still be some students who continue to need accommodations, including assistive technology. According to Bowe (2000), “One big advantage of universal design is that it minimizes the need, on the part of people with disabilities, for assistive technology devices and services” (p. 25). Items are biased when they do not allow for adaptation for use with assistive technology that is needed to facilitate use of the student’s primary means of communication. Computer-based tests need to be accessible for a variety of forms of assistive technology (e.g., key guards, specialized keyboards, trackballs, screen readers, screen enlargers) for students with physical or sensory disabilities. Bowe (2000) stated, “If a product or service is not usable by some individual, it is the responsibility of its developers to find ways to make it usable, or, at minimum, to arrange for it to be used together with assistive technologies of the user’s choice” (p. 27). Appendix A describes several resources to assist assessment developers in increasing access to assistive technology.
It is important to note that making computer-based testing amenable to assistive technology does not mean that students will automatically know what to do. Educators, especially special educators, need to be competent in technology knowledge and use. According to Lahm and Nickels (1999), “Educators must become proactive in their technology-related professional development because teacher education programs have only recently begun addressing the technology skills of their students” (p. 56). The Knowledge and Skills Subcommittee of the Council for Exceptional Children’s (CEC) Professional Standards and Practice Standing Committee has developed a set of 51 competencies for assistive technology that cross 8 categories, along with knowledge and skills statements for each category (see Lahm & Nickels, 1999).
Laws Governing Assistive Technology
The use of assistive technology is defined in the Individuals with Disabilities Education Act (IDEA 97) and the Rehabilitation Act (as amended in 1998), and is implied in the Americans with Disabilities Act (ADA). IDEA 97 defines assistive technology as “any item, piece of equipment, or product system…that is used to improve the functional capabilities of individuals with disabilities; and any service that directly assists an individual in the selection, acquisition, or use of an assistive technology device.” An “assistive technology device” is further defined as “any item, piece of equipment, or product system, whether acquired commercially off the shelf, modified, or customized, that is used to increase, maintain, or improve the functional capabilities of a child with a disability” (20 U.S.C. 1401(1)).
The Rehabilitation Act (as amended in 1998) requires institutions receiving federal funds to have accessible Web sites. Similarly, the Americans with Disabilities Act (ADA) requires covered entities to furnish appropriate auxiliary aids and services where necessary to ensure effective communication with individuals with disabilities, unless doing so would result in a fundamental alteration to the program or service or in an undue burden (see 28 C.F.R. 36.303; 28 C.F.R. 35.160). Auxiliary aids include taped texts, brailled materials, large print materials, captioning, and other methods of making audio and visual media available to people with disabilities. Titles II and III of the ADA require state and local governments and the business sector to provide effective communication whenever they communicate through the Internet. To specifically address the needs of people with visual disabilities, an ADA policy ruling determined that a text format, rather than a graphical format, assures accessibility to the Internet for individuals using screen readers. Without special coding, a text browser will only display the word “image” when it reads a graphic image; if the graphic is essential to navigating the site (e.g., a navigational button or arrow) or contains important information (e.g., a table or image map), the user can get stuck, unable to move on or to understand the information provided.
Assistive Technology Resources
There are several resources available to increase the accessibility of computer-based testing for students with disabilities. These resources are found primarily in the area of general Web content. Chisholm, Vanderheiden, and Jacobs (1999) offer guidelines on how to make Web content accessible to people with disabilities. They are quick to point out that following these guidelines can also make Web content more usable by all users, including those who use voice browsers, mobile phones, automobile-based personal computers, and other technology. The guidelines, found in Table 3, explain how to make multimedia content more accessible to a wide audience. For more information about Web accessibility, visit http://www.webaim.org, the official Web site of Web Accessibility in Mind (WebAIM). Several additional resources can be found in Appendix A.
Table 3. Web Content Accessibility Guidelines
Computerized Adaptive Testing
In computerized adaptive testing, a student responds to an item, which is followed by more difficult items if the student responded correctly, or easier items if the student responded incorrectly (Hamilton, Klein, & Lorié, 2001). Through this process, a student’s performance level is determined. According to Hamilton, Klein and Lorié (2001), “each response leads to a revised estimate of the student’s proficiency and a decision either to stop testing or to administer an additional item that is harder or easier than the previous one” (p. 12).
The advantages cited for computerized adaptive testing include short and efficient administration time, with the computer selecting the next item immediately after an item is completed. A proficiency level is determined through the completion of fewer items than a test in which students respond to every item on the test. According to McBride (1985), “A well-constructed adaptive test attains a specified level of measurement precision in about half the length of time a conventional test would require to reach the same level. This is attributable to the adaptive feature; by tailoring the choice of questions to match the examinee’s ability, the test bypasses most questions that are inappropriate in difficulty level and contribute little to the accurate estimation of the test-taker’s ability” (p. 26).
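The item-selection loop that McBride describes can be sketched in a few lines. The sketch below is illustrative only: operational adaptive tests estimate proficiency with item response theory models, whereas this toy version uses a fixed, shrinking adjustment and a hypothetical one-dimensional difficulty scale.

```python
# Toy sketch of computerized adaptive item selection (illustrative only;
# real systems use item response theory, not this fixed-step update).

def adaptive_test(item_bank, answer, n_items=10):
    """item_bank: list of item difficulties; answer(difficulty) -> bool."""
    proficiency = 0.0          # start at the middle of the scale
    step = 1.0                 # shrink the adjustment after each response
    used = set()
    for _ in range(n_items):
        # choose the unused item whose difficulty best matches the estimate
        i = min((j for j in range(len(item_bank)) if j not in used),
                key=lambda j: abs(item_bank[j] - proficiency))
        used.add(i)
        if answer(item_bank[i]):
            proficiency += step    # correct: try a harder item next
        else:
            proficiency -= step    # incorrect: try an easier item next
        step *= 0.7
    return proficiency
```

Because each item is chosen to match the current estimate, a simulated student whose true ability sits partway up the scale is bracketed by successively harder and easier items, which is the "tailoring" McBride credits for the shorter test length.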
However, Stone and Lunz (1994) found that the inability of students taking computerized adaptive tests to review items and alter their responses may affect the quality of measurement. Students cannot select the order in which they respond to items, or leave some items blank.
Some research suggests that students who are allowed to change earlier answers may improve their scores by a small margin (Gerson & Bergstrom, 1995; Stocking, 1996). There is also concern that some students may deliberately answer early items incorrectly in order to receive easier questions (Wainer, 1993).
The use of computerized adaptive tests for large-scale assessments has come under scrutiny by federal officials who question whether “levels” testing meets the accountability requirements of Title I (Olson, 2002). Levels testing, which has been defined as testing at a student’s instructional level rather than at his or her grade level, relies on overlapping levels within a single grade level and on common items among the levels. Computerized adaptive testing goes beyond the need for separate booklets by using complex algorithms that allow the student to move among different “levels” more freely, based on performance (Quenemoen, Thurlow, & Bielinski, in press).
Process for Developing Inclusive Computer-based Tests
The transformation of traditional paper/pencil tests to inclusive computer-based tests takes careful and thorough work that includes the collaborative expertise of many people. As discussed earlier in this paper, in order for the full benefits of computer-based testing to be realized, a thoughtful and systematic process must be used to examine the transfer of existing paper/pencil assessments. It is not enough to simply transfer test items from paper to screen. Not only will poor design elements on the paper test transfer to the screen, but additional challenges may also arise that reduce the validity of assessment results. Some of the challenges traditionally present with accommodations could be minimized through universally designed computer-based tests, while others might remain or present even greater challenges. The following steps address these transformation issues.
Step 1. Assemble a group of experts to guide the transformation. This group needs to include experts on assessment design, accessible Web design, universal design, and assistive technology, along with state and local assessment and special education personnel. Table 4 contains a worksheet to use when gathering this group.
Table 4. Assemble a Group of Experts to Guide the Development of Computer-based Tests.
Step 2. Decide how each accommodation will be incorporated into the computer-based test. Examine each possible accommodation in light of computer-based administration. Some of the traditional paper/pencil accommodations will no longer be needed (e.g., marking responses on test form rather than on answer sheet), while others will become built-in features that are available to every test-taker. Some accommodations will be more difficult to incorporate than others, requiring careful work by test designers and technology specialists. The standards and guidelines for accessible Web design found in Appendices B, C, and D should be used when building in these features.
Step 3. Consider each accommodation or assessment feature in light of the constructs being tested. For example, what are the implications of the use of a screen reader when the construct being measured is reading, or the use of a spellcheck when achievement in spelling is being measured as part of the writing process? As the use of speech recognition technology permeates the corporate world, constructs that focus on writing on paper without the use of a dictionary or spellchecker may become obsolete and need to be reconsidered.
Step 4. Consider the feasibility of incorporating the accommodation into computer-based tests. Questions about the feasibility of an accommodation may require review by technical advisors or members of a policy/budget committee, or may require short-term solutions along with long-term planning, consistent with the application provisions of the Technology Act of 1998 (§ 1194.2).
Construct a specific plan for building in features that are not immediately available, in order to keep them in the purview of test developers. Extensive pilot testing needs to be conducted with a variety of equipment scenarios and accessibility features.
Step 5. Consider training implications for staff and students. The best technology will be useless if students or staff do not know how to use it. Careful design of local training and implementation needs to be part of the planning process. Special consideration needs to be given to the computer literacy of students and their experience using features like screen readers. Information about the features available on computer-based tests needs to be marketed to schools and made available to IEP teams to use in planning a student’s instruction and in preparing for the most accessible assessments possible. Practice tests that include these features need to be available to all schools year-round. This availability presents an excellent opportunity for students whose schools have previously been unaware of, or have balked at, the use of assistive technology.
Most states have a list of possible or common accommodations for students with disabilities within the categories of timing/scheduling, presentation, response, and setting (Thurlow, Lazarus, & Thompson, 2002). Some states also list accommodations specifically designed for students with limited English proficiency (Rivera, Stansfield, Scialdone, & Sharkey, 2000).
The list of accommodations in Table 5 is an expanded list of presentation accommodations generated to address the needs of students with a variety of accommodation needs: students with disabilities, students with limited English proficiency, students with both disabilities and limited English proficiency, and students who do not receive special services but have a variety of unique learning and response styles and needs. For each accommodation, relevant considerations are provided in the table. The three columns to the right of the considerations column indicate whether the accommodation is a built-in feature of the computer-based test, whether the need for the accommodation is unaffected by computer-based administration, or whether a new or different accommodation may be needed.
Following Table 5 is a summary of considerations for each of the presentation accommodations.
Table 5. Presentation Accommodations
*1 Built-in feature of universally designed computer-based test (available for self-selection by any student)
Large print and magnification. When type is enlarged on a screen, students may need to scroll back and forth, or up and down, to read an entire test item. Text that re-wraps to fit the screen when magnified is more useful than text that requires horizontal scrolling to be accessible. Some students use a large screen monitor to enlarge pages proportionally. Graphics, when enlarged, may become very pixelated and difficult to view. Students who use handheld magnifiers or monocular devices when working on paper may not be able to use these devices on a screen because of the distortion of computer images. If a graphical user interface is used (versus a text-based one), students will not have the option of altering print size on the screen. However, a text-based user interface may default to a small print size or font on some computers.
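The difference between re-wrapping and horizontal scrolling can be illustrated with a short sketch using Python's standard textwrap module. The item text and line widths below are hypothetical, standing in for the number of characters a given magnification level allows on one screen line.

```python
import textwrap

def rewrap(text, chars_per_line):
    """Re-flow item text to fit the characters per line that the
    current magnification level allows, instead of letting enlarged
    lines run off the right edge of the screen."""
    return textwrap.fill(" ".join(text.split()), width=chars_per_line)

item = ("Which of the following best describes the main idea "
        "of the passage above?")

print(rewrap(item, 60))   # normal type size
print(rewrap(item, 30))   # enlarged type: same text, narrower lines
```

The same words appear in both renderings; only the line breaks move, so a student reading at high magnification never needs to scroll horizontally.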
Instructions simplified/clarified. Instructions for all students need clearly worded text that can be followed simply and intuitively, with a consistent navigational scheme between pages/items. Students need an option to self-select alternate forms of instructions in written or audio format.
Audio presentation of instructions and test items. Screen readers can present text as synthesized speech. Screen readers need to be operable at variable speeds and need to allow students the option of repeating instructions or items as often as desired. The use of text-to-speech for test items may not be a viable option if the construct tested is the ability to read print. One caution: screen readers will attempt to pronounce acronyms (e.g., CRT) and abbreviations that contain vowels (e.g., AZ). It is important to avoid these both in the text of test items and in the alternative text or “alt tags” that are used. A text-based user interface is required for the use of screen readers.
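Where acronyms and abbreviations cannot be avoided, a delivery system can reduce mispronunciations by expanding them before text reaches the speech synthesizer. The sketch below assumes a small, hypothetical lookup table; a real system would need a much larger table reviewed for each test.

```python
# Sketch: expand abbreviations before sending text to a speech
# synthesizer, so a screen reader says "Arizona" rather than trying
# to pronounce "AZ". The EXPANSIONS table is a hypothetical example.

EXPANSIONS = {"AZ": "Arizona", "CRT": "C R T"}

def prepare_for_speech(text):
    words = []
    for word in text.split():
        # strip trailing punctuation so "AZ," still matches the "AZ" entry
        core = word.rstrip(",.;:")
        tail = word[len(core):]
        words.append(EXPANSIONS.get(core, core) + tail)
    return " ".join(words)
```

For example, `prepare_for_speech("Phoenix, AZ, uses a CRT display.")` yields text in which "AZ" has been replaced by "Arizona" and "CRT" is spelled out letter by letter.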
Instructions and test items presented in sign language. Since most students who read sign language also read print, this accommodation would apply mostly to multimedia item presentation (e.g., items that use audio or video). Students need to be able to self-select signed versions of audio or video instructions and test items. If sign language is used, the signer needs to be large enough on the screen, and the resolution good enough, for students to distinguish subtle signs. Students also need the option to repeat instructions or items. Speechreading (lipreading) a person on a Web video is not feasible. Captioning in addition to signing may be the most feasible option for audio or video presentations.
Instructions and test items presented in a language other than English. Translated items in some languages may significantly increase the length of a test, especially if the language requires phrases or explanations of English words. Some students need English and native language versions of items available at the same time. Computer-based testing may provide an advantage in both of these situations for students who are computer literate and able to scroll across and down long pages, and who can move between two versions of items. For students using screen readers, it is important for the screen reading software to recognize non-Latin based languages (e.g., Chinese, Korean, Hmong). Audio versions in native languages need to be in a dialect familiar to the student (e.g., a student from Mexico may have difficulty understanding a translation from Spain).
The use of machine translation is increasing. Yet, at this time, machine translation may not be good enough to produce valid test items. Tests developed in multiple languages, with human rather than machine translation, continue to be the most valid. Machine translators may be useful as a dictionary or glossary for specific words or phrases. The disadvantage of a human translator is the lack of standardized translation. For example, an interpreter may change the difficulty of items through word choice, by explaining vocabulary for which there is no direct translation, or by otherwise coaching students.
Braille. Tests that do not require students to read printed text (e.g., math tests) can be read by a student with a screen reader that converts text into synthesized speech. Tests that do require students to read printed text (e.g., reading tests) could be read by a student with a screen reader that converts text into Braille through a refreshable Braille device attached to the computer. For students who are deaf and blind, all of the content must be in a text format so that it can be converted to Braille. Images must also be accessible. The Assistive Technology Act requires that “when an image represents a program element, the information conveyed by the image must also be available in text.” Strategies for this are described in the section on images and graphics below.
Highlighter and place holding templates. Students should be able to self-select the use of a highlighting feature to mark words or phrases within test items, just as they might on paper/pencil tests.
Graphics or images that supplement text. The purpose of graphics and images on an assessment is to aid in the understanding of an item, and not purely for decorative purposes. That said, images can aid greatly in the understanding of content, especially for students with learning disabilities and students whose native language is not English. Pictures and other graphics cannot be directly accessed by users of screen-readers or foreign language translation applications. The Assistive Technology Act requires that “When an image represents a program element, the information conveyed by the image must also be available in text.” The Act goes on to state, “A text equivalent for every non-text element shall be provided (e.g., via “alt,” “longdesc,” or in element content).” Images need to be selected carefully, with a concise, yet complete description in an alt tag.
Tactile graphics or three-dimensional models may be needed for images. It is also important to avoid the use of complex backgrounds or wallpaper that may interfere with the readability of overlying text. Simpler versions of any screens with complex backgrounds need to be available.
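The alt-text requirement above is straightforward to audit automatically during item review. The following sketch, using only Python's standard library, flags test-item images that lack a text equivalent; the sample HTML and file names are hypothetical.

```python
# Sketch: flag test-item images that lack the text equivalent ("alt")
# needed for screen-reader access. Empty alt text is treated as missing
# here because images on test items are assumed to carry meaning.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []          # src values of images with no alt text

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                self.missing.append(attrs.get("src", "(no src)"))

def images_missing_alt(item_html):
    checker = AltTextChecker()
    checker.feed(item_html)
    return checker.missing
```

Running such a check over every item page before field testing catches missing descriptions early, when they are cheapest to fix.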
Paper/pencil test format. Some students will continue to need paper and pencil versions of tests. There are still many students who are not computer literate. These students may, for example, be recent immigrants from countries where computers are not used in instruction, or they may have had little formal schooling in their home country. Other students may have had insufficient opportunities to become computer literate in U.S. schools for a variety of reasons. Some students need accommodations that have not been made available on computer-based tests, especially if the assessments are graphics based rather than text based.
Use of color. Students need to be able to choose a variety of contrasting colors for background and text. According to the Assistive Technology Act, computer “applications shall not override user selected contrast and color selections and other individual display attributes.” In addition, for the assistance of students who are color blind or who are using monochrome monitors, the Assistive Technology Act states, “Color coding shall not be used as the only means of conveying information, indicating an action, prompting a response, or distinguishing a visual element…Web pages shall be designed so that all information conveyed with color is also available without color, for example from context or markup.” If color-coding is used to distinguish information, some other distinguishing feature should also be present (such as an asterisk or other textual indication).
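The "no color alone" rule can likewise be checked mechanically during item review. The sketch below assumes a hypothetical item record format in which each color cue is supposed to be paired with a textual cue such as an asterisk.

```python
# Sketch: verify that no item relies on color alone to convey
# information. The item records and field names are hypothetical.

def color_only_violations(items):
    """Return labels of items whose meaning is carried by color alone."""
    return [item["label"] for item in items
            if item.get("color_cue") and not item.get("text_cue")]

items = [
    {"label": "Q1", "color_cue": "red", "text_cue": "*"},   # color plus asterisk
    {"label": "Q2", "color_cue": "red", "text_cue": None},  # color only: flagged
]
```

Here Q2 would be flagged for revision, because a student who is color blind or using a monochrome monitor would miss the information it conveys.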
Flashing or blinking text or objects. It is important to avoid text or objects that flash or flicker at rates that may induce seizures in people who are susceptible to them. The Assistive Technology Act requires that “software shall not use flashing or blinking text, objects, or other elements having a flash or blink frequency greater than 2 Hz and lower than 55 Hz.”
Multiple column layout. Items that use columns or tables need to be analyzed carefully to make sure that their linear presentation order is logical, enabling screen readers to access the information.
Captioning. As multimedia begins to be used for assessment presentation, it will be important to provide synchronized captions or transcripts for the audio portion of the content. Closed or open captioning for Web-based multimedia can be provided in the same way as for television shows or movies.
In Table 6 is an expanded list of response accommodations. For each of the accommodations, several considerations are listed. The columns to the right of these considerations indicate whether the accommodation is a built-in feature, whether the need for the accommodation is unaffected by computer-based administration, or whether another (new or different) accommodation may be needed.
Table 6. Response Accommodations
*1 Built-in feature of universally designed computer-based tests (available for self-selection by any student)
Write in test booklet. There are many options for marking responses on computer-based tests that are not available on paper. It would still be possible for a student to dictate responses to a teacher, who would then mark them on the computer. The option of speech recognition software is also becoming more available. Speech recognition technology enables computers to translate human speech into a written format. Students who use speech recognition need to be tested in individual settings so as not to distract other test takers. Currently, speech recognition works well only for some people; others, especially those who are not native English speakers or those with speech impairments, can be frustrated by the software’s inability to differentiate many of the sounds that they make. Some second language learners have accents that do not work well with speech recognition software (e.g., speakers of tonal languages tend to carry those tones into English, and the software often does not recognize them). However, this technology is improving rapidly to recognize speakers with a wider variety of regional and second language accents (Williams, 2002, p. 44).
Research is also underway to allow students to speak naturally, rather than the current practice of pausing slightly between words. High-quality microphones improve recognition. Students who have tests presented in their native language may have a difficult time responding using an English alphabet keyboard if they are responding in a non-alphabet language. For example, in Chinese, adults need to know thousands of individual characters to read a text like a newspaper. Each character equals a word. So, Chinese computer keyboards may have keys that represent pieces of characters (strokes) that have to be combined together in a precise way to form a specific word.
Additional options that can enable students to select responses independently include simple mouse clicks, using the keyboard, touching the screen, and assistive devices for accessing the keyboard (e.g., a mouth stick or head wand).
Scribe. Many of the comments and cautions described in the previous paragraphs also apply here. Students who are able to use speech recognition software may be able to dictate written responses without the aid of a human scribe. Other assistive technology, such as communication devices, a mouth stick, or a head wand, may enable students to compose extended responses.
Brailler. Some students may be able to use speech recognition software (with the cautions described above) in place of a Brailler. Others will continue to require or prefer the use of a Brailler.
Tape recorder. Speech recognition software can take the place of a tape recorder for many students, with the cautions described above.
Paper/pencil response. Some students will not have enough experience or confidence using computers to be able to produce valid assessment responses and may need to use paper/pencil test forms until they become computer literate. Some students will only need paper for solving problems and drafting ideas, while others will need to respond completely using a paper/pencil format, with responses transferred to an electronic test form by a test administrator. Speech recognition, with the cautions described above, may be a viable option for some of these students.
Spell check. The use of a spell check has been controversial on writing tests. It is usually allowed in situations where spelling achievement is not measured, and not allowed when spelling achievement is being measured. Spelling implications need to be considered for students who use speech recognition software.
Calculator. As with the spell check, an online calculator option has been controversial on mathematics tests. Calculator use is often allowed on paper/pencil tests when arithmetic is not the construct being measured (Russell, 2002). However, standardization of the type of calculator used has been very difficult and would be much easier if all students had the same online calculator to use. Use of an online calculator is challenging for some students, especially if they have not had practice with this tool in their daily work. Currently, few teachers use computers in math instruction, so students are not used to working on screens.
English or bilingual dictionary/glossary. Students can self-select a dictionary option, or simply click on key words for definitions in English or other languages. Print copies of dictionaries could continue to be used if this option is not available. And, as with the spell check option, it would need to be disabled when finding the definition of a word is being tested.
Timing accommodations reflect changes in the amount of time a student has to complete an assessment, while scheduling accommodations are changes in the time of day in which a student is tested. Table 7 is an expanded list of timing and scheduling accommodations, with considerations and implications.
Table 7. Timing/Scheduling Accommodations
*1 Built-in feature of universally designed technology-based test (available for self-selection by any student)
Extended time. Well-designed assessments—those designed for maximum legibility and readability—take less time to complete than poorly designed assessments. Still, it may require more time for students who are not computer literate to take computer-based tests than it does for them to take paper/pencil assessments. Allowing all students time to complete an assessment presents scheduling challenges that need to be considered when planning test administration. For example, groups of students cannot be scheduled for testing in a computer lab every two hours if there are students who cannot finish in that amount of time. It may be difficult for a student to log off one computer and then log back on at another location to complete an assessment. However, with the advent of wireless computers, it may be possible for a computer to be used in any location.
Timing is no longer an issue for most criterion-referenced tests, which tend to be untimed. Computerized adaptive tests, where items are presented based on a student’s previous responses, tend to be shorter in length than traditional large-scale tests, and usually take less time to complete.
Time of day beneficial to student. Currently, it is common for all test takers within a building, district, or even state to be tested at the same time on the same day. With computer-based testing, test times probably need to vary because of the availability of computers and network capacity. This variability may increase opportunities for individual students to be scheduled at test times that are most beneficial for them. For example, a student who is more alert in the morning because of medication could be tested during a morning session.
Breaks and multiple test sessions. Technology is required for multiple test sessions that would allow individual students to submit their completed responses and be able to log out and back on again at another time, starting at the place where they previously left off. For short breaks, it may be possible to simply turn off the monitor or create a blank screen rather than logging out. Careful scheduling is needed for multiple test sessions to make sure that computers are available. Test security becomes an issue if students who have responded to the same test items have opportunities to interact with each other between test sessions. This can be alleviated through the use of item banks large enough to make it unlikely that students would be exposed to the same items. It might also be possible to block access to items completed during a previous session. However, it is important for students to be able to return to items that they skipped or did not complete, just as they can with paper/pencil tests.
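A minimal sketch of the checkpointing that breaks and multiple sessions require might look like the following. The file format and field names are hypothetical, and a real system would also need authentication, server-side storage, and item-level security controls.

```python
# Sketch: checkpoint a test session so a student can log out and resume
# later where he or she left off. Items answered in an earlier session
# are locked, while skipped items remain open for the student to revisit.
import json

def save_session(path, responses, current_item):
    """responses: {item_id: answer, or None for a skipped item}."""
    with open(path, "w") as f:
        json.dump({"responses": responses, "current": current_item}, f)

def resume_session(path):
    with open(path) as f:
        state = json.load(f)
    locked = [i for i, a in state["responses"].items() if a is not None]
    open_items = [i for i, a in state["responses"].items() if a is None]
    return state["current"], locked, open_items
```

Locking only the items answered in a prior session, rather than all earlier items, preserves the paper/pencil convention that students may return to questions they skipped.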
Order of subtest administration. Tests can be set up to allow students to self-select the order in which they take each subtest. The security issues described above also apply here. If students within a room are not all working on the same subtest, directions or other guidance from the test administrator would need to be provided individually.
A list of commonly used setting accommodations is provided in Table 8. For each accommodation, we provide both considerations and implications for built in accommodations, no effect, and the need for new or different accommodations.
Table 8. Setting Accommodations
*1 Built-in feature of universally designed computer-based test (available for self-selection by any student)
Individual or small group administration. Computer-based tests allow increased individualization for every student. Each student can be seated at a separate computer station wearing earphones or headphones for audio instructions or items. Keyboard noise may be distracting for students not wearing headphones. Students using speech recognition systems or other potentially distracting response methods need to be tested in individual settings.
Preferential seating. This becomes a non-issue when students are seated at individual computer stations and do not need to focus on activity in a certain part of the room. Configuration of the computer lab may influence seating arrangements. For example, some students will need space around their computer for assistive technology; others may need special lighting.
Special lighting. Computer labs are usually set up to minimize glare from windows or overhead lights. Many also contain incandescent lighting, which is less distracting for students with attention deficits and produces better light for students with visual impairments. In designing computer-based tests, it is important to maximize contrast between the print and background and to ensure that text and graphics are understandable when viewed without color, for students who are color-blind or using monochrome monitors. Students should be able to self-select text and background colors and shading that maximizes their ability to read print on the screen.
Adaptive or special furniture. Students need comfortable access to a computer screen and any peripheral presentation or response technology. These arrangements need to be made on an individual basis with sufficient preparation time.
Home/hospital/non-school administration. Computer-based tests present new challenges for students who are tested in non-school locations. Students need access to a laptop computer and a network connection (possibly wireless), along with any individualized accommodations. It is important to make sure that the equipment is comparable to that used by students assessed in school buildings.
With the reauthorization of Title I, nearly all states are in the process of designing new assessments. As part of this process, several states are considering computer-based testing, since this is the mode in which many students are already learning, and several have already begun designing and implementing it. According to a report to the National Governors Association (2002), “Testing by computer presents an unprecedented opportunity to customize assessment and instruction to more effectively meet students’ needs” (p. 8). The potential opportunities presented by computer-based testing include efficient administration, student preference, self-selection options for students, improved writing performance, built-in accommodations, immediate results, efficient item development, increased authenticity, and the potential to shift the focus from assessment to instruction. Of course, many challenges must still be overcome for computer-based testing to be effective for large-scale state assessments: issues of equity and skill in computer use, added challenges for some students, technological challenges, security of online data, lack of expertise in designing accessible Web pages, and prohibitive development costs.
Because many accessibility features can be built into computer-based tests, the validity of test results can be increased for many students, including students with disabilities and English language learners, without the addition of special accommodations. Even so, some students will still need specialized accommodations beyond what universally designed items provide, and computer-based testing needs to be amenable to them. Students with disabilities will be at a great disadvantage if paper/pencil tests are simply copied onto the screen without any flexibility. Until the implications of graphics-based versus text-based user interfaces are considered and resolved, a large number of students will need to continue to use paper/pencil tests, with a possible reduction in the comparability of results, and an increase in administration time and in potential errors when a test administrator transfers paper/pencil responses to a computer for scoring.
There are many resources for building accessible computer-based tests in order to keep from reinventing systems from state to state. These are described throughout this report and listed in Appendix A.
Several steps were described to assist groups in the thoughtful development of computer-based tests. These include:
Step 1. Assemble a group of experts to guide the transformation.
Step 2. Decide how each accommodation will be incorporated into the computer-based test.
Step 3. Consider each accommodation or assessment feature in light of the constructs being tested.
Step 4. Consider the feasibility of incorporating the accommodation into the computer-based test.
Step 5. Consider training implications for staff and students.
Skipping any of these steps may result in the design of assessments that exclude large numbers of students.
In conclusion, a report to the National Governors Association (2002, p. 9) sums up what we need to remember as computer-based testing grows across the United States and throughout the world:
Anderson, M., Liu, K., Swierzbin, B., Thurlow, M., & Bielinski, J. (2000). Bilingual accommodations for limited English proficient students on statewide reading tests: Phase 2 (Minnesota Report 31). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.
Baker, E.L. (1999). Technology: Something’s coming—something good. CRESST Policy Brief 2. Los Angeles, CA: UCLA, National Center for Research on Evaluation, Standards, and Student Testing.
Baker, E.L. (2002). Design of automated authoring systems for tests. In National Research Council, Technology and assessment: Thinking ahead: Proceedings of a workshop. Board on Testing and Assessment, Center for Education, Division of Behavioral and Social Sciences and Education. Washington, DC: National Academy Press.
Bejar, I.I. (1995). From adaptive testing to automated scoring of architectural simulations. In E.L. Mancall & P.G. Bashook (Eds.), Assessing clinical reasoning: The oral examination and alternative methods. Evanston, IL: American Board of Medical Specialties.
Bennett, R.E. (1995). Computer-based testing for examinees with disabilities: On the road to generalized accommodation. In S. Messick (Ed.), Assessment in higher education: Issues of access, student development, and public policy. Hillsdale, NJ: Erlbaum.
Bennett, R.E. (1998). Reinventing assessment: Speculations on the future of large-scale educational testing. Princeton, NJ: Policy Information Center, Educational Testing Service. Retrieved March, 2002, from the World Wide Web: www.ets.org/research/pic/bennett.html
Bennett, R.E. (1999). Using new technology to improve assessment. Educational Measurement Issues and Practice, 18 (3), 5-12.
Bennett, R.E. (2001). How the Internet will help large-scale assessment reinvent itself. Education Policy Analysis Archives, 9 (5). Retrieved March, 2002, from the World Wide Web: http://epaa.asu.edu/epaa/v9n5.html
Bennett, R.E. (2002). An electronic infrastructure for a future generation of tests. In H.F. O’Neil, Jr. & R. Perez (Eds.), Technology applications in education: A learning view. Mahwah, NJ: Erlbaum.
Bennett, R.E., Goodman, J., Hessinger, J., Ligget, J., Marshall, G., Kahn, H., & Zack, J. (1999). Using multimedia in large-scale computer-based testing programs. Computers in Human Behavior, 15, 283-294.
Bloodsworth, J.G. (1993). Legibility of print. Columbia, SC: ERIC Accession No: ED 355497.
Bolt, D. & Crawford, R. (2000). Digital divide: Computers and our children’s future. New York: TV Books.
Bowe, F. (2000). Universal design in education: Teaching nontraditional students. Westport, CT: Bergin & Garvey.
Brown, P.J., & Augustine, A. (2001). Screen reading software as an assessment accommodation: Implications for instruction and student performance. Paper presented at the American Education Research Association Annual Meeting, Seattle, WA, April, 2001.
Brown-Chidsey, R., & Boscardin, M.L. (1999). Computers as accessibility tools for students with and without learning disabilities. Amherst, MA: University of Massachusetts.
Brown-Chidsey, R., Boscardin, M.L., & Sireci, S.G. (1999). Computer attitudes and opinions of students with and without learning disabilities. Amherst, MA: University of Massachusetts.
Burk, M. (1999). Computerized test accommodations: A new approach for inclusion and success for students with disabilities. Washington, D.C.: A.U. Software.
Bushweller, K. (2000, June). Electronic exams: Throw away the No. 2 pencils—here comes computerized testing. Electronic School, 20-24.
Calhoon, M.B., Fuchs, L.S., & Hamlett, C.L. (2000). Effects of computer-based test accommodations on mathematics performance assessments for secondary students with learning disabilities. Learning Disability Quarterly, 23, 271-282.
Campbell, L.M. & Waddell, C.D. (1997). Technology-based curbcuts: How to build an accessible Web site. CAPED Communiqué, California Association on Postsecondary Education and Disability.
Center for Universal Design. (1997). What is Universal Design? North Carolina State University: Center for Universal Design. Retrieved March, 2002, from the World Wide Web: www.design.ncsu.edu
Chisholm, W., Vanderheiden, G., & Jacobs, I. (1999). Web content accessibility guidelines. Madison, WI: University of Wisconsin, Trace R & D Center. Retrieved March, 2002, from the World Wide Web: http://www.w3.org/TR/1999/WAI-WEBCONTENT-19990505
Chung, W.K., Baker, E.L., & Cheak, A.M. (2001). Knowledge mapper authoring system prototype. (Final deliverable to OERI). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing.
Chung, W.K., Klein, D.C.D., Herl, H.E., & Bewley, W. (2001). Requirements specification for a knowledge mapping authoring system. (Final deliverable to OERI). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing.
Computer Science and Telecommunications Board. (1997). More than screen deep: toward every-citizen interfaces to the nation’s information infrastructure. Washington DC: Commission on Physical Sciences, Mathematics, and Applications, National Research Council, National Academy Press. Retrieved March, 2002, from the World Wide Web: http://www.nap.edu/readingroom/books/screen
Daiute, C. (1985). Writing and computers. Reading, MA: Addison-Wesley.
Dolan, R.P., & Hall, T.E. (2001). Universal design for learning: Implications for large-scale assessment. IDA Perspectives 27(4), 22-25. Retrieved March, 2002, from the World Wide Web: http://www.cast.org/udl/index.cfm?i=2518
Fuchs, L.S., Fuchs, D., Eaton, S., Hamlett, C.L., & Karns, K. (2000). Supplementing teacher judgments of mathematics test accommodations with objective data sources. School Psychology Review, 29, 65-85.
Gershon, R., & Bergstrom, B. (1995). Does cheating on CAT pay: NOT! ERIC ED392844.
Gitomer, D.H., Steinberg, L.L., & Mislevy, R.J. (1995). Diagnostic assessment of troubleshooting skills in an intelligent system. Princeton, NJ: Educational Testing Service.
Greenwood, C.R., & Rieth, H.J. (1994). Current dimensions of technology-based assessment in special education. Exceptional Children, 61(2), 105-113.
Goldberg, L., & O’Neill, L.M. (2000, July). Computer technology can empower students with learning disabilities. Exceptional Parent Magazine, 72-74.
Haaf, R., Duncan, B., Skarakis-Doyle, E., Carew, M., & Kapitan, P. (1999). Computer-based language assessment software: The effects of presentation and response format. Language, Speech, and Hearing Services in Schools, 30, 68-74.
Haas, C., & Hayes, J.R. (1986). What did I just say? Reading problems in writing with the machine. Research in the Teaching of English, 20 (1), 22-35.
Hamilton, L. S., Klein, S. P., & Lorie, W. (2001). Using Web-based testing for large-scale assessment. Santa Monica: RAND. Retrieved March, 2002, from the World Wide Web: www.rand.org/publications/IP/IP196/IP196.pdf
Hollenbeck, K., Tindal, G., Harniss, M., & Almond, P. (1999). Reliability and decision consistency: An analysis of writing mode at two times on a statewide test. Educational Assessment, 6 (1), 23-40.
Joint Committee on Standards for Educational and Psychological Testing. (1999). Standards for educational and psychological testing. Washington, DC: Author.
Kerrey, B. & Isakson, J. (2002). The power of the internet for learning: moving from promise to practice—Report of the Web-based Education Commission. Washington, DC: Web-based Education Commission. Retrieved March, 2002, from the World Wide Web: http://interact.hpcnet.org/webcommission/index.htm.
Lahm, E.A., & Nickels, B.L. (1999). Assistive technology competencies for special educators. Teaching Exceptional Children, 32(1), 56-63.
Lewis, A. (2001). New directions in student testing and technology. APEC 2000 International Assessment Conference, Los Angeles.
Liu, K., Anderson, M., Swierzbin, B., & Thurlow, M. (1999). Bilingual accommodations for limited English proficient students on statewide reading tests: Phase I (Minnesota Report 20). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.
Lunz, M.E., & Bergstrom, B.A. (1994). An empirical study of computerized adaptive test administration conditions. Journal of Educational Measurement, 31 (3), 251-263.
McBride, J.R. (1985). Computerized adaptive testing. Educational Leadership, 43 (2), 25-28.
Menlove, M., & Hammond, M. (1998). Meeting the demands of ADA, IDEA, and other disability legislation in the design, development, and delivery of instruction. Journal of Technology and Teacher Education, 6 (1), 75-85.
Mislevy, R.J., Steinberg, L.L., & Almond, R.G. (1999). Evidence-centered assessment design. Princeton, NJ: Educational Testing Service.
Morocco, C.C., & Neuman, S.B. (1986). Word processors and the acquisition of writing strategies. Journal of Learning Disabilities, 19(4), 243-248.
Mourant, R.R., Lakshmanan, R. & Chantadisai, R. (1981). Visual fatigue and cathode ray tube display factors. Human Factors, 23 (5), 529-546.
National Governors Association. (2002). Using electronic assessment to measure student performance. Education Policy Studies Division: National Governors Association. Retrieved March, 2002, from the World Wide Web: http://www.nga.org/cda/files/ELECTRONICASSESSMENT.pdf
National Research Council. (2001). Knowing what students know: The science and design of educational assessments. Washington, DC: Board on Testing and Assessment, Center for Education. Division of Behavioral and Social Sciences and Education, National Academy Press.
National Research Council. (2002). Technology and assessment: Thinking ahead: Proceedings of a workshop. Washington, DC: Board on Testing and Assessment, Center for Education. Division of Behavioral and Social Sciences and Education, National Academy Press.
Newman, F. & Scurry, J. (2001). Online technology pushes pedagogy to the forefront. The Chronicle of Higher Education, 47 (44). Retrieved March, 2002, from the World Wide Web: http://chronicle.com/weekly/v47/i44/44b00701.htm
Olson, L. (2002). Ed. dept. hints Idaho’s novel testing plan unacceptable. Education Week, 21 (21) 18,21. Retrieved March, 2002, from the World Wide Web: http://edweek.com/ew/newstory.cfm?slug=21Idaho.h21&keywords=Idaho
Ommerborn, R., & Schuemer, R. (2001). Using computers in distance study: Results of a survey amongst disabled distance students. FernUniversität-Gesamthochschule in Hagen. Retrieved March, 2002, from the World Wide Web: http://www.fernuni-hagen.de/ZIFF
Peters-Walters, S. (1998). Accessible Web site design. Teaching Exceptional Children, 30(5), 42-47.
Quality Counts (2002). Building blocks for success. Retrieved March, 2002, from the World Wide Web: www.educationweek.org.
Quenemoen, R., Thurlow, M., & Bielinski, J. (2002). Rethinking design and levels approaches to federal inclusive assessment and accountability requirements (Working Paper). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.
Rivera, C., Stansfield, C.W., Scialdone, L., & Sharkey, M. (2000). An analysis of state policies for the inclusion and accommodation of English language learners in state assessment programs during 1998-1999. Arlington, VA: George Washington University Center for Equity and Excellence in Education.
Rose, D. (2000). Universal design for learning. Journal of Special Education Technology, 15 (4). Retrieved March, 2002, from the World Wide Web: http://jset.unlv.edu/15.4/issuemenu.html
Russell, M. (2002). How computer-based technology can disrupt the technology of testing and assessment. In National Research Council, Technology and assessment: Thinking ahead: Proceedings of a workshop. Washington, DC: Board on Testing and Assessment, Center for Education. Division of Behavioral and Social Sciences and Education, National Academy Press.
Russell, M., & Haney, W. (1997). Testing writing on computers: An experiment comparing student performance on tests conducted via computers and via paper-and-pencil. Education Policy Analysis Archives, 5 (3). Retrieved March, 2002, from the World Wide Web: http://epaa.asu.edu/epaa/v5n3.html
Russell, M., & Haney, W. (2000). Bridging the gap between testing and technology in schools. Education Policy Analysis Archives, 8 (19). Retrieved March, 2002, from the World Wide Web: http://epaa.asu.edu/epaa/v8n19.html
Russell, M. & Plati, T. (2001). Effects of computer versus paper administration of a state-mandated writing assessment. Teachers College Record. Retrieved March, 2002, from the World Wide Web: http://www.tcrecord.org
Schriver, K. (1997). Dynamics of document design. New York: John Wiley & Sons.
Stocking, M. (1996). Revising answers to items in computerized adaptive testing: A comparison of three models. ETS Report Number ETS-RR-96-12. Princeton, NJ: Educational Testing Service.
Thompson, C. (1999). New word order: The attack of the incredible grading machine. Linguafranca, 9 (5). Retrieved March, 2002, from the World Wide Web: http://www.linguafranca.com/9907/nwo.html
Thompson, S.J., Johnstone, C.J., & Thurlow, M.L. (2002). Universal design applied to large-scale assessments (Synthesis Report 44). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.
Thurlow, M.L., Lazarus, S., & Thompson, S.J. (2002). 2001 state policies on assessment participation and accommodations. Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.
Tindal, G. & Fuchs, L.S. (1999). A summary of research on test changes: An empirical basis for defining accommodations. Lexington, KY: University of Kentucky, Mid-South Regional Resource Center.
Tindal, G., Heath, B., Hollenbeck, K., Almond, P., & Harniss, M. (1998). Accommodating students with disabilities on large-scale tests: An experimental study. Exceptional Children, 64, 439-450.
Tinker, M.A. (1963). Legibility of print. Ames, IA: Iowa State University Press.
Trotter, A. (2001). Testing computerized exams. Education Week, 20 (37) 30-35. Retrieved March, 2002, from the World Wide Web: www.edweek.org/ew/ewstory.cfm?slug=37online.h20
Vanderheiden, G. (2000). Fundamental principles and priority setting for universal usability. Trace Research & Development Center, Madison, WI. Retrieved March, 2002, from the World Wide Web: http://trace.wisc.edu/docs/fundamental_princ_and_priority_acmcuu2000/index.htm
Waddell, C.D. (1997). Technology-based curbcuts for government Web sites: Making your Web site accessible. ADA Update, National League of Cities.
Wainer, H. (1993). Some practical considerations when converting a linearly administered test to an adaptive format. Educational Measurement, Issues and Practice, 12, 15-20.
Web Accessibility Initiative, World Wide Web Consortium. Retrieved March, 2002, from the World Wide Web: http://www.w3.org/WAI/
WebAIM (2001). Introduction to Web accessibility. Retrieved March, 2002, from the World Wide Web: www.webaim.org/intro/
Williams, S.M. (2002). Speech recognition technology and the assessment of beginning readers. In National Research Council, Technology and assessment: Thinking ahead: Proceedings of a workshop. Washington, DC: Board on Testing and Assessment, Center for Education. Division of Behavioral and Social Sciences and Education, National Academy Press.
Wissick, C.A., & Gardner, J.E. (2000). Multimedia or not to multimedia? That is the question for students with learning disabilities. Teaching Exceptional Children, 32 (4), 34-43.
Appendix A Assistive Technology and Electronic Testing Resources
Adaptive Technology Resource Centre
Alliance for Technology Access
American Educational Research Association
American Statistical Association
Apple Computer, Inc.
Arizona State University College of
Assistive Technology, Inc.
Assistive Technology Industry Association
Association for the Advancement of Assistive Technology in Europe
Bartimaeus Group Adaptive Technology
California State University, Northridge
Center on Disabilities
CAP (Computer/Electronic Accommodations Program)
Center for Advanced Research on Language Acquisition
Center for Applied Special Technology
Center for Computer Assistance for
Center for Research on Evaluation, Standards, and Student Testing (National) (CRESST)
Closing The Gap
disABILITY Information and Resources
DREAMMS for Kids, Inc.
Educational Testing Service
Equal Access to Software and Information
ERIC Clearinghouse on Information & Technology
Freedom of Speech
GW Micro, Inc.
Helen A. Keller Institute for Human disAbilities, George Mason University
IBM Accessibility Center
Institute for Matching Person & Technology
Kurzweil Educational Products
Lernout & Hauspie
MultiWeb (Deakin University, Australia)
National Center for Accessible Media
On the Internet Magazine
Question Mark Computing
Rehabilitation Engineering and Assistive Technology Society of North America (RESNA)
Society for Technical Education’s “Usability” Special Interest Group
TESOL Testing and Evaluation Special
Washington Assistive Technology Alliance
Web Accessibility Initiative
Web AIM (Accessibility in Mind)
World Wide Web Consortium
Section 508 of the Rehabilitation Act of 1973, as amended (29 U.S.C. 794d). PART 1194 -- ELECTRONIC AND INFORMATION TECHNOLOGY ACCESSIBILITY STANDARDS
Subpart A -- General
§ 1194.1 Purpose.
The purpose of this part is to implement section 508 of the Rehabilitation Act of 1973, as amended (29 U.S.C. 794d). Section 508 requires that when Federal agencies develop, procure, maintain, or use electronic and information technology, Federal employees with disabilities have access to and use of information and data that is comparable to the access and use by Federal employees who are not individuals with disabilities, unless an undue burden would be imposed on the agency. Section 508 also requires that individuals with disabilities, who are members of the public seeking information or services from a Federal agency, have access to and use of information and data that is comparable to that provided to the public who are not individuals with disabilities, unless an undue burden would be imposed on the agency.
§ 1194.2 Application.
(a) Products covered by this part shall comply with all applicable provisions of this part. When developing, procuring, maintaining, or using electronic and information technology, each agency shall ensure that the products comply with the applicable provisions of this part, unless an undue burden would be imposed on the agency.
(1) When compliance with the provisions of this part imposes an undue burden, agencies shall provide individuals with disabilities with the information and data involved by an alternative means of access that allows the individual to use the information and data.
(2) When procuring a product, if an agency determines that compliance with any provision of this part imposes an undue burden, the documentation by the agency supporting the procurement shall explain why, and to what extent, compliance with each such provision creates an undue burden.
(b) When procuring a product, each agency shall procure products which comply with the provisions in this part when such products are available in the commercial marketplace or when such products are developed in response to a Government solicitation. Agencies cannot claim a product as a whole is not commercially available because no product in the marketplace meets all the standards. If products are commercially available that meet some but not all of the standards, the agency must procure the product that best meets the standards.
(c) Except as provided by §1194.3(b), this part applies to electronic and information technology developed, procured, maintained, or used by agencies directly or used by a contractor under a contract with an agency which requires the use of such product, or requires the use, to a significant extent, of such product in the performance of a service or the furnishing of a product.
§ 1194.3 General exceptions.
(a) This part does not apply to any electronic and information technology operated by agencies, the function, operation, or use of which involves intelligence activities, cryptologic activities related to national security, command and control of military forces, equipment that is an integral part of a weapon or weapons system, or systems which are critical to the direct fulfillment of military or intelligence missions. Systems which are critical to the direct fulfillment of military or intelligence missions do not include a system that is to be used for routine administrative and business applications (including payroll, finance, logistics, and personnel management applications).
(b) This part does not apply to electronic and information technology that is acquired by a contractor incidental to a contract.
(c) Except as required to comply with the provisions in this part, this part does not require the installation of specific accessibility-related software or the attachment of an assistive technology device at a workstation of a Federal employee who is not an individual with a disability.
(d) When agencies provide access to the public to information or data through electronic and information technology, agencies are not required to make products owned by the agency available for access and use by individuals with disabilities at a location other than that where the electronic and information technology is provided to the public, or to purchase products for access and use by individuals with disabilities at a location other than that where the electronic and information technology is provided to the public.
(e) This part shall not be construed to require a fundamental alteration in the nature of a product or its components.
(f) Products located in spaces frequented only by service personnel for maintenance, repair, or occasional monitoring of equipment are not required to comply with this part.
§ 1194.4 Definitions.
The following definitions apply to this part:
Agency. Any Federal department or agency, including the United States Postal Service.
Alternate formats. Alternate formats usable by people with disabilities may include, but are not limited to, Braille, ASCII text, large print, recorded audio, and electronic formats that comply with this part.
Alternate methods. Different means of providing information, including product documentation, to people with disabilities. Alternate methods may include, but are not limited to, voice, fax, relay service, TTY, Internet posting, captioning, text-to-speech synthesis, and audio description.
Assistive technology. Any item, piece of equipment, or system, whether acquired commercially, modified, or customized, that is commonly used to increase, maintain, or improve functional capabilities of individuals with disabilities.
Electronic and information technology. Includes information technology and any equipment or interconnected system or subsystem of equipment, that is used in the creation, conversion, or duplication of data or information. The term electronic and information technology includes, but is not limited to, telecommunications products (such as telephones), information kiosks and transaction machines, World Wide Web sites, multimedia, and office equipment such as copiers and fax machines. The term does not include any equipment that contains embedded information technology that is used as an integral part of the product, but the principal function of which is not the acquisition, storage, manipulation, management, movement, control, display, switching, interchange, transmission, or reception of data or information. For example, HVAC (heating, ventilation, and air conditioning) equipment such as thermostats or temperature control devices, and medical equipment where information technology is integral to its operation, are not information technology.
Information technology. Any equipment or interconnected system or subsystem of equipment, that is used in the automatic acquisition, storage, manipulation, management, movement, control, display, switching, interchange, transmission, or reception of data or information. The term information technology includes computers, ancillary equipment, software, firmware and similar procedures, services (including support services), and related resources.
Operable controls. A component of a product that requires physical contact for normal operation. Operable controls include, but are not limited to, mechanically operated controls, input and output trays, card slots, keyboards, or keypads.
Product. Electronic and information technology.
Self Contained, Closed Products. Products that generally have embedded software and are commonly designed in such a fashion that a user cannot easily attach or install assistive technology. These products include, but are not limited to, information kiosks and information transaction machines, copiers, printers, calculators, fax machines, and other similar types of products.
Telecommunications. The transmission, between or among points specified by the user, of information of the user's choosing, without change in the form or content of the information as sent and received.
TTY. An abbreviation for teletypewriter. Machinery or equipment that employs interactive text based communications through the transmission of coded signals across the telephone network. TTYs may include, for example, devices known as TDDs (telecommunication display devices or telecommunication devices for deaf persons) or computers with special modems. TTYs are also called text telephones.
Undue burden. Undue burden means significant difficulty or expense. In determining whether an action would result in an undue burden, an agency shall consider all agency resources available to the program or component for which the product is being developed, procured, maintained, or used.
§ 1194.5 Equivalent facilitation.
Nothing in this part is intended to prevent the use of designs or technologies as alternatives to those prescribed in this part provided they result in substantially equivalent or greater access to and use of a product for people with disabilities.
§ 1194.21 Software applications and operating systems.
(a) When software is designed to run on a system that has a keyboard, product functions shall be executable from a keyboard where the function itself or the result of performing a function can be discerned textually.
(b) Applications shall not disrupt or disable activated features of other products that are identified as accessibility features, where those features are developed and documented according to industry standards. Applications also shall not disrupt or disable activated features of any operating system that are identified as accessibility features where the application programming interface for those accessibility features has been documented by the manufacturer of the operating system and is available to the product developer.
(c) A well-defined on-screen indication of the current focus shall be provided that moves among interactive interface elements as the input focus changes. The focus shall be programmatically exposed so that assistive technology can track focus and focus changes.
(d) Sufficient information about a user interface element including the identity, operation and state of the element shall be available to assistive technology. When an image represents a program element, the information conveyed by the image must also be available in text.
(e) When bitmap images are used to identify controls, status indicators, or other programmatic elements, the meaning assigned to those images shall be consistent throughout an application's performance.
(f) Textual information shall be provided through operating system functions for displaying text. The minimum information that shall be made available is text content, text input caret location, and text attributes.
(g) Applications shall not override user selected contrast and color selections and other individual display attributes.
(h) When animation is displayed, the information shall be displayable in at least one non-animated presentation mode at the option of the user.
(i) Color coding shall not be used as the only means of conveying information, indicating an action, prompting a response, or distinguishing a visual element.
(j) When a product permits a user to adjust color and contrast settings, a variety of color selections capable of producing a range of contrast levels shall be provided.
(k) Software shall not use flashing or blinking text, objects, or other elements having a flash or blink frequency greater than 2 Hz and lower than 55 Hz.
(l) When electronic forms are used, the form shall allow people using assistive technology to access the information, field elements, and functionality required for completion and submission of the form, including all directions and cues.
§ 1194.22 Web-based intranet and internet information and applications.
(a) A text equivalent for every non-text element shall be provided (e.g., via "alt", "longdesc", or in element content).
(b) Equivalent alternatives for any multimedia presentation shall be synchronized with the presentation.
(c) Web pages shall be designed so that all information conveyed with color is also available without color, for example from context or markup.
(d) Documents shall be organized so they are readable without requiring an associated style sheet.
(e) Redundant text links shall be provided for each active region of a server-side image map.
(f) Client-side image maps shall be provided instead of server-side image maps except where the regions cannot be defined with an available geometric shape.
(g) Row and column headers shall be identified for data tables.
(h) Markup shall be used to associate data cells and header cells for data tables that have two or more logical levels of row or column headers.
(i) Frames shall be titled with text that facilitates frame identification and navigation.
(j) Pages shall be designed to avoid causing the screen to flicker with a frequency greater than 2 Hz and lower than 55 Hz.
(k) A text-only page, with equivalent information or functionality, shall be provided to make a web site comply with the provisions of this part, when compliance cannot be accomplished in any other way. The content of the text-only page shall be updated whenever the primary page changes.
(l) When pages utilize scripting languages to display content, or to create interface elements, the information provided by the script shall be identified with functional text that can be read by assistive technology.
(m) When a web page requires that an applet, plug-in or other application be present on the client system to interpret page content, the page must provide a link to a plug-in or applet that complies with §1194.21(a) through (l).
(n) When electronic forms are designed to be completed on-line, the form shall allow people using assistive technology to access the information, field elements, and functionality required for completion and submission of the form, including all directions and cues.
(o) A method shall be provided that permits users to skip repetitive navigation links.
(p) When a timed response is required, the user shall be alerted and given sufficient time to indicate more time is required.
Note to §1194.22: 1. The Board interprets paragraphs (a) through (k) of this section as consistent with the following priority 1 Checkpoints of the Web Content Accessibility Guidelines 1.0 (WCAG 1.0) (May 5, 1999) published by the Web Accessibility Initiative of the World Wide Web Consortium:
Appendix C Section 508 Web Accessibility Checklist
(Updated March 29, 2001)
WebAIM (Web Accessibility in Mind) educates and trains web developers, university faculty, and administrators on web accessibility issues. WebAIM is an initiative of the Center for Persons with Disabilities at Utah State University and is funded through the U.S. Department of Education Fund for the Improvement of Post-Secondary Education (FIPSE) Learning Anytime Anywhere Partnerships (LAAP). No official endorsement should be inferred. Copyright 2000-2001 WebAIM. All Rights Reserved.
Part 1: for HTML
The following standards are excerpted from Section 508 of the Rehabilitation Act, §1194.22. Everything in the left-hand column is a direct quote from Section 508. The other two columns are only meant to serve as helpful guidelines for complying with Section 508. These guidelines are suggestions only, and are not part of the official Section 508 document. For the full text of Section 508, please see http://www.access-board.gov/news/508-final.htm.
Note 1: Until the longdesc attribute is better supported, it is impractical to use.
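Until support improves, a common workaround is to pair the longdesc attribute with a short visible "d" link pointing at the same long description. A sketch, with hypothetical file names:

```html
<!-- longdesc points to a full text description of the image; because
     browser support is uneven, a visible "d" link to the same page
     is provided as a fallback. File names are illustrative. -->
<img src="chart.gif" alt="Chart of test participation rates"
     longdesc="chart-description.html">
<a href="chart-description.html">[d]</a>
```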
Note 2: "Text-only" and "accessible" are NOT synonymous. Text-only sites may help people with certain types of visual disabilities, but are not always helpful to those with cognitive, motor or hearing disabilities.
Note 4: When embedded into web pages, few plug-ins are currently directly accessible. Some of them (e.g., RealPlayer) are more accessible as standalone products. It may be better to invoke the whole program rather than embed movies into pages at this point, although this may change in the future.
Note 5: Acrobat Reader 5.0 allows screen readers to access PDF documents. However, not all users have this version installed, and not all PDF documents are text-based (some are scanned in as graphics), which renders them useless to many assistive technologies. It is recommended that an accessible HTML version be made available as an alternative to PDF.
Note 6: PowerPoint files are currently not directly accessible unless the user has a full version of the PowerPoint program on the client computer (and not just the PowerPoint viewer). It is recommended that an accessible HTML version be provided as well.
Part 2: for Scripts, Plug-ins, Java, etc.
The following standards are excerpted from Section 508 of the Rehabilitation Act, §1194.21. For the full text of Section 508, please see http://www.access-board.gov/news/508-final.htm.
Appendix D. Guidelines for Accessible Web Page Design
Computer Accommodations Program at the University of Minnesota
Browser-Specific HTML Tags
Cascading Style Sheets (CSS)
If the function of a script is to fill the contents of an HTML form with basic default values, the text inserted into the form by the script should be accessible to a screen-reader. In contrast, if a script is used to display menu choices when the user moves the pointer over an icon, functional text for each menu choice cannot be specified and a redundant text link must be provided for each menu item.
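A minimal sketch of the first case (the form and field names are illustrative): a script that inserts a default value into a standard form field leaves that text readable by a screen-reader like any other form content.

```html
<form name="order" action="/submit" method="post">
  <label for="state">State:</label>
  <input type="text" id="state" name="state">
</form>
<script type="text/javascript">
  // The inserted default is ordinary form text, so assistive
  // technology reads it like a value the user typed.
  document.order.state.value = "Minnesota";
</script>
```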
Roll-over Controls (onmouseover)
Roll-overs that change the appearance of a control or cause additional information to be displayed do not cause a problem for screen-reader users and may provide useful feedback for users with learning disabilities or mobility impairments. However, screen-reader users will not be able to access pop-up information or menus. Be sure to include the text of pop-up information in the ALT tag for the graphic and provide redundant links for pop-up menu items.
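One way to provide that redundancy, assuming a hypothetical showMenu/hideMenu script and illustrative file names:

```html
<!-- The pop-up menu opened onmouseover is not available to
     screen-reader users, so the same choices are repeated as
     plain links below the graphic. -->
<img src="subjects.gif" alt="Subject menu"
     onmouseover="showMenu()" onmouseout="hideMenu()">
<p>
  <a href="math.html">Mathematics</a> |
  <a href="reading.html">Reading</a> |
  <a href="science.html">Science</a>
</p>
```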
Font (Face, Size and Color)
Visitors must be able to vary the size of the display font. Specify font sizes as relative values rather than absolute. CSS allows font-size to be defined in a number of ways. Specifying font size in ems — rather than pixels — is the preferred method for web accessibility, as it is relative to the user's default font size.
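A brief sketch of the difference between relative and absolute sizing:

```html
<style type="text/css">
  /* Relative units scale with the visitor's preferred size */
  body { font-size: 1em; }
  h1   { font-size: 1.5em; }
  /* Avoid absolute units such as font-size: 12px, which resist
     resizing in many browsers */
</style>
```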
Color alone should not be used to convey information — this information may be inaccessible to individuals who are color-blind, screen-reader users, individuals with low vision, users of some hand-held devices, and individuals using a monochrome display. When using colored text and/or a colored background, be sure that the contrast between the text and the background is sufficiently high at all color depths. Some optimal text and background combinations for those with color vision anomalies include black on white, white on black, yellow on black, and black on yellow.
Backgrounds and Wallpaper
Blinking Text and Marquees
Acronyms and Abbreviations
When used as part of a link, the <ACRONYM> and <ABBR> elements should be used to denote and expand acronyms and abbreviations. The <ACRONYM> tag will cause the full text to which the acronym refers to be read by a screen-reader and visibly displayed when a mouse pointer is placed on the link containing the acronym. The <ABBR> tag does not visibly display any text — the expanded text is read by screen-readers only.
Although it is mostly a matter of personal preference and common sense, the following guidelines may help to determine when to use the <ABBR> tag and when to use the <ACRONYM> tag:
Use the <ABBR> tag for familiar abbreviations and acronyms (e.g., FYI, ASAP, CST/CDT, lbs. and the like).
Use the <ACRONYM> tag any time the acronym refers to a place, organization or other proper noun. This will aid sighted visitors in identifying the acronym.
Note: The <ABBR> and <ACRONYM> elements are part of the HTML 4.0 specification and may not be interpreted by some browsers — they will probably not be recognized by most text-only browsers, such as Lynx.
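For example, using an acronym from this report's own publisher (the link target is hypothetical):

```html
<!-- Expansion shown as a tooltip and read by screen-readers -->
<a href="nceo.html">
  <acronym title="National Center on Educational Outcomes">NCEO</acronym>
</a>
<!-- Expansion available to screen-readers but not displayed -->
<abbr title="for your information">FYI</abbr>
```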
Multiple Column Layout
In the absence of an ALT tag, screen-readers will speak the path and file name for the graphic — this rarely provides any useful information. Graphical browsers with picture loading disabled will display an empty gray rectangle. ALT tags are limited to 256 characters.
Tables and Charts
Convey all of the information in the text body of the document.
Use the graphic as a link to a complete text description of the information being conveyed.
Provide a separate text link to a complete text description of the information being conveyed. These links may be hidden from sighted visitors by making the text color the same as the background color on which they appear; however, leaving them visible may make the additional information useful to visitors with learning disabilities and other cognitive impairments.
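When the information is presented as a data table, the header markup called for in §1194.22(g) and (h) lets a screen-reader announce the matching row and column headers with each cell. A sketch with illustrative figures:

```html
<table summary="Students tested, by grade (illustrative data)">
  <tr><th scope="col">Grade</th><th scope="col">Students tested</th></tr>
  <tr><th scope="row">4</th><td>1,200</td></tr>
  <tr><th scope="row">8</th><td>1,050</td></tr>
</table>
```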
Convey all of the information in the text body of the document.
Use the animation as a link to a complete text description of the information being conveyed.
Provide a separate text link to a complete text description of the information being conveyed. These links may be hidden from sighted visitors by making the text color the same as the background color on which they appear; however, leaving them visible may make the additional information useful to visitors with learning disabilities and other cognitive impairments.
If the animation contains meaningful audio, a separate, text description of the audio portion must be provided for persons who are deaf or hard of hearing.
Placing long lists of text-based links close together in rows or columns increases the probability of mouse errors for persons with mobility impairments. Use vertical lists of well-spaced links whenever possible. Links listed horizontally or in a multicolumn fashion must be visually distinct and separated by vertical bars (|) or graphics with appropriate alternative text (e.g., | or *). Avoid enclosing links in brackets, braces, parentheses, or other punctuation.
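For example (link targets are illustrative):

```html
<!-- Preferred: a vertical, well spaced list of links -->
<p><a href="math.html">Mathematics</a></p>
<p><a href="reading.html">Reading</a></p>

<!-- If links must run horizontally, separate them visibly -->
<a href="math.html">Mathematics</a> |
<a href="reading.html">Reading</a>
```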
Client-side imagemaps allow both mouse and keyboard navigation. By specifying an appropriate ALT tag for each active region, a client-side imagemap functions like a series of links for users of adaptive technology, some hand-held devices, text-only browsers or browsers with picture loading disabled.
In contrast, server-side imagemaps do not allow keyboard navigation or the specifying of ALT tags for active regions. Include redundant text links for each active region of a server-side image map in order to ensure access for visitors using adaptive technology, some hand-held devices, text-only browsers or a browser with picture loading disabled.
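A client-side imagemap that follows this guidance might look like the following sketch (coordinates and file names are illustrative):

```html
<!-- Each AREA carries its own ALT text, so the map behaves like a
     list of links for screen-readers and text-only browsers. -->
<img src="campus.gif" alt="Campus map" usemap="#campus" border="0">
<map name="campus">
  <area shape="rect" coords="0,0,100,50"
        href="library.html" alt="Library">
  <area shape="rect" coords="0,51,100,100"
        href="labs.html" alt="Computer labs">
</map>
```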
Remember: individuals, with or without a disability, may not have the equipment or software necessary to access multimedia presentations.