Part II: Accessibility and Accommodations Challenges in Technology-Based State Assessments

Led by Martha Thurlow, Ph.D., Director of the National Center on Educational Outcomes, this presentation highlights challenges currently faced by states implementing technology-based assessments.

Transcript: 

- [Voiceover] Hello, this is Tracy Gray from the American Institutes for Research. We are part of the collaboration with FHI 360 on the Center on Technology and Disability. We're just delighted to have you join us today, where we have the opportunity to hear from Martha Thurlow and her outstanding team from the National Center on Educational Outcomes. This is our second webinar that focuses on the issue of accessibility and accommodations, and the challenges they present in technology-based state assessments. This is a topic that has come up in our discussions with state and district leaders, and we're very fortunate to have Martha and her team here with us today to provide us with some insights and guidance on the challenges associated with state assessments, particularly as they relate to students with disabilities. I just want to take a moment to direct your attention to the lower right-hand corner of the screen, where you'll find the chat area. Please post your questions there. We will be taking these questions throughout the webinar, which allows us to make sure that we are meeting your needs. Also, this webinar, like the first one, will be posted on the CTD Institute website, ctdinstitute.org. The last logistical point is that we encourage you to complete the survey at the end of this webinar to give us your feedback on the session and to identify other ways that we can be of support to you. So Martha, take it away.

- [Voiceover] Thank you very much, Tracy. Welcome, everyone. We are glad to be back with you. We last spoke, with many of you anyway, at the end of April, and at that point we covered some general ABCs about assessments and accessibility and kind of the lay of the land right now. Today our plan is to dig a little bit more into the topics around accessibility and accommodations challenges in technology-based state assessments, and we pulled information from a couple of different sources. All of us will be talking today, and we'll just hand the presentation over to each other as we go. We won't take time to introduce ourselves. We are all working here at the National Center on Educational Outcomes to, hopefully, in the end improve the assessment systems for students with disabilities. So today we're gonna talk about some general technology challenges. We're gonna get to solutions, so that's why that's there. We'll talk about terminology challenges that have been facing us as we moved into new assessment systems, and then dig more into some of those accessibility challenges and, we hope, some of the opportunities that are out there. So I'm going to start with highlights of some of the technology challenges. Of course, as you all know, we still have that digital divide, and that has been part of what has gone on with the most recent version of the assessment systems out there; it has impacted what has happened for students, for schools, and for teachers. We've had frequent complaints that schools didn't have enough computers, that there were huge scheduling problems when students had to go into the technology lab to take their assessment, et cetera, and, associated with that, complaints about the bandwidth not being sufficient for the needs of the assessments. The assessments are new, technology-based, and have innovative item types, and all of that has created some huge challenges. The Council of Chief State School Officers has talked about many of these kinds of things as what they call testing interruptions, and they have indicated that these occur even when the best technology is available. So I pulled their report, shown here, titled Recommendations for Addressing the Impact of Test Administration Interruptions and Irregularities, because I think there's some interesting information in it, and if you have time, you may want to refer to it. It does address accessibility issues, so these are ones that we've pulled not from our experiences with the various states, but ones that were highlighted by CCSSO in its document, things like resources not available or not working properly. Resources is often a term indicating accessibility features or accommodations, so things like the highlighter not being available at all, or being available for only a portion of the administration time. Another one that they pulled out was that the accessibility features were not working properly; for example, the read aloud feature wasn't functioning for a portion of the administration time. These are things that CCSSO noted from its experiences with states, and you'll see that we noticed some of these as well. The importance of these, particularly for students with disabilities, and this is highlighted by CCSSO as well, is that they may affect students with disabilities differently from the way they affect other students.
For example, a student with a disability might get more frustrated and then just stop taking the test, or stop trying and answer every item the same way. So it's something that we need to be attending to: when we run into challenges, those challenges may in fact impact different students in different ways, and we need to be particularly aware of that for students with disabilities. I pulled from that document the framework that was identified for analyzing test interruptions. I don't expect that you can necessarily read all this, and I'm not going to go through it all, but it asks questions like, how prevalent were these various types of interruptions? Were there slowdowns that affected the rate of test completion, which would have an impact on students? For questions like that, I would recommend that you get the document. The link to it is provided here. So, that's just the beginning of our tour through some of the challenges. The next set of challenges really are more related to terminology, we think, and so I'm gonna turn it over to Vitaliy to take over here.

- [Voiceover] Thank you, Martha. So we thought we would bring up some of the issues that have been coming up in our conversations with states and consortia, around terminology challenges that have been occurring. For some reason my slide has disappeared on my screen, but maybe it's because they're making me a presenter, so here we go. As this paradigm shift is occurring in the field, and as students get more tiers of support from their large-scale assessments, those supports also come with challenges, because different terms are sometimes used to indicate the same support. For example, a strikethrough in one system can be called an answer choice eliminator in another system, or sometimes you have a feature that is called mark for review, and then another vendor will call it flag for review or bookmark. This of course has implications for how we train our teachers and how our students participate, especially if they have to interact with several platforms in their state when participating in content assessments or English language proficiency assessments. And just the opposite of this: sometimes the same term can be used to mean different things. If we talk about text-to-speech, it can mean different things depending on whether we are talking about content assessments or perhaps an ELP assessment, or even within two kinds of content assessments; sometimes text-to-speech is available for certain elements of assessments but not for other elements. So certainly this is an issue. As we move along, we wanted to look at the accessibility frameworks in general, and as you look at the terminology that is currently used in the various large-scale assessments of the consortia, you see that even for the tiers of support, there are various and different terms in place. We can look at the Partnership for Assessment of Readiness for College and Careers, and you see that for all participating students they have features for all students; for some students, with educator input, they have accessibility features identified in advance; and then for some students with documented needs, they also have accommodations. Typically these are students with disabilities, but in PARCC's case it's also some English learners. For Smarter Balanced, on the other hand, you see that the first tier is called Universal Tools, the second tier is called Designated Supports, and the third tier is called Accommodations. And you also note that some consortia have only two tiers of support; for example, WIDA has two and DLM has two. These consortia also have several specifications and sometimes administrative considerations not considered to be in a tier; these are usually described in the test administration manuals, and this graph does not capture that. In light of all this, as we were communicating with the consortia about these issues, it became apparent that we need to do something, and one of the initiatives that we started here at NCEO as part of our DIAMOND project is a white paper on common language for accessibility as currently used by vendors and assessment consortia. This white paper focuses on some of the paradigm-shift events that have been happening, and it talks about stakeholders and implications for each category of stakeholders.
It addresses some contextual issues around accessibility and the terminology that is used for accessibility, and finally we provide some considerations and some recommendations for what could be done to bring all the stakeholders together and maybe start speaking that common language that we're striving for. I also wanted to note that we will be presenting this paper at an upcoming pre-session of the National Conference on Student Assessment. You're all invited to attend. If you're planning to attend the conference, the registration is free, and I will chat the link here as well. When you register, you will be able to participate in this session, which will include a panel of vendors who will demonstrate some of the accessibility features and accommodations they currently use, primarily Smarter Balanced and PARCC vendors, as well as participate in conversations focused on students with disabilities, English learners, and other general education students who are now able to take advantage of these accessibility supports. This will happen on June 20th, between 8:30 and 11:30 AM, at the National Conference on Student Assessment. And finally, we wanted to note the product that was developed by the Office of Educational Technology, the National Education Technology Plan. In its Future of Assessment section, it compares traditional assessments and next generation assessments and emphasizes the need for more flexibility, responsiveness, and contextualization, keeping in mind universal design approaches and thinking about as many students, or hopefully all students, from the beginning as we design those assessments. It notes that when both assistive technologies and assessments effectively interoperate, students are better able to demonstrate what they know and how to apply this knowledge. And with this I'm going to invite Sheryl Lazarus to talk to us about the accessibility challenges.

- [Voiceover] Thank you, Vitaliy. There are so many things that can be challenging when it comes to accessibility. As we shifted to technology-based assessments, there are some challenges we've always had, but also some new ones that are really occurring as a result of that shift to technology. One of the new ones is the inconsistency of testing platforms across different assessments. Things work differently from one assessment to the next, as Vitaliy talked about. The accommodations and accessibility frameworks vary from one to the next, and that just makes it really challenging for everyone, for educators, IEP teams, and the technology people, to stay on top of all these differences. It also creates challenges for the students, as navigation features vary and icons differ from one assessment to the next. It just makes things more difficult, especially for some struggling learners, including some students with disabilities. Now, some of the accessibility challenges that I particularly want to highlight are issues related to scrolling. In the study that I mentioned on the last webinar, scrolling issues were identified by teachers as one of the areas that were particularly problematic, and just in general, as we talk with people in districts and states, scrolling keeps arising as a problem. All students can have difficulty with it; it's a widespread problem, but it can be a particular issue for kids with physical disabilities, who have to figure out how to navigate and scroll in ways that work. Similarly, it can be a particular challenge for English learners. Another problem that was identified in the 2015 administration, and unfortunately continues to be a challenge sometimes with the administrations this year, is that accessibility features and accommodations just don't always work as intended. This can be very, very frustrating for everyone, from the teachers who are working with students to the students themselves, who get frustrated, and then it all comes back to those of you at the states who are working with the test vendors. So that's frustrating, and hopefully as the new assessments become more established, some of these glitches will get worked out, hopefully very soon. Another thing that we've identified as sometimes being an issue is that the systems do not always allow universal features to be turned off. For some students with disabilities, something that's universally available to all students can cause problems, be confusing, or make the assessment more difficult for them. For example, an assessment may allow all students to adjust color contrast, but a particular student might get really distracted doing that and may want it turned off beforehand. So it's problematic if there's no capability to turn off unneeded universal features as an accommodation. Another area that has sometimes been identified as a problem for students with disabilities is a limitation on the number of times something can be re-read, or an accommodation that did not activate for some reason. Other challenges are that easy-to-understand, easy-to-use materials aren't always available, and sometimes there are no materials at all.
For example, there isn't always guidance on how to configure computers and other devices to handle videos, and sometimes tutorials just aren't very kid friendly or don't really show students how to use all of the accessibility features and accommodations that they may be using. Something that we have been particularly concerned about here at NCEO is that the use of accessibility features and accommodations is not tracked by the assessment systems. Prior to the development of technology-based assessments, there was a lot of discussion about how we could track everything and see what students are using and what students aren't using, and that was supposed to be a huge benefit of moving away from paper-pencil tests to technology-based assessments. Do students actually use the accommodations that are assigned to them? Do they only use them for the first two questions and then quit, or what's happening? But we've been disappointed that no consortium assessments have been able to do this. They can only track the accessibility features and accommodations that were selected for a certain student, but they cannot track whether or not the student actually used them, which is so important for evaluating what worked, what went well, and what didn't go well. It's important to know: did a student actually use the accommodations that were selected for him or her? Another concern that arises is issues related to the use of assistive technology and security. Sometimes, because of security concerns, students are not allowed to use certain types of assistive technology that they regularly use in instruction. Sometimes they are instead required to use embedded technology, for example, the embedded read aloud, rather than the text-to-speech that they normally use. Basically this happens for two reasons: one is concerns about security, and the other is incompatibility between the AT and the assessment. So that's been another challenge. I'm now gonna turn it over to Laurene Christensen, and she'll tell you a little bit about some solutions. Laurene.

- [Voiceover] Thanks, Sheryl. So in thinking about the solutions, I think it's important to recognize that of course we're still early in the process of transitioning to these new assessments, and so the solutions are really ongoing, and I think that your input on possible solutions to the challenges is really important as well. In thinking about solutions, it's really important of course to have good test security policies and procedures that really address some of the AT issues that have been raised, so making sure that the features that are not allowed on the assessment, for example, are able to be turned off. Also, of course, that the student is able to use the AT that they regularly use during instruction, unless of course test validity is compromised, and I know, for example, with Smarter Balanced there's a long list, always being added to, that really does lay out all of the compatible AT that can be used. Some additional solutions, of course: for students to be successful at using assistive technology, they need to be familiar with the AT that they will use on the assessment, and they need access to a practice test session where they can use the AT to see that it works as intended in the testing environment. And of course, ideally, we want to check for compatibility issues before the test day, and make sure that tools and procedures are developed to help us report compatibility issues back to the assessment consortia, so that these issues can be addressed more broadly. I think also it's important to make sure that vendors are able to demonstrate their capabilities before signing a contract. I know there have been some issues that have arisen due to potential mismatches between what we hoped for and what actually happened, so I think having that demonstration is really important. Along those lines, making sure that practice tests have the same embedded features as the actual assessment, so that students can actually practice with what will really be on the assessment itself. Similarly, when students do practice the assessment, it's important for them to check out the technology, not just the assessment items, because we really wanna make sure that they have practice both with what the items are and with how the technology will work. And along those lines, I think it's important to have students involved in cognitive labs and usability studies. Here at NCEO, we've done some cognitive labs related to how students perceived some of the features, such as read aloud or sign videos, and that was really enlightening, I think, in seeing how students interacted with both the read aloud on the computer and the sign video, and there are reports from that work available on our website. Also, it's important to include students with disabilities in infrastructure trials prior to the testing. So I think that there are ways that we're making progress, and along these lines, I wanted to alert us all to the question and answer that came up in the chat, where Laura mentions that the accessibility challenges are important to bring to the vendors' attention, and again Vitaliy emphasizes that the pre-session that we're having at the National Conference on Student Assessment really is an opportunity for states and assessment vendors to come together and talk more about these issues. So the link is listed there, and hopefully you all can attend.
And with that, I'm gonna turn it back to Martha to conclude our session today.

- [Voiceover] Thank you. We have sped along today, but I wanted to end with this and hope that you will jump in with some chats about what you're experiencing, what you've heard, or what your questions are. I wanted to make sure that, one, we remember there were lots of positives that happened during this last year or so as the assessments mostly turned to being technology-based. There were lots of successes, lots of positive comments about student engagement when the test was on the computer rather than on paper, and, I'd say, many positives that came out. We worry most about the challenges that we've faced, and so that's what most people are talking about, all those challenges and what needs to be done, and definitely there are things that need to be done, but I think we've learned that moving to technology-based assessments is probably going to be the best thing for our students, not all of them, but for most of them. So I wanted to end with some opportunities that are still out there. I'd say the Race-to-the-Top Assessment Consortia were kind of a first step that pushed lots of states into technology-based assessments, and that's why we have so much great information about the challenges. Things were done at break-neck speed, which may not have seemed like it all the time, but it really was fast for vendors to shift from developing their paper-and-pencil tests and getting those all shipped out, et cetera, to creating technology-based systems that really were supposed to work for all students, including those who needed various accessibility features and accommodations. So we've made big progress there, but there are still some opportunities out there. One I wanted to highlight, and another one I just found out about that is not indicated here in the slides: I was contacted the other day by a group that has received a $20 million award from the U.S. Department of Education to increase access to information and communication technologies for individuals with disabilities. It seemed to me that this was right along the same pathway as the other efforts that are going on, and perhaps some of you know of the Global Public Inclusive Infrastructure, GPII, a large international non-profit effort working on increasing accessibility and meeting the individualized needs of everybody through whatever technology they're using. Sounds good, sounds like it would address many of our assessment issues. I think that's just beginning, and we'll be looking forward to hearing more about it. Oh look, Jim, you're on the phone, thank you. So as we go along, if you'd like to say more about that, that would be great. I did want to highlight the Innovative Assessment Program, a funding opportunity from the U.S. Department of Education that is now in a comment period, which I kind of see as the next opportunity for the large-scale assessments and for thinking about how to improve them. The competition really focuses on that innovative aspect, but I think we have to remember that it doesn't do any good to be innovative if we're not appropriately including all students. I think in the Race-to-the-Top assessment program, that became very clear, and I hope that as people think about this new opportunity for funding for innovative assessments, they will not forget that the assessments need to be appropriate for all students, including those with disabilities.
So what I wanted to do was just quickly highlight that the competition is going to have three areas of focus, or three priorities, I guess. The first priority area is developing innovative assessment item types and design approaches, and you can see the kinds of goals here: to develop, evaluate, and implement new, innovative item types for use in summative assessments in reading/language arts, mathematics, or science, and to develop approaches that transform traditional, end-of-the-year summative assessment forms with many items into a series of modular assessment forms with fewer items. These are great goals for an assessment system. And then disseminate, so that others know how to do it. So this is an opportunity for states to apply and work on innovative item types. Now, as you know, there have been many challenges related to innovative item types, particularly for students who have visual disabilities and are not able to drag-and-drop, for example, so lots of thinking needs to go into this opportunity as states apply for it. The second priority is improving the scoring of assessments and score reporting, and here again it talks about developing innovative tools that leverage technology to score assessments, and about proposing projects, and there's some long language here, but to address the needs related to score reporting and improve the utility of the information about student performance, and again highlighting disseminating how that gets done. The third priority I haven't highlighted. It's a priority that focuses more on what I would call state audits, for states to be able to look at all of the assessments going on in their systems to determine whether all the assessments are needed, whether there is a way to prioritize assessments, et cetera, so that notion of addressing the problem, the perceived problem, of too much testing. With that, I am ready to turn it over to all of you. I hope that we can get lots of interaction, questions, and your thoughts about the kinds of challenges you heard about that maybe we didn't identify, or solutions that you found as you heard about assessments going on during this past year.

- [Voiceover] Great, thank you so much to Martha and her team. We've got an interesting question that was posted by John from Maine. He's asking, did anyone measure and report the number or percentage of student tests that were essentially invalidated by all of the snafus that your team has identified, Martha?

- [Voiceover] Oh, that's a great question, John. I do not know of anyone who's reported that, for sure. I'm trying to think back in my head whether, for example, states have documented that. I would certainly hope so, but I have not seen any reports about those numbers. Maybe some of you on the phone know whether in your state, for example, they have information on that available. That would be great to know.

- [Voiceover] This is Sheryl. Depending upon state policies, these may somehow roll into participation rates, but they wouldn't be reported separately. If whatever occurred resulted in a test score that could not be reported, then depending upon policy, it may have some sort of effect on participation rates.

- [Voiceover] That is true. We do know that there were states that had huge problems because of things that no one would've ever expected. You probably heard about the backhoe incident in Kansas that wiped out the availability of the assessment in Alaska, I think it was, and Alaska decided just to cancel all testing. I don't know if they're sticking with that decision, but there were some really, really challenging things that did occur like that. So, Laura has asked, when we were collecting information about challenges, did the deaf and hard of hearing and visual impairment groups report their specific challenges? I'll look to others to jump in here. I don't remember that there were any specific reports, but I sure do know that there was conversation about the challenges for those students. Some of the challenges relate to the policies that were implemented by the various consortia or states in their assessments, such as what kind of sign language could be used, for example, or what needed to be done for a student who needed to use Braille, and whether that was available. So yes, there's definitely been discussion. I don't think we've been able to collect any systematic data on the challenges that occurred.

- [Voiceover] But it's certainly a really good point, and obviously information that would be of enormous value to the field.

- [Voiceover] Yes, agreed. And as Laura's typing, maybe she has an answer, I do know that some of the consortia are continually looking at how they are addressing accessibility for these populations in particular. I would say that moving from paper to technology has been a challenge, particularly for these students, but it opens up some wonderful opportunities that take a little time to realize, I think.

- [Voiceover] And in the survey of teachers in 2015, some of the responses from teachers definitely highlighted that there were some issues specifically related to technology for the visually impaired and deaf/hard of hearing students. I don't have the raw data in front of me at the moment, but my memory is that many of those concerns were related to technology either not being available or being sort of available but not working quite right, and I believe that many of those issues were resolved prior to 2016, but I'm sure that there are still some lingering ones.

- [Voiceover] And Laura noted that at the state level in Illinois they had lots of discussions with various stakeholder groups about those two populations, and it would be interesting to see what was going on nationwide. I agree. Great question. It would be good to be able to collect some more systematic data, I think.

- [Voiceover] Yeah, I think that would really be of great value. It is surprising that OSEP and other funders didn't seek to get that information, but it may be that they felt it was just too new to start gathering all of that data and information.

- [Voiceover] It could be, and we always have a little bit of worry, maybe I shouldn't even say this, that sometimes decisions were made just not to have those students take the test at this time, while everybody's still figuring it out.

- [Voiceover] It is quite possible, Martha.

- [Voiceover] Yes.

- [Voiceover] Some other questions. This presentation raised so many interesting issues. I'm gonna put one out there. Martha, we've heard from talking with many of our stakeholders in the field that there are states that are not using the consortium tests and are turning to the SAT, the ACT, while they're trying to figure out what kind of state assessments they're going to develop. Could you talk about that as an emerging challenge?

- [Voiceover] Yes, it is definitely an emerging challenge, and we've looked most specifically at the high school level, I would say. The challenge is in terms of the policies that ACT and SAT, for example, have for accommodations that students may use on the assessment. I think we will see things changing a bit over time, but in the past at least, and I'll ask Sheryl to jump in as soon as I stop talking, we've seen that those college entrance exams have pretty much decided what they're going to allow or not allow, that there's kind of a very intricate procedure for requesting the use of an accommodation, and even after a request, it may not be granted. The issue that's coming up now is that when states adopt these assessments so that students are able to get college-reportable scores, students with disabilities who need an accommodation that doesn't get approved don't get a college-reportable score, and they're not getting the same benefit from participation. Sheryl, do you want to jump in?

- [Voiceover] Right, it's exactly as Martha just said. There is a complicated request process, and in most states the IEP team is not the final decision maker on accommodations for these assessments, so they complete and send off the application materials requesting the accommodation, and it's not transparent at all why some students may receive an accommodation and others may not. I sense sometimes that perhaps it depends more upon how well the form is completed than on the characteristics of the student. Then, as Martha said, ACT or SAT typically makes the decision about whether or not the student is allowed an accommodation for a college-reportable score. If the accommodation is not allowed by the test vendors, the state may still allow the student to use it, and then there's a state-allowed score, and that score can count for accountability or graduation purposes, but it would not count for college entrance purposes.

- [Voiceover] And for those of you who are in this situation, where your state is using one of those tests or is considering using one, the new version of the Elementary and Secondary Education Act, the Every Student Succeeds Act, specifically allows for locally-selected tests, I can't remember what the exact term is that they use, and mentions ACT and SAT as well as some others, but it specifically says that the use of the test cannot result in differential benefit to, for example, students with disabilities. So if you're interested in that topic, it's worth looking at the language that came out as they were negotiating regulations related to those assessments.

- [Voiceover] That's very helpful. So do we have any other questions? Any other issues that have come up as you're working in your districts and states that you'd like addressed? Jim, do you want to take a moment to give us an update on the great work that you all are doing? Jim, I think you are on mute.

- [Voiceover] Oh, sorry. "I can't speak," he said.

- [Voiceover] Ah, okay. Well, I'll just speak for him. The GPII is a project of Raising the Floor, which is just a terrific consortium of academic and industry leaders, NGOs (nongovernmental organizations), and individuals, and they have really been leading the charge in furthering equity in both digital devices and learning, and really working to ensure that everybody benefits from the new technologies that are emerging. They're just a terrific group doing a lot of great work.

- [Voiceover] Sounds so promising. We're keeping our fingers crossed.

- [Voiceover] Yes, we are, yes we are. Jim is typing. Thank you, and you are so very welcome, and thank you for the great work that you do. Any other thoughts or feedback? Well, if there isn't anything else, we would very much appreciate it if you would take a moment to look at the chatbox on the right-hand side, where Jillian Reynolds has posted the short survey to give us feedback. That would be enormously helpful, and Devon Wellington has also included the link to the survey. So on that note, our thanks to Martha and her team at NCEO, and we look forward to working with you and seeing you online again. Take care.

- [Voiceover] Thank you.

- [Voiceover] Bye bye.