Don't Wait to Communicate: Why Your Child Needs AAC

This webinar is led by Dana Nieder of the Uncommon Sense blog. It focuses on Dana's journey as a parent of a child with special needs. Dana talks about the importance of early access to Augmentative and Alternative Communication (AAC) devices and the impact it has on family life. She also discusses the relationship between speech acquisition and AAC, and various technology options. Dana argues for the importance of presuming competence in a child's communicative abilities and the need to plan and build a rich vocabulary for a child using AAC.


Transcript: 

- Hello everyone, welcome to another Center on Technology and Disability Cafe Event. My name is John Newman, I'm with the PACER Center, we are a proud partner of the Center on Technology and Disability. And I'm just here to welcome and introduce tonight's speaker. Before I get into introductions here, I would just like to remind everyone if they could please mute their lines to make sure that no side conversation comes in. You can do that by pressing star 6 on your phone, or if you are using your computer microphone, you can look for that microphone icon and press that to mute your microphone. So now onto the presentation here. We are so, so excited to present and hear from tonight's speaker, who is Dana Nieder.

Here at the Center on Technology and Disability, it's really our job to bring you all the voices in the assistive technology field. And that includes parents, and really, we could not think of a better parent to bring to speak here tonight. As many of you know, Dana Nieder writes a very popular blog titled "Uncommon Sense". It's written with such great insight, and also humor, which makes a big difference. Through her experiences as a parent, she just has a lot to speak to and teach both parents and professionals in this field about the importance of augmentative and alternative communication and what that means to a child, their family, and kind of, you know, just their opportunities. So we're so excited, and I'm going to turn the floor over now to Dana Nieder, and we'll get started here.

- Hi everyone, can you all hear me ok? I see people typing, so I'm hoping that means yes. Perfect. So I'm really excited to be here today, I love talking about AAC, and without it, my daughter's life and my family's life would be very different than it is today. Before I get started, could I just get a little poll of who you guys are, so I know who I'm talking to, if you could just tell me parent or professional, or both, I'm just going to take a second to see what comes in. Alright, that's a nice mix of people. Wow, ok, so I'm glad to see parents and professionals, because the more people that are ready to provide early access to big AAC systems, the better it will be for all of our kids. So let me jump in and tell you what we're going to talk about. I'm going to introduce myself, talk about what AAC is, I'm going to talk, my kind of big, main idea is early access to a nice, big, robust AAC system.

And so for the early access part, I'm going to talk about why we shouldn't wait, and about the relationship between speech acquisition and AAC, because that's often a concern, will AAC prevent my child from developing speech? Then, I'm going to talk about the robust system half of the equation, which is kind of the old AAC hierarchy and my feeling that kids need all of the words all of the time. Then I'll talk about the myth of mind-reading and the impact that early access has had on my family's life. And I've got some videos, and some participation, and hopefully this will be really fun. And you should feel free to type questions in the chat window, I might not see them right away, but feel free to use that. So, a little about me, I am a former middle-school science teacher and a current speech language pathology graduate student, due to my experiences with AAC, but I'm really here because I'm Maya's mom. And I'm Will's mom, too, but he's kind of getting short-changed in this equation.

But there's a little picture of Maya when she was a baby, and with her walker, and now today. Maya has a rare genetic syndrome which causes global challenges and severe apraxia. She started receiving speech services around 10 or 11 months old, but our speech sessions looked primarily like arts and crafts time. There was very little progress, if any, in the way of functional communication, and I was so hungry to hear what she was thinking. One day in a parent support group meeting, the leader said, maybe you should look into assistive technology. And I was intrigued, because I had never heard that phrase before. I read up, and I started trying to figure out ways to use pictures or symbols or apps or anything with her so that she would have a successful way of communicating with us. And over here in this picture, see if I can make this happen, here are some of the cards we used. We used picture cards, and I glued them to foam because she didn't have the fine motor skills to pick them up.

And we tried, over here, we tried using a binder with symbols, there's a picture of Maya picking out a new binder, when the symbols got too big, and using an app for the first time, and that's the kind of blurry picture on the, oh, look at that, now I got the arrow working. These pictures are from a field trip, and you can see in her hands, she's using a little communication board. So I'm going to kind of talk about some of the flaws of these things here: picture cards, communication boards, and the high-tech devices that we buy. Sorry about that-- I will try to speak louder into the phone, is that any better? Hopefully. Perfect, sorry, I didn't know I was so quiet. I haven't held a phone in a really long time. So let me first talk about what AAC is. So, for those of you who are totally new to AAC, it stands for Augmentative and Alternative Communication. And this is what I learned about it when I started doing research.

I learned, first, that all people are multimodal communicators. So we don't just communicate with the words coming out of our mouth, but we communicate with, over here, we see, our facial expressions, our body language, the gestures that we use, and it makes sense, but I had never really thought about it before. AAC is used to either augment the verbal speech that a person has, or to provide an alternative to verbal speech, if they're not able to access it yet, or sometimes with both. I learned that we were already using one type of AAC with Maya which was sign language. We had started signing to her, probably around six months old, which I think is pretty common with a lot of parents now.

But we were ready to move into this different phase of AAC, which would be some sort of aided AAC, when you actually use materials to help the communication. There's different types of AAC, like no-tech, which would be some of the communication cards or boards that I showed you, low-tech, which would be a recordable button, kind of like the Staples button that says, that was easy! Or kind of recordable boards, and you'll see a picture of that later, or high-tech, which would be the apps or devices. AAC does not remove connection or emotion from conversations. I've seen this come up in some dialogue between parents online saying, I don't mind using some signs or some picture cards, but I don't want talking with my child to be just staring at a screen. And so I wanted to show you my first clip of Maya and I, this is recent, talking about what games she wanted to play over break. And I think it provides a good example of how AAC is definitely not just staring at screens. Let's see if I can make this work. Tell me some ideas you have for things you want to do, that we might be able to do them before you go back to school?

- [Computer] Duck, duck, goose.

- You want to play duck, duck, goose?

- [Maya] Yeah!

- Who should play?

- [Maya] Daddy!

- Daddy?

- [Daddy] I love duck, duck, goose.

- Daddy said he wants to play duck, duck, goose. Do you think that maybe we could teach it to Will?

- [Maya] Yeah.

- Does Will know how to play?

- [Maya] No.

- Not yet? Ok, so duck, duck, goose would be a good thing that we could do. What else do you want to do while we're on vacation?

- [Computer] Duck, duck, goose.

- What else?

- [Computer] Guess Who.

- You want to--

- [Computer] Play.

- Play Guess Who?

- You do? Ok, we can try. Remember, I made all of those little, the little cards that have the Disney characters, but then some of them popped off. Do you want to try to fix it?

- [Maya] Yeah!

- Ok. Ok, am I back? Ok, good. Sorry, I see comments about audio and hopefully somebody else will answer them, because I don't know how to do that. If you couldn't hear the video, these are all available online on YouTube, and I can send you links if you weren't able to see it. But this video showed Maya and me talking; we both have iPads, and we're talking with the iPads, but Maya is using gestures and intonation to add what I think of as punctuation to the conversation. There's eye contact, there's excitement, there's kind of exclamation points, and she also uses speech when she can, which is something that I'm going to come back to shortly. So let's talk about early access. The bottom line here is: do not wait. If you are here because you're wondering if the child in your life could benefit from AAC, the answer is probably yes. The time to start looking for how to meet that need is now. Not right now, stay until the end of the webinar, but after that.

When I started searching for resources on using AAC systems with toddlers, I found very little online, especially material targeted at parents. Our SLPs had no experience using AAC with young children, although they had heard of AAC and knew of it as a thing that might be introduced in school if there was still a speech need. As time passed, I've learned that this is an unfortunately common experience. No child should have to wait until they're old enough for kindergarten to start expressing their thoughts. There are a range of reasons that children are deemed not ready for AAC, all of which are fundamentally flawed. Let's look at a few of them here. There's no such thing as too young to start using AAC. And I put that picture of that cute little baby, because when I see it, I just want to be like, oh hello! And that's kind of the point. We talk to babies from the moment they're born using tons of language, because we're hoping that they will be able to mimic it. If we have a child that's not able to mimic speech, we need to start talking to them with an AAC system to provide them with accessible language if they're going to be able to give language back. There is no such thing as too cognitively impaired for AAC.

When Maya was two and a half, she had her first cognitive evaluation. And at the time, they reported that her cognitive functioning was in the 0.4th percentile compared to her same-age peers. So that means that she was in the bottom half of the bottom percentile, cognitively, according to this evaluation. The truth is that there is no reliable way to measure cognitive functioning in children who are primarily non-speakers. The picture tests that they use require non-speaking children to point to the pictures and use them to communicate. One of my favorite stories about that was when Maya was eighteen months old, and we had an evaluation. They used a flip book, and they had a picture, like four pictures on one page, and one of them was a purple toothbrush. And they said to her, where's the hairbrush? And she pointed to the purple toothbrush. The evaluator had no way of knowing that we had read a story earlier that day about Elmo brushing his teeth. The evaluator had no way of knowing that Maya had a purple toothbrush at home.

Maya was trying to communicate something about the toothbrush, but instead the evaluator thought that she didn't know what a hairbrush was. These cognitive evaluations don't work on kids who are going to need pictures to talk, because the tests themselves use pictures. There's also no such thing as too behavioral. If you take a look at the picture on this slide, I think the little boy is communicating pretty clearly. He might be saying, give me that. He might be saying, you just took that from me. He might be saying, this isn't supposed to be out here, it's supposed to stay in my room. But he's definitely saying something with his behavior. If you can't talk, how else are you supposed to get your point across? Behavior is communication. Instead of focusing on possible reasons not to provide access to AAC, we are going to presume competence. Presuming competence needs to be the mantra of anyone who works with or lives with a child who can't reliably express themselves with speech.

Speaking children convey their competence through their speech. They ask questions, they argue, they make connections, they label things. They boss people around. They ask more questions. I see comments about volume, I will try to talk louder again. Non-speaking children can't speak to give clues about what is happening in their heads. And so we must presume competence and give them as many tools as possible to allow them to show us what they're thinking. For example, if I see Maya looking at a picture of a zebra, and I assume that she's thinking the word "animal", I'm having a fairly low expectation of what she might be thinking. If I see her looking at a picture of a zebra and assume she's thinking "zebra", "stripes", "black and white", those would be kind of a more average level of expectation. But what if she's looking at that picture and thinking "soft", "fuzzy", "prickly", "smelly", "lions eat you", "what do you taste like?", "do people ever eat zebras?", "who knows?" That's having open expectations.

Presuming that she is just as competent as anyone, and she could be thinking kind of anything at all. So one of the most common concerns among families and some professionals who might not have AAC experience is that they want to see if speech is coming. There is no reason to wait to see if speech is coming before providing access to AAC. AAC will not impede speech development, and every child has the right to communicate right now in whatever way they can. This opinion that AAC will not impede speech development is not actually just my opinion; it's also supported by the parents of many non-verbal children who use AAC, and by research, which is important. For some, AAC is a bridge, something to use until they have enough speech to no longer need AAC. For others, AAC is a long-term tool in their communication toolbox. I've never heard anyone say, "I wish we hadn't started with AAC so early." But I have heard many, many people say, "I wish we hadn't waited."

When speech is an available option, it seems to be used. And that can mean available motorically, as in, when a child can form the word, they will say the word, but it could also mean from a sensory perspective or a stress perspective. I've heard from some older users of AAC who are primarily speakers, but when they are in an overwhelming situation, they have difficulty accessing their speech, and choose to use AAC instead. I haven't met anyone who can speak and just chooses to use AAC for fun, because it's painfully slow. I'm an adult who is literate, who has pretty good fine motor control, and it takes me a long time to use a very well designed system to communicate with Maya. It's much, much faster to speak. And she's shown me this kind of tendency to use speech as the default, because as soon as she has the ability to say a word, she will say it, instead of using her talker. For example, a few months after we started using the device, she wanted yogurt, and so she said the sentence at the bottom of the screen here, but she code-switched in the middle. The words that are in purple are the words that she said with her voice, and the "yogurt" in the middle is what she said with the talker. So she said "I want" with her voice, switched to the device to say "yogurt", and then said "please" again with her voice. Because she had clear ways of saying "I want" and "please", but she didn't have any easily understandable way of saying yogurt.

In the worst case, starting early just means gaining a few extra months of hearing your child's thoughts before functional speech emerges. And in the best case, sometimes early access to AAC, actually, well not even early access, I'm sorry, just access, has been shown to increase the acquisition rate of verbal speech. It seems that one reason this may happen is that AAC with voice output provides a consistent auditory model, and what that means is, they can hear the word said the same way all of the time. So for example, they could say "button", "button", "button" with the device, and they can hear the word said the same way, which gives them kind of a chance to get extra modeling, extra input to practice forming that word. It also removes the pressure to try to form a word spontaneously, on the spot, which, sorry, I'm looking at this radio picture, which is why I put the radio there, it removes the pressure to form a word spontaneously and instead gives them a chance to practice when they are ready.

And gives them a different way to communicate in the moment. And the radio picture was there because it reminded me of, I don't know if those radios even exist anymore, but I used to record songs off the radio to try to learn the words, and I'd play them again and again and again, and that kind of consistent auditory model on demand helped me learn the songs. But it's not just my theory on learning songs; there is actual research about this. And I did have a chance to put my money where my mouth is on saying AAC won't impede speech development, when Maya's little brother was born and I gave him his own AAC device before he was speaking. And you'll see video of him later. So, I'm going to switch and talk a little bit about the type of system that we need to provide early access to, because not all systems are created equal. When I started researching AAC as a parent, I came across the idea of the AAC hierarchy. And I have to admit, it really made sense to me.

So, in the hierarchy, you kind of start with something, oh here's my arrow, can I pull it down, ok, I can't get to it. So, these start with something like this picture in the upper left, which is kind of picture cards that, oh, thank you, no, I still can't get it. I don't know what's happening with my arrow. It's not important. The arrow's not the important part. So we start with these pictures here in the upper left corner which have little kind of texture representations, and physical representations of the object. So a child would learn, this card means book, because there's a picture of a book on it. Then you switch over to a board that has abstract pictures, so it might not be the exact book that they're used to seeing, I see a trampoline there, and the child can make a choice from it. Then you move to something like the bottom left corner, and this is one of those low-tech devices I was talking about where there's 32 words available here and you can record the name of each.

So you know, "pizza", "french fries", you move to something like that, and then if a child proves that they can handle that sort of device, you move to that glowing yellow square, which is an actual speech generating device with a large vocabulary in it. Now, it made sense. I mean, start small, with something that's accessible, move to something bigger, then low-tech, then a bigger device when the child is ready. But there's a big problem. Let's imagine that we're using something like that bottom left corner with 32 words, and you get to pick the words that your child is going to use as you sit to do play-doh with them. So think about the words, if you were going to sit and pull out a basket of play-doh, and start playing with play-doh with your child, go ahead and type in the chat box, what are some words that you would think you need for play-doh? Don't eat it.

Alright, ok, I see color, texture, I saw a lot of adjectives, and "don't eat it" -- you guys know too many play-doh eaters. If you've tried it, it's salty, so I kind of get the appeal. Don't get it on the carpet. See, you only have 32 squares in that device, guys, where are you going to put all these words? Let's see, I went online and I googled play-doh communication board, and, yes, I see that comment, that is where core vocabulary and fringe vocabulary come into play. But if you only have 32 squares, you don't have a lot of space for any vocabulary, no matter what it is. So let's see, this was an example of a play-doh board online. Ok, there's some colors, look at this, I like that, at least there's something interesting there, but there's not a lot of space. This isn't even a 32-square board, I don't know what this is, I didn't count it. So just for fun, I had Maya sit down with me and I started playing with play-doh with her, this was just over the weekend, to get a little clip of what she wanted to say with play-doh. So, let's see if I can do this. Ok, here we go. Alright, I opened two new ones. What should we make?

- What's that?

- A chick? Yeah? Ok, what else should we make besides a chick? Because I think the chick needs a friend. So if we make a chick, who else should we make?

- [Computer] Elmo.

- The chick is friends with Elmo?

- Yeah.

- So we're going to make one chick and one Elmo.

- Yes.

- Alright. What are they going to do together?

- Play?

- Boopy-doo.

- They're going to sing boopy-doo? Alright, they like to sing songs together?

- Yeah.

- Alright. Anything else we should make?

- [Computer] Polar bear.

- A polar bear too?

- Yeah!

- Ok. Polar bears live where it's cold, did you know that?

- Yeah.

- Do you think he'll be cold?

- Yeah.

- Should we make anything to help him?

- Yeah.

- [Computer] Gloves.

- Oh, he needs gloves. Ok, well let's see. I can make a green polar bear, and then how about you make him orange gloves?

- Yeah.

- Will that be good? And Elmo and the chick will go with him?

- Yeah.

- Um, ok, wait, let me see if I can come back here. Oh, wait, I'm seeing these comments now. I did unmute during the video, so I see somebody said it worked for them. Yes, I did unmute during the video, so I'm sorry for the people who were having trouble with it. Ok, good. Alright, I'm just trying to figure out whether I'm muted or unmuted right now. Ok, so. Oh, wait a minute. Let's see this. Ok. I see a question about motor planning, and yes, the answer is yes. I think that the motor plan of where a word is located is generally more important than the symbol is. But I'm going to get back to what I was doing. The short answer is, I do think that that's more important, and there are several apps and communication books now that focus on really trying to keep motor planning as consistent as possible, for the reason that you're bringing up. So back to the video: she said Elmo, a chick, polar bear, gloves. I don't know how I could have planned for those words when we were using communication boards.

And I was very stressed when we were using boards and picture cards because I didn't know what to do in terms of providing the right words at the right time. If she wanted to make a cat, she would have needed an animal board for that. If she wanted a train, I would have had to have a transportation page. Or what if she wanted to make grandma out of play-doh? It was overwhelming, the pressure for me to try to pick the right words at the right time. So this hierarchy is not how speech development naturally works. We naturally communicate to babies with all of the words, and kids need all of the words all of the time if they're going to have genuine conversations. We can't limit them to play-doh words during play-doh time. It's just not how communication works.

If we use limited words, a lot of the communication seems to center around requesting. For example, it's easy to put out a board of food choices at snack time, but then after the child requests the food, you're kind of at the limit of that board. You might be able to say bite, or chew, or swallow, but it'd be difficult to incorporate all of the words that you might want for food, like "What color is it?" "Is it crunchy or salty, or smelly or disgusting?" or "Throw it in the garbage." It's also more likely, I feel, to lead to rejection. Either from boredom at being forced to say the same things, or from frustration that they're using something that's not making the words available that they want.

Kids want to say weird things, because kids are funny. They want poop, and Thomas the Train, and disgusting, or hilarious. They want to request polar bears to be made out of their play-doh, and that was my attempt at a polar bear, there on the left. So: all of the words, all of the time. Robust systems can be accessible to very young children, and I'm going to show you a clip of my son Will. You might be thinking that all of the words sounds like a really nice idea, but that if you gave a young child a device with all of the words, they're just going to be all kinds of overwhelmed. I'm not suggesting that we bombard a new user with several thousand words, but I am suggesting that we plan for the future. One of the "habits of highly successful people" is to begin with the end in mind.

And that kind of plays into that motor planning question I saw pop up a few minutes ago. If you start a system knowing that you're going to expand to several thousand words, you can make decisions like to try to keep motor planning consistent that will lead to success. So, this is my son Will at seventeen months old. This was his very first morning using his own talker. And this clip shows him learning the word "drink". You'll see here that he has maybe eight words open on the main screen and it also is kind of a nice example of what AAC looks like with a brand new user. So let me go ahead and, you know what, I kind of lost, hold on one second. Let me rewind this. And I'm just going to drag this and make it bigger. Oh, thank you for helping, whoever is helping me. There I'm going to do it like that, because I lost the bar the other way. So here we go. Will, why don't you use the pink talker?

- Smoothie? Smoothie? Maybe later. Do you know what we do with the smoothie? We, look Will, we drink. We drink a smoothie. Do you need a drink? Here's your milk. You need a drink? Here's your milk. You need a drink? Here's your milk. You need a drink? Here's your milk. Oh my goodness. You need a drink? Here's your milk. You need a drink? Here's your milk. Can you hear me again? I'm hoping that I'm getting this better now with these videos. Ok, good. So that is Will learning "drink". And he is now using a system that both of my kids have available to them, and it has, I think, around 6,000 words right now, but he never needed to relearn "drink". So this shows that even a robust system, the one Maya uses and will keep using for as long as she needs AAC, was something my son was able to start using at seventeen months.

So making choices about what to provide a young child with, it doesn't have to be limited to a small number of words and then relearning a new system. We have to begin with the end in mind when we're providing AAC to a young user. Ok. And now, I'm going to start my last topic. We talked before about how behavior is communication, and so I'm going to show you a picture and ask: what do you think this child is communicating? Alright, I see some requesting, we want cookies, hungry, I can't reach those, what is that, someone said hot, I want it, what's up there. We still can't pin it down, and you guys are really good at thinking outside of the box with me. What about: is it warm up there? Is the tray still hot? Is my finger long enough to reach that? What's on that pan? How many cookies did we end up making? Did one fall off in the oven? What color is the icing? It could be a lot of stuff. We know that it's something about the cookies or something about reaching up high, but we don't know what exactly it is. We are not mind readers.

Now, there's a natural dynamic between parents and children, where we hit the "I know what you're thinking" phase. Although, many toddlers would be happy to tell anyone who would listen how wrong their parents are at predicting their thoughts. You know, we see a baby who reaches towards the cabinet, and we think they want something in that cabinet, so we start pulling stuff out and giving it to them until they stop having a tantrum or look happy. But if we get stuck in this predicting mindset, especially as our children age past young babyhood, we are underestimating our kids. We will naturally and unintentionally lower our expectations of what they are thinking and saying. We're basically saying that their thoughts are so simplistic that we are certain we could predict all of them. I'm going to give you one example. This is a story about a school bus.

Maya went through a phase where she was totally obsessed with school buses. So if we're playing outside after school, and a yellow school bus drives by and she jumps up, points at the bus, looks from the bus to me and yells "bus!", she's excited, clearly something is happening, she's connecting with the bus. Now, I know she loves buses. So I could say, "Wow, a bus! I love that bus!" "I know you're excited to see the bus, too!" Except the big problem is, I don't know if she's excited to see the bus. I don't know what she's saying about the bus other than the word "bus". She could be saying, I see the bus, I like the bus, the bus is yellow, that bus is big, that looks like my school bus, that is not my school bus, I want to go on that bus, I have a toy bus that looks like that, I want to play with my toy bus, look at the wheels on the bus, that looks like a bus from a movie. I had fun on the bus today, did you see that bus? Excuse me? I heard somebody for a minute, and I wasn't sure if it was there, but I lost it. I see people typing, I'm waiting to see. Oh, ok. I heard it. That's just my voice. I'm hearing myself, can people hear me ok right now?

Ok, I just heard myself talking in my own ear. Ok, if somebody, ok. Thank you. So, these are all different things about the bus, and this is the one that scares me most of all, before Maya had a way of communicating. What if she's trying to say something happened on the bus? Our kids who can't talk have to have a way of communicating about their day. We have to find ways to make it happen, because it's too scary to think about the alternative. I once heard a story from somebody who knows about AAC and is a lot smarter than me, and they told me that there had been a study done about familiar communication partners and how reliably we can predict what our friends or family were thinking.

So for example, if I went out with my husband and we sat down at the diner, what would be the odds that I could predict what he was thinking? And the odds turned out to be about 10%. Which is not a great number. So we have to make sure we are not speaking on behalf of our children, but rather advocating for them to have ways to speak for themselves. So, here's what early access to robust AAC would really look like. First, providing AAC as early as possible. Then, ensuring that the AAC provided allows for comprehensive, interesting, motivating vocabulary right now, and with room to grow. The third thing is probably the most important, and that's modeling. Model a lot. Use the child's system to talk to them the same way that we use speech to talk to children who are going to speak back to us. We have to immerse them in language with the device to provide them with accessible language.

The last thing is to respond purposefully and enthusiastically to any use of the device. And I know that this webinar is primarily about advocating for early access and big systems, not so much a step-by-step how-to of how to do it, but I had to sneak this one in there. You saw me respond purposefully and enthusiastically in that video to Will. So he didn't know what drink meant the first time that he pushed that button, but he certainly knew what it meant by the third time. Because I was responding in a way that said, when you push this drink button, you're talking about drinking, so here, have a drink. And here's some of the ways that early access to robust AAC has changed Maya's life. The first big one is that I get to know what's on her mind. Which, for her, means that she gets to share what's on her mind. I got to know that she was interested in the weather, which was so strange, but when she was three, it turned out she loved the weather. I guess they talked about it at preschool a lot and she loved talking about it at home. I know who she plays with at school.

She can tell me things about her day. If someone's not nice to her, she can tattle, which might sound strange to reward, but I like it. If something makes her laugh, she can tell me what it is. I found out that she is hilarious, and she also remembers everything. The connections that she made to things that happened years ago were shocking to me. Also, she has been able to prove herself. Which is unfortunate, but kids who can't speak perpetually have to prove themselves. I wish that I could wave a wand and make everyone presume competence and believe in children who can't speak, but it is sadly not yet the reality. By having access to a device and being able to show the things that were happening in her head, she was able to change the course of her academic life, and therefore, her entire life. She would have undoubtedly been in a classroom that had very low academic expectations. And instead, she's in a classroom that is pretty much age-appropriate.

She was able to show us, beyond a doubt, that she understood abstract ideas, and that she was reading, and other very impressive things that no one ever would have known. And I simply do not believe that she is an anomaly with regards to this stuff. I think that if more children who weren't able to speak had access to early AAC, we would see that they have the same complexity of thoughts and ideas and humor that many other children their age have. So, with regards to reading, Maya can read with her voice, but her speech sounds like just the beginning sounds of words, so you would not be able to prove beyond a reasonable doubt that she was definitely reading. So I was able to get some video of her reading with speech, and then later reading with her device, and here is that video. Ready? No peeking, no peeking. Ready? No peeking, no peeking. Can you read that one for me? Ok, now let's say it with mini, ok? I'm going to clear out your stuff with this. Is that funny? Oh, it's too funny. It's too funny! Ok first, that word.

- [Computer] I.

- Good job, now read the whole sentence. What does that one say? Where is that in mini?

- [Computer] Want.

- [Maya] Want.

- Want? Very nice, what does that say? Go ahead.

- [Computer] To.

- That one. That's a big word, where is that one, do you have it?

- No.

- That's right, that's the wrong screen, press the blue button to go back, try again. Do you remember where it is?

- [Computer] Chocolate.

- And what is this? Tell me with mini.

- [Computer] Ice cream.

- And that last one, what does that say?

- [Computer] Tomorrow.

- Ok. So if you weren't able to see the video, because I'm just not sure how all the videos are working here, the main point was that her reading speech sounds just like the beginning sounds of words. It's clear she's identifying some sounds in the words, but it's certainly not clear whether she's saying the word "chocolate" or just the letter "C". But with her talker, with her communication device, she can really prove herself to anyone who is doubting or needs the extra proof. And this is just a cartoon that I really like that I think sums up the idea of presuming competence. This is my contact information, and I want to say, I'm happy that I finished now, because we have time for a few questions. But first, importantly, we are going to have a tweet chat on Thursday at 7:30pm Eastern time, using the hashtag CTD Tech Talk, and they're going to have some great topics for us all to talk about, so I would love to see you there.

And there, I see John put the hashtag in there, CTD Tech Talk for the hashtag on Thursday. So, I'm here to do some questions now, but wow, this little box moves really fast. I honestly haven't seen anything that's come in so far, because this is my first webinar and it was hard to get the hang of it. So I'm looking now. Ok, so I could scroll back, but I think it would be better, if you're still here and you want to ask questions, to put them in the box now and I will try to get to them, because I think I will get lost if I scroll back. Certificate of attendance, I have no idea, so maybe somebody from CTD, oh there we go. Talk about getting started, ok. Great, ok, yes, getting started. I want to say again, I am not a speech-language pathologist, but I will speak to that question. Can I do a webinar on getting started? I don't know, maybe. I would be open to doing that. Sorry, ok. So, getting started. I will talk a little bit about getting started at home, from a home perspective.

I think that whenever you get overwhelmed, you should think about how you would use normal speech. With normal speech, we don't say, "Say button! Now I'm going to squeeze your mouth to make you say button." So it wouldn't make much sense for us to do that with an AAC device, either. If we are trying to target a word, say the word "eat", through AAC, I think that the best way to do it is to model, model, model, model, model. To model the word means to use their device to say the word, to show them, look how great this is, this is how this device can say it. That would be the first thing, modeling, and modeling means only you using the device. Hand over hand is a really tricky concept. A lot of children who are very young, or who have access issues, do need some support; I had to help steady Maya's hand when she first wanted to use her device. She later used a keyguard, which was very helpful, but she needed a little physical support getting started.

I don't believe in hand over hand, as in you take a child's hand and you make them push the button, any more than I would for normal speech. You can't really force someone to communicate. Sometimes they just don't want to. So that's what I mean by modeling. The second important thing is to respond intentionally, like I did with Will. Some people would say, he didn't know what he was saying, he just pushed the button. But he will know what he's saying when I respond to that button. And so, by responding to anything that the child is saying, that is how they are going to learn what the word means. Show them visually on the screen, right. So you push the button and you say, see, this is how you do it. Can kids who can't point use AAC? Yes, they can. There are switches that can be used for access, those are kind of like buttons that can be activated with different parts of the body, and there are also systems for children who can use devices with their eyes. There's also something called a PODD book, P-O-D-D, I'm spelling it so you know what I'm saying, and that uses partner-assisted scanning, where a partner will actually flip through the pages and talk with the child so that they can use it.

Why an iPad instead of a dedicated device? That's a good question. When we first started out, I had the dedicated device makers come to my home to show me their product lines, and there wasn't anything that was a good fit for us. If there had been, we would have used a dedicated device. A few months later, a speech app came out that I felt was a very good fit for us, and so that is what we chose to use. Yes, dedicated devices are much more expensive than iPad apps, and I think that for most parents, price is a real consideration. I can say that for us, we would have fundraised to get a device if the device would have been the best thing for us. But it just wasn't, the app was a much better fit. Maya is using an app called Speak for Yourself.

Modeling how many words? I think that's a really personal question. I think that you want to use interesting words, a combination of functional words and interesting words, to get started. I think when Will started, we had maybe 15 words open the first day, I have a list somewhere, and it was like family members' names, eat, drink, more, yes, no, help, want, I don't remember what else. Yes, only a few words to get started with, we started with three words, I don't remember how many with Maya, it was too long ago. I know I had the whole animal screen open, which is a little ridiculous, but she loved it. Do I use a Bluetooth speaker for the iPad? I do have a Bluetooth speaker that I use with Will's iPad if we are out, but he only uses AAC part time, he is a typically developing child who prefers speech most of the time. Maya's iPad mini has an amplified case on it so that she can be heard in loud environments, and no, I do not think that an iPad speaker by itself is enough for most children to use as AAC, or adults either.

Did I consider ASL? Yes, we did use sign language, and I actually have a blog post about the limitations of sign language for non-verbal children. I think it's an excellent tool, and I think anything that you can use for communication is fantastic. But it's very limiting in terms of who a child can speak to using ASL. Many kids who have trouble speaking, motoric trouble, also have trouble forming the signs with their hands, and so even people who are fluent in ASL might not understand their signs, which was definitely the case with Maya. And plus, like I said, she has like 6,000 words now, and I just don't even know if I could remember 6,000 signs or remember to teach them. I'm trying to read down here. Which amplified case? We use a case from iAdapter. I only know of two amplified cases, the line from iAdapter and the line from ChatWrap. ChatWrap is newer and I haven't tried them yet, they're a bit pricier. We just had the iAdapter, and it has been great. How did we transition from modeling to using? She came to using it herself; by the time that we were on a device, we had used pictures for a while in different ways. But I think kids get technology, you know? Kids get, press the button and it does something, and they want to press buttons and see what they do, if they're able to.

Maya wasn't able to isolate a finger in the beginning, so we actually used a winter glove and I cut one finger out so that only that one finger hole would make the screen work, but she wanted to press the buttons pretty much right away, and that's when I started responding intentionally. Avnet has an amplified case, I did not know that. Is it a good idea to let the child play with the screen and babble? Yes. Absolutely. Maya and Will both go to bed with their talkers, Will doesn't as much anymore, but Maya still does, and they tap through and find all sorts of interesting words that I would never have known about. Maya knew "cubicle" one day in speech therapy. Somebody pointed to a desk and said what's that, and she said cubicle, and I had no idea how she knew that. So yes, they should babble, they should definitely babble, especially since these children can't read; they need to press all of the buttons to learn what they're saying. They can't read the labels. So that was a great question. Yes, babble.

Can Maya enter new words yet? She is an active part of adding words, but she does not have access to the editing controls yet, because she still does have fine motor issues and I feel like she might toggle the delete accidentally and clear things out. But she chooses where new words go, she chooses the icons for the new words, and she has more than one button that tells me to add something. So, "I need a new word in my talker" or "Add that word", so that she can tell me when to add things in. Why does Will use AAC? Because his sister is an AAC user, and we are an AAC family. And so, simply, he was a baby and she had a shiny technology device that she carried around all the time, so he wanted it too and it was easier to let him use the extra iPad. But also I truly believe that AAC is an important language development tool, and he used it. I have some great videos up of him using it to repair communication breakdowns, to explain things to me when I wasn't understanding him. I think that we ended up with a lot fewer toddler tantrums because he had a way of clearly communicating lots of stuff.

Like I said, he doesn't use it very much anymore, but he will still run for it if I don't understand something that he's saying. He's now two and a half. Survey Monkey, I don't know what that is. Would it be wrong to ask them to show you something? I think that's probably pretty child-dependent. If they seem interested in playing that way, it could be kind of like pointing to a picture in a book, which I do with Will all the time: where's the cloud, show me the cloud. If it's making them turn away from the talker, then you could model something like "no thanks" or "don't want to", because teaching ways to refuse, teaching the language of refusing, I think is also very important to do. Hey, thanks for sharing that, that's really interesting.

About, I'm reading R. Tilton's comment, I find it a slower and more effective way to communicate. Teaching kids who are used to the iPad meaning fun and games? Yeah, if you're going to use an iPad for communication, you probably need to have a separate iPad that is only a communication device. That's the way that most AAC families who use iPad apps do it, so that their child is locked into their communication app using Guided Access on one iPad, and they use a second iPad to play with, because they should be able to play games on one iPad and talk about them at the same time using their talker.

When we made the switch to using the iPad as a dedicated talker, we got a new case for it, and Maya was young, she was three at the time, so I don't even think that she really realized it was the same thing. How did I balance an active typical child while taking the time to speak with Maya? It was really hard from when Will was about six months old until he was about seventeen months old, when I gave him his own talker, and then it got easier. Oh, and we're kind of wrapping up. Getting the other children involved, that's what I would say, that would be my quick bottom-line answer: get the other children involved with modeling, because peers are the best models anyway. And I hope to see you guys at the tweet chat on Thursday, using the hashtag CTD Tech Talk. I will be there. Thanks so much, this was really fun.