ATFAQ004 – Augmentative and Alternative Communication (AAC) with special guests Craig Burns and John Effinger Q1: What is AAC? Q2: High Tech or Low Tech? Q3: Literacy vs Picture Based? Q4: iPads vs Dedicated Devices? Q5: How is AAC funded? Q6: What About Do It Yourself AAC?

Show Notes: Augmentative and Alternative Communication (AAC) with special guests Craig Burns and John Effinger Q1: What is AAC? Q2: High Tech or Low Tech? Q3: Literacy vs Picture Based? Q4: iPads vs Dedicated Devices? Q5: How is AAC funded? Q6: What About Do It Yourself AAC?
Panel: Brian Norton, Mark Stewart, Wade Wingler, Craig Burns & John Effinger
——-transcript follows ——

BRIAN NORTON: Hello, and welcome to ATFAQ, Assistive Technology Frequently Asked Questions. I’m your host Brian Norton, Manager of Clinical Assistive Technology at Easter Seals Crossroads. This is a show in which we address your questions about assistive technology: the hardware, software, tools and gadgets that help people with disabilities lead more independent and fulfilling lives. Have a question you’d like answered on our show? Send a tweet with the hashtag #ATFAQ or call our listener line at 317-721-7124. The world of assistive technology has questions, and we have answers.

And we have several guests in our studio today. For today's show topic, we've been getting lots of questions from lots of folks, and we had started to gather quite a few augmentative communication questions, so today's show is going to be about augmentative communication.

In our studio today, we’ve got a couple of new folks here with us. Obviously there are some regulars here as well. Mark Stewart, who is a regular on the show, is here. Obviously I’m here. And then Wade Wingler, who is also here, will be kind of running the board and participating today. But we do have a couple of new folks. Craig Burns who is an assistive technology specialist here at Easter Seals Crossroads. And then also John Effinger who is a program coordinator for the Missouri Tech Act Project.

I just wanted to first give John and Craig the opportunity to tell our listeners a little bit about themselves and what they do.

CRAIG BURNS: Hi, well, I'm Craig. I started in the augmentative communication field back in 1996 with a company called Sentient Systems Technology, which eventually merged and grew into DynaVox Systems and is now part of a larger company called Tobii Dynavox. I worked there for about 10 years and then started out on my own doing some tools for speech pathologists in hospitals and schools, and then eventually wound up at Easter Seals Crossroads, where I've been ever since.

BRIAN NORTON: Great. How about you, John?

JOHN EFFINGER: My name is John, and I'm a speech pathologist. I got started a long time ago in schools. I worked in schools as an AT/AAC guy for about 20 years, went into private practice, and then started my own DME company, which led me to work for various device manufacturers for about 10 years. I kind of wanted to get back on the clinical side of things, so I came here to the Tech Act Project in Missouri.

BRIAN NORTON: Great. Well, we are really excited to have you guys here on the show. For those regular listeners out there, Belva Smith isn't here today, but she will be joining us again in the next couple of weeks. We'll miss her personality today.

WADE WINGLER: The other thing that we’re trying today is we have somebody in the fifth seat today. That seat is online. John isn’t in the studio with us today, but we are excited to be able to add to our panel by bringing people in via the Internet. John, we are excited to have you as our inaugural fifth chair panelist. We are glad to have you.

JOHN EFFINGER: Awesome, thanks.

BRIAN NORTON: This is fun. So just for our listeners out there, just kind of a little bit about how this show works. So we gather questions from folks. I’ve mentioned a few ways that we gather those questions earlier on. There are three ways for folks to submit questions to us about their assistive technology needs. The first one is the listener line. That number is 317-721-7124. They can email us at tech@eastersealscrossroads.org. Or they can actually go out there into the world of Twitter and post a question with the hashtag #ATFAQ, and we kind of monitor Twitter for those types of questions, and we kind of gather those up as well.

As far as the release and frequency of the show, the show is released every second and fourth Monday of each month. So we sit here and record on the first and third, and we release it on the second and fourth. If you are looking for our show, a few places to find it would obviously be iTunes. Go to iTunes and do a query for ATFAQ show, and/or you can actually just go to our website, www.eastersealstech.com, to find that. You can also go to Stitcher. So there are a few different ways to find the show.

So without further ado, I think just for sake of time, I know we’ve got quite a few questions in the lineup today. I’ll just jump into it.

***

Okay, so first question: what is augmentative communication? It’s kind of a broad question, but I’ll just throw it out to the panel here. As you guys think about augmentative communication, is there a technical definition of that, of what it really is? What does it all encompass?

JOHN EFFINGER: There is a technical definition for everything, Brian, though maybe not exactly a pure one.

JOHN EFFINGER: Well, ideally it kind of goes into the acronym AAC, augmentative and alternative communication, the idea being to augment someone's communication, which is unfortunately an underutilized practice. To augment communication means somebody can communicate, and they might use another system to enhance or improve the communication that they currently have. An example of that would be somebody who's super dysarthric and difficult to understand, so they might use a communication system to be better understood. It could be kids who have limited communication and need to say more, so they might use a communication device. The alternative is that they don't have speech at all, and they need a way to communicate with the world. Now, unfortunately the definition doesn't specify whether that's tech or not tech, and I know you're going to get into that later. So a lot of people treat AAC as synonymous with technology, but it's not. It's everything from no-tech to high-tech.

BRIAN NORTON: The question that pops into my mind with that — I've had a couple of experiences, or many experiences, over the years. I've been doing this for about 18 years, doing job accommodations and other kinds of things. When I think about augmentative communication, I think about dedicated speech devices and things like that, and again we're going to get into that a little. But I also think about ways to enhance a person's speech, if maybe they just have a soft voice, and those kinds of things. Do those types of devices also kind of fit into augmentative communication? Is that separate?

JOHN EFFINGER: I think they do. We don't want to be so rigid that we don't consider the possibilities for anybody. So from my perspective — ASHA has a publication that they produce several times a year called The ASHA Leader. They had an article in there about how augmentative communication is used for lots and lots of people who struggle with communication, whether they have a language delay, a speech delay, or a speech impairment. When you think about the possibilities of what it can do to enhance their ability to communicate, yeah. I wouldn't necessarily call voice amplification augmentative communication because it's just amplifying a person's speech, but if somebody struggles, can't be heard, and can use a communication device to do that, by all means do it.

BRIAN NORTON: Right.

CRAIG BURNS: I'd say voice amplification is kind of an offshoot of augmentative communication — just not as powerful or verbose, I guess you could say, just allowing an individual to augment their existing voice.

BRIAN NORTON: Right.

JOHN EFFINGER: And for people who just want to change it up and use a different voice — and we will get into that at the end because I have a couple of really cool applications where, if you want to change your current voice, you certainly can.

WADE WINGLER: Wait a minute, so you can sound like Brian Norton?

JOHN EFFINGER: You can sound like a pirate.

BRIAN NORTON: Nice. Excellent. Any other comments on that?

MARK STEWART: So things like Dragon NaturallySpeaking, right? Alternative ways of communicating. Just by definition, is that — how about that one? Does that kind of fall into its own category, or is that kind of under the umbrella?

JOHN EFFINGER: I would put it in a different category of speech-to-text, assuming that somebody has speech to be able to convert to text. But that being said, there are a lot of applications now, especially on iPads, that speak. We joke about it all the time: hit the print button or the speech button — when is it augmentative communication? If it speaks, it could be used for that purpose. So if you are creative, sure. But not really.

BRIAN NORTON: I see.

WADE WINGLER: We run into that a lot in assistive technology. People will say, well, are reading glasses assistive technology? I get stuck with that a little bit because, well, I don't consider your regular reading glasses to be what we think of when we think of assistive technology, but is it? Well, yeah, sure.

BRIAN NORTON: I think the lines have kind of blurred as technology has advanced over the years, where sure, you can stick that in a box if you want to stick it in the box. Technology really has blurred the lines of what is meant for what purpose.

CRAIG BURNS: AAC is really for voice output, whereas Dragon is input.

BRIAN NORTON: Excellent.

JOHN EFFINGER: I've seen a lot of people get really creative about creating voice for people. So I always encourage a little bit of creativity on everybody's part, especially if somebody has a preference for something and they are motivated to use it. By all means, use it.

BRIAN NORTON: Sure.

***

The next question in our lineup: how do I decide whether a high-tech communication device or a low-tech communication device is better for an individual? And a follow-up along with that: what are important access considerations when looking into an appropriate augmentative communication device?

CRAIG BURNS: That's a really broad question. Sometimes the technology — there are three levels of technology that I call low-tech, mid-tech, and high-tech. The low-tech is going to be something that is very picture oriented — picture boards, for example, or notebooks of pictures that students use to communicate. That's typically — and I think, John, you'd agree with this — with an individual who is not yet reading or doesn't have the literacy required for any composition of a message. They look at a picture, and that picture represents the message. In low-tech, we may have to guess exactly what the specific message is. A glass of water — does it mean I want a drink, or I want milk, or whatever? But that low-tech is at least a beginning. A lot of devices are picture-based. They may be high-tech, but they are picture-based for language development. That's because the individual may not be a reader, but they go by the symbols and memorize the combination of symbols that creates a specific word or message. Again, non-reading, but creating messages.

JOHN EFFINGER: One of the things we didn't talk about is that AAC isn't necessarily using a device. I can be a signer. I might be able to use other modes to communicate without necessarily using a device, but for the purposes of today — so low-tech, it's a mixed bag, because there are a lot of different symbol-based systems. There is the concept of photos versus visual scenes. Those get interchanged and confused all the time. A photo is just a photo of something, and some people recognize photos more easily than they would an abstract symbol of a cup or a man running. So they might recognize the photo of somebody running quicker and easier. Whereas a visual scene might be a picture of somebody's kitchen, and they want to get an apple out of the refrigerator, so they touch the refrigerator, and they touch the drawer, and they get the apple. It's all designed around helping people navigate quicker. There's just so much vocabulary. The low-tech part of it is, from my perspective, the symbol-based systems or even writing — where someone is able to communicate in a way that doesn't involve some form of electricity. There are a ton of devices out there in the low- to mid-tech range of communication with digitized speech, and then you get to the higher-end E2510 communication devices: dynamic screens, everything. You name it; it's on there.

BRIAN NORTON: I guess a question for me would be — I'm not sure who asked the question. I didn't have that information; I just found the question in my question box. It may have been a professional in the field of augmentative communication. But the question was, if I'm a parent or a professional, are there certain folks I should be seeking out if I have a child or know of somebody who needs augmentative communication? Are there folks that assess for that need? Who do I need to seek out in my local area or state?

CRAIG BURNS: Typically you go to a speech-language pathologist. A lot of people like to go to a speech pathologist who has a background in augmentative communication, if that's the ultimate goal. There are numerous individuals or professionals around that do that. They'll work with children from 3 to 22. There's a process — we'll get into funding later, but that is part of the process that you use to acquire an augmentative communication tool, whether it's low-tech or high-tech or whatever. So those professionals are the first step, and then they move things through the process as determined by the child and what needs have to be met.

JOHN EFFINGER: What parents have to be careful of is having a conversation. Not all SLPs have a lot of background in AAC. Some have very little, some a ton. Some have explored all the different systems that are out on the planet. Some have not. So I think if I were a parent going in and talking to a speech pathologist, I would ask a lot of questions about their experience. What have they used? What are they comfortable with? And then be a good, informed consumer.

BRIAN NORTON: Sure. Part B to that question, we may have touched on it a little bit, is what are important access considerations when looking at appropriate devices? Are there certain considerations with regard to that? I’m sure there are.

CRAIG BURNS: Absolutely. There are all sorts of issues where, if it's a child, for example, can they touch the device? Do they physically have the ability to touch that device? If they can touch that device, how do they react to it? Do they tend to push hard on it, thinking they have to push the button down, or do they barely touch it, acting like they are not sure they want to touch that screen, if you're talking about a high-tech device? Or even a low-tech device — can they depress that button enough?

And then you get into all sorts of other access issues: auditory scanning, visual scanning, types of scanning. There's just so much to look at with each individual with regard to access, all the way up to eye tracking, where they might use their eyes. Typically an older individual or teenager might use an eye tracking tool to access the vocabulary of that device, whether it's a computer directly or a dedicated-device type of product.

BRIAN NORTON: Something I have a lot of experience in — early on in my career here at Easter Seals Crossroads, I worked with our augmentative communication department a lot on the access issue of how do we get this device mounted to a wheelchair? I spent a lot of time underneath wheelchairs trying to figure out where I'm going to stick this clamp on the frame of the chair. I know some of those things have gotten a little bit easier, but it's really highly dependent —

CRAIG BURNS: Positioning is critical, both for direct selection and even for viewing with switches. Positioning is especially critical for eye tracking.

JOHN EFFINGER: It always comes back to if it’s not working, is there some other way to make it work?

CRAIG BURNS: Right.

JOHN EFFINGER: I encourage people to always explore. Positioning is huge, and a lot of people do not understand that. They lay a device on a table or on a lap tray, and they are confused when somebody doesn’t have immediate ability to touch the screen. Just moving it maybe 30 degrees makes it easier for them to do that. And then exploring switch access. Scanning is super complicated for a lot of people, but it has a lot of potential. I come from the day when you used to use Morse code as an input method for communication devices. It’s underutilized. I really think it should come back because it’s such a cool way to get information quickly if you have limited motor access. Again, there’s a lot of different ways you can do it.

CRAIG BURNS: I know with switch access, people tend to think scanning is a slow way. Well, if it's the only way, it's not slow. That's the method they use.

JOHN EFFINGER: Right.

CRAIG BURNS: I know of instances where they've tried auto scanning for a user or student, and they didn't need auto scanning. They needed a self-paced approach: here's a button that moves through the choices, and here's a button that selects. Yes, no, red, green — they're used to saying yes or no with red or green. Those options are way underutilized, or have been in my past experience anyway.

WADE WINGLER: We have a buzz phrase around here where we always say slow access is better than no access.

CRAIG BURNS: Yes.

JOHN EFFINGER: Correct. There are a lot of alternative mouse options that people need to consider. We have to remember that eye gaze — eye tracking, eye selection — is alternative mouse input. And there are a lot of different alternative mice that somebody might utilize, even with a toe, to access the screen. Keep in mind and consider those options.

***

BRIAN NORTON: So the next question is what’s the difference between literacy based and picture-based systems?

WADE WINGLER: Pictures?

BRIAN NORTON: And words, I’m assuming. Is there a broader difference between those?

JOHN EFFINGER: I think it all gets back to this issue of access. Symbols are put in place when someone is not able to recognize the printed word or a letter. There's a lot of confusion about this. We could talk about this for days. But literacy-based systems tend to be more keyboard-like, where I can use a keyboard to generate a message by typing out something. They also might be sight-word based, where I recognize a whole word on a button. The term "core" is used quite a bit in the AAC world: I can touch that word without a symbol on it to generate a message. And then there are symbol-word combinations and symbol-only combinations. As we talked about earlier, if symbols aren't really working, then you try a photo. All of it is a way to give somebody access to a system, because something is preventing them from having that access. But it shouldn't be taken to mean that if there is a picture or a symbol there, you have to use it. It's very easy to push a button and make all the symbols go away so that the only thing you see is words. Literacy versus symbol-based is somewhat controversial, too.

CRAIG BURNS: Often I point out that, as we are trying to grow the literacy skills, we will transition from picture only to picture plus word, maybe eventually into just the words. The core vocabulary may become just plain buttons up there, without the picture symbols at all. Whether that makes it easier for the user to comprehend just depends on each user. They may still need the pictures up there regardless of the word.

JOHN EFFINGER: I try to caution people. There are assumptions made about someone's ability to recognize an image versus a word. There is some research showing that kids and adults can recognize words just as quickly and learn those words just as they would a symbol, because some symbols — we do this trick all the time where we take all the symbol-based words, put them in a group, and ask people to tell us what they are. There's a fair amount of learning to learn symbols as well as words. It's the recognition thing: if I can recognize something quicker because there is a symbol associated with it and then communicate quicker, by all means, do it.

BRIAN NORTON: As long as your end goal of them being able to communicate is there, for sure.

WADE WINGLER: I've got an add-on question. I heard about PECS symbols years ago, Minspeak — are those things still out there, and what's being used? Tell me a little bit about the kinds of symbols.

JOHN EFFINGER: Definitionally, PCS symbols are different from PECS. PECS is a communication strategy, whereas the Mayer-Johnson symbols — Picture Communication Symbols — are what people refer to as PCS symbols. They're still around. So the Boardmaker symbols that we all used years ago are still around, and they are integrated into current communication systems. There's another symbol set called SymbolStix that several communication devices use. Minspeak, the Unity-based symbol set, is still around. So a lot of the stuff that used to be is still here; however, there is new stuff coming out because of the iPad, and that's another question. Every day there are new symbols being created. The controversy is whether kids transition to those new symbols quickly or not.

***

BRIAN NORTON: The next question is about iPads versus dedicated devices. What are the differences, and what are the pros and cons of each? That's something I've kind of seen from a distance, not really doing augmentative communication as my main discipline here for many years, but looking at it from a distance. It really seems like iPads are offering a lot to folks, but I know there are still those traditional dedicated devices. I think what the question is getting at is what the differences are, and what the pros and cons are of each.

JOHN EFFINGER: What's interesting about the iPad versus dedicated world is just the definition of what they are. So the iPad is off the shelf — something I can just take — and there are now 500-something communication apps that I can go out into the world and choose from. Of those 500-something apps, there's probably a handful of five or six that you can find on a dedicated device. By dedicated, I mean it's durable medical equipment that is typically funded by a funding source. Not all iPads are covered by funding sources. So dedicated devices are covered by Medicaid, Medicare, private insurance; they are durable; they usually have a warranty of several years under which they are repaired regardless of what happens to them. They sometimes have more robust speakers. They may have Gorilla Glass so they are not as easily broken. If you make the choice of using an iPad and I throw it against the wall and it breaks, can I get that device repaired? If I do that with a dedicated device, I can. So just talking about the device differences, it's a matter of durability. In terms of the content, it's huge. There are so many apps out on the market now.

BRIAN NORTON: I would say that cost is a huge factor in all of that.

JOHN EFFINGER: It is, and it kind of isn't. It just depends on the user. Some of the dedicated devices now retail for $5,500. The Medicare allowable is like $7,100, I think. With iPads, by the time you put a case on it, buy a $200 app, and buy a warranty with it, you're talking $1,500. If I break it or crack it, that adds up over time. So you can sort of look at it as: if the individual has an iPad, will they be safe with it, use it comfortably, and not require a lot of maintenance and repair? If so, it could be a really good choice.

CRAIG BURNS: Another consideration is that dedicated devices used to be typically larger and heavier and bulkier, although very durable. And then the iPad came out — or the iPhone first, with an application on it, and then the iPad emerged. The trend for the major manufacturers of dedicated devices has been to get those as small and sleek as possible. I'm not sure if it was their intention — I know they always wanted to make them smaller and lighter and thinner — but I think the effect of the incoming iPad was to push that far faster, so that they could get those devices to look a little bit sleeker and lighter weight. Because a lot of the comments were that an individual user, especially a child, tends to say, I've got this special communication device that helps me talk, whereas an iPad is more mainstream. So the parents or others looking at devices may say both of them work. The child may lean toward the iPad-type device because their friends have it. While that's not necessarily the best way to make that decision, it has an effect on kids and even some adults. Color makes a difference. I've had people not buy a device because they didn't like the color of it. For an end user, that can help make that decision. There are all sorts of things that affect that.

JOHN EFFINGER: We also have to consider the idea of locked versus unlocked. Dedicated devices tend to be locked, meaning all you can do on them is communicate. There's a lot of controversy now with Medicare and capped rental. For the most part, devices funded by Medicaid, Medicare, or private insurance are locked, and all you can do is run the communication application on that device. With iPads, unfortunately, kids can figure out the home button, and they can go explore and find out the iPad has purposes beyond just communication. You can use Guided Access to lock the device down so all they use is the communication application, but unfortunately, if they use the iPad for other purposes prior to communication, communication doesn't become the priority for them, and that can be super frustrating. I think iPads, when used appropriately and sometimes locked down to communication, with a great case, can work brilliantly. We have to remember that a lot of the apps you can buy for the iPad emulate the exact same software that you would find on a dedicated device. So there is an element of similarity if you use caution.

CRAIG BURNS: Also, the iPad opens things up for an individual. If communication is one of the solutions you're trying to provide for a specific individual, there may be other needs that individual has. So if you have an open device like an iPad, it does say, okay, if I need to work on memory — memory games — or a physical dexterity issue, or educational solutions for any particular application, then they do have access to those. If communication is the only function we are going for, then you definitely want some kind of locked-down device.

BRIAN NORTON: Right.

JOHN EFFINGER: I tend to get pretty singular about it, especially with emerging communicators, folks who are learning to communicate. It's really hard to multitask. I do see the value in it, but I've seen many times where, once individuals learn that iPads can play videos, the need for communication becomes more complicated. And to multitask back and forth between different applications is difficult as well. Now, there are some manufacturers who have developed dedicated iPad applications. AbleNet has a device called Connect It. Forbes Rehab has a device called a ProSlate, and it is an iPad in a clamshell that falls under the durable medical equipment classification. You can get any app you want on it, and they are locked, so all you can do is communicate. So for some people, getting the dedicated version of an iPad without doing that to their own iPad makes sense.

***

BRIAN NORTON: John, what you mentioned kind of leads us into the next question a little bit. You mentioned the durable medical equipment model. The next question is how are augmentative communication devices funded? Really, will Medicaid pay for an iPad? I get that question a lot from folks that I meet with. I'll throw that out to the group as well.

JOHN EFFINGER: Medicaid will pay for iPads in some states. Given your audience, I would say check with your state Medicaid to see if that's true. In Missouri, where I am, they do not pay for iPads. They will pay, however, for a dedicated version. Currently, our Medicaid system does not pay for the DME version of iPads; we do not have a vendor that we can go to to get one. Some funding sources — I've even heard of some insurance companies — are willing to purchase iPads as communication devices. But it's the exception, not the rule.

CRAIG BURNS: You've got Medicare that pays for them, Medicaid that pays for them. Those two funding sources are pretty restrictive, but there is a specific process that needs to be followed. That's why your starting point is a referral to a speech pathologist. You'd like to pick out one that has an augmentative communication background or an interest, because a lot of them aren't interested in augmentative communication. Again, that starts the process. The companies — the dedicated device manufacturers — are typically equipped with a funding department that helps you through that. They make sure you have all the paperwork, or that the professional you're working with as a family has all the paperwork and the steps that are involved. Because you have to collect insurance cards, Medicaid cards, prescriptions, and all that kind of stuff, and reports have to be done. It's not an overnight process. That's one of the other differences between a dedicated device and an iPad. You can go buy an iPad at the Apple Store right now, or wherever you want to buy it, whereas a dedicated device takes a process to go through in order to acquire it.

JOHN EFFINGER: I was just going to differentiate a little bit. With Medicare now, with capped rental, you actually rent the device for 13 months and make copayments on it, whereas with Medicaid in some states you can get the device immediately following that evaluation and that prescription. Again, like Craig said, working with vendors is awesome because they understand how the game is played and how to expedite getting funding for a piece of equipment.

WADE WINGLER: You said capped rental a couple of times, and I don’t know that term. Does that mean you have to use and rent the device for a period of time before it’s permanently funded? How does that work?

JOHN EFFINGER: The Medicare requirement is that devices are rented for 13 months, and that's what the term "capped rental" means. The individual doesn't own it until the end of that 13-month period, which has caused a lot of controversy, because before capped rental occurred, a lot of folks could get a device; they owned the device; they could call the manufacturer and get an unlock code. Typically they paid a small amount of money, $50, to get the code. And then that device, whether it's a Windows-based device, Android, whatever, is unlocked, and now they can do anything they want on that device. Under capped rental, it has to remain locked. So that's a Medicare requirement. Tricare also follows that procedure. It is locked down for 13 months, and you don't own it until the end of that 13-month period.

WADE WINGLER: So it’s like rent to own, but while you are renting you have limited access to the unlocked portion of that device, or no access.

JOHN EFFINGER: Right. And it became somewhat controversial in the ALS world, because folks want access to email. They want access to environmental control, which typically is no longer built into communication software, whereas several years ago it was integrated into the communication software. I could do environmental control. I could do email. But that is now the exception, not the rule. That's why capped rental is somewhat controversial: it pretty much just limits you to face-to-face communication.

BRIAN NORTON: That's fascinating. Craig and I have worked with a consumer before, and he was using his device to completely access his computer. In fact, we were able to go in there and actually work within the software on his particular device. I forget the type of device he was using, but he was actually able to fire up programs, maneuver the mouse, enter text, and interface with these other software programs to help him read materials and those kinds of things. It was kind of a fascinating approach that I hadn't done before, but it worked really well with these unlocked devices. I guess what I heard you say is that with the dedicated devices, for Medicare or Medicaid to pay for some of those things, it's a dedicated device, but then the user would pay to unlock it. Because most of those devices are based off of Windows 7 or some sort of operating system. Is that correct?

JOHN EFFINGER: The newer devices are Android-based for the most part. There are still a few Windows-based devices out there. But you are right. That applies to Medicaid and some insurance. A lot of insurance companies are doing capped rental as well. The rule of thumb is to contact them and find out if you can get the devices unlocked. If so, go for it.

MARK STEWART: I'm sitting here thinking about and listening to what everybody is talking about. There's a little bit of a thing that's popped up about access, in the sense of gaining access to the system. Considering your background and Craig's background, see if you think this is a valuable question. If I'm a parent coming in and I'm new to this, what do the flow and the relationship look like with the speech therapist and the value-added vendor, and how does that partnership look?

CRAIG BURNS: The speech therapist is obviously the individual that makes that connection between the device and the end-user and the family. They are the ones typically that will define or identify the language level that an individual will need to begin with and then grow from there.

JOHN EFFINGER: I think in a dream world, honestly, I would love for everyone who comes to the table to have the experience they need to have. In my experience, it's been the willing participants who implement AAC right. The SLP is required for funding. After that, raise your hand if you want to take on this journey. Because sometimes the most talented people at implementing are the classroom teacher, an OT or PT, and it just sort of diversifies from there. The vendors typically know a lot. It creates a bit of a conflict for them, but they usually have a ton of experience implementing AAC devices, especially with the evaluation piece. But because they work for a company, sometimes they are not viewed as having that direct relationship.

CRAIG BURNS: Their benefit is that they experience a wide variety of individual users, from early communicators to very long-term communicators. Another question you have to ask is, is that person ready for that device? Because the family and the professional say this might be a device they can use to communicate — and I'm using a child as an example because that's where we are starting with language development — are they ready to use that device? That's an important question that has to be addressed. When John is talking about locked devices and unlocked devices — communication, to me, is all sorts of tools. You have your face-to-face communication. We all communicate with email and text messaging and Facebook and all those avenues that the typical verbal individual uses. Here, the current mode of the funding sources is that you only have face-to-face as an option for communication, which is kind of disheartening.

JOHN EFFINGER: I think we also have to look at the history of what's evolved here. Ten years ago, you had a lot of devices. If you go and look at the HCPCS codes for funding devices, the codes are all over the place, from entry-level devices all the way through dynamic displays. Back in the day, I thought that was really important. Funding sources were really amenable to a lot of variability with different devices. But honestly, since the iPad has come out, I don't see that anymore. I think individuals can get a high-end E2510 communication device and, pretty much regardless of where they are in communication, be able to functionally learn to use it. We have to remember that with Medicare, they have that device for five years. With Medicaid, it could be three, five, or until the device is no longer able to be repaired. So the decision we make with people in evaluation has to be considered long-term. Because of that, I tend to side with more is better: find devices that can provide language for a long period of time.

BRIAN NORTON: Right.

CRAIG BURNS: John, could you kind of go into that E2510? You've mentioned it a couple of times, and some of our listeners may not understand what that means. That's a device level. Could you carry that further?

JOHN EFFINGER: In order to get devices funded, an SLP has to recommend and a physician has to prescribe a device by code. There are lots of different codes that classify communication devices, everything from E2500, which is a digitized device, all the way to E2510, and that definition is a speech-generating device that uses synthesized speech — meaning computer-generated speech — permitting multiple methods of message formulation and multiple methods of device access. Simply said, you can do anything and everything with an E2510 device. To clarify this idea of digitized: digitized is human-recorded speech loaded onto a device, versus synthesized, which is computer-generated. The idea is that if it's a synthesized device, it has unlimited capacity to communicate anything, whereas with digitized, you are restricted to what you can program onto the device. So back in the day, that was super important, but fewer and fewer devices are available on the market in those lower codes, and the majority of devices being made now are all just E2510.

BRIAN NORTON: Thanks for that clarification for everybody.

***

We have one more question today. That is our wildcard question. I’m going to give the reins over to Wade for his wildcard question today.

WADE WINGLER: This is probably my favorite part of the show because nobody knows what the question is going to be. I'm going to hit a hot button. I stand in front of a lot of groups and I visit with a lot of people in the public, and here's something that I get asked. There are a lot of people wanting to do do-it-yourself AugCom. So let's say you are dealing with the mother of a child who has autism, dealing with maybe a recent diagnosis; the kid is not as verbal as mom thinks he probably should be. She is absolutely convinced that in order to help her child with autism communicate, an iPad or a do-it-yourself approach is the only way to go because of cost. And maybe there is some lack of knowledge there and a lack of understanding of how the system works in general. But the question is, what is your advice for the mother of a child with autism who feels that the do-it-yourself approach to AugCom is the only option?

JOHN EFFINGER: That's a great question. Jeff Higginbotham out of the University at Buffalo said that the iPad was the rise of the individual, because you don't need an SLP to evaluate somebody's ability to communicate anymore. You can just go buy an app. For that purpose — and I have talked to a lot of parents about this — why not? Get something and learn about it. But at some point, things don't always work the way you want them to work. They need to explore and find out and network with people who can help them refine and zero in on what will benefit their child or individual long-term. Talk to speech pathologists who have experience with individuals with autism who are using apps and dedicated devices. There is not a one-size-fits-all anymore. It requires a lot of dialogue. So if you want to do it yourself, do it yourself, but be informed.

CRAIG BURNS: Sometimes I hear do-it-yourself as "I want to do something low-cost." There are tons of apps out there that have a single purpose. They are not all-encompassing. They don't have a wide vocabulary range. They can be great for a specific exercise. They may be free with ad pop-ups in the iPhone app market. They may be $1.99 or $0.99 or something like that. As John alluded to, fine, get started on something. When I've worked with individuals, it's good to jump into something. That's a plus of the iPad environment, where the family has one or they think they want one. So let's try some of these apps and things like that. But for a total communication need, where there are language skills to develop, or to retrieve for stroke patients, you need to have a wider range of available vocabulary at different levels. Your speech professional can help discern which of those apps works best for you, because to the layman, every app looks the same, with the exception of the couple of different language approaches that we talked about earlier. Starting somewhere is a good point, and then eventually, yes, now we know that little Johnny or Sally can use the iPad. They can touch the button and get some kind of speech or a joke told. Now we need to go further with that, especially as their educational goals progress, so that they can access the full communication capacity.

JOHN EFFINGER: If I could add one more quick point. One of the things I would also caution people about is to work with apps that are connected to funding. If you have an individual who has access to Medicaid or private insurance, start them on an app that you can find on a dedicated device if your plan is to go that route, because one of the things we don't want to do is constantly change the communication system that individuals use. So there's a handful of them. You can pick from those, start from those, be informed, and go from there.

BRIAN NORTON: Great. I just want to thank everyone for their contribution today. It was really fun chatting about augmentative communication — an area of AT that is interesting and different for me from what my daily work life is. John, I wanted to make sure that listeners get your contact information if there are any follow-up questions that they have. Do you have contact information for folks?

JOHN EFFINGER: I do. They can contact me at john.effinger@att.net.

BRIAN NORTON: And if you are looking to reach anybody else here in the room — Craig, Mark, myself, or Wade — you can email your questions to tech@eastersealscrossroads.org and we'll get those distributed. Again, for those listening, a few ways to find our show: you can search for assistive technology questions on iTunes, you can look for us on Stitcher, or you can visit our website at www.atfaqshow.com. Also, please call and chime in. We love to hear your questions, and if there are follow-up questions from today's show, we love to get those as well. Without your questions, we really don't have a show, so be a part of the show. We love to hear from you. A couple of ways to contact us and let us know what those questions are: our listener line, again, is 317-721-7124; you can find us at atfaqshow.com; you can tweet us with the hashtag #ATFAQ; or email us at tech@eastersealscrossroads.org.

MARK STEWART: John and Craig, that was awesome. I learned a lot. Take care.

CRAIG BURNS: Thanks very much.

JOHN EFFINGER: Goodbye, thank you.

WADE WINGLER: Information provided on Assistive Technology Frequently Asked Questions does not constitute a product endorsement. Our comments are not intended as recommendations, nor is our show evaluative in nature. Assistive Technology FAQ is hosted by Brian Norton; gets editorial support from Mark Stewart and Belva Smith; is produced by me, Wade Wingler; and receives support from Easter Seals Crossroads and the INDATA project. ATFAQ is a proud member of the Accessibility Channel. Find more of our shows at www.accessibilitychannel.com.
