Podcast: Play in new window | Download
Your weekly dose of information that keeps you up to date on the latest developments in the field of technology designed to assist people with disabilities and special needs.
——————————
If you have an AT question, leave us a voice mail at: 317-721-7124 or email tech@eastersealscrossroads.org
Check out our web site: https://www.eastersealstech.com
Follow us on Twitter: @INDATAproject
Like us on Facebook: www.Facebook.com/INDATA

—————-Transcript Starts Here—————————-
Deann O’Lenick:
This is Deann O’Lenick, and I am the CEO of SymbolSpeak, producers of Symbol-It, and this is your Assistive Technology Update.
Josh Anderson:
Hello, and welcome to your Assistive Technology Update, a weekly dose of information that keeps you up-to-date on the latest developments in the field of technology designed to assist individuals with disabilities and special needs. I’m your host, Josh Anderson, with the INDATA project at Easterseals Crossroads in beautiful Indianapolis, Indiana. Welcome to episode 464 of Assistive Technology Update. It’s scheduled to be released on April 17th, 2020. On today’s show, we’re excited to have Deann O’Lenick on from SymbolSpeak to tell us all about the Symbol-It app.
Josh Anderson:
We here at INDATA and Easterseals Crossroads are proud to announce our Accessibility for Web Professionals 2020 webinar. During this, you can join renowned web accessibility professional, Dennis Lembree, for a full day of training. This webinar training begins with a background on disability, guidelines, and the law. Many techniques for designing and developing an accessible website are then explained. Basic through advanced levels are covered. The main topics include content structure, images, forms, tables, CSS, and ARIA. Techniques on writing for accessibility and testing for accessibility are also covered. If you are involved in web design or development, don’t miss this wealth of practical knowledge. Now this is a pretty in-depth training for web professionals, but if you are a web professional and you’re interested in making websites more accessible, you should definitely join us. This will be held on May 13th, 2020 from 11:00 am to 4:00 pm. And we will put a link to the website with more information over in our show notes.
Josh Anderson:
AAC is a big part of assistive technology. There are a lot of devices and apps out there to allow individuals to use symbols and then have that information translated into speech so they can communicate with others. But I’d never seen an app that worked the other way around. That is, until now. Our guest on the show today is Deann O’Lenick, Ph.D, the CEO of SymbolSpeak. And she’s here to talk about the Symbol-It app. Deann, welcome to the show.
Deann O’Lenick:
Thank you. I’m happy to be here.
Josh Anderson:
And we’re happy to talk about the new technology. But before we get into that, could you tell our listeners a little bit about yourself?
Deann O’Lenick:
I’d be happy to. My name is Deann O’Lenick. I have a Ph.D in early child development and education, and a masters in speech pathology, in addition to an MBA with an emphasis in healthcare. I started out my graduate program with an interest in augmentative communication that grew out of one of my first clinical experiences as an undergrad. I was fortunate enough to work with a child with an augmentative communication device, actually a HandiVoice 100, one of the very first voice output communication systems. So that gives you a sense of my age. But I’ve been fascinated by augmentative communication, and have worked with children and adults over the course of my career. From that background, I looked back at our progress and wanted to help inform and guide the direction that we’re going in to improve language outcomes when that language is through augmentative communication. So it’s from that background that Symbol-It was born.
Josh Anderson:
And you did a perfect lead-in there. Tell us about the Symbol-It app.
Deann O’Lenick:
Symbol-It is an app designed for iOS. We have plans to come out in an Android format in the not too distant future. But for right now, we’re on iOS, providing a way to make language visible in real time. As I finished my Ph.D just a few years ago, as an early childhood specialist, I looked back at the development of language through augmentative communication through a different lens. As I looked at how we were supporting the development of language on voice output communication systems, or speech-generating devices, I was saddened by the statistics that indicated how poor our outcomes are when that language is through augmentative communication. Erickson and Geist, in 2016, published a study that essentially said approximately 80% of those who speak using augmentative communication will only speak at the single word level.
Deann O’Lenick:
And that statistic rang true for me in such a sad way. And as I looked at language development, again from an educator and a developmentalist perspective, what I saw was that we had made augmentative communication be something other than language development. It had become about pushing the button, hitting the switch, finding the symbol, creating a message that somebody else told the symbol speaker that they needed to create. All of these other things other than typical language development. And as a result, I began working on a theory of language development through augmentative communication that really comes in alongside aided language stimulation, but looks at it from the perspective of second language learning and first language learning. Meaning the element that we’re missing is complete immersion in the symbol system being developed. We have not, as a profession, talked about complete immersion. Everyone in the symbol speaker’s life, speaking to them in their symbol system throughout the day, throughout the environments, throughout contexts. And dual-symbol immersion theory was born out of that research, that development of a perspective for language development.
Deann O’Lenick:
So as the goal became immersion for parents, and grandparents, and educators, and educational assistants, and therapists to be able to speak to the symbol speaker in their symbol system at home, at school, on the playground, at the park, at grandma’s house, at the store, everywhere, then the question for me was how do we truly do that? I can do that when I understand the child, or the symbol speaker’s augmentative communication system. I can do that when I have access to their speech-generating device for their low tech system. But what about those who don’t have a proficiency with that? What about peers who may not understand the symbols and the symbol set? What about grandma and grandpa, or aunt and uncle, that you only see occasionally? How do they speak in symbols, and in symbols in a way that’s more advanced than what Erickson and Geist’s study told us? We’re getting single words. We need to speak in complete sentences.
Deann O’Lenick:
And if language development through augmentative communication truly follows along with language development for verbal language, language development for sign language, and language development for second languages, or third and fourth, for that matter, if language is developed in that same way, then our symbol speakers need to be spoken to in complete sentences, with complete grammar, with complete sentence structure. And that is not always possible. That’s actually very difficult in real time. So it was out of that need that Symbol-It was born.
Josh Anderson:
And how does Symbol-It work?
Deann O’Lenick:
Symbol-It provides a speech to picture symbol translation in real time. SymbolSpeak, our company, has contracts with the major players in symbols. So we have contracts to license the symbols from Prentke Romich, and Semantic Compaction for Unity, and LAMP Words For Life. We have contracts with Boardmaker for their symbols. We have a contract with Marty’s Symbols for their symbols. And we’re in negotiation for SymbolStix right now. So under these licensing agreements for their intellectual property, we’re able to translate speech into the picture symbols and then display that in sequential language structure on the app, or on the iPad, at this time.
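Symbol-It’s internals aren’t public, but the idea Deann describes here, recognized speech mapped word by word onto licensed picture symbols and displayed in sentence order, can be sketched conceptually. Everything in this sketch is an illustrative assumption: the `symbol_dict` lookup table and the bracketed plain-text fallback are placeholders, not the app’s actual design.

```python
def symbolize(utterance, symbol_dict):
    """Map each recognized word to a picture-symbol identifier, in the
    order the words were spoken, so the listener sees the whole sentence
    as symbols rather than a single word at a time."""
    symbols = []
    for word in utterance.lower().split():
        token = word.strip(".,!?")
        # Fall back to a plain-text tile when no symbol exists for the word
        symbols.append(symbol_dict.get(token, f"[{token}]"))
    return symbols

# A toy symbol set; the real app licenses sets like PCS or SymbolStix
demo_symbols = {"i": "SYM_I", "want": "SYM_WANT", "ball": "SYM_BALL"}
print(symbolize("I want the ball", demo_symbols))
# → ['SYM_I', 'SYM_WANT', '[the]', 'SYM_BALL']
```

The key design point from the interview is that the output is a complete, sequential sentence rather than a single symbol, which is what makes real-time immersion possible.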
Josh Anderson:
And you said I can actually change the symbol system to whatever the symbol speaker’s using. Is that correct?
Deann O’Lenick:
Absolutely, absolutely. So we have a current model that is in process, and that is to provide a free version, a complete free version, of Symbol-It for our customers. And we’ve done that because this is a brand new solution. Prior to Symbol-It, there was no way to be able to translate speech into picture symbols in real time on a mobile device. And with the advent of that, what we found was consumers didn’t know what to do with it. We had not ever thought about making our language visible, as we would for, say, a sign language translation. I have a vision in my head of speakers on television and a little box down in the corner with a sign language translation happening in real time.
Deann O’Lenick:
That vision did not have a correlate or a counterpart when it came to picture symbols. I could print out my picture symbols and have them as a static display in a sentence. I could preplan my messages. But I didn’t have a way to be able to provide a comprehensive sentence or a comprehensive message in real time. So based on that, we wanted customers to have a chance to experience Symbol-It, to experience the freedom of being able to speak your entire sentence in the moment. For those who had not ever even used a speech-generating device to talk to their symbol speaker, we wanted them to have the ability to be able to do so.
Deann O’Lenick:
So our initial model was to provide Symbol-It free for use. That model is shifting to a freemium model, and that shift will be happening in the next couple of weeks. And with that, there will be a free trial period for people to be able to experience and to be able to explore the different symbol sets. So you can try out the symbols for LAMP Words For Life under the Prentke Romich and Semantic Compaction setting. You can try out the PCS High Contrast Symbols from Boardmaker. And you can switch between those.
Deann O’Lenick:
As the product moves from the freemium time period into a selection, the customer will have two choices. One choice is to select one symbol set, which we feel would be useful for families because they know the symbol set of their symbol speaker. The second choice, Symbol-It Pro, would give the customer access to all of the symbol sets that we have available, allowing therapists or educators to switch between symbol sets immediately, without delay. So if they’re speaking to a symbol speaker who has, say, Snap + Core First on their speech-generating device, then they would provide the symbols for PCS Boardmaker and the regular classic symbols, and those would align with that. But if their next symbol speaker was using TouchChat and had SymbolStix on there, then they could switch to that symbol set. So in the Pro version, the customer will have the ability to align their symbol set almost immediately with their symbol speaker.
Josh Anderson:
Nice. And I can see how that can be really helpful, like you said, for the professional that has to work with the different folks. But I really like how easy you made it when you brought up grandparents or other family members. I mean, you open the app, you tap it once, talk to it, and the symbols just appear. So even for somebody who’s not real tech-savvy, or kind of a novice, or really has no experience, very easy to be able to communicate with that symbol speaker very quickly.
Deann O’Lenick:
Absolutely. That was my goal in designing Symbol-It. It was for there to be no ramp-up time, no delay, no learning that had to happen in order that we can truly immerse the learner in the symbol system that they’re learning. The goal is not to replace speaking on the symbol speaker’s personal speech-generating device, but to give us an additional way to provide more language without delay, but in more contexts. And that, for me, aligns with language development. Language development for verbal language starts actually in utero. And when that baby is born, we’re speaking to that baby in complete sentences. We’re presuming that that infant is going to be able to develop verbal language. And we persist with that. We continue to speak in complete sentences, not reducing the complexity of what we’re saying, but helping the child to understand messages. I wanted us to be able to represent that same verbal language in picture symbols so that we can improve those language outcomes.
Deann O’Lenick:
There’s nothing about being a symbol speaker that means you can’t develop language. And with that perspective, we will develop language to the extent that we provide a symbol system that’s accessible. And that’s what Symbol-It provides, in addition to whatever the symbol speaker’s personal system is.
Josh Anderson:
Nice. And I know, in just doing a little bit of reading and stuff, it said that it also uses the Modified Fitzgerald color-coding system, but I have absolutely no idea what that is. Can you tell me about that?
Deann O’Lenick:
Yeah. So there are several different color-coding systems for parts of speech. What that means is that the picture symbols are going to have a color to them that’s assigned based on what part of speech that symbol represents. It’s a way to be able to enhance the visual experience. And it matches with visual cues on speech-generating systems.
Deann O’Lenick:
The Modified Fitzgerald color-coding system provides that nouns are going to be in yellow. The verbs are going to show up in green. And there’s color coding for each part of speech. So as you look at the picture symbols being displayed, you can see the sentence structure, and you can see that highlighted by color. Modified Fitzgerald is the specific color-coding system. But what it means is that the parts of speech in the sentence are going to show up in different distinct colors so that they help to differentiate the symbols.
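The mapping Deann describes is, at bottom, a fixed assignment of colors to parts of speech. A minimal sketch of that idea follows; only the noun/yellow and verb/green pairings come from the interview itself, and the remaining entries are illustrative placeholders, not the official Modified Fitzgerald Key.

```python
# Part-of-speech to color mapping in the style of the Modified
# Fitzgerald Key. NOUN and VERB colors are as stated in the interview;
# the remaining pairings are illustrative only.
FITZGERALD_COLORS = {
    "NOUN": "yellow",
    "VERB": "green",
    "ADJECTIVE": "blue",
    "PRONOUN": "purple",
    "PREPOSITION": "pink",
}

def color_for(part_of_speech):
    # Unlisted parts of speech fall back to a neutral tile color
    return FITZGERALD_COLORS.get(part_of_speech, "white")

print(color_for("NOUN"))  # → yellow
```

Because every noun tile shares one color and every verb tile another, a reader can see the grammatical shape of the sentence at a glance, which is the point Deann makes above.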
Josh Anderson:
Deann, can you tell me a story about someone that you’ve worked with that this has helped?
Deann O’Lenick:
I love that question. Yes. So I have a friend, a little friend who comes over from Ireland for bursts of therapy. This little guy is hyperlexic, meaning he can read well beyond his years and his experience. He’s able to read the messages. He has had a speech-generating device for numerous years, even though he’s only six years old. He’s already had the ability to be able to create his own messages. But he doesn’t speak in complete sentences when he’s producing his own messages.
Deann O’Lenick:
So his last time that he was over for a burst of intervention, I was using Symbol-It in addition to his speech-generating device. And we were looking at a wordless book that doesn’t have a written story to it. The story is all in pictures. And for him, I told the story using Symbol-It, providing him with ideas about how the story could progress. And as I did that, I could see him looking from the story to my iPad, and back to the story, as we read through the entire book.
Deann O’Lenick:
And after that input, I asked him what he wanted to change about the story. And he responded with a complete sentence. The story was the lion and the mouse. He responded with a complete sentence that the mouse did not like the lion, which was much more language than I got from him when I spoke on his speech-generating device, or even when I asked for additional information. Because he got complete sentences as a model, with everything I said symboled in real time, he developed the expectation of speaking in complete sentences: he saw and heard complete sentences, and was able to align with the language that was spoken to him.
Deann O’Lenick:
He continued that with his family, and with his nanny, and with his sibling during the time that we were working together, as well as during the time that the family was here in the United States, that I had a chance to be able to see it. So he expected to be spoken to in picture symbols. And as he was spoken to in picture symbols, he spoke in more complete sentences, providing all of the language. The comment, the description, the directing attention to himself. Language is so much more than wants and needs. And when I can speak to him in complete sentences, I’m using all of the different meanings of communication, all of the different communicative functions with him, and that in turn provides him with a model that language is more than making requests for wants and needs. Language is about sharing ideas, sharing thoughts and feelings, protesting, commenting, asking, answering. And so Symbol-It provided me a way to be able to model that complex language with him. And in turn, he used that and continues to use that during interactions.
Josh Anderson:
So Deann, what’s next?
Deann O’Lenick:
What is next? That’s a great question. I actually love that. Dual-symbol immersion comes from Hear MY Voice – Language through AAC, a nonprofit that I founded right after I finished my Ph.D. Its goal is to provide education and training on dual-symbol immersion, as well as research to guide what we need to do in order to improve language outcomes. So when dual-symbol immersion is the goal, then Symbol-It provides one way for us to be able to speak in real time.
Deann O’Lenick:
Another roadblock that I see for immersion is personal proficiency with the symbol speaker’s system. And so that will be the next task that I [inaudible 00:23:46]. How do you and I develop our proficiency with the symbol speaker’s system? How do we move through our developmental progression so that we’re proficient and able to speak on their system, as well as being able to use Symbol-It in real time? So I think that’s probably what my next challenge will be, is to figure out how to address that.
Deann O’Lenick:
But the big goal for me is language outcomes for symbol speakers. They deserve the opportunity to develop language to the fullest extent, and they deserve the opportunity to communicate at the highest level possible. So that language outcome, that language outcome being equal to what you and I have developed, that’s my hope and dream for all symbol speakers.
Josh Anderson:
Well, I think that’s a great hope and a great dream. Deann, if our folks want to find out more about SymbolSpeak, or maybe about Language Through AAC, what’s the best way for them to do that?
Deann O’Lenick:
There are multiple ways. One way is on the internet at our websites. So the website for SymbolSpeak is www.symbolspeak.co. It truly is dot-co. It is a European foundation. And then for Hear MY Voice – Language Through AAC, the nonprofit, the website is www.languagethroughaac.org. Our social media has those same names. So it’s SymbolSpeak on Facebook, Twitter, and Instagram. And Language Through AAC is also on those social media platforms.
Josh Anderson:
Perfect. We’ll put links to that all over in our show notes. Thank you so much for coming on the show today and telling us all about Symbol-It, and just how it can really help folks, who use AAC devices, with their language development.
Deann O’Lenick:
I appreciate the invitation, and appreciate the platform to be able to improve those language outcomes for symbol speakers. So thank you so much.
Josh Anderson:
Do you have a question about assistive technology? Do you have a suggestion for someone we should interview on Assistive Technology Update? If you do, call our listener line at (317) 721-7124. Shoot us a note on Twitter at @INDATAproject, or check us out on Facebook. Are you looking for a transcript or show notes? Head on over to our website at www.eastersealstech.com. Assistive Technology Update is a proud member of the Accessibility Channel. For more shows like this, plus so much more, head over to accessibilitychannel.com. The views expressed by our guests are not necessarily those of the host or the INDATA Project. This has been your Assistive Technology Update. I’m Josh Anderson, with the INDATA project at Easterseals Crossroads in Indianapolis, Indiana. Thank you so much for listening, and we’ll see you next time.