Your weekly dose of information that keeps you up to date on the latest developments in the field of technology designed to assist people with disabilities and special needs.
Show notes:
Dr. Ariana Anderson, Chatterbaby founder | https://chatterbaby.org/
The Ultimate Guide to Resumes http://bit.ly/2A7BQyX
Alexa hacked to grasp sign language https://bbc.in/2A4TR0G
Tap to Alexa brings more accessibility features to the Echo Show https://tcrn.ch/2A2HceE
——————————
If you have an AT question, leave us a voice mail at: 317-721-7124 or email tech@eastersealscrossroads.org
Check out our web site: https://www.eastersealstech.com
Follow us on Twitter: @INDATAproject
Like us on Facebook: www.Facebook.com/INDATA
— transcript follows —
ARIANA ANDERSON: Hi, this is Ariana Anderson, and I am an assistant professor at UCLA and founder of Chatterbaby, and this is your Assistive Technology Update.
WADE WINGLER: Hi, this is Wade Wingler with the INDATA Project at Easter Seals Crossroads in Indiana with your Assistive Technology Update, a weekly dose of information that keeps you up-to-date on the latest developments in the field of technology designed to assist people with disabilities and special needs. Welcome to episode number 374 of Assistive Technology Update. It’s scheduled to be released on July 27, 2018.
Today we are going to talk about an app that listens to babies and tells us a lot about what’s happening in their world. We’ve got the ultimate guide to résumés from Glassdoor, and a couple of AT hacks for your Amazon Echo.
We hope you’ll check out our website at EasterSealsTech.com. Send us a note on Twitter: @INDATAProject. Or call our listener line. The number is 317-721-7124.
Like this show? Check out our YouTube channel at EasterSealsTech.com/YouTube.
Not exactly an AT story, but still relevant. Over at Glassdoor, they’ve created an ultimate guide to résumés, which is a very nice downloadable PDF document that talks about how to make sure your résumé is in the best shape it can be while you’re job seeking. It’s ironic: we have LinkedIn and all kinds of tools, yet it seems we still use résumés. So if you are looking for a job, you want to make sure yours is great. One of the things they do here is give you some templates to work with for a good-looking but simple résumé that does a nice job. They talk about design and formatting. They talk about fonts of no less than 11 points (thank you, especially if you have low vision), margins of at least 0.7 inches, and some of the ways to make sure white space is used effectively. A lot of résumés are submitted online, so there is a checklist. It says, before you click submit, make sure you do things like check for grammar, check for word usage errors, and make sure you’ve capitalized names and titles correctly. Make sure you have enough people look at your résumé, and review your bullet points to make sure they are concise. They go on to provide examples; in fact, 17 different templates for every kind of professional that you can just plug your information into and know that you’re going to have a nicely designed document to help you land that job. The PDF goes on to describe how to write a perfect cover letter and six impressive skills to include on your résumé. And, the important part: the top 20 companies to work for if you are interested in work-life balance, whether they are hiring now, and those kinds of things.
Again, not necessarily an assistive technology story, but I know that a lot of people in our audience are looking for jobs in the industries of education and even assistive technology. If you are in that season of your life, check out our show notes and get the ultimate résumé guide from Glassdoor.
***
[3:08] Assistive Technology Hacks for Amazon Echo***
WADE WINGLER: I have a couple of interesting stories here about people hacking the Amazon Echo for assistive technology purposes. Check this out.
SPEAKER: Alexa, what is the weather?
ALEXA: Right now in New York, it’s 29 degrees Celsius with partly sunny skies. Today’s forecast has lots of clouds with a high of 34 degrees and a low of 24 degrees.
SPEAKER: Alexa, what is five feet in meters?
ALEXA: Five feet is one point five meters.
WADE WINGLER: You are probably thinking, okay, thanks, Wade, you are telling us how Alexa works. But there is something happening in the audio that you can’t really see, and that is the hook for the story. What you are hearing is Alexa responding to American Sign Language. There is a lot of literature out there that suggests that smart speakers like the Amazon Echo and all these smart devices are really taking the world by storm. I have a lot of friends who are blind or visually impaired who say, oh my gosh, the Echo is the greatest thing ever, because I can interact with it in a very natural way and there is not a lot of need for assistive technology, at least for some of its uses.
But if you are deaf or hard of hearing, it might be more of a challenge. I was excited when I found this article on BBC.com that talks about a gentleman named Abishek Singh. He’s created an app that allows Amazon Alexa to respond to American Sign Language. He has a computer set up, and it is using some Microsoft Kinect camera technology as well as a machine learning platform called TensorFlow. That setup allows him to sign to his webcam; his computer then uses artificial intelligence to convert those visual images of sign language into something that can be spoken to Alexa.
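For readers curious about the plumbing, here is a minimal sketch of what that kind of sign-to-speech pipeline can look like in Python. It is purely illustrative and is not Singh’s actual code: the model file, the label list, and the use of an ordinary webcam instead of a Kinect are all assumptions.

```python
# Illustrative sketch of a sign-to-speech pipeline; NOT the actual project's code.
# Assumes a pre-trained Keras image classifier saved as "sign_model.h5" and a
# matching label list; both are hypothetical stand-ins.
import cv2                # webcam capture
import numpy as np
import pyttsx3            # offline text-to-speech, to "speak" to a nearby Echo
import tensorflow as tf

LABELS = ["hello", "weather", "time", "music"]  # hypothetical sign vocabulary

model = tf.keras.models.load_model("sign_model.h5")
tts = pyttsx3.init()

cap = cv2.VideoCapture(0)  # default webcam (the demo used a Kinect)
ret, frame = cap.read()
cap.release()

if ret:
    # Resize and normalize the frame to the model's expected input shape.
    x = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    probs = model.predict(x[np.newaxis, ...])[0]
    word = LABELS[int(np.argmax(probs))]
    # Speak the recognized sign aloud so the smart speaker can hear it.
    tts.say(f"Alexa, {word}")
    tts.runAndWait()
```

A real system would classify sequences of frames rather than a single snapshot, since most signs involve motion, but the overall shape (camera in, classifier in the middle, synthesized speech out) is the same.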
Now, it’s just a proof of concept, right? He’s trying to figure out if he can make this thing work and whether it can be something useful. In fact, Jeffrey Bigham, an expert in human-computer interaction at Carnegie Mellon University, calls it “a great proof of concept.” But he goes on to say a system fully capable of recognizing sign language would be hard to design, “as it requires both computer vision and language understanding that we don’t have yet.”
It’s an interesting thing, and I’m going to direct you to a link on BBC.com where you can read about this amazing technology that takes American Sign Language, does a couple of technical things, and then gets Alexa to respond. There is a YouTube video that shows the demonstration and some more technical details about what’s going on now and what might have to happen to make this a commercial product. It’s a little new, a little bit beta, but pretty cool stuff.
Also, in an article on TechCrunch.com, there is a story about a couple of interesting Amazon Echo accessibility features. The first is called Tap to Alexa. This is not the Amazon Dash button that you can order lots of laundry soap with; it’s a different thing. It’s a setting you can turn on, and it works with the Echo Show. That’s the Amazon Echo that has the video display, so you can see the weather in real time and look at your grandmother’s recipes and all that kind of stuff. What it basically does is turn your Amazon Echo Show into a touchscreen tablet. There is a feature in settings that you can turn on, and it enables some shortcuts that allow you to do things like check the news, check the weather, turn specific smart home devices on or off, and use some text input kinds of features. I think it’s sort of nascent, sort of in development now, and it’s not available everywhere. But it’s going to allow people who can’t speak, or who for assistive technology and disability reasons don’t want to talk to the Amazon Echo, to have a more physical interaction with it.
The other thing that comes out in this same story on TechCrunch.com is that Alexa now has captioning. It was introduced for US customers a few months back, and now it’s being rolled out to the rest of the world, or at least the UK, Germany, Japan, India, and some others. It’s going to give you screen-based Alexa responses right there on the Echo Show, and it also happens to work on the smaller device, the Echo Spot. There is a lot of cool stuff Amazon is doing with their Echo line of products to make it more accessible.
The real question is, how many times did I set off your device in this story? I’m always sorry about that; I always forget to call it something different. Anyway, lots of cool accessibility stuff happening with Amazon Alexa. I’m going to pop a link in our show notes so that you can go back and check those out.
***
[7:36] Interview with Ariana Anderson***
WADE WINGLER: When a baby cries, people generally react. In fact, when my kids were little, I sort of got to the point where I could sometimes tell if it was a hungry cry or a sleepy cry or an “ouch” kind of cry. If you are deaf or hard of hearing, you are probably dealing with less information than I was when I was trying to figure out what my babies’ cries meant. So today we are going to spend some time talking about an app called Chatterbaby that is all about babies and their cries, some things about disability, and some science, so that we can all understand, whether we have a disability or not, what is going on when a baby cries.
I’m so excited to have Dr. Ariana Anderson, who is a mom of four, a professor at the UCLA Institute for Neuroscience and Human Behavior, and the founder of Chatterbaby, on the line today. Dr. Anderson, thank you so much for joining us.
ARIANA ANDERSON: Thank you. It’s a pleasure to be with you today.
WADE WINGLER: We are so excited to have you on today. I know that you have been spending some time with babies crying, right? Do you introduce yourself that way at parties, as “a professor who studies babies’ cries”?
ARIANA ANDERSON: It’s definitely a niche field. I was fortunate enough to be blessed with four children, but like most parents it was a learning curve for me to understand what my children needed.
WADE WINGLER: And you have some academic background related to this. Why don’t you tell me a little bit about your journey so far and why you became interested in babies and their crying and some of your academic journey to get to that point.
ARIANA ANDERSON: My academic background is actually math and statistics. It is purely in data. When I became a mother, I was a little bit overwhelmed because none of my numbers could tell me what my baby needed. So to solve the problem, I created an algorithm to help not just myself but other parents in the future. I noticed patterns in my children, and I felt I could use my data and algorithms to see whether or not these patterns held true in other children, to help other parents who had just started their own journey.
WADE WINGLER: I didn’t expect to hear the data angle at the beginning of this. That’s an interesting kind of thing. Tell me a little bit about the baby crying part. Why do we as human beings care about the sound of a baby crying? Why is it important?
ARIANA ANDERSON: When we hear our babies cry, it hurts us. We can actually hear the pain in a baby’s voice. We have a very strong desire to try to soothe them, try to help them, try to take care of them. This doesn’t just happen listening to our own children; it happens listening to other children as well. For example, when you look at brain imaging studies, when we hear babies crying who aren’t our children, we can feel distress, we can feel pain. It’s something that is built into us: to want to help and respond and take care of babies.
WADE WINGLER: So I’m not mistaken when I say that I can tell that it’s a sleepy cry or an “ouch” cry or a hungry cry? There is something going on, and it’s not just with my kids? It’s more universal?
ARIANA ANDERSON: Absolutely. There is a physiological reason why this happens. For example, when someone is in pain, you can see it in their face. The muscles become tense; you can see the muscles becoming very tight. It’s the same tenseness in the muscles that changes the sound of a baby’s cry. The baby isn’t trying to make a different sound with their cry, but the same physiological response that happens with pain affects the vocal tract as well.
WADE WINGLER: You just turned a lightbulb on for me. When I am wincing in pain, or my wife says that I am “hangry,” the physiological manifestation of that is what would change a baby’s cry?
ARIANA ANDERSON: Yes. This is why we are able to find these patterns that exist across many different babies. We have biological responses to these different stimuli that can also change our voice. For example, we know that in adults, you can listen to someone’s voice and use algorithms to figure out, for example, whether or not they are depressed. When people are depressed, they have very monotonous voices. They don’t really have much intonation. They might talk more slowly and not use much enunciation of their words. This is something that we can tell in adults, and the same thing is true with children.
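As a rough illustration of the kind of acoustic cue Dr. Anderson describes, the sketch below estimates a recording’s pitch over time and measures how much it varies; a flat pitch track is the “monotonous voice” signal. It uses the librosa audio library on a hypothetical file, and it is a toy version of the idea, not a clinical tool.

```python
# Toy illustration: quantify how "monotone" a voice recording is by
# measuring the spread of its fundamental frequency (pitch) over time.
# "speech.wav" is a hypothetical file; this is NOT a clinical instrument.
import librosa
import numpy as np

y, sr = librosa.load("speech.wav", sr=16000)

# Estimate the fundamental frequency frame by frame (pYIN pitch tracker).
f0, voiced_flag, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)
f0 = f0[voiced_flag & np.isfinite(f0)]  # keep only voiced, valid frames

# Low variation in pitch across the recording suggests flat intonation.
print(f"mean pitch: {np.mean(f0):.1f} Hz")
print(f"pitch standard deviation: {np.std(f0):.1f} Hz (lower = more monotone)")
```

The same kind of feature extraction, applied to cry recordings, is how statistical models can compare pitch and intonation patterns across many babies.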
WADE WINGLER: That makes total sense. Again, I’m going back to my relationship with my wife. I can tell the tone in her voice and the look on her face. I guess it’s communicating more than I’m giving her credit for.
ARIANA ANDERSON: Yes. We know too that the voice can be an indicator of not just, for example, mood, but also neurological problems. For example, we know that when adults have strokes, it can affect their ability to speak and enunciate words. One of the items in the stroke scale, which we use to test whether someone has had a stroke, is whether or not they are able to enunciate words. Can they say common words like “tip-top” and “huckleberry”? You won’t be able to speak clearly when something is wrong with your brain, because your cranial nerves are likely going to be affected.
WADE WINGLER: Wow. That’s fascinating. Ariana, this project, Chatterbaby (we will get to the details of it in a bit) was originally designed as an app for folks with disabilities, deaf and hard of hearing? Is that right?
ARIANA ANDERSON: Yes. As we know, deaf parents are perfectly capable of interpreting the world with their eyes. Hearing isn’t really necessary as long as they can see the baby. But just like you and me, deaf parents like to drive cars. They like to sleep. They like to read books. Because of that, when their eyes are engaged, they don’t have a way of monitoring their baby. This actually began as a technology to basically watch children for deaf parents when the parents’ eyes are otherwise engaged. When your eyes are busy and can’t monitor your baby, that’s the problem we are trying to solve.
WADE WINGLER: I know that there are some alerting systems out there that try to provide environmental information to folks who are deaf or hard of hearing: fire alarms, and smoke detectors, and doorbells, and all that kind of stuff. But you are taking it to a different level. I understand that you are also interested in how this might apply to other disabilities. Tell me how Chatterbaby is different from just the thing that lets you know that the doorbell rang or there was some sort of noise near the baby’s crib.
ARIANA ANDERSON: As you know, in most households, especially mine, there is a lot of noise happening. There are TVs, the electronic baby toys that you can never find and that won’t be quiet, dogs barking, bells ringing. Any one of those things can trigger a false alarm if you have just some sort of acoustic decibel monitor that is trying to identify sound. What we’ve done with Chatterbaby is try to make it specific to baby sounds. This is what we hope to have in an upcoming version. It will screen out, for example, the doorbell, the dog barking, the three-year-old whining, all those things. It will just focus on whether or not a baby is crying.
We also have plans to take this technology in a different direction. There is an interesting body of infant cry research that suggests that babies who are at risk of autism cry differently than babies who do not later develop autism. These are studies done on maybe 20 babies each, babies who are 12 months or 18 months old, tracking whether or not they develop autism, whether or not they have a sibling with autism, and whether or not they cry differently. These studies have found, for example, that babies who are at risk of autism have more irregularity in their cry and a higher pitch in their cry: different signatures of a baby’s cry which can mark them as being at increased risk of autism.
One thing we want to do with this project is to be able to assess whether or not this is true, not just in smaller samples done in research labs, but in babies around the world. For this purpose, we are saving the baby cries when they are submitted to our Chatterbaby app. We are following babies for six years. Starting at age two, we are providing screening for autism with typical questionnaires asking, for example: does your child make eye contact? Is your child talking? Is your child pointing? Is your child socializing? With this, we are going to be able to identify whether or not the babies who cry differently early on are the ones who are at risk of developing autism much later.
WADE WINGLER: That’s a really novel way to seek correlation: do those early sounds result in things that show up in an autism screening?
ARIANA ANDERSON: It’s not just a way of doing an analysis for us. It’s a way for us to provide a service to people who are using our app. We know that in this country, for example, children of color get diagnosed with autism much later than children from white backgrounds. Some of these reasons are economic; some of these reasons are cultural. For example, Latino children get diagnosed with autism one to two years later. One thing we are doing is providing free screenings in the privacy of your own home, on your own cell phone. Basically, you fill out a survey, and if your child is at high risk, you’ll know. You’ll have a risk report. You can go to your doctor and say, hey, my child took the Q-CHAT and scored in the very high risk range. Should I get them tested? Should I get them screened for autism or another related disorder? This is all very private, so they don’t have to worry about, for example, a physician judging their parenting habits or anything like that, because this is really between you and a computer algorithm. There is no human. We want to keep this as private and discreet as possible, to try to help parents identify whether or not they need to go further and talk to their doctor about this.
WADE WINGLER: That’s fascinating. I’m going to come back to a privacy question, but I think I want to back up a little bit. Let’s talk about the Chatterbaby app. How does it work? What is the user experience like?
ARIANA ANDERSON: Chatterbaby is a free app that’s available on the Apple and Google Play platforms. When you download the Chatterbaby app, the first thing you do is sign a consent form agreeing to participate in our research study. This is because we are storing your data when you use the app; we want to make sure you are okay with that before you use it. When you use the app, what you do is record five seconds of audio data. Those five seconds of audio go to our servers, where we run the Chatterbaby algorithms on them. These algorithms predict whether your child is fussy, hungry, or in pain. However, we also have ways in the app for you to tag whether your baby is crying for a different reason. For example, babies cry because they are cold, because they have ear infections, because they are tired: a thousand different reasons under the sun. We have a way within the app where you can actually tag the data and say, the algorithm wasn’t right; my baby is not fussy; my baby has an ear infection, and that’s not listed here. This is a way we can improve our algorithm and expand its vocabulary, so that future parents who use it will be able to get more information about their children. You are not just helping yourself; you are helping other parents down the road.
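To make that flow concrete (a short clip goes in, class probabilities come out), here is a minimal sketch of how a cry classifier of that general shape could be trained and queried. The features, the model choice, and the file names are all placeholder assumptions; this is not the actual Chatterbaby algorithm, which has not been published in this form.

```python
# Minimal sketch of a cry classifier in the general shape described above:
# a 5-second clip in, probabilities for fussy/hungry/pain out.
# Features, model, and data are placeholders; NOT the actual Chatterbaby code.
import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier

LABELS = ["fussy", "hungry", "pain"]

def features(wav_path: str) -> np.ndarray:
    """Summarize a clip as the mean of its MFCCs (a common audio feature)."""
    y, sr = librosa.load(wav_path, sr=16000, duration=5.0)  # 5-second clip
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Hypothetical tagged training clips, as parents' tags would supply them.
train_files = ["cry_001.wav", "cry_002.wav", "cry_003.wav"]
train_labels = [0, 1, 2]  # indices into LABELS

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit([features(f) for f in train_files], train_labels)

# The "weather report": class probabilities for a new recording.
probs = clf.predict_proba([features("new_cry.wav")])[0]
for label, p in zip(LABELS, probs):
    print(f"{label}: {p:.0%}")
```

The tagging loop she describes maps naturally onto this setup: each parent correction becomes a newly labeled training example, and retraining on the growing dataset is what expands the model’s “vocabulary” over time.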
WADE WINGLER: What does the ongoing interaction with the parent look like?
ARIANA ANDERSON: When the parent uses the app, they get back information with the probability that their baby is fussy, hungry, or in pain. What this is, is kind of like a weather report. We will be able to tell you with pretty high accuracy whether your child is in pain, fussy, or hungry, and then you can use that with the other information you already have. When did your baby last eat? Does your baby have some sort of fever right now? Is there anything else that could be causing your baby distress? What this is, is just one other indicator that we are able to provide to help you make your own parenting decisions.
WADE WINGLER: Not only is the parent getting information back, but that data is then being used in a collective way so that the data gets better, right? The algorithm gets better?
ARIANA ANDERSON: Yes. Our goal is to always increase the accuracy of the algorithm and to make it better able to work for different children. For example, we know that the way a baby cries might be reflective of the mother’s language as well. For example, French and German newborn babies cry differently. We haven’t answered the question of whether or not babies in China or in Russia cry differently than babies in America. We expect there are going to be similar patterns for things like pain, but we are not sure. Whenever parents use the app, they are providing data for us to answer these important research questions about how a baby’s cry might change with the language spoken at home.
WADE WINGLER: I saw on your website, as I was doing some preshow research, that parents can donate data. Tell me what that means and why it’s important; I think we’ve covered that a little bit. And also talk to me about privacy.
ARIANA ANDERSON: We are asking parents to donate data because we want to make our algorithm better. The second reason we are asking parents to donate data is because we are not just interested in the cries of infants. We are interested in the cries of nonverbal older children. When we launched the Chatterbaby app, we had a number of parents emailing us saying, I have a six-year-old who can’t speak; I want to use Chatterbaby; will it work for them? I have a nine-year-old who can’t speak; can I use Chatterbaby for them? Right now, we don’t believe that’s possible, but we want to make it possible. We are asking parents to donate data from their older children. It could be your six-year-old who doesn’t want to do their math homework. It could be the three-year-old who is angry because you didn’t give them a popsicle. It could be your 12-year-old who just got a vaccine or some sort of invasive procedure and is crying because they are in pain right now. Any bit of data donated to us, we can use to try to extend our algorithm to nonverbal children, which is a need for so many parents that hasn’t been addressed yet.
For privacy, the first thing we do to protect people is to collect as little information as possible. When you launch Chatterbaby, we have a survey that asks people questions about their medical history and the child’s medical history. But what we don’t ask for is your full name. We don’t ask for your phone number. We don’t ask for your street address. We try to limit the amount of personal information that we collect from people, because we want them to feel comfortable using our app. We don’t want to use this as a tool for harvesting data that we can, for example, sell to a company down the road. What we are trying to use this for is medical research.
The other way we are trying to protect people’s privacy is that we are using HIPAA-compliant servers. We are using the same methods by which we collect and protect medical data in the hospital. We are not sharing the data. This data isn’t going to end up on a website down the road, because we understand people have concerns. We want everyone to feel comfortable sharing their data with us. Because of this, we are trying to maintain privacy as much as possible.
WADE WINGLER: That’s excellent. As our agency’s HIPAA officer, I love to hear that you guys are thinking about that stuff.
ARIANA ANDERSON: Another reason why we focused on data privacy is that a lot of times people aren’t comfortable participating in medical research studies because they don’t want to have people judging them or speculating. They just don’t want to feel that people are in any way criticizing any of their personal choices. Because of this, we have people participating in our study who would not otherwise be captured. According to our surveys, we now have teen moms and moms who used drugs during pregnancy, everyone giving us information that they probably wouldn’t reveal to their doctor if the doctor were sitting in front of them, staring at them with a white coat on.
WADE WINGLER: You and Chatterbaby have won some awards. Let me hear about that. Let’s brag a little bit.
ARIANA ANDERSON: We were the winner of the 2016 Code for the Mission award at UCLA. This is an award where different groups get together and try to create new technologies used for mobile health, education, and different “greater good” purposes. We were very proud to win the award because it was a reflection of our team. Our team has been largely composed of volunteers who have worked very hard to bring this project out of UCLA and into the marketplace, where we are giving it away for free. This is something that really reflects our dedication to the project and our dedication to the cause of assistive technology.
WADE WINGLER: As you think about the future of Chatterbaby (and I know you have a lot of projects going on), what’s in your crystal ball related to Chatterbaby?
ARIANA ANDERSON: One of the goals of Chatterbaby is to be able to reach people in areas underserved for autism and disability screenings. For example, we know that people in countries across the world may have fewer resources to identify whether or not their child is at risk of some sort of developmental disorder. What we’ve created here is a free way for parents to join our study. Right away, they find out the most likely reason their baby is crying. Then down the road, we are able to provide free services to them, such as autism screenings that go from age two to age six. This allows us to identify whether or not their child is at high risk so they can speak to their doctor. This is just a way of bringing information to parents so that parents have some sort of leverage, some sort of tool. Should they be concerned about their child’s behavior? If so, do they need to go talk to their doctor? We are able to provide this all for free because of technology, because we are able to automate the entire process. We know when a child turns one; we can send them an email; we can do the screening. We are really able to reach out around the world and extend our research boundaries beyond just the two or three or four blocks around UCLA, to go everywhere, to every family, and try to help people beyond our immediate doorstep.
WADE WINGLER: I know that people focus on projects because of the impact they’re going to make. Tell me a story about a life that’s been changed or a very human impact related to Chatterbaby.
ARIANA ANDERSON: I think one of the biggest reflections of Chatterbaby is the response we get when we are unfortunately not able to serve everyone at once. Since we’ve launched the app, we’ve had very high demand. At times, we haven’t been able to make the app available to everyone just because the traffic is so high. We will have parents emailing us saying, “My son really relies on this. When is it going to be back?” “I use this app all the time. I can’t make it work. What’s happening right now?” We’ve been trying our best to increase our ability to reach other people with this app, because we do want to make this free service available to everyone.
WADE WINGLER: As we wrap up the interview, how should people reach out to you or download the app? How can they learn more about your journey with Chatterbaby?
ARIANA ANDERSON: The Chatterbaby app is available both on the Apple iPhone store and on Google Play. You can also visit our website at Chatterbaby.org, which covers everything we are doing at UCLA with our collaborators at the institution. It lists our project updates and where we want to go with this. We also provide free screenings for autism on Chatterbaby.org, where you will be able to identify whether or not your child is at high risk of autism.
WADE WINGLER: Dr. Ariana Anderson is a mother of four, a professor at the UCLA Institute for Neuroscience and Human Behavior, the founder of Chatterbaby, and has been our delightful guest today. Thank you so much for being with us.
ARIANA ANDERSON: Thank you so much. It’s been a pleasure.
WADE WINGLER: Do you have a question about assistive technology? Do you have a suggestion for someone we should interview on Assistive Technology Update? Call our listener line at 317-721-7124, shoot us a note on Twitter @INDATAProject, or check us out on Facebook. Looking for a transcript or show notes from today’s show? Head on over to www.EasterSealsTech.com. Assistive Technology Update is a proud member of the Accessibility Channel. Find other shows like this, plus much more, at AccessibilityChannel.com. The opinions expressed by our guests are their own and may or may not reflect those of the INDATA Project, Easter Seals Crossroads, or any of our supporting partners. That was your Assistive Technology Update. I’m Wade Wingler with the INDATA Project at Easter Seals Crossroads in Indiana.
***Transcript provided by TJ Cortopassi. For requests and inquiries, contact tjcortopassi@gmail.com***