Your weekly dose of information that keeps you up to date on the latest developments in the field of technology designed to assist people with disabilities and special needs.
Show notes:
Google’s Project Tango with George Takahashi & Dan Andersen, Purdue University
Wunderlist’s Watch app is now better than ever, and its iOS app gets enhanced accessibility — AppAdvice http://buff.ly/1Q9GT38
FCC Chairman Wheeler Honors Innovators in Accessibility Communications http://buff.ly/1M7H8oZ
RESNA Position Paper on the Use of Evacuation Chairs: http://buff.ly/1Q9CFIK
Apps help deaf cinema-goers hear movies http://buff.ly/1Q9BWXX
App: Tom Tap Speak – www.BridgingApps.org
——————————
Listen 24/7 at www.AssistiveTechnologyRadio.com
If you have an AT question, leave us a voice mail at: 317-721-7124 or email tech@eastersealscrossroads.org
Check out our web site: https://www.eastersealstech.com
Follow us on Twitter: @INDATAproject
Like us on Facebook: www.Facebook.com/INDATA
——-transcript follows ——
DAN ANDERSEN: Hi, this is Dan Andersen, and I’m a PhD student at Purdue University.
GEORGE TAKAHASHI: Hi, this is George Takahashi. I’m the technical lead of the Envision Center at Purdue University, and this is your Assistive Technology Update.
WADE WINGLER: Hi, this is Wade Wingler with the INDATA Project at Easter Seals Crossroads in Indiana with your Assistive Technology Update, a weekly dose of information that keeps you up-to-date on the latest developments in the field of technology designed to assist people with disabilities and special needs.
Welcome to episode number 210 of Assistive Technology Update. It’s scheduled to be released on June 5 of 2015.
Today we talk with George Takahashi and Dan Andersen from Purdue University, and the discussion is all about Google’s Project Tango and what that might mean for folks who are blind or visually impaired. Also, the FCC has issued some accessibility awards; RESNA has a position paper on the use of evacuation chairs; and we have a couple of apps, one called Tom Tap Speak and another to help people who are deaf or hard of hearing hear better at the movies.
We hope you’ll check out our website at www.eastersealstech.com. Give us a call or a comment or a question on our listener line. That number is 317-721-7124. Or shoot us a note on Twitter @INDATAproject.
***
Do you wonder if there are specific computer monitors that are better for avoiding seizures? Are there apps that will read out loud for folks with dyslexia? What about assistive technology low-interest loans? Is there a way to block bad web content for your kids? Who are Cook and Hussey? And what do you think about open-source assistive technology hardware? Well, these are the questions that we address next Monday on the seventh episode of Assistive Technology Frequently Asked Questions, or ATFAQ. If you haven’t heard our other show, it is becoming very popular very quickly. It’s a panel show, hosted by Brian Norton, where I and a few others answer questions that we get from people about assistive technology.
The cool thing about the show is we provide very in-the-trenches, practical information, and we respond to your questions. So if you’re interested, check out ATFAQshow.com, or just tweet with the hashtag #ATFAQ. We’ll get your question and you may be on our show. Check in on Monday. Those questions will be addressed then.
***
A quick two-part teaser: Wunderlist has made their Apple Watch app more accessible for VoiceOver, and in two weeks we’re going to have an interview with Dave Woodbridge from Australia talking a lot about Apple Watch accessibility and getting down to the nitty-gritty. So in two weeks, check for Dave Woodbridge’s interview, and for now, check our show notes for information on Wunderlist’s accessibility on the Apple Watch.
***
I’m always interested when there’s innovation in the field of assistive technology, and apparently so is the FCC. Here in the US, the Federal Communications Commission gives out awards called the Awards for Advancements in Accessibility, or the Chairman’s AAA for short. So Tom Wheeler is the chairman of the FCC, and he announced the winners in seven different categories this year. I’m not going to go into detail on them, because I will link in the show notes where you can read more about this, but here they are quickly. In the category of augmented reality: BlindSquare. In the category of CAPTCHA alternatives: Google with No CAPTCHA reCAPTCHA. In the Internet of Things: Convo Lights. In the category of real-time text: Beam Messenger. In the category of teleconferencing: AT&T Video Meetings with BlueJeans. In the category of video description: Comcast’s Talking Guide. And then lastly, in the miscellaneous category: OpenAIR by Knowbility.
So as I read through these, there’s a lot of really cool innovation, a lot of really cool technology. I would encourage you to check our show notes and you’ll find a link to the press release there and you can learn all kinds of things about these innovative, award-winning assistive technologies. Check our show notes. I’ll put the link there.
***
I was recently with a group of students who asked a really good question: what happens when there’s a fire and someone in a wheelchair can’t get to the elevator, or it’s not available due to smoke? And I said, well, there are these things called emergency evacuation chairs, and I started talking about them a little bit. It piqued my interest, and as it happens, RESNA, the Rehabilitation Engineering and Assistive Technology Society of North America, has recently released a position paper on the use of evacuation chairs. I haven’t seen something like this before. It talks about who the stakeholders are and gives a little bit of background information, and then it goes into the different kinds of equipment that are often used: carry-type devices, track-type devices, and sled-type devices, which really kind of let someone slide down the stairs. They make some very specific recommendations about these. They say that people who can maintain a seated position should be using a track-type evacuation chair, and that when selecting those kinds of chairs, preference should be given to devices which are compliant with agency standards and those kinds of things.
There are several different recommendations here, and then some additional suggested practices that will help an organization understand how to choose one of these chairs and how to use one on the stairs; it even talks about training and keeping track of where the chairs are and how many you need and all that kind of stuff. It’s an interesting paper. I’m going to pop a link in the show notes straight over to the PDF, which will take you right to the position paper on the use of evacuation chairs. Then you can be a little more informed about how these chairs work, what the best practices are, and what RESNA recommends. Check the link in our show notes.
***
Sometimes I find some interesting things at gizmag.com; I mean, after all, it’s all about the gizmos, right? This is about a group from Germany called the Fraunhofer Institute for Digital Media Technology. They’ve created a couple of apps called Cinema Connect and Mobile Connect, and what they’re doing is pretty interesting. They are designed to be used in movie theaters by people who are hard of hearing, and what they do is stream the audio from the stage, for a play, or the screen, for a movie, to the user’s earphone-equipped smartphone. So if you were sitting in one of these theaters with your iPhone or your Android device, you would load one of these apps, and it would stream the audio directly to your device; you can then plug in your earphones, or hook it up via Bluetooth to your hearing aid if you have one of those, and adjust the audio. It not only allows you to adjust the volume of the sound coming in, but it also allows you to adjust the bass and the treble and the softness and the deepness of the sound and all that kind of stuff.
An interesting technology. We think a lot about captioning, but this is a nice, handy way for people to get the audio boosted to a higher volume, with a little more control for the individual. I’m going to pop a link in the show notes over to gizmag.com, and you can learn more about this soon-to-be-available set of apps from the Fraunhofer Institute. Check our show notes.
***
Each week, one of our partners tells us what’s happening in the ever-changing world of apps, so here’s an app worth mentioning.
AMY BARRY: This is Amy Barry with BridgingApps, and this is an app worth mentioning. This week’s app is called Tom Tap Speak. Tom Tap Speak is specifically designed in collaboration with parents and speech therapists to help people with communication impairments. It is picture-based and has a friendly text-to-speech function.
It was lovingly conceived by a father to help his son, who was diagnosed with autism, communicate with the people around him. Tom Tap Speak is a simple app with a good foundation of words to help nonverbal students communicate. If you need an AAC app and cannot afford to purchase the more expensive AAC apps, Tom Tap Speak can do the job very nicely.
We used this app in classes with students who are either nonverbal or difficult to understand. As many schools face budget constraints, this app is free and allows use with more than one student. It has been particularly helpful for some of our students who are overwhelmed by the more involved AAC apps; it reduces their frustration level and is easier to manipulate. Folders are color-coded based on nouns, verbs, adjectives, and need-to-know words.
The user is presented with 12 folders. Once a folder is tapped, it opens up icons with the word. On the left-hand side of the center box, students are able to scroll through the other folders and add words to further communicate their needs and wants. Tom Tap Speak is a free app in the iTunes Store and is compatible with iOS devices.
For more information on this app and others like it, visit BridgingApps.org.
***
WADE WINGLER: I try to keep an eye on what’s happening in the world of assistive technology, and I’m always fascinated when Google does something new that might have an impact on the lives of people with disabilities. Most of you know that I’m physically situated right here in central Indiana, and Purdue University is just up the road, and we work with Purdue on a number of projects. But today I have George Takahashi and Dan Andersen, both at Purdue University, and we’re going to talk about a thing called Project Tango which has to do with a Google tablet and some 3-D modeling and some really cool kind of stuff, and I’m excited to learn more about it today. But before we jump into the technical nerdy stuff, George, Dan, are you there?
GEORGE TAKAHASHI: Yes, we are here.
WADE WINGLER: Good. Thank you so much for taking some time out of your day to talk with us a little bit. George, I understand that you’re the technical lead at the Envision Center there at Purdue. And Dan, you are a first-year PhD student. I would love for you guys to tell me a little bit about the Envision Center there at Purdue and what you guys do as part of your full-time work. And then we’ll get into Project Tango.
GEORGE TAKAHASHI: All right. So like you stated, the Envision Center is a research facility that was founded 11 years ago. It was originally dedicated to doing data visualization and scientific visualization, so visualization has been kind of the evolving technology that we’ve been working around, and most of the devices we use have to do with visualization. Now, our relationship with assistive technology is pretty new, but we will get into that in a little bit. My role here at the Envision Center has been as a staff researcher as well as a mentor for all of the students that we coordinate for individual projects as well as grants.
WADE WINGLER: Excellent. So let’s talk a little bit about Project Tango. So this is a Google project. I’d love to know kind of what the primary purpose is for it. Why don’t we just start with that?
DAN ANDERSEN: Google Project Tango is kind of an effort by Google to add additional sensors to the smartphones that we have. The idea of it is: what would it be like if we could add new abilities to our phones that let us capture more about the world around us? In the same way that the GPS sensors and things like that in our devices opened up a lot of opportunities for software developers, these new technologies open up opportunities when there are new ways of capturing information about the environment around you.
So the Project Tango tablet is a development kit that Google is putting out. What it does is it uses infrared sensors to capture depth information about the surroundings. So it’s like a camera in that you’re getting the normal color of what you’re pointing it at, but you’re also getting information about how far away each pixel is. That can give you some 3-D information for modeling purposes.
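[Editor’s note: a minimal sketch of the depth-to-geometry step Dan describes, assuming an idealized pinhole camera model. The intrinsic values fx, fy, cx, cy are illustrative placeholders, not the Tango tablet’s actual calibration.]

```python
import numpy as np

def depth_to_points(depth, fx=520.0, fy=520.0, cx=320.0, cy=240.0):
    """Back-project an HxW depth image (meters) into an Nx3 point cloud."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (us - cx) * z / fx   # pixel column -> left/right offset in meters
    y = (vs - cy) * z / fy   # pixel row -> up/down offset in meters
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # drop pixels with no depth reading

# A flat wall two meters away yields points that all sit at z = 2.0.
wall = depth_to_points(np.full((480, 640), 2.0))
```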
WADE WINGLER: So help me make sure I’m understanding it. We’re talking about souping up a smartphone in such a way that it can very quickly, and in a live way, create a 3-D model of a room that I’m in, or a path that I’m walking down, or maybe an object I’m holding?
DAN ANDERSEN: Exactly. Some of the applications that developers have built with this kind of technology use it for quick 3-D scanning. So maybe you can put an object on the desk and move this tablet around it, and you can acquire some geometry of that environment to get a quick 3-D model of something. What we want to do is find a way of using this information for navigation purposes for people who are visually impaired.
WADE WINGLER: So we’re talking about how this might be useful for folks who are blind or visually impaired. Tell me how that 3-D modeling might come into play. How is this an assistive technology device?
DAN ANDERSEN: The general idea that we are working with is thinking about echolocation and being able to take information about the environment and convert that into sound, the way that happens with several species in nature and even, to some degree, with people who are blind and able to echolocate. The idea is that if you can have knowledge of the environment in your phone, you can use that to tell the user something that’s useful.
So what this does is it takes the information from this depth camera, and it puts it all together frame by frame until you have a complete model of, for example, a room or a hallway, and then it takes that information, and converts it into sound. It kind of simulates 3-D audio echolocation, which can give the impression of certain obstacles being near or far in relation to where the user is. So the idea would be to have this tablet maybe worn on the chest or somehow attached to the person who is using it, and as they walk around, the system is collecting information about the world around them and is converting that into sound that they are hearing live so that they are getting some information in a sort of simulated echolocation about what’s around them.
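[Editor’s note: a toy sketch of the frame-by-frame accumulation Dan describes. The device pose for each frame, a rotation R and translation t, is assumed to come from the tablet’s motion tracking; nothing here models Tango’s actual API.]

```python
import numpy as np

world_model = []   # growing list of point sets in world coordinates

def integrate_frame(points_cam, R, t):
    """Move Nx3 camera-frame points into the world frame and keep them."""
    points_world = points_cam @ R.T + t
    world_model.append(points_world)

# Each frame extends the model, so geometry the camera saw earlier stays
# known even once the camera is pointed elsewhere.
pts = np.array([[0.0, 0.0, 2.0]])                      # one point, 2 m ahead
integrate_frame(pts, np.eye(3), np.array([1.0, 0.0, 0.0]))
```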
WADE WINGLER: Well, you know, years ago the advent of the GPS system revolutionized the way that people who are blind or visually impaired navigate the community and the neighborhood and things like that. But GPS doesn’t work inside. And I also know that there is some research happening right now on using fixed-position beacons to help with indoor navigation. This sounds like a technology that might be ripe for pairing with those other kinds of technology. Are you thinking that way?
DAN ANDERSEN: Definitely, I think so. One of the pushes, or motivations, for Project Tango is indoor mapping. Especially when you think about the fact that this is a Google project: Google has everything for mapping the outdoors, with Google Earth, Google Maps, and Google Street View, but indoors they don’t really have too much. So the idea is that if you can collect information using infrared sensors, things that work indoors, that would be very helpful for navigation. And having that as an option for people with disabilities, who are not always outdoors as they go about their day, is definitely an area where this kind of technology is useful.
WADE WINGLER: That makes a lot of sense. I love it when this kind of technology evolves and converges and creates some pretty sophisticated systems. I think I have an understanding of how the smartphone uses technology to sort of map the environment. But then how do you convey that information back to the person who is blind or visually impaired in a meaningful way? I mean, does it say, look out, the couch is on the left, or, the stairs go down here? How do you take the information and turn it into something useful?
DAN ANDERSEN: That’s a really good question. Any time that you are taking information and putting it into a different format, you’re either going to be losing something or you just need some way of translating it that’s actually useful, especially since audio is kind of a reduced information channel when compared with sight.
So there’s different ways of approaching it. Some ways involve very high interpretation and others very low interpretation. There’s been some existing research that we’ve looked into that involves very high interpretation, where it takes 3-D geometry and tries to identify walls or floors, and it generates sounds at those locations that sound different. So if you point this device at the wall, it will sound different than if you point it at the floor, because the system can look at the geometry, know whether it’s horizontal or vertical, and convey that information. Another option would be to use this camera to do face detection. We have a camera in the same way as a normal smartphone camera, and that can detect where a person’s face is, so there may be ways of taking that information and playing a special sound at that location for a face. That can help orient a person who is using the system to know when there are people around.
What we are looking at currently, though we’re looking to do different things in the future as well, is more of a simulated echolocation, where the sounds that are rendered correspond with how close objects are. Things that are higher pitch are going to be closer. Things that are louder are going to be closer. So the idea is that if you can give all of that information, combined together, and provide some initial training, that can be a signal that is just as usable as sight.
The way I think of it is, it’s helpful to remember how it is when we see things. The light that falls on the back of our eye, onto the retina, is converted into electrical impulses that go into our brain, at which point our brain has to reconstruct it. My thought is there’s always going to be some sort of learning process for any kind of assistive technology. But you’re right, the goal is figuring out which things are meaningful and which things are just noise that you can get rid of during the sonification process.
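[Editor’s note: a hedged sketch of the two rendering styles Dan mentions: a high-interpretation label for a surface, and the low-level closer-is-higher-and-louder mapping. The ranges, thresholds, and the y-up axis convention are invented for illustration, not the team’s actual tuning.]

```python
import numpy as np

def distance_to_sound(d, d_max=5.0):
    """Map a distance in meters to (frequency in Hz, volume 0..1)."""
    closeness = float(np.clip(1.0 - d / d_max, 0.0, 1.0))  # 1 = touching
    freq = 220.0 + closeness * (1760.0 - 220.0)   # closer -> higher pitch
    volume = closeness                            # closer -> louder
    return freq, volume

def classify_surface(normal, up=(0.0, 1.0, 0.0)):
    """Label a surface 'floor' or 'wall' from its unit normal vector."""
    alignment = abs(float(np.dot(normal, up)))
    return "floor" if alignment > 0.7 else "wall"

print(distance_to_sound(0.5))    # near obstacle: high pitch, loud
print(distance_to_sound(4.5))    # far obstacle: low pitch, quiet
print(classify_surface(np.array([0.0, 1.0, 0.0])))   # horizontal -> floor
```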
WADE WINGLER: One of the reasons I think that’s kind of cool is that GPS and systems that rely on fixed-position beacons in an environment all require things to be mapped out in advance. Somebody has to go in and say, this location means this and this location means that. This sounds like a technology that, when it’s fully developed, could deal with new environments and changing environments without somebody having to go in and map it out in advance. Do I have that right?
DAN ANDERSEN: That’s exactly right. This has the potential to do some pre-mapping if that’s helpful. For example, you could have a system where you move the device around the room, or you give it a floor plan, so it has that information and can use it. But the idea is that this is live and real-time: as the user is walking around using this device, it’s capturing information right then and there and is able to add that either to existing information it has or to a new map of the territory it’s building.
One of the things that Project Tango gives us is the idea of area learning. What that means is that it’s able not just to figure out where everything is in relation to the user currently, but to put everything together into a larger map. For example, if you’re in a large building and you walk around all the hallways and all the rooms, eventually you double back onto areas that you had walked through before, and the system is able to recognize that, because the imagery that it’s seeing now is the same imagery that it saw before, and it can use that to build up a more complete map. And it’s doing all that live.
WADE WINGLER: Which is what your eyes and your brain do anyway. It’s a great equalizing experience.
DAN ANDERSEN: It’s called simultaneous localization and mapping, or SLAM, which is, you’re right, something that our brains do with sight. So the idea is to do that with sound as well.
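[Editor’s note: a toy illustration of the loop-closure idea behind area learning: keep a compact descriptor per keyframe and flag a revisit when a new view looks like one seen before. Real SLAM systems use far richer features; the histogram descriptor and threshold here are invented for illustration.]

```python
import numpy as np

keyframes = []   # list of (descriptor, place_id) pairs

def describe(image):
    """Collapse a grayscale image into a normalized intensity histogram."""
    hist, _ = np.histogram(image, bins=32, range=(0, 255))
    return hist / max(hist.sum(), 1)

def check_revisit(image, threshold=0.95):
    """Return the id of a previously seen place, or register a new one."""
    desc = describe(image)
    for stored, place_id in keyframes:
        similarity = float(np.dot(desc, stored)) / (
            np.linalg.norm(desc) * np.linalg.norm(stored) + 1e-9)
        if similarity > threshold:
            return place_id          # loop closure: merge into the map
    keyframes.append((desc, len(keyframes)))
    return len(keyframes) - 1        # a new place in the growing map
```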
WADE WINGLER: That’s really cool. In addition to people who are blind or visually impaired, do you think Project Tango has implications for people with other kinds of disabilities?
GEORGE TAKAHASHI: There could be, especially for individuals who have learning disabilities or issues with recognition. There are some very unique cases where individuals with brain trauma have trouble identifying locations: places start looking the same, and they have trouble navigating around indoor rooms, so they might pass the same hallway multiple times. Without an individual guide or something that tells you where these locations are, it can be a little more complex. As you said before, we do have GPS systems for more outdoor kinds of environments to help you navigate, but indoors we lose that ability. We are not able to tell you, take a left at the next corner to go to the restroom, or, if you have a meeting, this is the hallway you need to take in order to get to your next appointment. Having something like this could potentially empower users who may have different types of disabilities, not just necessarily sight.
WADE WINGLER: As I think about this a little bit, I think there are all kinds of educational situations and health and safety situations, and maybe even people with autism practicing social interactions. I just think of tons of possibilities for this. I really dig this futuristic forward thinking kind of stuff. My question for you guys at this point is how far along is the technology? Are we talking about a product that’s going to be ready for commercialization, and if so, when can I get one and how much does it cost?
GEORGE TAKAHASHI: There have been similar but more simplified technologies before. Kind of like with your vehicle: you might have a backup sensor that sounds an alarm if you’re going backward and getting too close to something. There are systems like that on the market today that you could buy to help you get an immediate sense of whether there’s something nearby. But unlike Project Tango, those systems do not gather cumulative knowledge of an area; they’re just giving you that immediate feedback.
Unlike that, Project Tango will also take into account all of the information you have gathered up to that point. So if you traverse through a room, or you start walking near a wall to your right or to your left, if it has seen that before, then it knows that information is there. So even if the camera is not pointing at it at that moment, it can still use that information.
Now, as far as the future release of a device like this, the technology has been around for quite a while now, and cost-wise, there’s nothing really too unique or groundbreaking in terms of the equipment that’s used. If you look at gaming systems like the Microsoft Kinect, or peripheral devices that attach to gaming systems, they contain very similar mapping technologies. So it could be something that’s very commercial and affordable in the fairly near future. What we’re really looking for is a group to pick it up and run with it, a company to turn it into something that’s marketable.
WADE WINGLER: Now I have to go shopping, right? George and Dan, if people wanted to learn more about what you’re doing there with Project Tango and the Envision Center at Purdue, where would you suggest they look?
GEORGE TAKAHASHI: I would start off with our website. If you go to envision.purdue.edu, you can visit us and see some of the other work that we do here. A lot of our other work deals more with virtual reality simulations, scientific data visualization, human-computer interface research, and multimedia design and development. We are a fairly large laboratory, and based on some of the equipment we have access to, as well as some of the very talented researchers we have on campus, we’ve been able to collaborate to build projects like this and do research on potential future technologies. Definitely, our website would be a fairly easy avenue to learn more about us.
WADE WINGLER: And that website once again?
GEORGE TAKAHASHI: envision.purdue.edu
WADE WINGLER: Great. George Takahashi is technical lead at the Envision Center at Purdue, and Dan Andersen is a first-year doctoral student there. Gentlemen, thank you so much for sharing your information today about your work with Project Tango.
DAN ANDERSEN: Thank you.
GEORGE TAKAHASHI: Thank you very much.
WADE WINGLER: Do you have a question about assistive technology? Do you have a suggestion for someone we should interview on Assistive Technology Update? Call our listener line at 317-721-7124. Looking for show notes from today’s show? Head on over to EasterSealstech.com. Shoot us a note on Twitter @INDATAProject, or check us out on Facebook. That was your Assistive Technology Update. I’m Wade Wingler with the INDATA Project at Easter Seals Crossroads in Indiana.