ATU322 – 3D Camera for the Blind from MIT


Your weekly dose of information that keeps you up to date on the latest developments in the field of technology designed to assist people with disabilities and special needs.

Show Notes:
3D Camera for the Blind from MIT – Robert Katzschmann, PhD Candidate in CSAIL (Computer Science and Artificial Intelligence Laboratory) at MIT | robert@csail.mit.edu
App: Money Up | www.BridgingApps.org

——————————
If you have an AT question, leave us a voice mail at: 317-721-7124 or email tech@eastersealscrossroads.org
Check out our web site: https://www.eastersealstech.com
Follow us on Twitter: @INDATAproject
Like us on Facebook: www.Facebook.com/INDATA

——-transcript follows ——

 

ROBERT KATZSCHMANN:  Hi, this is Robert Katzschmann, and I’m a PhD candidate at the Computer Science and Artificial Intelligence Lab at MIT, and this is your Assistive Technology Update.

WADE WINGLER:  Hi, this is Wade Wingler with the INDATA Project at Easter Seals Crossroads in Indiana with your Assistive Technology Update, a weekly dose of information that keeps you up-to-date on the latest developments in the field of technology designed to assist people with disabilities and special needs.

Welcome to episode number 322 of Assistive Technology Update. It’s scheduled to be released on July 28, 2017.

Today my interview is with Robert Katzschmann, who is with MIT. He developed a 3-D camera for people who are blind or visually impaired that helps with navigation. We also have an app from BridgingApps.

We hope you’ll check out our website at www.eastersealstech.com, give us a call on our listener line at 317-721-7124, or shoot us a note on Twitter @INDATAproject.

Each week, one of our partners tells us what’s happening in the ever-changing world of apps, so here’s an app worth mentioning.

AMY BARRY:  This is Amy Barry with BridgingApps and this is an app worth mentioning. This week I am sharing an app called Money Up:  learn to use the next dollar up method.

Looking for an accessible app to teach or reinforce how to use money in real-life situations?  Money Up is the perfect app for children learning to handle money; for kids, teens, and adults with developmental disabilities; and for therapists who work with stroke survivors, those in the early stages of Alzheimer’s, and more.

Money Up was designed by teachers and therapists as a way for people to practice working out how much money they have, whether they have enough to purchase the items they need, and how much money to give a cashier, all without requiring them to count change. The app’s goal is to help children, teens, and adults acquire or maintain a life skill they need in order to foster independence.

Helpful features of the app include teaching the next dollar up technique with clear images, large buttons for easy navigation, highly customizable student profiles, detailed lesson results reporting, and errorless teaching. The app supports five different currencies with crisp, clear images of notes and coins and has two built-in reward games that are easy and fun to play after finishing a lesson.

The app can be used by anyone who needs assistance and desires more independence using money without the need to count change. Users will learn whether they have enough money to make a purchase and how much they need to give the cashier. Within this one app, there are a number of different activities that all offer multisensory inputs and reinforcers to practice the dollar up technique. These activities include identifying coins and bills, working out how much to pay when a total is presented, checking how much money they have, figuring out how much money to give the cashier, paying for items, and calculating whether they have enough money to pay for those items.

Money Up is fully customizable for individual or group use. Data can be tracked and reported. The grocery list feature is great for making it a community experience with students: they make a list and then generalize the learned skills at the grocery store or in the community.

This is a solid and very well thought out app that provides a clean interface, structured, predictable pace, and consistent appearance and feedback with limited distractions, which is very important for students with special needs. Money Up is available for $15.99 at the iTunes Store and is compatible with iOS devices. For more information on this app and others like it, visit BridgingApps.org.

WADE WINGLER:  I remember many years ago when one of the very first backpack-style GPS-based outdoor navigation systems was unveiled at a conference that I attended. Everybody was pretty stoked. They were wearing these backpacks, wandering around parking lots, trying to find bus stops and fast food restaurants and all those kinds of things. One of the first complaints or concerns we all shared was that this was great for outdoor navigation when you have a good, reliable GPS signal, but it doesn’t address the need for indoor navigation by people who are blind or visually impaired.

Those outdoor GPS systems continue to get better and smaller and more accurate and sophisticated with lots of bells and whistles, but that indoor navigation issue, at least from my perspective, continues to plague everybody. I was excited when I learned that a group from MIT has been working on a system that includes a 3-D camera outfitted with some tactile interfaces and some pretty cool algorithms and calculations to help address this issue of indoor navigation. We are so excited to have Robert Katzschmann from MIT on the line – he works in the Computer Science and Artificial Intelligence Laboratory and is a PhD candidate there. Robert, how are you?

ROBERT KATZSCHMANN:  I’m doing alright. How are you?

WADE WINGLER:  I’m doing fine, and I’m excited to have you on to talk about what you guys are doing with this 3-D camera. I think it addresses a real-world problem that I know has been around for a long time. Why don’t we start with a little bit of the backstory of the project?  Why did you and your team become focused on this issue?

ROBERT KATZSCHMANN:  Our team became focused on this issue a couple of years ago when a principal investigator at MIT was interested in expanding his efforts on autonomous driving into assistive technology as well. The Bocelli Foundation came around and said, we would like to sponsor some research that would help visually impaired people get better access to technology that would help them, using exactly what you did with autonomous driving. It was roughly five or six years ago that they gave a generous gift. From that point on we started working on this.

WADE WINGLER:  That’s fascinating. Congrats on having the support from the Bocelli Foundation, because that’s pretty cool. Talk to me a little bit about the current state of your idea. Is the current iteration the original, or have you been working through different iterations of the technology to get where you are now?

ROBERT KATZSCHMANN:  I got involved about two to three years ago on this project. There had been a couple of iterations before. It started off with taking sensing technology from humanoid robots and trying to make it more portable so the user could carry it with them or strap it around the chest. There was a whole range of prototypes that we tried out. A lot of these efforts were to identify what was really needed [Inaudible] sensing capabilities and what was not needed, and how we could make it more discreet so it’s actually something a person who has a visual impairment would like to carry with them without feeling [Inaudible] like it looks awkward.

WADE WINGLER:  Talk to me a little bit about the current product. Who does it serve?  I think we talked that it’s for people who are blind or visually impaired. Who does it serve and how does it work?

ROBERT KATZSCHMANN:  The current product serves people who, in particular, are not capable of seeing at all. We were looking at ways of going beyond the commonly used white cane for tracing your path or following tactile tiles on the floor, if you are lucky enough to have them. For those tasks, a white cane is pretty good, but when it comes to things like entering a room and trying to find an object, or avoiding a moving obstacle like a person or something that is being pushed around, a white cane can sometimes be difficult, sometimes awkward, and very often tedious for the person themselves – to actually find a chance to sit down, for example.

We were looking at this set of advanced capabilities that the white cane cannot give to a person who cannot see. With the 3-D vision system and an RGB system, we can see the world in such a way that we can find these objects of interest and tell the user, through a tactile interface: there is the table you are looking for, and there is no one sitting at it. Just walk this way and you will find a spot to sit down.

WADE WINGLER:  I’m fascinated, and you are drawing me in to try to visualize and conceptualize how this works. The components are a camera, a computer that processes, a braille interface, and some haptics; right?

ROBERT KATZSCHMANN:  Yes.

WADE WINGLER:  Break those down. Let’s start with the camera a little bit. How does that work and what’s going on?

ROBERT KATZSCHMANN:  We looked into where the best place to mount the camera was and went through a couple of iterations: attaching it to a hat, wearing a tall hat, or wearing it as a necklace. We finally came down to taking a camera case that a tourist would wear. You might have seen these leather camera cases that you can put your camera in with the lens looking out. We were inspired by this design. We initially took a fake camera lens, put it onto this casing, and included this very small 3-D camera on top of it. All of the computing happens within that package. Imagine it has roughly the same size as this camera case, and you wear it around your neck at chest height, looking forward and downward. It has a 70-to-80-degree viewing angle, can look up to four or five meters in front of you, and parses that part of the world at any given moment.
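
To make that sensing footprint concrete, here is a quick back-of-the-envelope calculation. The 75-degree field of view and 4-meter range are assumptions picked from the middle of the ranges Robert gives, not published camera specs:

```python
import math

# Rough coverage estimate for a chest-worn depth camera, assuming a
# ~75-degree horizontal field of view and a ~4 m usable range.
# The interview only says 70-80 degrees and four to five meters.
fov_deg = 75.0
range_m = 4.0

width_at_range = 2 * range_m * math.tan(math.radians(fov_deg / 2))
print(f"swath width at {range_m} m: {width_at_range:.1f} m")   # about 6.1 m wide
```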

That’s the vision part of the system: it picks up what we call point clouds, distances in space. It can directly build up this 3-D world in terms of points. Then we take those 3-D worlds, find connections between all those dots, and make planes out of them. We find consecutive planes that could be a floor or a chair surface. Once we find that, we can process the information in such a way that we have two different tactile means to tell the user how to use it. One of them is a set of linear vibrators we mount around the abdomen of the user. It’s below your chest, underneath your clothes. You wear a couple of cells that linearly resonate. They give you directional information. If you tell the system that you want to find a chair, it will look for chairs, and once it finds one it will tell you to walk that way by vibrating on your left to indicate you should go left or vibrating on your right to indicate you should go right.
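
The chain Robert describes – depth points in, planes out, then a directional vibration cue – can be pictured with a minimal sketch. This is not the MIT team’s code: the RANSAC plane fit, the toy point cloud, the 10-degree dead zone, and the vibrate labels are all illustrative assumptions.

```python
import numpy as np

def fit_plane_ransac(points, n_iters=200, tol=0.02):
    """Return (normal, offset, inlier_mask) for the best-supported plane."""
    best_inliers, best_plane = None, None
    rng = np.random.default_rng(0)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                    # skip degenerate (collinear) samples
            continue
        normal /= norm
        offset = -normal @ sample[0]
        inliers = np.abs(points @ normal + offset) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, offset)
    return best_plane[0], best_plane[1], best_inliers

def direction_cue(target_xyz):
    """Convert a target position (camera frame, x = right, z = forward) into a haptic cue."""
    angle = np.degrees(np.arctan2(target_xyz[0], target_xyz[2]))
    if angle < -10:
        return "vibrate_left"
    if angle > 10:
        return "vibrate_right"
    return "vibrate_center"                # roughly straight ahead

# Toy point cloud: a flat floor plane with a little noise.
cloud = np.random.uniform([-2, 0, 0], [2, 0.02, 5], size=(5000, 3))
normal, offset, inliers = fit_plane_ransac(cloud)
print("floor normal:", np.round(normal, 2), "inliers:", inliers.sum())
print(direction_cue(np.array([0.8, 0.0, 2.0])))   # a chair ~0.8 m to the right
```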

Sometimes you’re not only looking for a chair; you might not know what is in front of you, and these tactile vibrations around your abdomen would not give you that depth of information. We partnered up with a company in Germany that makes braille interfaces based on braille cells. We made our own custom braille display that you can wear in your pocket or on your belt. It has 10 braille units inside, and each braille unit has eight pins that can be refreshed instantaneously. We can directly communicate to the user with a single letter that this is a chair, or with another letter that this is a doorway or this is a table. This way you directly get feedback about what’s in front of you.
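
As a rough picture of that interface, here is a sketch of how one detected object class could be encoded as a single letter on an eight-dot refreshable cell of a 10-cell display. The object labels, single-cell layout, and helper names are assumptions for illustration; only the dot-to-bit packing follows the standard Unicode braille convention.

```python
# Dot-to-bit mapping follows Unicode braille: dot 1 -> bit 0, ..., dot 8 -> bit 7.
OBJECT_TO_DOTS = {
    "chair":   (1, 4),         # braille letter c
    "doorway": (1, 4, 5),      # braille letter d
    "table":   (2, 3, 4, 5),   # braille letter t
}

def dots_to_byte(dots):
    """Pack raised dot numbers (1-8) into one byte for a single braille cell."""
    byte = 0
    for dot in dots:
        byte |= 1 << (dot - 1)
    return byte

def render_display(label, n_cells=10):
    """Put the detected label's letter in cell 0 and leave the rest blank."""
    frame = [0] * n_cells
    frame[0] = dots_to_byte(OBJECT_TO_DOTS[label])
    return frame

frame = render_display("chair")
print([hex(b) for b in frame])                    # pin patterns sent to the cells
print("".join(chr(0x2800 + b) for b in frame))    # same frame shown as Unicode braille
```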

WADE WINGLER:  I looked at a picture on the website. From a physical perspective, it looked just like you said. The camera looked like an old-fashioned leather camera case a tourist would wear. The haptic interface looked like a money belt somebody might wear on vacation to keep pickpockets from stealing their money, with a couple of tubes of lipstick in it. The braille interface looked to me like a small cell phone on somebody’s belt. Is that a fair description of the physicality of those components?

ROBERT KATZSCHMANN:  It is. The vibration belt, these lipstick units basically – we show it in the images we published, but it’s supposed to be worn underneath your clothing so you have better tactile feedback. You wouldn’t even see it as an outside person looking at the system.

WADE WINGLER:  So no one is going to ask you why you are wearing lipstick on your money belt?

ROBERT KATZSCHMANN:  Exactly. It looks like you are packed up with some spy gadgets.

WADE WINGLER:  Good luck getting to the airport with that. I’m starting to conceptualize how you’re building this map of the environment with the different planes. How fast can it process those images and provide meaningful feedback?  There is a lot of computation going on. I’m sure your algorithms have to be pretty smart.

ROBERT KATZSCHMANN:  For the initial implementation, we used a computer similar to a Raspberry Pi. Lots of people might be aware of that kind of computer that can be powered by a battery; it’s quite compact and about the size of a cell phone. We use something similar that comes from Korea and has more computing power. With this generic hardware, we are able to do 10 frames per second, sometimes even better, so we are able to do quite good processing with minimal latency at any given time. We also partnered up with another lab at MIT that focuses on making so-called application-specific circuits. They took our algorithms and miniaturized them into a specific chip that does exactly the computation that we are doing, in a small form factor. That just shows the capability: if you know what you want to do and how you want to parse the world around you, you can do it at very low cost and very low energy and further miniaturize this system into something that can be commercialized at some point.
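
For a sense of scale, 10 frames per second leaves roughly 100 milliseconds per frame for the whole vision pipeline. The loop below is illustrative only; grab_frame() and process() are hypothetical stand-ins, not the team’s software:

```python
import time

TARGET_FPS = 10
BUDGET = 1.0 / TARGET_FPS          # ~100 ms per frame

def grab_frame():
    return None                    # stand-in for reading a depth frame

def process(frame):
    time.sleep(0.03)               # pretend the pipeline takes 30 ms

for _ in range(5):
    start = time.perf_counter()
    process(grab_frame())
    elapsed = time.perf_counter() - start
    print(f"frame took {elapsed * 1000:.0f} ms of a {BUDGET * 1000:.0f} ms budget")
    time.sleep(max(0.0, BUDGET - elapsed))   # idle until the next frame slot
```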

WADE WINGLER:  That leads to my next question. What stage of development is the project at this point?  Is your end goal to create a standalone commercial product, or is this a licensing deal to be included into something else?

ROBERT KATZSCHMANN:  It’s more the latter. As a research lab, we are focused on showing that this is possible and on building demonstrations that show this is something that’s quite doable. We patented parts of it, so we are interested in finding partners who want to take it to the next level and commercialize and license our technology. We would be more than happy to support that and get it started; it just needs someone who is interested in funding this research so it can be commercialized. Our team is primarily interested in seeing whether this is possible, describing the work, writing a paper about it, and doing studies with users to actually show that it works. Commercialization doesn’t really fit into my job description.

WADE WINGLER:  Any question that I ask about cost and availability is premature at this point?

ROBERT KATZSCHMANN:  Yes. We can hypothesize about what it cost us to make the system as a prototype, which gives you an idea of the material costs. I would say the computer itself is similar to a Raspberry Pi, $60-$70. The camera is another $200. The braille display was $1,000. The vibration belt that we use is another $200. It’s not that expensive for a prototype.

WADE WINGLER:  So you are at about $1,500 for components.

ROBERT KATZSCHMANN:  It doesn’t include the labor of making the components, but just in terms of using parts that we can get ready access to and assemble ourselves, it was around that price point.

WADE WINGLER:  That’s encouraging. I know you are working with users or testers. What are they telling you?  As people who are blind or visually impaired experience this technology, what kind of feedback are you getting from them?

ROBERT KATZSCHMANN:  We started off testing this with Andrea Bocelli. We were invited to come to Italy and visit him. We brought our prototypes, and he was really hyped to try them out. You have to imagine: you come to his home, and his children and his wife are there, all very eager to see something that might help him become a little more independent. He is quite talented, able to sing and please so many people in the world with his music, and there have always been people around him who could help him get places. He is very used to having someone at his side to walk with him to places. To have a system that would give him the freedom, for a change, to walk out of his house onto the street and walk along the esplanade – that was quite surprising to him. We were able to give him something that works not only inside his house, which he knows pretty well; he can also walk around outside his house, and if someone walks in front of him, he would directly know a person is standing there. If anything changes suddenly, he would be able to sense it. With a further extension of our system, we also did a combined trial of going outdoors and walking along the street. His wife was saying, wow, this is so unusual to me. She would never have imagined him just walking off confidently. That was one of the first impressions we got from a user.

We also worked a lot with someone in MIT’s president’s office, who is there for governmental outreach. He’s also blind. He has helped us a lot in building this up, finding the right focus for our system, and giving us feedback when we were developing the tactile feedback. We did a lot of testing with him. One of his favorite examples was going into the lobby of a hotel – he does it all the time. He goes to a hotel and sits down because it has a really great atmosphere; it’s relaxing. But every time he gets there, if someone is sitting on a chair or occupying a certain space, he can’t just walk back to the place he’s used to. Being able to enter the space and walk without running into someone or hitting a person with his cane is quite interesting to him.

WADE WINGLER:  That’s fascinating. It’s unprofessional of me, but I have to say: how cool is that?  Oh, Robert, what are you doing this week?  I’m flying to Andrea Bocelli’s house to show him this cutting-edge technology that helps people who are blind navigate. It’s pretty cool.

As you are interacting with folks and doing research, have you identified – are there still gaps in navigation for people who are blind or visually impaired?  Does this close the gap?

ROBERT KATZSCHMANN:  I think there are still gaps, and I think there is more research to do. We started into the realm of object detection, but when it comes to smaller objects, like indoor signage, there is still some systems work to be done to combine various research efforts into the system. There are groups that have developed technology that can easily read signs; it’s a question of putting it all into this mobile package. The big challenge is having enough computing power that can be carried around. It’s a trade-off of how complicated your algorithms are so that they can still run on a miniaturized computer. I feel there is a lot of work to be done here, basically waiting for more powerful miniaturized systems to come along while we further optimize our algorithms to do exactly what they need to do without wasting computational power.

The other thing I feel needs more work is improved tactile feedback. Not everyone might be able to read braille, so that’s one limitation; finding more intuitive ways of communicating what is in front of you just with tactile feedback, and having more than just five units around your abdomen and placing them properly, is more research that can be done. Also, being able to take the camera system that we have and make it work just as well outdoors as indoors requires combining different 3-D vision systems.

WADE WINGLER:  We are about out of time for the interview today. If people want to follow your work and continue the conversation with you, is there a website or contact information you would like to provide?

ROBERT KATZSCHMANN:  If people are interested in the work, our lab has a website that can be found at distributed.MIT.edu. There is a subsection about the blind navigation system. You can also email me at robert@csail.MIT.edu. If people are interested or would like to hear more about it, I’m happy to share the research we’ve done and the paper we’ve written about it.

WADE WINGLER:  Robert Katzschmann is a PhD candidate at the Computer Science and Artificial Intelligence Laboratory at MIT, has a really cool job, and gets to visit cool people. Thank you so much for being on our show.

ROBERT KATZSCHMANN:  Thank you.

WADE WINGLER:  Do you have a question about assistive technology? Do you have a suggestion for someone we should interview on Assistive Technology Update? Call our listener line at 317-721-7124, shoot us a note on Twitter @INDATAProject, or check us out on Facebook. Looking for a transcript or show notes from today’s show? Head on over to www.EasterSealstech.com. Assistive Technology Update is a proud member of the Accessibility Channel. Find more shows like this plus much more over at AccessibilityChannel.com. That was your Assistive Technology Update. I’m Wade Wingler with the INDATA Project at Easter Seals Crossroads in Indiana.

***Transcript provided by TJ Cortopassi.  For requests and inquiries, contact tjcortopassi@gmail.com***
