ATU415 – Haptimage with Shruthi Suresh and Ting Zhang

Your weekly dose of information that keeps you up to date on the latest developments in the field of technology designed to assist people with disabilities and special needs.

Show Notes:

Shruthi Suresh and Ting Zhang, Co-Founders of Haptimage

Website: www.haptimage.com

Email: haptimage@gmail.com

Apps worth mentioning: www.bridgingapps.org

iOS 13 Mouse Story: http://bit.ly/2PXtGNR

Google Accessibility Story: http://bit.ly/2PUBr7c

SoloWalk Story: http://bit.ly/2PVKll5


——————————
If you have an AT question, leave us a voice mail at: 317-721-7124 or email tech@eastersealscrossroads.org
Check out our web site: https://www.eastersealstech.com
Follow us on Twitter: @INDATAproject
Like us on Facebook: www.Facebook.com/INDATA

SHRUTHI SURESH:  Hi, this is Shruthi Suresh.

TING ZHANG:  This is Ting Zhang.

SHRUTHI SURESH:  And we are the cofounders of HaptImage, and this is your Assistive Technology Update.

JOSH ANDERSON:  Hello and welcome to your Assistive Technology Update, a weekly dose of information that keeps you up-to-date on the latest developments in the field of technology designed to assist individuals with disabilities and special needs.  I’m your host, Josh Anderson, with the INDATA Project at Easter Seals Crossroads in beautiful Indianapolis, Indiana.  Welcome to episode 415 of Assistive Technology Update.  It’s scheduled to be released on May 10, 2019.

Thanks to everyone who listened to our last show, but we are getting back to our normal format today with a great interview with Shruthi Suresh and Ting Zhang, the cofounders of HaptImage.  We have Amy Fuchs on with an app worth mentioning from BridgingApps.  A story about mouse access for iOS 13.  Some new things Google is working on for accessibility.  And a new device called the SoloWalk, which can help individuals with mobility challenges.  Without any further ado, let’s go ahead and get on with the show.

***

I realize we are a little early – it’s only May – but some stories are already coming out about changes that may be coming in Apple’s iOS 13.  I found a story over at 9to5Mac by Bradley Chambers, and it’s kind of a commentary.  It’s called “Comment: Mouse support in iOS 13 would solve iPad ergonomic issues for me.”  On our sister show ATFAQ, we’ve actually had the question a few times: how can I control my iPad with a mouse?  While there are some workarounds – there is a device, but I don’t remember the name of it off the top of my head – that you can use to get mouse support, they can be a little clunky and weird to use.  But the rumor is that when iOS 13 comes out, you will actually be able to connect a mouse and use that to control your iPad.  I don’t know if that’s just going to be for the iPad Pro or for all iterations of the iPad that can run iOS 13, but it is a very cool thing.

This story really focuses on the ergonomics of using an iPad: having to lean over and touch the screen for a few hours at a time can put your body in an awkward position, whereas using a mouse or a trackpad would give you a whole new way to access it.  But I could see how this could also help folks with disabilities.  The iPad has had switch access for a long time, so we’ve been able to connect switches to do different kinds of scanning and navigation on the iPad, and those have worked really well.  There are a lot of different Bluetooth switches that we can use in order to access it, but once you get mouse support, then we can go with joysticks, trackballs – really, any kind of mouse that you can use to access the computer could then be used to access your iPad running iOS 13.

Again, this isn’t written in stone.  We don’t know if it will actually be in there.  But from everything I’ve been reading and have seen so far about iOS 13, it looks like mouse support is going to be one of the big things that’s in there.  So I’m really looking forward to it, and hopefully we will hear more about that in the coming months, and hopefully it will be included in iOS 13.

***

Moving straight from Apple to Google: I found a story over at venturebeat.com about Google unveiling three accessibility projects that can help people with disabilities.  Those three projects are Project Euphonia, Live Relay, and Project Diva.  None of these are actually available right now, but they are all being worked on at Google to help build accessibility into their products.

The first one, Project Euphonia: the idea behind this is to help individuals with ALS or other speech impairments access speech-to-text technology or the Google Assistant.  Individuals who have ALS or other speech disabilities have a very hard time using AI or asking questions of the device.  This could be anything: ALS, MS, traumatic brain injury.  So really, what they are doing is having individuals with these speech disabilities talk to the Google Assistant, recording all of these voices, and then optimizing their AI-based algorithms to more reliably recognize and transcribe the words that they say.  This could have a lot of helpful uses.  Not only would the individual probably be able to access their Android phone or the Google Assistant much more easily, they could also talk out their text messages, especially as some of their motor skills may go away.  They could also use it to speak to their device, which could then understand them better and have those words spoken out of the device, so they may not even need a separate AAC device, or this could work in tandem with that device.  It says the models are currently limited, but they are really working on this in order to be able to help out.  Right now they are also only working with individuals who speak English, but I’m sure once they get those algorithms and AI going, they should be able to extend it to other languages.
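
Google hasn’t published how Project Euphonia works under the hood, but the process described here – record speech from people with speech disabilities, then optimize the recognition model on those recordings – is essentially supervised fine-tuning.  Below is a minimal, hypothetical sketch of that pattern in PyTorch; the tiny model and fake data are stand-ins for illustration, not Google’s actual system.

```python
# Hypothetical sketch of adapting a speech recognizer to non-standard
# speech, in the spirit of what Project Euphonia is described as doing.
# The model, dataset, and sizes are stand-ins, not Google's code.
import torch
import torch.nn as nn

def fake_sample():
    """Stand-in for one recorded (audio features, transcript) pair."""
    audio = torch.randn(200, 80)               # 200 frames of 80-dim features
    transcript = torch.randint(1, 30, (20,))   # 20 token ids (0 = CTC blank)
    return audio, transcript

class TinyRecognizer(nn.Module):
    """A deliberately tiny acoustic model: LSTM over features -> token logits."""
    def __init__(self, n_feats=80, n_tokens=30):
        super().__init__()
        self.rnn = nn.LSTM(n_feats, 128, batch_first=True)
        self.head = nn.Linear(128, n_tokens)

    def forward(self, x):
        out, _ = self.rnn(x)
        return self.head(out).log_softmax(-1)

model = TinyRecognizer()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
ctc = nn.CTCLoss(blank=0)

# Fine-tuning loop: the same pattern would apply to a large pretrained
# model, just with far more recordings from far more speakers.
for step in range(3):
    audio, transcript = fake_sample()
    log_probs = model(audio.unsqueeze(0)).transpose(0, 1)  # CTC wants (T, N, C)
    loss = ctc(log_probs, transcript.unsqueeze(0),
               torch.tensor([200]), torch.tensor([20]))
    opt.zero_grad()
    loss.backward()
    opt.step()
```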

Live Relay is their next project.  This is to help folks who are deaf or hard of hearing.  Voice calls really aren’t an option for folks who are deaf or hard of hearing, and the technology behind Live Relay is meant to help with that.  Basically, they would call an individual and type out whatever they want said; the hearing individual would then hear that spoken.  And when the hearing person talks into their phone, that would show up as text on the phone of the individual who is deaf or hard of hearing.  So basically it’s like sending text messages, but one person never actually has to type anything or even read their screen.  Very, very cool.  It says it leverages Google’s Smart Compose and Smart Reply features – predictive writing suggestions and instant responses – which can help the person typing keep up with the speed of a voice call.  It can also understand what the other person is saying and maybe offer a list of replies.  Again, not quite ready yet, but a very cool idea that they are working on.
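
The call flow described here is symmetric: what the deaf or hard-of-hearing user types is synthesized into speech for the hearing caller, and the caller’s speech is recognized back into text.  Here is a minimal sketch of that loop with both speech engines stubbed out; the function names are hypothetical, since Google has not published Live Relay’s interface.

```python
# Hypothetical sketch of the Live Relay call flow described above.
# text_to_speech() and speech_to_text() are stand-in stubs for real engines.

def text_to_speech(text: str) -> bytes:
    """Stub: synthesize audio to play into the voice call."""
    return text.encode("utf-8")

def speech_to_text(audio: bytes) -> str:
    """Stub: transcribe the remote caller's speech."""
    return audio.decode("utf-8")

def relay_outgoing(typed_message: str) -> bytes:
    # The deaf/hard-of-hearing user types; the hearing caller hears speech.
    return text_to_speech(typed_message)

def relay_incoming(caller_audio: bytes) -> str:
    # The hearing caller speaks; the user reads it as on-screen text.
    return speech_to_text(caller_audio)

if __name__ == "__main__":
    audio_out = relay_outgoing("Hi, I'd like to confirm my appointment.")
    print(relay_incoming(audio_out))  # round-trips through both stubs
```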

The last one is Project Diva.  Project Diva is a program to help a person who is nonverbal or has limited mobility use the Google Assistant.  Think of the Google Home device.  What it does is an external switch plugs into a Bluetooth device that connects to the Google Assistant.  It’s still in the works just like the other ones, but as of right now, what they are trying to do is use big red buttons to access the device.  What they are hoping to do is make this available in a lot more ways, so that maybe you could touch a CD and it plays your music, or touch a cartoon stuffed animal and it plays your cartoons or another show that you want to watch.  A very great idea, and I do love that Google and all of these big companies are trying to do what they can to make their products more accessible to individuals with disabilities.  We’ll put a link over to that in our show notes, and hopefully these are all things that we will see in the not-too-distant future.

***

So in today’s interview, we are talking about a device that helps individuals who are blind or visually impaired, but I also found an interesting story about a device that helps individuals with mobility challenges.  It’s from Newz Hook, and the story is called “SoloWalk reaches out to people with cerebral palsy to move around with ease.”  It talks about the SoloWalk device.  What this is, is a mobile mobility robot, I would almost call it.  It’s got two big arms on it that you use for balance, and as you walk, the device actually moves with you.  You can turn, you can go straight.  Not only does this give you some stability, it also has fall detection: it can catch you if you slip or fall or lose your balance.  And it can really help with building up the muscles in the legs and things like that.  It was originally developed and used in rehabilitation, helping folks get back on their feet after an accident or injury.  This one was actually able to move around the building.  It’s a pretty good-sized device, but it can really help out the individual.  Like I said, it can move with them, so they are not just on a treadmill or anything like that.

According to the story, it’s still kind of in the development stage, but it can really help individuals, especially those with mobility challenges, CP, and different things like that.  It can also give you help with falls and everything else that a normal walker or other kind of mobility device might not be able to do.  The story doesn’t say anything about price, where it’s available, or if it’s available at this time, but it could be really helpful for folks with a myriad of mobility challenges – not just cerebral palsy, though it could definitely help those folks as well.  We will put a link to the story in our show notes.

***

AMY FUCHS:  This is Amy Fuchs with BridgingApps, and this is an app worth mentioning.  This week’s featured app is called Centered.  Centered is a meditation and mindfulness app from Blue Cross Blue Shield that allows users to set goals for daily steps and weekly meditation sessions.  The goal is for users to increase their daily steps and weekly meditation time.  This is shown in the app as two different circles.  The closer the circles come to each other, the more centered the user becomes.  The app asks users to allow access to their health data in order to help the user get the most out of the app and make it personal to their daily lifestyle.  It also asks you to enter some personal information about yourself, such as height, weight, and birthday.  You can set your own meditation goal based on the amount of time you want to spend in meditation each week.  You enter a daily step goal and record your current mood.  This app allows you to track stress levels throughout the day.  You are able to see your weekly step data, including the number of calories burned, time spent walking, and mileage walked.  You can also select meditation exercises in order to reach your weekly meditation goal.  The exercises range from four minutes to 19 minutes long.  This is an excellent app for veterans, adults, people who have PTSD, and anyone who wants to learn how to effectively use meditation and mindfulness.  Centered is available for free in the iTunes Store for iOS devices.  For more information on this app and others like it, visit BridgingApps.org.

***

JOSH ANDERSON:  So a few weeks ago, we had a story about some students over at Purdue University who created a device to allow visually impaired individuals to kind of see images on a screen with a joystick and other feedback.  Reading the story, I just had to learn more about the inventors and the device.  Lucky for us, they were nice enough to come on the show today and tell us all about it.  Shruthi Suresh is a PhD student in the Department of Biomedical Engineering, and Ting Zhang is a PhD student in the School of Industrial Engineering, both at Purdue University.  They combined their talents to found HaptImage.  Shruthi, Ting, welcome to the show.

SHRUTHI SURESH:  Hi, Josh, nice to talk to you.

TING ZHANG:  Hi Josh.

JOSH ANDERSON:  It’s great to have you on.  First of all, how were finals?

SHRUTHI SURESH: Finals were not too bad. Thankfully as PhD students, we don’t have to take too many finals.  So that was kind of a relief.

TING ZHANG:  Yeah, this is the last year of my PhD, so I’ve been away from classes for a while.

JOSH ANDERSON: Congratulations.  That is excellent.  That means you both are in a good mood for the interview today, so that’s very good to hear.  Before we start talking about HaptImage, could you both tell us a little bit about yourselves?

SHRUTHI SURESH: Sure.  My name is Shruthi.  I’m a second-year PhD – you’ve already talked about that.  I am from Indonesia, which surprises a lot of people because my name sounds very Indian.  I love being part of something that I know makes a difference in people’s lives, and hopefully this is something we can take forward and actually bring out to the masses.

TING ZHANG:  I’m Ting.  I’m almost graduating, and I’m a PhD student in the School of Industrial Engineering.  I’m from China.  As for my background, I did my undergraduate degree in China in software engineering.  That’s why I have some background in developing the technology we will be talking about.

JOSH ANDERSON:  I’m glad you both met and were able to come together on this.  Our listeners are definitely tuned in to hear about this, so go ahead and tell us what is HaptImage.

SHRUTHI SURESH: HaptImage is a tool which allows people who are visually impaired to understand images in real time.  We are talking digital images, and given that the world is so digital right now, there is no real-time, accessible way for people who are visually impaired to get the nuances of an image.  So we’ve developed this multimodal technology which uses a variety of outputs – sound, vibration, or force feedback – to convey specific details of an image to people who are blind or visually impaired.  This gives them a better idea of what the image is in the context of the rest of the page.

JOSH ANDERSON:  Very good.  So there’s all different kinds of feedback, not just a vibration or something like that?

SHRUTHI SURESH: Exactly.  We’ve done experiments before – this was Ting’s master’s thesis, actually.  We had blind users test it out, and they actually love that they can understand very specific details of it.  The message seems to be getting across pretty strongly.

TING ZHANG:  Haptics is kind of a new concept, not only for the blind community but also for us.  When we first introduced haptics, it was really hard to convey what haptics really is.

JOSH ANDERSON:  It’s also hard to do on an audio-only kind of program, telling folks what exactly it is.  I saw the original picture of the device – it’s very cool; it looks almost like a joystick.  Where did the idea for this come about?

TING ZHANG:  We didn’t develop the hardware part.  The haptic device is commercially available hardware.  The one we use here is manufactured by Force Dimension, and the model we are using is an Omega 6.  What we did is use this haptic device and develop software that can automatically extract the image features from any image and render those features into different haptic modalities.

JOSH ANDERSON:  The image itself, does it have to be in a specific format, or is it really any digital image?

TING ZHANG:  It doesn’t need to follow any format.  Any image will work.

SHRUTHI SURESH: That’s kind of the novelty of our technology: basically you can take any image – maybe off the Internet, or even from a specific database that your school uses, for instance – and you would just be able to re-create that and help a student, or any person, understand what’s going on in the image.
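
HaptImage’s rendering software isn’t public, but the pipeline Ting and Shruthi describe – take an arbitrary image, automatically extract its features, and turn the cursor position into forces – can be sketched roughly as follows.  The edge extraction uses standard OpenCV calls; the device command at the end is a stub, not the real device API.

```python
# Hypothetical sketch of an image-to-haptics pipeline in the spirit of
# what is described above: edges become a force field that pulls the
# haptic cursor toward them. The device call at the end is a stub.
import cv2
import numpy as np

def edge_force_field(image_path: str):
    """Build a force field that attracts the haptic cursor to image edges."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 100, 200)          # works on any ordinary image
    # Distance from each pixel to the nearest edge pixel.
    dist = cv2.distanceTransform(255 - edges, cv2.DIST_L2, 5)
    # The gradient of the distance map points away from edges;
    # negate it so the force pulls the cursor toward the nearest edge.
    gy, gx = np.gradient(dist)
    return -gx, -gy

def force_at(gx, gy, x: int, y: int, gain: float = 0.5):
    """Force (fx, fy) to command when the cursor sits at pixel (x, y)."""
    return gain * gx[y, x], gain * gy[y, x]

# Usage, inside the device loop (device API stubbed):
#   gx, gy = edge_force_field("blood_smear.png")
#   fx, fy = force_at(gx, gy, cursor_x, cursor_y)
#   device.set_force(fx, fy, 0.0)   # stand-in for a real haptic SDK call
```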

JOSH ANDERSON:  And you originally developed this with STEM classes in mind?  Is that right?

SHRUTHI SURESH:  Correct.  It was originally developed with a bunch of blood smears, which is basically red blood cells and white blood cells [INAUDIBLE].

TING ZHANG: Initially, we had different types of blood cells, like sickle cell, the red blood cell that’s not circular.  So we wanted to see if this kind of technology could help blind students understand different shapes and different textures better than tactile images.

JOSH ANDERSON:  I could see how this could help students in architecture, graphic design, or anything that’s visually based, right?

SHRUTHI SURESH: Absolutely.

TING ZHANG:  Yeah.

JOSH ANDERSON:  About a month ago, you both went to DC for the University Innovation and Entrepreneurship Showcase.  Can you tell me a little bit about that program or showcase?

SHRUTHI SURESH: Sure.  The Association of Public and Land-grant Universities has an innovation showcase that they do.  It’s the second year that they’ve done it.  We got nominated by Purdue to represent Purdue’s efforts at making entrepreneurship and innovation more commonplace.  We actually got selected to represent Purdue and go over there and kind of showcase the technology that we had to people in DC.  There were government representatives, a bunch of congresspeople, who were able to get a view of the technology and what different land-grant universities are doing to help students do more entrepreneurial things.  That was kind of the point of going there: to get different feedback.  We’ve always had feedback from people who are technically minded, and we’ve also had input from people who are very STEM-based.  So getting input from people who are not necessarily related to that field was very novel, and it was cool to understand how this technology could actually be explained to people.

JOSH ANDERSON:  I’m sure, like you said, you got the whole other side of it, the whole idea.  Have you had a lot of students who are blind or visually impaired try this out?  What thoughts did they have that you took back and were able to act on?

TING ZHANG:  From the blind students we’ve been testing with [INAUDIBLE], I think they all think this is a great technology that makes digital images real-time for them, so they don’t need to wait for tactile images to be printed.  One drawback of the technology is that it’s based on one point of interaction.  Normally, blind kids interact with tactile images using both of their hands and all of their fingers.  But this haptic device has only one point, because it’s like using a mouse but with some force feedback.  In this case, it took them some time to get used to the system.  Normally we need to train them on the system for half an hour or an hour each, so they get used to it and understand how it works.

SHRUTHI SURESH:  I think it’s the prevalence of tactile technology that makes it kind of difficult for them to transition.  Change is always difficult for people who are used to a very specific type of technology, and that’s how they’ve learned everything so far.  So I think what we are trying to do is take the experience of a tactile technology and make it more real-time, and hopefully, eventually, we will be able to change the tech to allow a more multi-fingered approach.

TING ZHANG:  The other thing we are working on is making this device portable and smaller.  What we have right now is a fixed station.  It’s kind of heavy and sits on the desktop, and you need to connect it to a PC.  But what we are trying to do right now is develop a device that is much smaller, one that can connect to your iPad, your phone, your tablet, so you can take it on the go, and whenever you want to see an image, you just plug it in and you can feel what’s on your phone.

JOSH ANDERSON:  That’s a great idea.  I could see how that could help students, or anybody who is blind or visually impaired, access the world and everything that is out there on the Internet a whole lot more easily.

SHRUTHI SURESH: Absolutely.  That’s kind of the intent of what we’re trying to do.  We recently did an NSF program that allowed us to do a lot of customer discovery.  We spoke to over 100 people, and they all told us that there was a need for this technology.  People really wanted something that could allow them to see or understand any digital image, because right now the technology is there, but it’s not being widely adopted by people, and there is a barrier that’s preventing them from taking it up.

JOSH ANDERSON:  You’re right.  Right now you can add descriptions to pictures and things like that, but how you describe something and how I describe something can be totally different, and even more different for the individual having it described to them.  That’s really helpful.

SHRUTHI SURESH: Exactly.  So we are not saying we are going to replace that.  We are a complement to that.

JOSH ANDERSON:  Just another means of access, so that you can have the two together – kind of like how voice feedback on a computer doesn’t replace Braille.  But when you have the two working together, it can really help with understanding and just make sure that you have more access.

SHRUTHI SURESH: Exactly.

JOSH ANDERSON:  I’m going to give you both a chance to brag a little bit, because I know you’ve won some awards for HaptImage.  Tell our listeners a little bit about that.

SHRUTHI SURESH:  We took part in the Burton D. Morgan Business Model Competition last year, and we came in second in the social track, which was pretty cool.  We also had a chance to participate in the Women in Innovation challenge, and we actually won that.  It was fantastic because through that, we won a bunch of different services as well – accountant services and lawyer services that came along with the cash award – which has been extremely helpful in helping us set up the company and understand what the boundaries of doing things are, like how to file taxes, for instance.  That was pretty complicated.  We also won a black award from Purdue itself, so that was pretty cool.  It helped us promote the technology and get enough capital for our CEO to recruit an intern who can actually help us move forward with the project.

JOSH ANDERSON:  Very nice.  You could’ve bragged a little bit longer.  That was very good.  So you’ve talked about it a little bit, but what else is on the horizon?  What else is coming up for HaptImage?

SHRUTHI SURESH:  Like I said, our newly minted CEO is Dr. Bradley Duerstock.  He’s also our advisor – that’s how Ting and I know each other, because he’s our PhD advisor.  His role has been to help us, as a company, find the right people who can help us meet this goal of creating portable technology that allows people who are visually impaired to just plug it into their phones and use it to access any image that they want.  What we are looking to do is make this available to anybody and everybody who wants access to this sort of technology, and to make that possible by hiring the right people.  We would love to know if anyone would be interested in joining the company.  That would be fantastic as well.

JOSH ANDERSON:  Did you hear that, listeners?  There are even jobs out there.  That’s perfect.  Let’s say that our listeners want to find out more or get a hold of you so they can let you know that they are interested.  How would they do that?

SHRUTHI SURESH:  They can go on to HaptImage.com, our website. You can also email us at HaptImage@gmail.com.

JOSH ANDERSON: Shruthi, Ting, we talked about what’s on the horizon for HaptImage.  But what about for the two of you?  I know that you are working really hard on this.  Are there other things on the horizon for both of you?

SHRUTHI SURESH: Yeah.  I’m actually working on my PhD right now, hoping to prelim eventually and get my degree, and probably move on and do pretty amazing things with the technology that I have access to or whatever I can contribute to.

TING ZHANG:  For me, I’m almost there with my PhD.  For my PhD, what I did was an extension of what we have for HaptImage right now.  In my new research, I incorporated machine learning techniques and some coaching strategies to help blind people understand images more accurately and efficiently.  I really hope I can incorporate this new work into what HaptImage has right now.

JOSH ANDERSON:  We look forward to seeing what you both do in the future.  Thank you both so much for taking time out of your day to come on the show and talk about this amazing program.  As I said, we can’t wait to see where it goes from here.  Thank you.

SHRUTHI SURESH: Thanks for having us.

TING ZHANG:  Thank you, Josh.

***

JOSH ANDERSON:  Do you have a question about assistive technology? Do you have a suggestion for someone we should interview on Assistive Technology Update? If you do, call our listener line at 317-721-7124, shoot us a note on Twitter @INDATAProject, or check us out on Facebook. Are you looking for a transcript or show notes? Head on over to our website at www.EasterSealsTech.com. Assistive Technology Update is a proud member of the Accessibility Channel. For more shows like this, plus so much more, head over to AccessibilityChannel.com. The views expressed by our guests are not necessarily those of this host or the INDATA Project.  This has been your Assistive Technology Update.  I’m Josh Anderson with the INDATA Project at Easter Seals Crossroads in Indiana. Thank you for listening, and we’ll see you next time.

***Transcript provided by TJ Cortopassi.  For requests and inquiries, contact tjcortopassi@gmail.com***
