Featured on the website Motherboard.Vice.com on Oct. 31, 2016. Written by Joseph Neighbor.
3D cameras have ushered in a new era of assistive technologies for people with disabilities. One such project, the Wheelie, allows users to control a wheelchair using only their facial expressions. Designed by Paulo Gurgel Pinheiro -- a Brazil-based Intel® Innovator, co-founder and CEO of HOOBOX Robotics -- Wheelie could radically improve the lives of millions whose mobility has been compromised by ALS, quadriplegia, and stroke. Motherboard spoke with Pinheiro about the promise of 3D cameras, and their potential to help us “hack disability.”
Can you give me a little background on yourself?
I got my PhD in computer science and robotics in 2013. My focus was on developing localization solutions for mobile robots. You know Roomba, the robots that clean the floor autonomously? At that time, when I was in school, they were not able to go from point A to point B on their own. We built a solution for this. Afterwards, I started a postdoc program, which is when I began working with wheelchairs. We put that same localization system into wheelchairs, basically.
What do you mean by “localization”?
In order for a robot to go from point A to B autonomously, it first has to know where it is to start with. And it has to locate itself very quickly, because Roomba is supposed to clean the floor, not spend forever trying to find its pose.
This is the problem we solved. With Roomba, we installed a very cheap, tiny camera, which uses AI to match what it sees with a map of the house. But with the wheelchair, we have an Intel® RealSense™ 3D camera, which allows us to do the localization really fast.
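The matching idea Pinheiro describes can be sketched very simply: the robot compares what its camera currently sees against features stored at known positions in a map, and adopts the best-matching position. The map data, descriptor format, and function names below are invented for illustration; a real system would use a proper probabilistic localizer.

```python
import math

# Hypothetical toy map: each known pose (x, y) is paired with a feature
# descriptor the camera would observe from there. Data is illustrative only.
MAP_FEATURES = {
    (0.0, 0.0): [0.1, 0.9, 0.3],
    (1.0, 0.0): [0.8, 0.2, 0.5],
    (0.0, 1.0): [0.4, 0.4, 0.9],
}

def localize(observed):
    """Return the mapped pose whose stored descriptor is closest
    (Euclidean distance) to the currently observed descriptor."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(MAP_FEATURES, key=lambda pose: dist(MAP_FEATURES[pose], observed))

print(localize([0.75, 0.25, 0.5]))  # best match is the descriptor stored at (1.0, 0.0)
```

A depth camera makes this matching faster and more robust than a cheap 2D camera because each observation carries geometry, not just appearance.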
Localization is one thing we can do, but it’s not really the heart of Wheelie.
Wheelie is the first computer program that can translate facial expressions into commands that can be sent to a wheelchair, allowing a disabled person to control its movements.
We place an Intel® RealSense™ 3D camera in front of the user, which records their facial expressions; so when the user smiles, for instance, the camera records certain points around the mouth. Our software then classifies those points, recognizing that the user is performing a smile, and then translates that smile into a command for the wheelchair. So when the user smiles, the wheelchair will stop, for example. This entire process occurs in less than 0.7 seconds. It’s just not something you can do with 2D cameras.
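The pipeline Pinheiro describes -- landmark points in, expression label out, wheelchair command out -- can be sketched as below. The landmark layout, thresholds, and command names are all assumptions for illustration, not HOOBOX's actual software.

```python
# Toy expression-to-command pipeline: facial landmarks -> classified
# expression -> wheelchair command. All names and values are illustrative.

COMMANDS = {"smile": "STOP"}  # hypothetical mapping, per the example above

def classify_expression(mouth_corner_left, mouth_corner_right, mouth_center):
    """Rough heuristic: a smile lifts both mouth corners above the mouth
    centre (smaller y means higher up in image coordinates)."""
    if (mouth_corner_left[1] < mouth_center[1]
            and mouth_corner_right[1] < mouth_center[1]):
        return "smile"
    return "neutral"

def to_command(expression):
    return COMMANDS.get(expression, "NO_OP")

# Both corners sit above the centre, so this frame reads as a smile.
expr = classify_expression((40, 98), (60, 97), (50, 105))
print(to_command(expr))  # STOP
```

In practice the classifier runs on a stream of 3D landmark points from the RealSense camera rather than a single 2D frame, which is what keeps the whole loop under 0.7 seconds.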
When someone uses it for the first time, do you have to calibrate it so the program recognizes that person’s particular way of smiling?
No, it understands that the mathematics behind your smile is the same as my smile, even though we look different. That's the artificial intelligence in our classifier: it behaves the way we do. For example, I may never have seen a person before, but I'm able to recognize when they are raising their eyebrows or giving a kiss. Wheelie uses the same behavior to interact with people.
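One way a classifier can generalize across faces, as described above, is to use scale-invariant features: ratios between landmark distances stay the same whether a face is large or small, near or far. The feature below is a hypothetical example of that idea, not HOOBOX's actual feature set.

```python
import math

def smile_ratio(left_corner, right_corner, left_eye, right_eye):
    """Mouth width divided by inter-eye distance: a scale-invariant
    feature, so one threshold can apply to different faces (illustrative)."""
    mouth = math.dist(left_corner, right_corner)
    eyes = math.dist(left_eye, right_eye)
    return mouth / eyes

# Two faces, one twice the size of the other, yield the same ratio.
small = smile_ratio((2, 8), (8, 8), (0, 0), (10, 0))
large = smile_ratio((4, 16), (16, 16), (0, 0), (20, 0))
print(small, large)  # 0.6 0.6
```

Because the feature is a ratio, the "mathematics behind your smile" really is the same as anyone else's, which is why no per-user calibration is needed.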