The self-driving wheelchair

This week, I took a ride in a robotic wheelchair. The thing had lasers and a Kinect sensor for eyes, and a touch screen through which you could operate the vehicle.

“Our end goal with this is for people who have disabilities to be able to have freedom,” said Martin Gerdzhev, a graduate student working on the SmartWheeler at McGill University in Montreal.

He and other members of his team were demoing the SmartWheeler at the Neural Information Processing Systems Foundation conference in Montreal on Wednesday night.

Having a wheelchair that can autonomously navigate its environment would be a boon for children or people who have cognitive impairments, don’t have enough upper body strength to maneuver a regular wheelchair or are paraplegics. Often people who can’t move their hands and arms must resort to “sip and puff” devices that control a motorized wheelchair through changes in air pressure, which can get exhausting.
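To give a sense of how a sip-and-puff interface works, here is a minimal sketch that maps air-pressure readings to drive commands. The thresholds and command names are hypothetical, for illustration only, and are not the SmartWheeler’s actual code.

```python
# Illustrative sip-and-puff control sketch (hypothetical thresholds and
# command names, not the SmartWheeler's actual implementation).
# Pressure is measured relative to ambient: positive = puff, negative = sip.

def classify(pressure, hard=2.0, soft=0.5):
    """Map one pressure reading (in kPa, hypothetical units) to a command."""
    if pressure >= hard:
        return "forward"      # hard puff
    elif pressure >= soft:
        return "turn_right"   # soft puff
    elif pressure <= -hard:
        return "reverse"      # hard sip
    elif pressure <= -soft:
        return "turn_left"    # soft sip
    return "stop"             # neutral: no strong sip or puff

# Example: a hard puff drives forward, a soft sip turns left.
```

Because every maneuver requires another breath-pressure gesture, it is easy to see why users find these devices exhausting over a full day.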

Researchers have been working on robotic wheelchairs for decades, but thanks to the advent of better computer-vision and navigation algorithms, more powerful computers and more sensitive sensors, scientists are starting to make some progress.

Plus, there are self-driving cars on the road now, from Google and many automakers. Public awareness of autonomous vehicles is growing, so the technology doesn’t seem as foreign or futuristic as it once did.

“With all the automation that’s going on in the car market, people are going to expect and be very accepting of the same type of things for assistive technologies, like wheelchairs,” said Joelle Pineau, a computer scientist at McGill University who leads the SmartWheeler project and a former student of Sebastian Thrun, the man who helped build Google’s self-driving cars.

The hard part, she says, is convincing insurance companies and government health agencies to pay for them, as autonomous wheelchairs aren’t going to come cheap.

Right now, the SmartWheeler is still a research prototype with several limitations. The researchers haven’t yet implemented voice control outside the lab, which would be necessary to serve some patients, because of difficulties filtering out background noise. It’s the same reason Siri has trouble understanding you if multiple people are speaking at once or if you’re in a noisy bar. Some of the sensors aren’t dependable enough to work reliably outside the lab, and the better ones can cost thousands of dollars, which would make a commercial model inaccessible for many.

Pineau’s prototype can detect objects, but it isn’t very good at detecting movement. And if robots can’t guesstimate the speed or direction things are moving the way humans can, that could make for some awkward, if not dangerous, interactions.
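To illustrate the simplest version of the motion-estimation problem the researchers describe, here is a sketch that estimates an object’s velocity from two timestamped detections by finite differencing. The data and function are hypothetical; real robots typically smooth noisy detections with a tracking filter rather than differencing raw positions.

```python
# Illustrative sketch: estimate a detected object's velocity from two
# timestamped positions (hypothetical example, not the SmartWheeler's code).

def estimate_velocity(p0, t0, p1, t1):
    """Return (vx, vy) in m/s given positions (x, y) at times t0 < t1."""
    dt = t1 - t0
    if dt <= 0:
        raise ValueError("timestamps must be strictly increasing")
    return ((p1[0] - p0[0]) / dt, (p1[1] - p0[1]) / dt)

# A pedestrian detected at (0.0, 0.0) and, half a second later, at
# (0.6, 0.0) is moving at 1.2 m/s along x -- enough for the chair to
# predict where they will be and plan a path around them.
```

Raw sensor detections are noisy, so differencing two frames like this amplifies error; that gap between a naive estimate and a robust one is part of why predicting how people move remains hard.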

“A lot of robots in the future are meant to be in environments with people,” said Gerdzhev. But “it’s still not a very well-solved problem — planning around people… There’s a lot of things that need to be adapted.”

Daniela Hernandez is a senior writer at Fusion. She likes science, robots, pugs, and coffee.
