The revolution in technology that is helping blind people see

Lex Arriola is your typical 15-year-old girl. She uses her smartphone a ton. She texts. She FaceTimes. Like most teens, she loves emoji and, of course, Taylor Swift.

But unlike most of her peers, Arriola was born blind. When she gets texts with emojis, Siri translates them, so messages are punctuated by “face screaming in fear” and “puffing with angry face.” She has a Braille Sense, a small book-sized beige contraption with a tactile keyboard that she uses to read and write. Arriola, a petite curly-haired brunette with a warm smile, commands her always-dark iPhone screen with a flurry of taps, swipes and voice commands to Siri.
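
There's a simple mechanism underneath this: every emoji carries an official Unicode name, which text-to-speech software can read aloud. Here is a minimal Python sketch of the idea; the crude code-point check is an illustrative shortcut, not how Siri actually detects emoji.

```python
import unicodedata

def speakable(text):
    """Replace emoji characters with their official Unicode names so a
    text-to-speech engine can read them aloud."""
    out = []
    for ch in text:
        if ord(ch) > 0xFFFF:  # crude stand-in for real emoji detection
            out.append(unicodedata.name(ch).lower())
        else:
            out.append(ch)
    return "".join(out)

print(speakable("I loved that song \U0001F631"))
# -> I loved that song face screaming in fear
```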

Her web experience is mostly good, she says, though she steers clear of Facebook and Snapchat because they’re so picture-heavy. But what she really wants is more autonomy in the real world. She got a glimpse of what technology will make possible when Google’s self-driving car stopped by an event she recently attended. The search giant stationed one of its robocars in the parking lot so kids and parents could take a look at the future. Arriola didn’t get to ride in it, but she was excited nonetheless, and hopes to buy one someday.

“Think about it, it’s less asking people to drive you around,” she said. “It’s more independence.”

Recent advancements in artificial intelligence, along with the proliferation of sensors, mean a technological revolution is coming for people with vision loss. Universities and companies like IBM, Microsoft and Baidu are working on technologies ranging from smart glasses to better computer-vision software that could one day serve as digital eyes for the estimated 285 million visually impaired people worldwide.

Over the last 30 years, technologists have made huge strides in making the internet more accessible to the blind, with digitized Braille systems and text-to-speech software that reads the words on a webpage or app aloud. More recently, companies, including Facebook, have started translating images into read-aloud text. (Maybe that will actually nudge Arriola to start using the social network.)

But “real life, compared to the cyberworld…has been a great challenge,” said Chieko Asakawa, a researcher at IBM and a professor at Carnegie Mellon University’s Robotics Institute. “The first step is to find out where you are,” she told me. “If a computer knows where I am, a computer can help a lot.”

Technology currently being developed for machines is going to be a boon for the visually impaired. Now more than ever, tech companies need their machines to know exactly where they are, whether they’re working in a factory or cruising down city roads. The blind may benefit most from the tech world’s obsession with self-driving cars: not only will they be able to use the cars to get around, but the artificial intelligence developed to help cars see and navigate streets will likely be repurposed as assistive technology.

Right now, Arriola relies mostly on her cane and spatial memory to get around. Inside her house, she feels for walls, banisters and furniture with her hands. She knows her way pretty well, though she occasionally misses a step or forgets something is blocking her path. Outside is more of a challenge. There, she depends on her cane to warn her of obstacles along the ground. Branches and other objects higher up pose a bigger threat. She’ll walk with her left arm shielding her face when she senses she’s approaching something she might collide with.

It is incredible that, more than 50 years into the technology revolution, one of the most widely used assistive tools for visually impaired people navigating the world is still a stick. Minority groups are rarely a priority when companies develop products, because the market is small. Thankfully, engineers are finally starting to innovate on the cane.

Researchers in the U.K. have developed a prototype “smart” cane with a camera, facial-recognition software, and GPS that can reportedly detect objects, including faces, from 30 feet away. A high-tech cane could be costly, though, so other researchers, including Asakawa, are working on smartphone apps that function like the navigation systems in self-driving cars. Instead of the expensive laser-based lidar that Google’s self-driving cars employ, they use Bluetooth.

For Asakawa, working on this kind of technology is personal. She lost her sight when she was 14 years old after an accident injured her optic nerve.

Asakawa’s NavCog analyzes signals from Bluetooth beacons placed around the CMU campus to create detailed indoor and outdoor maps. It uses a smartphone’s sensors and camera to figure out where you are, to within about five feet, and then provides turn-by-turn directions. Asakawa hopes to deploy it at museums and urban centers, she says, though that means persuading them to install Bluetooth beacons, which have caused privacy panics in the past.
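
NavCog’s code isn’t public, but the core positioning idea can be sketched. One common approach, assumed here rather than taken from NavCog itself, converts each beacon’s signal strength into a rough distance and then weights nearby beacons more heavily; the beacon names, coordinates and radio constants below are hypothetical.

```python
# Hypothetical surveyed beacon positions on a floor plan, in meters.
BEACONS = {
    "beacon-a": (0.0, 0.0),
    "beacon-b": (10.0, 0.0),
    "beacon-c": (5.0, 8.0),
}

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """Estimate distance from received signal strength using the standard
    log-distance path-loss model. tx_power_dbm is the expected RSSI at one
    meter; both constants must be calibrated per beacon in practice."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def estimate_position(readings):
    """Estimate (x, y) as a weighted centroid of beacon positions, where
    closer (stronger) beacons count for more. A real system like NavCog
    would also fuse in the phone's motion sensors and camera."""
    total_w = wx = wy = 0.0
    for beacon_id, rssi in readings.items():
        x, y = BEACONS[beacon_id]
        d = max(rssi_to_distance(rssi), 0.1)  # clamp to avoid divide-by-zero
        w = 1.0 / (d * d)                     # inverse-square weighting
        total_w += w
        wx += w * x
        wy += w * y
    return wx / total_w, wy / total_w

# Example: one Bluetooth scan's RSSI readings (in dBm) from three beacons.
print(estimate_position({"beacon-a": -62, "beacon-b": -75, "beacon-c": -70}))
```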

Michael Zaken, 68, is one of the few people besides Asakawa to test it out. He used it recently to get from her office to the bookstore, which is located in another building. It told him how many feet he had to walk down the sidewalk and when doors or staircases were approaching. Zaken, a former software developer, found it useful, but said he walked faster than the app could keep up, so directions often arrived too late.

“Once it’s fully developed and more buildings are integrated, it would provide a great service for blind people. It would be as important to a blind person as GPS is to a driver nowadays,” he told me.

Asakawa’s also working on computer vision algorithms that will be able to recognize objects, faces and even a person’s mood—cues that blind people don’t have.

Microsoft is taking a similar approach to improving accessibility with a pilot project in London that uses Bluetooth and Wi-Fi signals to create maps of the environment. It can tell users about nearby points of interest, like restaurants and stores; give them public transportation updates; and warn them when they’re on a main road or straying too far toward the curb. It delivers all of this through a pair of bone-conducting headphones, which transmit sound as vibrations through the bones of the skull. These are worn around, rather than in or on, the ears, so a person with vision loss can still use ambient audio cues to navigate their surroundings.

That’s really important because a pair of regular headphones can “isolate you from the reality of the street and if you can’t see you’re already quite isolated,” said Hannah Thompson, a visually impaired lecturer from England. “It closes off an avenue of information.”

Thompson hasn’t tried out the Microsoft system, but she has helped a team of researchers at the University of Oxford prototype a pair of smart glasses. The glasses have a 3D camera that can “visualize” the structure of nearby objects, even transparent ones, and then create a stark, black-and-white sketch of the world so that a wearer with very poor vision can interpret what’s around them. (To be useful, the wearer needs to have some residual vision, meaning they can still detect light.)
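
The rendering idea lends itself to a short sketch. Assuming the glasses map depth to brightness, drawing nearer surfaces brighter against a dark background, a minimal version might look like the following; the thresholds and the NumPy implementation are illustrative, not the Oxford team’s actual pipeline.

```python
import numpy as np

def depth_to_sketch(depth_m, near=0.5, far=3.0):
    """Map a depth image (meters per pixel, from a 3D camera) to an 8-bit
    grayscale frame: the closer a surface, the brighter it appears, and
    anything beyond `far` fades to black."""
    d = np.clip(depth_m, near, far)
    brightness = 1.0 - (d - near) / (far - near)  # near -> 1.0, far -> 0.0
    frame = (brightness * 255).astype(np.uint8)
    frame[depth_m <= 0] = 0  # pixels with no depth reading stay black
    return frame

# Example: a fake 4x4 depth map with one nearby obstacle in the corner.
demo = np.full((4, 4), 2.5)  # background about 2.5 m away
demo[0, 0] = 0.6             # a close object
print(depth_to_sketch(demo))
```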

Thompson, 42, has been legally blind since birth, but unlike Arriola she largely functioned as a sighted person, thanks to partial vision in one eye, until cataracts took her remaining sight four years ago. During a recent trip to a cheese shop wearing the smart glasses, Thompson could, “for the first time ever,” according to her blog, distinguish the cheeses’ different shapes and sizes, making her feel more confident about what she was buying. She used that information, the kind of detail most of us wouldn’t think twice about, to strike up a conversation with the cheesemonger and ask his advice on which goat cheese to buy.

“They gave me more power and more confidence over what I was doing,” she told me.

Stephen Hicks, the University of Oxford neuroscientist leading the work on the smart glasses, hopes to eventually sell them for less than £300 ($450). His team recently received a grant from the Google Impact Challenge, a program that is currently funding nine projects focused on technologies for people with disabilities.

“There is a big mental health component to this,” said Hicks. In the U.K., fewer than half of all blind people attempt to leave their homes each day, he says. “Once you take a couple of nasty falls you lose a lot of confidence. This leads to agoraphobia and depression. I hope that vision enhancers like our glasses can help restore that confidence.”

Arriola is lucky to have a supportive family. Her mom, Marieta, who works with children with vision loss, her dad, and her twin sister (who is sighted) form a strong support system. But not everyone has that.

There’s a lot of cautious excitement about new assistive technologies among engineers and blind users. But there’s still a lot of work to be done before they can benefit people like Arriola.

“The biggest problem is the investment in infrastructure. Who is going to pay to install, configure and maintain tens of thousands of Bluetooth beacons in shopping centres, train stations, galleries and along the street?” said Hicks. “Sure, Tesco [a British grocery store] and Walmart already have these systems for shoppers, but you can’t just take their equipment and scale it up to the rest of a town.”

The computer-vision component also needs work. Image captioning is getting better, but “to do it with a level of detail, with the right context for a blind person to navigate—that still needs some work,” says Adam Coates, the director of Baidu’s Silicon Valley AI Lab. “The trouble with all these things is that we can show research examples of the technologies working, but they’re not yet strong enough that you would trust your grandmother to use them and keep her safe.”

At Baidu’s offices last month, I tried out what the company is working on for the blind: a small earpiece called DuLight that can analyze images taken with a smartphone camera. It only works with still images, and can currently recognize food, currency, clothes, animals and plants, but the long-term vision is to have it scan the world in real time. For the most part, it works well, but it requires good images, which can be hard to get if the user doesn’t have a sense of where they’re pointing the camera. I had to take photos of Chinese currency three or four times before the machine recognized the bill.
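
DuLight’s internals aren’t public, but the capture-then-describe loop it implies is easy to sketch. The version below substitutes a pretrained torchvision classifier for Baidu’s recognition models and prints where a real earpiece would speak; the file name and confidence threshold are hypothetical.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Standard ImageNet preprocessing for the stand-in classifier.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()
labels = weights.meta["categories"]  # human-readable class names

def describe(photo_path):
    """Classify one still image and return a spoken-style description,
    mirroring the one-photo-at-a-time workflow the article describes."""
    img = Image.open(photo_path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]
    conf, idx = probs.max(dim=0)
    if conf.item() < 0.5:  # low confidence: ask for a better photo
        return "Not sure. Try taking another photo."
    return f"This looks like {labels[idx.item()]}."

print(describe("photo.jpg"))  # hypothetical image file
```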

What would greatly help researchers working on tech for the blind is detailed data about pedestrian walkways and public and private spaces, like the insides of buildings, says Hicks. It took Google nearly a decade to amass the kinds of maps necessary to make self-driving cars a reality.

“Hopefully, Tesla and Google and others open up their data from self-driving cars to provide 3D maps of local streets,” said Hicks. “That would be a good dataset to which we could start adding pedestrian-level details.”

The tools are there, but researchers now need the data and funding to make them robust enough for practical use.

“Real-life accessibility has a long, long way to go,” Asakawa said. “We’re now on the start line.”

Daniela Hernandez is a senior writer at Fusion. She likes science, robots, pugs, and coffee.
