Soon we'll have Siri-like assistants that don't give up our secrets

This summer, Google launched an app called Photos. It’s an incredible tool: If you beam your digital memories to Google’s cloud, the search giant will make your photos searchable by face, by object, and even by emotion, and will use its intelligence to turn your disparate images into albums, movies, and GIFs. But many of my friends and colleagues refused to try it out. They didn’t want to store all of their photos in Google’s cloud, where they might be stolen by hackers or mined by the company for who-knows-what.

Without giving Google access to your camera roll, though, you can’t take advantage of Google’s ‘deep neural networks,’ a powerful form of AI that typically works best in the cloud, where engineers can store the huge amounts of information the programs need to learn. These big models often run on several powerful computers at once. You’re probably using at least one neural-net-powered service already: Google, Facebook, Apple, and Microsoft, among others, rely on cloud-based neural networks to improve facial recognition, computer vision, and speech understanding.

There may come a time when you won’t need to send your personal information to a company’s cloud to take advantage of this intelligence. Companies are starting to develop ‘embedded deep neural networks’ that work on your phone and other devices. These perform many of the same tasks, but they’re leaner, travel-sized versions of their cloud-based counterparts that don’t require an Internet connection to work their technological magic. Experts say they’re going to become increasingly commonplace as more “smart” features are packed into apps and devices like security cameras, thermostats, watches, TVs, kitchen appliances, and cars.
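
To make that concrete, here’s a minimal sketch of what on-device inference can look like, assuming a tiny feed-forward classifier: the heavy lifting of training happens elsewhere, and the device only runs the cheap forward pass. The network shape and the random placeholder weights below are invented for illustration, so the snippet runs as written.

```python
# A tiny feed-forward classifier running entirely on-device.
# In a real app, the weights would be trained in the cloud and
# shipped with the binary; here they are random placeholders so
# the sketch runs as written. Note: no network calls anywhere.
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for pre-trained weights bundled with the app.
W1, b1 = rng.standard_normal((16, 32)), np.zeros(32)
W2, b2 = rng.standard_normal((32, 4)), np.zeros(4)

def predict(features):
    """Score a feature vector (say, from a photo or audio clip) locally."""
    hidden = np.maximum(0.0, features @ W1 + b1)  # hidden layer with ReLU
    logits = hidden @ W2 + b2                     # output layer
    exps = np.exp(logits - logits.max())          # numerically stable softmax
    return exps / exps.sum()

print(predict(rng.standard_normal(16)))  # works with airplane mode on
```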

Take Sensory, for example, a company that develops security and voice-control technology. On Thursday, it released a new version of TrulyHandsfree, an always-on voice-recognition listening system that benefits from this technology. While much of the AI we interact with on a daily basis—like Google Now, Apple’s Siri or Microsoft’s Cortana—is beamed to our devices via the internet, TrulyHandsfree is local.

“When you have a service hosted in the cloud, your personal data is being sent off for people to analyze, where you have no control,” said Sensory CEO Todd Mozer. “When that data is in your device, you control it…[And] the hacker incentive to steal massive amounts of data just isn’t there.”

Embedded AI is doable nowadays because the computer chips in consumer-grade electronics are becoming more powerful. An iPhone 6, for instance, has about as much computing juice as a 2012 MacBook Air. Engineers are also learning how to write leaner, more reliable neural networks that work offline.

Because Sensory’s AI runs on the phone, your data doesn’t leave your device, and the app doesn’t even need an internet connection to work. For the privacy-conscious, embedded neural networks could be a big deal because they allow users to benefit from the insights big-data analytics make possible without giving up too much of their information. It might be nice, for instance, if your car could warn you when you’re distracted or about to get into an accident, without tipping off your insurance company.

The drawback, though, is that without a link to a datacenter, the algorithms in an embedded neural network can only learn so much about you, making them less able to predict your likes and dislikes. Teaching a network—adjusting millions of internal settings by crunching through mountains of examples—is still far too intensive for phones and laptops to carry out; they can apply what a network has already learned, but not retrain it. So these neural nets won’t be able to evolve over time to better reflect how you want to use them.
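
A rough back-of-the-envelope suggests why. A common rule of thumb puts a single prediction at about two arithmetic operations per network weight, while training repeats a step roughly three times costlier across every example in the dataset, many times over. The model size, dataset size, and epoch count below are invented for illustration:

```python
# Back-of-the-envelope: why phones can predict but not (yet) learn.
# Rule of thumb: a prediction costs ~2 floating-point operations per
# weight; a training step (forward + backward pass) costs ~3x that,
# repeated over every example, for many passes. All numbers below
# are invented for illustration.
params = 10_000_000      # a modest 10-million-weight network
examples = 1_000_000     # training examples
epochs = 10              # passes over the dataset

prediction_flops = 2 * params
training_flops = 3 * prediction_flops * examples * epochs

print(f"one on-device prediction: {prediction_flops:.0e} FLOPs")
print(f"one full training run:    {training_flops:.0e} FLOPs")
print(f"training / prediction:    {training_flops / prediction_flops:.0e}x")
```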

“There’s a trade-off between efficiency and privacy. The more it pools data, the better the central model gets,” said Chris Nicholson, the CEO of Skymind, an AI startup that helps other companies build embedded neural networks. But companies “are very aware people want privacy.”

So tech companies have a dilemma. They’re torn between collecting as much data as possible to improve products that feed their bottom line and satisfying consumers’ need for privacy.

We are starting to get our little embedded friends. Last week, Google released a new version of Translate that runs on an embedded AI so it can work offline. People who’ve downloaded the app can point their phone’s camera at something printed in a foreign language, and the app translates it instantly—no connection to Google’s cloud required, so long as they’ve downloaded a language dictionary in advance. (Google wouldn’t comment on whether Translate sends data back to Google when the app is online.)
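
The general offline-first pattern is easy to picture, though: answer from a pack that was downloaded ahead of time, and treat the cloud as optional. The toy sketch below is a hypothetical stand-in, not Google’s implementation:

```python
# A toy offline-first lookup: consult a language pack downloaded in
# advance, and treat the cloud as an optional fallback. Hypothetical
# stand-in, not Google's implementation.
LANGUAGE_PACK = {"hola": "hello", "adiós": "goodbye"}  # fetched ahead of time

def cloud_translate(word):
    raise NotImplementedError("placeholder for a network request")

def translate(word, online=False):
    if word in LANGUAGE_PACK:        # local lookup: instant and private
        return LANGUAGE_PACK[word]
    if online:                       # only reach out if a connection exists
        return cloud_translate(word)
    return None                      # offline and not in the pack

print(translate("hola"))   # "hello", with no connection at all
print(translate("gato"))   # None: the pack doesn't cover it
```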

“For Google, it’s the data that gives them their value,” Sensory’s Mozer said. “They don’t have a strong incentive to go embedded.”

The game-changer for embedded intelligence could be autonomous cars, where Google is a clear leader.

“Self-driving cars are predicated on really effective machine vision,” said Skymind’s Nicholson. “They have to be able to learn in remote places. Car makers will want machine vision that continues to learn without having to go back to the cloud…There are plenty of places where you don’t have great connectivity, but you want the appliances you sell to work.”

Daniela Hernandez is a senior writer at Fusion. She likes science, robots, pugs, and coffee.
