Will Google's new wireless carrier service be a giant ad machine?

Today, Google unveiled Project Fi, its new wireless carrier service for mobile devices, which works on the Sprint and T-Mobile networks. Customers will pay just $20 for unlimited text messaging and voice calling, plus $10 per gigabyte of data; the entire messaging layer runs through Google Hangouts. Right now, Project Fi is invite-only and works only with the Motorola Nexus 6, but the program will expand soon.
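For a rough sense of how that pricing adds up, here is a back-of-the-envelope sketch based only on the figures above; taxes, fees, and anything else Google hasn't detailed are ignored.

```python
def project_fi_monthly_bill(gb_used: float) -> float:
    """Estimate a Project Fi monthly bill from the announced pricing:
    $20 flat for unlimited talk and text, plus $10 per gigabyte of data.
    Taxes, fees, and anything not in the announcement are ignored."""
    BASE_FEE = 20.00   # unlimited calls and texts
    PER_GB = 10.00     # each gigabyte of data
    return BASE_FEE + PER_GB * gb_used

# A month with 3 GB of data: $20 + 3 * $10 = $50
print(project_fi_monthly_bill(3))  # 50.0
```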

Naturally, the news that Google is getting into the wireless carrier business got us wondering whether the search and advertising giant plans to use data from customers’ voice conversations to serve them targeted ads, the way it does with their e-mail and search histories. Is Project Fi just a bid to collect more customer data and use it to sell stuff?

Google says it isn’t. “This is a stand-alone product,” Google spokesperson Chelsea Maughan told us. “We have no plans to tie this to ads.” (“No plans to do X,” of course, is the classic response given by companies that want to leave the door open for a change of mind later on.)

In Project Fi’s case, an influx of new user data would be a huge boon for Google’s core advertising business. Location-based, personalized ads are at the heart of how Google makes money, and nobody knows more about mobile device usage than the carriers who provide service for those devices. With data about all of our wireless use, including what we’re saying in our conversations, Google could reach a new frontier of targeted advertising.

It’s easy to imagine how this might look in practice. Say you’re having a conversation with a friend about a new book you want to read, but neither you nor your friend can think of the title. Instead of having to search for it yourself, your Project Fi phone could automatically recognize keywords you uttered during the conversation and ping you with a list of possibilities, along with a link to buy the e-book from the Google Play store.
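To make the mechanics of that scenario concrete, here is a minimal sketch of the kind of pipeline it implies: transcribe the conversation, pull out salient keywords, and match them against a catalog of titles. The catalog, stopword list, and function names below are hypothetical stand-ins for illustration, not anything Google has described.

```python
import re
from collections import Counter

# Toy catalog standing in for something like the Play Books index (hypothetical data).
CATALOG = {
    "The Martian": {"astronaut", "mars", "stranded", "botanist"},
    "Seveneves": {"moon", "orbit", "apocalypse", "space"},
    "The Circle": {"startup", "social", "privacy", "surveillance"},
}

STOPWORDS = {"the", "a", "an", "and", "or", "but", "i", "you", "it", "is",
             "was", "about", "that", "this", "to", "of", "in", "on", "who"}

def extract_keywords(transcript: str, top_n: int = 10) -> set:
    """Pull the most frequent non-stopword terms out of a transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return {w for w, _ in counts.most_common(top_n)}

def suggest_titles(transcript: str) -> list:
    """Rank catalog entries by how many of their tags the keywords hit."""
    keywords = extract_keywords(transcript)
    scored = [(len(tags & keywords), title) for title, tags in CATALOG.items()]
    return [title for score, title in sorted(scored, reverse=True) if score > 0]

print(suggest_titles(
    "It's about an astronaut who gets stranded on Mars and has to grow his own food"
))  # ['The Martian']
```

A real system would swap the keyword counting for the speech-recognition and entity-matching models Google already runs at scale, but the overall shape (audio in, ranked suggestions out) would presumably be much the same.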

This scenario may sound like bleak sci-fi, but a few months ago, I previewed something that works very much like it. ExpectLabs, a San Francisco-based artificial intelligence company backed by both Google and Samsung, has developed an app and developer platform called MindMeld. It uses speech-recognition algorithms to listen in on your conversations, or on whatever else you’re listening to. Then it analyzes that information and pumps out a list of relevant links, everything from newspaper articles to local businesses’ websites.

“This is the way that people are going to find information in the future,” ExpectLabs CEO Tim Tuttle told me. “When you want information, we’re not just at our desks typing searches…As we’re walking around town, people will expect, you’ll be able to get to it by having all these devices pay attention to your context and be able to suggest stuff.”

Google Now and Siri are steps toward a system like this. But like a traditional web search, they still require a user to act. What Tuttle envisions, and what real-time voice analysis of our phone calls will enable, would be passive search. You’d be able to multitask, thanks to artificial-intelligence systems running in the background, helping you find things without you needing to ask first. “This is really the frontier of search and content discovery,” Tuttle added.
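Architecturally, passive search like this would presumably amount to a background loop: keep a rolling window of recently recognized speech, periodically re-analyze it, and surface suggestions only when something new and relevant turns up. The sketch below is purely illustrative; transcribe_chunk, suggest, and notify are hypothetical callables a caller would supply, not real Google or MindMeld APIs.

```python
import time
from collections import deque

def passive_search_loop(transcribe_chunk, suggest, notify,
                        window_size: int = 12, poll_seconds: float = 5.0):
    """Illustrative background loop for passive search: maintain a rolling
    window of recent transcript chunks, re-run the suggester over it, and
    notify the user only when something new turns up. The three callables
    are hypothetical stand-ins supplied by the caller."""
    window = deque(maxlen=window_size)   # rolling context of recent speech
    already_shown = set()                # avoid pinging twice for the same link
    while True:
        chunk = transcribe_chunk()       # e.g., a few seconds of recognized speech
        if chunk:
            window.append(chunk)
            fresh = [s for s in suggest(" ".join(window)) if s not in already_shown]
            if fresh:
                notify(fresh)            # surface links without the user asking
                already_shown.update(fresh)
        time.sleep(poll_seconds)         # no user action required at any point
```

The key difference from Google Now or Siri is in that loop: the user never issues a query; the system decides on its own when it has something worth surfacing.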

Although Google has “no plans” to tie its wireless carrier service to ads at present, there’s no reason to believe the company isn’t working on something like MindMeld, or that it won’t ever use the data associated with Project Fi to create a similar experience. The potential benefit to the company would be immense, and customers might even enjoy it. (A Googler in the promo video hints at the optimistic view, saying that they were excited about “getting [Project Fi] into users’ hands and find[ing] out all the new and amazing things we can build to make your life easier.”) Facebook, another tech company that makes its money through ads, is also getting into the phone game, with an app called Hello. Depending on whether and how it collects call data, that app could also provide more data for the company’s AI to analyze.

As for Google, there’s already some precedent for the search giant taking user voice data and using it to improve its other products. Back in 2007, Google launched a free 411 number, Goog-411, that allowed people to call in to get the contact information for people and businesses they wanted to reach. As it turned out, the company was using that voice data to train its speech-recognition algorithms. Less than a decade later, voice recognition is one of Google’s signature products.

If Google had complete conversations at its disposal, the possible improvements to its voice recognition algorithms would be enormous. Of course, in the post-PRISM era, it would also raise privacy fears. Metadata is one thing; would users willingly surrender the content of their phone conversations, in exchange for convenience and a good user experience?

Daniela Hernandez is a senior writer at Fusion. She likes science, robots, pugs, and coffee.
