I read a post this morning titled, “You Won’t See Facebook’s Graph Search On iPhone Or Android Anytime Soon,” that has left me scratching my head a bit. Surely I’m missing something.
The entire basis of Tareq Ismail’s argument was the difficulty of typing in long sentences on a mobile phone. “It’s simple: Graph Search for mobile would need to incorporate speech, which is a different beast altogether,” Tareq wrote. “Many of the examples given during the Graph Search keynote contained long sentences, which are not easy to type on a mobile device. Think of the example “My college friends who like roller blading that live in Palo Alto.” Search engines like Google get around this on mobile by offering autofill suggestions, but their suggestions come from billions of queries. For Facebook, since their search is based on hundreds of individual values like “fencing” or “college friends” specific to each user and not a group, autofill suggestions will often not be useful, or worse, will require a lot of tapping and swiping to drill down to the full request.”
I’m in a noisy Starbucks as I type this. I just pulled up the Notes app on my iPhone and spoke the words from above, “My college friends who like roller blading that live in Palo Alto.” Guess what? That’s exactly the text Siri returned. Perfectly.
On modern smartphones, this is a non-issue. The ability to “incorporate speech” into an app is simply built into the latest iPhones and iPads running up-to-date iOS software. The images to the right show how it’s implemented, automatically, in the myFirstAm app we built for First American Title. No special coding was required to make this possible.
Speech-to-text is also an integrated part of the Android operating system, and has been for several years. The author also argues, “speech recognition doesn’t come cheap.” I just don’t get it. How much cheaper could free be? Ongoing improvements and investment in speech-to-text technology will come from the mobile OS side, not the app side.
I will agree with two of Tareq’s arguments, however. First, he says, “Names are Facebook’s strength and speech recognition’s weakness.” No doubt… but I think this is a minor hurdle for Facebook to overcome in driving mobile adoption of Graph Search, not a roadblock.
Second, he rightly states, “Facebook has over a billion users who collectively speak hundreds of different languages. Facebook has said they’re beginning their launch with English. How long until all billion users’ languages are supported for the desktop?” As an English-speaking American with no discernible accent, speech-to-text works significantly better for me than it does for most people. I realize this. And yet, I still don’t see this as a barrier to Facebook launching Graph Search on mobile. As iPhone and Android grow their ability to recognize and translate speech into text, in any language, all apps that take advantage of it will grow as well.
If Facebook’s Graph Search has value, people will find a way to use it on mobile. Its adoption, like that of other natural-language search, won’t be slowed by speech-to-text. If it’s useful, it will get used, even if I have to type a long sentence.