At the TechCrunch Disrupt event in NY today, Dag Kittlaus delivered the first public demonstration of Viv, his team’s follow-up to the popular Siri service. There’s been a lot of press in advance of the demo and frequent chatter around eBusiness and bots and what Viv means to them. My focus is on Application Development and Delivery (AD&D) professionals, so I thought I’d update you on what all of this means to them.
Viv is attempting to create what they’re calling a Global AI. While I’m not an AI expert, as I understand it AI systems are typically ‘trained’ either algorithmically or by feeding them large volumes of data and helping them sort through it. Self-training algorithms are where AI research had stalled until recently, but machine learning and other methods are revitalizing the field. The Viv team, however, is taking a different approach. They’ve built the requisite language processing capabilities (through a partnership with Nuance) and coupled that with a code generator (what they call dynamic program generation) that delivers the needed results. What happens in between? Well, that’s the special sauce that will be very interesting for developers.
The knowledge Viv uses to deliver on voice requests is driven directly by input from developers. Well, that’s not entirely true, but you’ll see what I mean in a minute.
In the demo shown today, what you saw was Viv processing the spoken request in order to understand exactly what was being asked. Once it understands the request, it searches a database of services to determine whether it can craft the desired result from the data those services provide. Once it has that information, it generates and then executes program code to call the appropriate services, stitches the service results together, and even calls more services, if needed, to get the answer or perform the task as directed by the requestor.
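To make that flow concrete, here is a minimal sketch of the parse-then-compose pipeline described above. Viv’s internals are proprietary, so every name here (the service registry, the `parse` and `execute` functions, the stub services) is hypothetical; this only illustrates the general shape of “match a request to services, then chain their results.”

```python
# Hypothetical sketch of the request pipeline described above.
# Nothing here is Viv's actual API; all names are invented for illustration.

from typing import Callable, Dict, List

# A registry of published services, keyed by capability name.
# Each service takes the results so far and returns an enriched result.
SERVICES: Dict[str, Callable[[dict], dict]] = {
    "geocode": lambda req: {"lat": 40.7, "lon": -74.0},  # stub: resolve a location
    "weather": lambda req: {"forecast": "rain", **req},  # stub: look up a forecast
}

def parse(utterance: str) -> List[str]:
    """Stand-in for the language-processing step: map an utterance
    to the chain of capabilities needed to answer it."""
    if "weather" in utterance:
        return ["geocode", "weather"]  # need a location first, then a forecast
    return []

def execute(utterance: str) -> dict:
    """Compose the matched services: each service's output feeds the
    next, loosely mirroring the 'stitch results together' step."""
    result: dict = {}
    for name in parse(utterance):
        result = SERVICES[name](result)
    return result

print(execute("what's the weather near my office?"))
```

The real system generates this composition code dynamically per request rather than hard-coding the chain, which is precisely the part Viv hasn’t shown publicly.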
The database of services used for the demo was created by the Viv team working with some select partners. Going forward, though, the services Viv has at its disposal will not be created by the team at Viv; they will be created by developer types like you and me. As Dag mentioned in his demo, Viv will be opening up its service to developers, enabling them to enhance Viv’s brain in “whatever self-interested ways they want.” This means that anyone can augment Viv’s intelligence. Anyone.
Apple and Google’s voice interaction systems are closed: Siri knows what Apple’s developers tell it to know, and Google Now can connect activity on your device to back-end data only through code Google’s developers created. The Amazon Echo is extensible by any developer, but you have to document the specific command phrases your code will recognize. Viv, on the other hand, is wide open. Any service a company or individual developer publishes to Viv’s repository is available to Viv’s Global AI. As long as the voice parsing algorithm works as expected, Viv can get to work.
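The contrast between the two extensibility models can be sketched as data structures. Neither snippet is a real API; the field names and the `order_flowers` handler are invented to show the difference between enumerating trigger phrases up front (the Echo approach) and publishing a described capability for the assistant to invoke on its own judgment (the open approach Viv describes).

```python
# Illustrative contrast only; both formats are hypothetical.

# Echo-style: the developer enumerates the exact trigger phrases up front.
echo_style_skill = {
    "intent": "OrderFlowers",
    "sample_utterances": [
        "order flowers",
        "send a bouquet to {recipient}",
    ],
}

def order_flowers(recipient: str, occasion: str = "birthday") -> str:
    """Hypothetical capability: place a flower order. In the open model,
    the platform, not the developer, decides which utterances route here."""
    return f"Ordered {occasion} flowers for {recipient}"

# Open-style: the developer publishes a typed capability description and
# leaves utterance matching and service composition to the assistant.
open_style_service = {
    "capability": "order_flowers",
    "inputs": {"recipient": "Person", "occasion": "Occasion"},
    "output": "OrderConfirmation",
    "handler": order_flowers,
}

print(open_style_service["handler"]("Mom", occasion="Mother's Day"))
```

The practical consequence is in the second model: because services declare what they consume and produce rather than what phrases invoke them, the AI can chain services the developer never anticipated.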
What’s going to be interesting to watch is how quickly the services catalog grows and how soon you can accomplish most of what you want through Viv. Stay tuned.