I recently spoke with Tim Tuttle, the CEO of Expect Labs, a company that operates at the vanguard of two computing categories: voice recognition (a field populated by established vendors like Nuance Communications, Apple, and Google) and what we can call the intelligent assistant space (probably most popularly demonstrated by IBM’s “Jeopardy”-winning Watson). In its own words, Expect Labs leverages “language understanding, speech analysis, and statistical search” technologies to create digital assistant solutions.

Expect Labs built the application MindMeld to make the conversations people have with one another “easier and more productive” by integrating voice recognition with an intelligent assistant in an intuitive tablet application. The company has coined the term “Anticipatory Computing Engine” to describe its solution, which offers users a new kind of collaboration environment. (Expect Labs aims to provide an entire platform for this type of computing.)

Here’s how MindMeld works: Imagine five colleagues across remote offices – all equipped with Apple iPads – having a conference call on a particular topic or set of topics. Using the MindMeld application, these users join a collaborative workspace that updates in real time during the call. The MindMeld app “listens” to the conversation, surfacing themes and topics word-cloud style. It then leverages the Anticipatory Computing Engine to find relevant content (say, from the web) on those topics and surface it in the workspace. These pictures, videos, articles, and other items enrich the conversation – and create a record of the collaborative experience – in ways that should drive stronger, more effective, more data-supported collaboration.
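To make that flow concrete, here’s a minimal, hypothetical sketch of such a pipeline in Python: transcript text arrives, dominant topics are extracted, and related content is fetched into the shared workspace. The topic extraction and search functions are illustrative stand-ins – Expect Labs’ actual Anticipatory Computing Engine is proprietary and far more sophisticated.

```python
from collections import Counter

# Hypothetical sketch of an "anticipatory" pipeline: transcript text comes
# in, dominant topics are surfaced, and related content is pulled into a
# shared workspace. Everything here is illustrative, not Expect Labs' code.

STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "on", "for",
             "we", "is", "are", "that", "this", "it", "with", "should"}

def extract_topics(transcript_chunk: str, top_n: int = 3) -> list[str]:
    """Naive topic extraction: the most frequent non-stopword terms."""
    words = [w.strip(".,!?").lower() for w in transcript_chunk.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

def find_related_content(topic: str) -> list[str]:
    """Stand-in for a real web/content search; returns placeholder URLs."""
    return [f"https://example.com/search?q={topic}"]

def on_new_transcript_chunk(chunk: str, workspace: dict) -> None:
    """Update the shared workspace as each slice of conversation arrives."""
    for topic in extract_topics(chunk):
        items = workspace.setdefault(topic, [])
        for item in find_related_content(topic):
            if item not in items:
                items.append(item)

if __name__ == "__main__":
    workspace: dict[str, list[str]] = {}
    on_new_transcript_chunk(
        "We should compare tablet adoption against smartphone adoption "
        "before we commit the tablet budget.",
        workspace,
    )
    for topic, items in workspace.items():
        print(topic, "->", items)
```

Running the example prints the surfaced topics alongside their (stubbed) content links – the same shape of result that MindMeld renders visually in its workspace.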

In the future, you could imagine MindMeld tapping into proprietary sources (like CRM systems) to inject insights from big data into streams of work within an enterprise – its intelligent assistant acting as a content curator in real time.
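Again purely as a sketch – MindMeld doesn’t expose such an integration today – one way to structure that curation is a pluggable content-source interface, where a CRM adapter sits alongside web search. Every name below is illustrative:

```python
from typing import Protocol

# Illustrative only: MindMeld does not expose a CRM integration today.
# The idea is a pluggable content-source interface, so enterprise data
# (CRM records, wikis, tickets) can be curated alongside web results.

class ContentSource(Protocol):
    def lookup(self, topic: str) -> list[str]: ...

class WebSearchSource:
    """Stand-in for the web search used in the sketch above."""
    def lookup(self, topic: str) -> list[str]:
        return [f"https://example.com/search?q={topic}"]

class CrmSource:
    """Stand-in for a query against a proprietary CRM system."""
    def __init__(self, records: dict[str, list[str]]):
        self.records = records  # topic -> related account notes

    def lookup(self, topic: str) -> list[str]:
        return self.records.get(topic, [])

def curate(topic: str, sources: list[ContentSource]) -> list[str]:
    """Ask every registered source for content on a surfaced topic."""
    results: list[str] = []
    for source in sources:
        results.extend(source.lookup(topic))
    return results

if __name__ == "__main__":
    crm = CrmSource({"tablet": ["Acme Corp asked about tablet pricing in Q2"]})
    print(curate("tablet", [WebSearchSource(), crm]))
```

The design point is that the conversation-listening pipeline stays unchanged; enterprise data sources simply register as additional places the assistant can look.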

Here's a video demonstration of MindMeld:

The MindMeld app reveals some interesting end-user computing truths:

  • Some of the most innovative software experiences come to tablets first. Expect Labs developed MindMeld for Apple’s iPad first, motivated by the touchscreen environment (which lends itself to a collaboration-oriented user interface), the screen real estate, and the iPad’s market share among tablets. The company also plans an Android tablet experience. While the core technologies that drive MindMeld – voice recognition and intelligent assistants – aren’t bound to tablets, the developers chose the tablet as the form factor of choice for their user experience. This isn’t completely surprising: According to Forrester’s Q1 2013 survey of over 2,000 software developers, tablets rival smartphones as a form factor that developers either support today or plan to support. The numbers are already close – 54% target smartphones with the software they develop, while 49% target tablets – even though smartphones outnumber tablets roughly 5:1 globally today.
    What It Means: Tablets are in the driver's seat for empowering innovative computing experiences. They're often the place where you'll find developer interest.
  • Computing is evolving – rapidly – beyond keyboards. “In a world where keyboards aren’t tightly coupled with computing, easier interaction methods are required,” Tim Tuttle told me, describing Expect Labs’ focus on voice recognition. His observation is important as we think of all the scenarios in which keyboards aren’t present: in a car, in our living rooms, in some mobile- and tablet-computing scenarios, with Xbox Kinect, or in a variety of embedded computing scenarios (like wearables or home automation solutions) that Forrester calls Smart Body, Smart World.
    What It Means: In addition to thinking “mobile first” for application development, think also in terms of “keyboard-free” device and application scenarios.
  • The technological limits to voice recognition will abate in the next five years. In a glimpse ahead, Tim Tuttle – who holds a PhD from the MIT Media Lab – noted that many of the technical challenges that have inhibited successful voice recognition-based interactions are rapidly being solved. In particular, he observed that a variety of barriers have fallen in the past 18 months, and predicted that all major performance problems will be overcome in the next five years.
    What It Means: Jean-Luc Picard’s computer on Star Trek: The Next Generation is now fully in sight – tablets fulfilled the touchscreen aspect, while voice recognition is finally emerging as a productive user interface as well. (No predictions on the Holodeck, though.)