My Three Assumptions For Why The Next Generation Of Software Innovation Will Be Cognitive!
I am just back from the first ever Cognitive Computing Forum, organized by DATAVERSITY in San Jose, California. I am not new to artificial intelligence (AI); I was a software developer in the early days of AI, just out of university. Back then, if you worked in AI, you were called a software knowledge engineer, and you used symbolic programming (LISP) and first-order logic programming (Prolog) or predicate calculus (MRS) to develop “intelligent” programs. Lots of research went into knowledge representation and into tools that supported knowledge engineers in building applications that by nature required heuristic problem solving. Heuristics are necessary when problems are ill-defined, non-linear, and complex. Deciding which financial product you should buy, based on your risk tolerance, the amount you are willing to invest, and your personal objectives, is a typical problem we used to solve with AI.
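For readers who never saw that era of AI, here is a minimal, hypothetical sketch of the rule-based heuristic style we used for problems like the financial product example above, written in Python for readability rather than in LISP or Prolog; the rules, thresholds, and product names are invented purely for illustration.

```python
# A minimal, hypothetical sketch of a rule-based heuristic recommender, in the
# spirit of the old knowledge-engineering style (shown in Python rather than
# LISP/Prolog). Rules, thresholds, and product names are invented for illustration.

def recommend_product(risk_tolerance: str, amount: float, objective: str) -> str:
    """Apply hand-written heuristic rules in order until one fires."""
    rules = [
        (lambda: risk_tolerance == "low" and objective == "preserve capital",
         "government bond fund"),
        (lambda: risk_tolerance == "medium" and amount < 50_000,
         "balanced mutual fund"),
        (lambda: risk_tolerance == "high" and objective == "growth",
         "equity index fund"),
    ]
    for condition, recommendation in rules:
        if condition():
            return recommendation
    return "refer to a human advisor"  # fall-through when no rule matches

print(recommend_product("low", 20_000, "preserve capital"))   # government bond fund
print(recommend_product("high", 100_000, "growth"))           # equity index fund
```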
Fast forward 25 years, and AI is back under a new name: cognitive computing. An old friend of mine, who has never left the field, says, “AI has never really gone away, but it has undergone some major fundamental changes.” Perhaps it never really went away from labs, research, and very niche business areas. The change, however, is largely about the context: the hardware and software scale constraints are gone, and tons of data and knowledge are digitally available (ironically, AI missed big data 25 years ago!). But this is not what I want to focus on.
To make life easier, I am giving up on the term artificial intelligence, using it only to refer to the past, and sticking to the more modern term cognitive computing for the present and the future. But bear in mind that cognitive computing exists only because AI existed; we would be far behind without it. So what can we do with cognitive computing today? That is a question we will answer in more detail in a market overview document we are working on, planned for Q4 2014. For now, I am going to make three assumptions for BT AD&D pros, based on my last six months of paying attention to cognitive computing and on what I saw in San Jose last week at the Cognitive Computing Forum:
Assumption No. 1: Cognitive computing will change forever the way we engage with software systems. One goal of cognitive computing is to help computers interact with humans in a human way. While we are still far away from full speech/voice recognition and language understanding, natural language (NL) in limited domains is possible today (shallow NL). A concrete product example that leverages shallow NL, besides the IBM Watson Q&A system, is Narrative Science’s Quill, which hides the complexity of analytical models and data and instead presents the meaning and insights of the data as natural language narratives. Shallow NL is here, and many current cognitive products are leveraging it. But more advanced technologies are in the works too: Mark Sagar from the University of Auckland showed how, by combining bio-engineering, neuroscience, sensing, and computer graphics research, his team can teach software to communicate and to understand facial expressions and emotions. His demo was entertaining and effective at the same time: he showed a “software child face” mimicking a real one, which could learn instincts and react to Mark’s interactions on stage. Some of the technology Mark presented has already been used in the film and gaming industries. NL holds great promise for systems of engagement and mobile: think how portable NL is across channels and how easily it can be leveraged on our own devices. Forrester strongly believes cognitive computing will transform the engagement model in many ways and at increasing levels of disruption. That’s why we have coined the term cognitive engagement and written about it here.
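To make the shallow-NL idea concrete, here is a deliberately simple, hypothetical sketch of template-based data-to-narrative generation over a tiny, limited domain. It is not how Quill works internally, and the revenue figures are made up; it only illustrates the general pattern of turning structured data into a readable sentence.

```python
# A toy sketch of shallow NL: turning structured data into a narrative sentence
# via a template. Not Quill's actual approach; figures are invented.

quarterly_revenue = {"Q1": 1.2, "Q2": 1.5, "Q3": 1.1}  # revenue in $M

def narrate(revenue: dict) -> str:
    quarters = list(revenue)
    latest, previous = quarters[-1], quarters[-2]
    change = (revenue[latest] - revenue[previous]) / revenue[previous] * 100
    direction = "grew" if change > 0 else "declined"
    return (f"Revenue {direction} {abs(change):.0f}% in {latest} to ${revenue[latest]:.1f}M, "
            f"compared with ${revenue[previous]:.1f}M in {previous}.")

print(narrate(quarterly_revenue))
# Revenue declined 27% in Q3 to $1.1M, compared with $1.5M in Q2.
```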
Assumption No. 2: Cognitive computing is ready, now, to help solve business problems in much less time than traditional programming. Many of the vendors and start-ups I met at the show claim to have deployed production cognitive solutions at large enterprises; Saffron, Wise.IO, and Narrative Science all have solid references. I found the talk by Vivienne Ming, a neuroscientist at Gild, inspiring: she presented a framework for selecting effective software developer talent based on predictive models. Drawing conclusions over a vast database of millions of CVs would take forever if done by humans; Gild’s framework does it in minutes. Quill from Narrative Science reduces days or weeks of human work to seconds or nothing. A few large financial services firms and rating agencies are already using Narrative Science’s technology. Interestingly, Quill does not understand the narratives it produces, but that is not the problem Narrative Science is trying to solve, and it would be a hard one anyway.
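As a thought experiment on the Gild example, here is a toy, hypothetical sketch of predictive scoring over CV-derived features using scikit-learn. It is not Gild’s actual framework: the features, training data, and labels are entirely synthetic and exist only to show the shape of the approach, that is, a model trained on past outcomes scoring new candidates in milliseconds rather than requiring human review of each CV.

```python
# A toy sketch of predictive candidate scoring over CV-derived features.
# Not Gild's actual framework; all features, data, and labels are synthetic.
from sklearn.linear_model import LogisticRegression

# Each row: [years_experience, open_source_commits, relevant_keywords_in_cv]
X_train = [
    [1,   0, 2],
    [3,  40, 5],
    [7, 120, 9],
    [2,   5, 1],
    [5,  80, 7],
    [0,   0, 0],
]
y_train = [0, 1, 1, 0, 1, 0]  # 1 = performed well after hire (synthetic label)

model = LogisticRegression().fit(X_train, y_train)

candidate = [[4, 60, 6]]  # a new CV, reduced to the same features
score = model.predict_proba(candidate)[0][1]
print(f"Predicted probability of strong performance: {score:.2f}")
```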
Assumption No. 3: What’s surfacing from cognitive computing these days is only the tip of the iceberg; much more is coming. Cognitive computing has long-term goals, spanning a decade or more. It will solve a class of new problems we have not even thought of yet. Many pieces of the puzzle were presented at the conference: neural networks now trained over big data sets (deep learning), more statistically based machine learning approaches (less to do with cognition), language understanding, knowledge representation and reasoning, genetic algorithms, expressive machines, cortical and brain simulations, and more. I fully agree with one conclusion made by Patrick Lilley from Emerald: it’s time to take all these streams of research, practices, and cool inventions and pull them together. That will help build a true foundation for future, game-changing cognitive computing, a foundation to be leveraged by business and society. My takeaway for you from this assumption: if you get involved in cognitive computing, get in for the long run. Yes, you can get some spot business solutions going, but the big reward will take more investment and time.
In conclusion, although this is only the start of a long series of research we will be writing at Forrester, I recommend that developers prepare for this new wave of technology and integrate it into their practices rather than isolating it. One reason AI failed the first time is that it never got integrated into IT; let’s avoid that mistake. I am hoping to post, based on what I saw at the forum, on how cognitive computing impacts the way you develop and affects your SDLC, much as Agile and lean do: a good reason to keep doing what you are doing with Agile! In the meantime, you should also read the takeaways my colleague Michele Goetz has summarized in her blog about this conference.