Artificial Intelligence (AI) is not one big, monolithic technology. Rather, it is composed of several building-block technologies. So, to understand AI, you have to understand each of these nine building blocks. You could argue that there are more technologies than the ones listed here, but any additional technology can fit under one of these building blocks. This is a follow-on to my post Artificial Intelligence: Fact, Fiction, How Enterprises Can Crush It.
Here are the nine pragmatic AI technology building blocks that enterprises can leverage now:
- Knowledge engineering. Knowledge engineering is a process to understand and then represent human knowledge in data structures, semantic models, and heuristics (rules). AD&D pros can embed this engineered knowledge in applications to solve complex problems that are generally associated with human expertise. For example, large insurers have used knowledge engineering to represent and embed the expertise of claims adjusters to automate the adjudication process. IBM Watson Health uses engineered knowledge in combination with a corpus of information that includes over 290 medical journals, textbooks, and drug databases to help oncologists choose the best treatment for their patients.
- Robotics. A robot is an autonomous mechanical device that can perform tasks and interact with the physical world. Robots may look humanoid, but most are designed by engineers to take a form that is more appropriate to their function. For example, manufacturing welding robots take the form of a large jointed arm. A driverless car is a robot because it is autonomous, and it obviously takes the form of an automobile. Enterprises mostly use robotics to automate repetitive tasks in controlled manufacturing environments for materials handling, assembly processes, and quality checks. But, as robotic technology advances, enterprises can use it to automate a wider range of business processes, customer interactions, or new product development.
- Speech recognition. Speech recognition technology converts the audio of spoken words to text that applications can use to take commands from humans (like Apple’s Siri, Google Now, or Amazon Echo), transcribe a conversation, or participate in a conversation. For example, Nuance offers a speech recognition solution called Dragon Medical that integrates with major electronic health record (EHR) applications such as Epic Systems, Cerner, and eClinicalWorks to help doctors capture clinical narratives. This is a literal application of speech recognition technology; however, understanding spoken words also requires an understanding of the context in which they are said. Say aloud “This machine can recognize speech.” Now say it again, but picture yourself on a Cape Cod beach looking at a backhoe. You probably hear instead “This machine can wreck a nice beach!” Context-aware speech recognition is still a challenge in many situations.
- Natural language processing. NLP technology strives to understand the meaning of words in conversations and written text. The ultimate goal of NLP is to do this at scale — to extract the meaning expressed in language in libraries, on the internet, and in the billions of conversations that take place every minute of every day. Today, enterprises can use NLP to analyze any text to extract topics, sentiment, meaning — and knowledge. A large financial information services firm uses NLP to monitor social media and financial market news in real time to look for changes in sentiment that may signal an opportunity for its customers to buy or sell a financial instrument. eCommerce companies use NLP to analyze customer product reviews and then correlate them with star ratings to determine salient product features, quality issues, and general sentiment toward the product and manufacturer.
- Natural language generation (NLG). Natural language generation is the inverse of NLP. This technology strives to express information stored and modeled in software in natural language that humans can understand as if they were talking to a native speaker. Applications use NLG technology to speak or converse with humans. For example, intelligent digital assistants such as Amazon Alexa talk back to humans who ask them a question. Enterprises can use NLG to provide employee-less customer service agents such as Amelia from IPsoft and Watson Engagement Advisor from IBM. Enterprises can also use NLG to produce software-written narrative reports. USAA uses Narrative Science’s Quill to generate customized investment advice reports for its customers.
- Image analysis. Image analysis is technology that strives to identify and understand what is seen — objects, people, and situations in static digital images and/or video. Image analysis technology assigns labels to identify objects and/or motion that AD&D pros can use within applications to give them the power of vision. Lemon Tree Hotels in New Delhi uses NEC’s hotel face recognition system to alert hotel staff when VIPs enter the lobby and security officers when undesirable guests enter the hotel. A large chip manufacturing firm uses image analysis to visually assess silicon wafers for quality defects.
- Machine learning. Machine learning comprises the tools, techniques, and algorithms that AD&D pros and data scientists use to analyze data, create predictive models, and identify patterns. Machine learning is not a singular approach to analyzing data; there are dozens of specialized classes of algorithms that focus on specific problem domains. For example, some machine learning algorithms generate personalized product recommendations for customers, while others predict customer behavior such as when a customer might churn. Cognitive search technology uses machine learning to identify recurring patterns in search results to make them increasingly relevant to customers over time.
- Deep learning. Deep learning is a branch of machine learning that focuses on algorithms that construct artificial neural networks inspired by the biological neural networks of the brain. It is a computationally intensive technique that makes neural networks more efficient to create at scale. Today, all the internet giants use it to analyze and predict online behavior, improve search, and label uploaded images. Other enterprises can experiment with deep learning to organize information and predict outcomes or to boost the accuracy of other AI building blocks, such as image analysis and speech recognition. Because deep learning is a newer technique, enterprises typically work with open source frameworks such as Caffe (geared toward image classification) and Google's TensorFlow and Theano (more general purpose), and they must be prepared to invest in research to try multiple frameworks. AI researchers see great potential for deep learning because it may evolve into a general-purpose learning system similar to the human brain.
- Sensory perception. Sensors measure and collect one or more physical properties of persons, places, or things — such as location, pressure, humidity, touch, voice, and much more. AI applications exist in the physical world and often need information about the physical environment to provide context. AD&D pros should take inventory of the sensors available to their applications for information that adds context or training data for other AI building blocks. For instance, GE’s Predix Asset Performance Management is an internet-of-things (IoT) application that uses sensory information from industrial equipment to build machine learning models that optimize maintenance routines and predict equipment failures before they happen. NTT Docomo’s sensor packages help detect if a cow is ready to go into labor, allowing for faster veterinarian response time and a safer calving process.
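To make the knowledge engineering building block concrete, here is a minimal sketch of adjuster expertise captured as declarative rules. Every rule name, field, and threshold is invented for illustration; it is not drawn from any real insurer's system.

```python
# Engineered knowledge: claims-adjuster heuristics captured as data, not code paths.
# All rule names, fields, and thresholds below are hypothetical.
RULES = [
    {"name": "refer_fraud_flag",
     "test": lambda c: c["prior_fraud_flag"],
     "decision": "refer_to_investigator"},
    {"name": "auto_approve_small",
     "test": lambda c: c["amount"] < 500,
     "decision": "approve"},
    {"name": "refer_large",
     "test": lambda c: c["amount"] >= 10000,
     "decision": "refer_to_adjuster"},
]

def adjudicate(claim):
    """Apply the first matching rule; fall back to human review."""
    for rule in RULES:
        if rule["test"](claim):
            return rule["decision"]
    return "manual_review"

print(adjudicate({"amount": 250, "prior_fraud_flag": False}))   # approve
print(adjudicate({"amount": 250, "prior_fraud_flag": True}))    # refer_to_investigator
```

The point of the pattern is that the knowledge lives in the `RULES` table, where a domain expert can review and extend it, rather than being buried in application logic.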
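The product-review use case under natural language processing can be sketched with a toy lexicon-based sentiment scorer. Real NLP systems use far richer models; the word lists here are invented for illustration.

```python
# Toy lexicon-based sentiment scorer; the word lists are illustrative only.
POSITIVE = {"great", "excellent", "love", "reliable", "fast"}
NEGATIVE = {"broken", "terrible", "slow", "hate", "defective"}

def sentiment_score(review: str) -> int:
    """Positive reviews score > 0, negative < 0, neutral == 0."""
    words = [w.strip(".,!?") for w in review.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Correlate the score with the star rating, as in the eCommerce example above.
reviews = [("Love it, fast and reliable!", 5), ("Arrived broken. Terrible.", 1)]
for text, stars in reviews:
    print(stars, sentiment_score(text))
```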
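The software-written narrative reports described under natural language generation can be approximated, at their simplest, by template filling over structured data. Products like Quill go far beyond this, but the sketch shows the basic inversion of NLP: data in, prose out. The function and field names are hypothetical.

```python
# Minimal template-based NLG: turn structured portfolio data into a narrative.
# Function name and inputs are hypothetical, for illustration only.
def portfolio_summary(name: str, change_pct: float, best_holding: str) -> str:
    """Generate a one-sentence investment narrative from structured data."""
    direction = "rose" if change_pct >= 0 else "fell"
    return (f"{name}, your portfolio {direction} {abs(change_pct):.1f}% "
            f"this quarter, led by {best_holding}.")

print(portfolio_summary("Alex", 2.4, "your index fund"))
print(portfolio_summary("Alex", -1.5, "your bond fund"))
```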
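The churn-prediction example under machine learning can be illustrated with one of the simplest learning algorithms there is: nearest neighbor. The classifier predicts the label of whichever past customer most resembles the new one. The training examples and features are invented for illustration.

```python
# Toy churn predictor: 1-nearest-neighbor over (logins_per_week, support_tickets).
# Training examples are hypothetical.
TRAINING = [
    ((12, 0), "stays"), ((9, 1), "stays"), ((10, 2), "stays"),
    ((1, 5), "churns"), ((2, 4), "churns"), ((0, 6), "churns"),
]

def predict_churn(features):
    """Predict the label of the most similar past customer."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(TRAINING, key=lambda ex: sq_dist(ex[0], features))[1]

print(predict_churn((11, 1)))  # a highly engaged customer
print(predict_churn((1, 7)))   # disengaged, with many support tickets
```

In practice, enterprises would use a library such as scikit-learn and far more features and data, but the shape of the problem — labeled history in, prediction out — is the same.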
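To see what the neural networks behind deep learning actually compute, here is the forward pass of a tiny two-layer network with hand-picked weights that implements XOR — the classic function a single-layer network cannot represent. In real deep learning, frameworks like TensorFlow learn these weights from data rather than having them set by hand.

```python
# Forward pass of a two-layer network computing XOR with fixed weights.
def step(x: float) -> int:
    """Step activation: fire (1) if the weighted input exceeds the threshold."""
    return 1 if x > 0 else 0

def xor_net(a: int, b: int) -> int:
    h1 = step(a + b - 0.5)      # hidden neuron 1: fires if a OR b
    h2 = step(-a - b + 1.5)     # hidden neuron 2: fires unless a AND b
    return step(h1 + h2 - 1.5)  # output: fires only if both hidden neurons fire

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor_net(a, b))
```

Stacking many such layers, and learning the weights by gradient descent over huge datasets, is what makes deep learning both powerful and computationally intensive.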
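Finally, the sensory perception building block often feeds the others. A hypothetical sketch of the predictive-maintenance pattern: flag equipment whose sensor readings drift above a baseline, producing the kind of signal (or training label) a machine learning model would consume. The function, baseline, and tolerance are invented for illustration.

```python
# Hypothetical sketch: flag vibration readings that drift above a baseline.
def flag_anomalies(readings, baseline, tolerance=0.2):
    """Return indices of readings more than `tolerance` (20%) above baseline."""
    return [i for i, r in enumerate(readings) if r > baseline * (1 + tolerance)]

# Hourly vibration readings from one (imaginary) machine.
print(flag_anomalies([1.0, 1.1, 1.6, 0.9], baseline=1.0))
```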
Cognition Is Still Research Only
This is number ten, but it is not currently a pragmatic AI technology because it is still “research only.” Today’s applications are deterministic: an application performs exactly as it is programmed, and nothing more. Intelligence doesn’t or shouldn’t work that way. Applications that are cognitive must perceive, interact, learn, act, and evolve — that’s pure AI. Cognition occurs when all the AI building blocks come together to create an application that has a “mind” of its own — it can use acquired knowledge to problem-solve toward a goal. In the past few years, IBM has popularized the phrases “cognitive computing” and “cognitive services” to mean systems and applications that use the pragmatic AI technology building blocks we’ve described here to make applications more intelligent. Vendors including Accenture, Microsoft, SAS, and many others including startups are also now using the “cognitive” moniker to describe what we’d more accurately call pragmatic AI. So, when you hear “cognitive,” know that true cognitive computing is still the subject of research; what vendors are actually offering is pragmatic, not pure, AI.