Our Survey Shows That Even GenAI-Trained Execs Are Confused About GenAI, So Give Them Better Training
Do executives understand generative AI (genAI) better than consumers do? Apparently not. Our research shows that among consumers, generative AI is either poorly understood or not understood at all (see The State Of Consumer Usage Of Generative AI, 2024). You might think that execs would be better informed, given the high-stakes decisions they must make about their organizations’ technology strategies. But they aren’t.
How do we know? Because Forrester’s Q2 AI Pulse Survey, 2024, found that most executives mistakenly believe that:
- “GenAI models are good at looking up and validating facts” (82%) — wrong. Some hybrid tools like Bing Chat look up facts using old-fashioned search and then use genAI to summarize what they find. But genAI models themselves don’t and can’t look anything up at all. They probabilistically generate completions of sequences of words (or other types of tokens). ChatGPT would keep working tomorrow even if someone unplugged every other website in the world tonight.
- “GenAI tools will always produce the same outputs given the same prompt” (70%) — wrong. They produce different responses every time, even when the prompts are identical, except in the rare instances when companies set the “temperature” parameter to zero. But companies almost never do that, because it tends to compromise the fluency and creativity of the results.
- “GenAI models are good for complex mathematical problems” (84%) — wrong. They never have been, which is why many genAI tools now include filters that intercept math-related prompts, bypass the genAI model, and route them to conventional math modules instead.
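To make the first two points concrete, here is a minimal sketch of how a language model picks its next token: it scores candidate tokens, converts the scores to probabilities, and samples. The function name `sample_next_token` and the toy scores are illustrative, not any vendor’s actual implementation. At temperature zero the model always takes the top-scoring token (deterministic output); at any higher temperature, identical prompts can yield different continuations.

```python
import math
import random

def sample_next_token(logits, temperature, rng):
    """Pick the next token from a toy model's raw scores (logits).

    temperature == 0 means greedy decoding: always take the
    highest-scoring token, so the output is deterministic.
    """
    if temperature == 0:
        return max(logits, key=logits.get)
    # Softmax with temperature scaling, then a weighted random choice.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    m = max(scaled.values())  # subtract the max for numerical stability
    weights = {tok: math.exp(s - m) for tok, s in scaled.items()}
    total = sum(weights.values())
    tokens = list(weights)
    probs = [weights[t] / total for t in tokens]
    return rng.choices(tokens, weights=probs, k=1)[0]

# Toy next-token scores after a prompt like "The sky is".
logits = {"blue": 2.0, "clear": 1.5, "falling": 0.5}

rng = random.Random(42)
# Temperature 0: the same token every time.
greedy = [sample_next_token(logits, 0, rng) for _ in range(5)]
# Temperature 1: tokens are drawn probabilistically, so repeats of the
# same prompt can differ from run to run.
sampled = [sample_next_token(logits, 1.0, rng) for _ in range(5)]
```

Note what is absent: any lookup step. The model only completes token sequences from learned probabilities, which is why setting temperature to zero buys determinism at the cost of the variety that makes generated text feel fluent.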
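The math-routing filters described in the last bullet can be sketched in a few lines: match arithmetic-looking prompts, evaluate them with a conventional calculator, and send everything else to the model. The `route` function, its regex, and the `fake_llm` stand-in are hypothetical simplifications, not any real product’s filter.

```python
import ast
import operator
import re

# Arithmetic operators the calculator path is allowed to evaluate.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def _safe_eval(node):
    """Evaluate a parsed expression containing only numbers and + - * /."""
    if isinstance(node, ast.Expression):
        return _safe_eval(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_safe_eval(node.left), _safe_eval(node.right))
    raise ValueError("not simple arithmetic")

def route(prompt, llm):
    """Send arithmetic prompts to a calculator; everything else to the model."""
    match = re.fullmatch(r"\s*what is ([\d\s\.\+\-\*/\(\)]+)\??\s*", prompt.lower())
    if match:
        try:
            return str(_safe_eval(ast.parse(match.group(1), mode="eval")))
        except (ValueError, SyntaxError):
            pass  # malformed math: fall through to the model
    return llm(prompt)

fake_llm = lambda p: "(model-generated answer)"
route("What is 12.5 * 4?", fake_llm)     # handled by the calculator: "50.0"
route("Why is the sky blue?", fake_llm)  # handled by the model
```

The design point is that the correct arithmetic answer comes from the calculator path, not from the genAI model, which is exactly why such filters exist.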
And those are just three of the widespread misunderstandings of AI we uncovered through this survey.
Who are these execs, you might wonder? Well, 74% describe their level of seniority as vice president or above; the most heavily represented departments are executive management (38%), IT/technology (15%), and operations (7%); 61% say that a significant portion of their job involves technology responsibilities; and the most heavily represented industries are financial services (13%), healthcare (13%), and retail (11%). Most importantly, nearly all describe themselves as either the final decision-maker (66%) or a major influencer (34%) on decisions about artificial intelligence, including genAI, for their organization, and 56% are the final decision-maker for genAI specifically.
This degree of misunderstanding among executives with this much decision-making power is a ticking time bomb threatening the quality of customer experience (CX) as companies increasingly weave genAI into digital experiences and into their processes. So for starters, CX leaders should proactively address the problem by making the case for educating executives companywide about genAI.
There’s just one problem, though: When we asked respondents whether they had been given formal training on how to use AI for work, 80% said that they already had — so whatever training they’re getting is apparently ineffective.
This means CX leaders advocating for training should do what they can to ensure that their companies choose executive training about genAI wisely.
- If you’re a Forrester client and you’d like to set up one-on-one or small-group training to boost your organization’s executives’ understanding of the fundamental principles of how genAI works behind the curtain, you can set up a guidance session with either of us or any other Forrester analyst.
- If your company offers training about genAI and you’d like to tell us about it so we can point companies your way in situations that seem to be a good fit, such as for larger groups, feel free to submit a briefing request.
Oh, one last thing: Fully half of respondents told us that they are “using AI in production applications,” so this is not about a hypothetical future — it’s about now. You should act today.