AI In Collaboration Tools Is A Tech Problem In Need Of A Human Solution
Generative AI — where AI generates novel content in response to a user’s natural-language request — is about to get baked into our collaboration tools. On March 16, Microsoft announced Microsoft 365 Copilot, which injects generative AI directly into the company’s productivity suite (including Word, Excel, PowerPoint, Teams, and more). This follows Google’s recent announcement about adding generative AI to Workspace apps and Gmail. Hidden in plain sight in these announcements is that generative AI is about to go beyond ChatGPT’s ability to compose simple emails or Midjourney’s ability to generate interesting images. What do we mean? Consider “Tyson,” a typical desk worker at his firm, and how differently his workday will flow just one year from now compared to yours today.
It’s Q2 2024. Tyson’s double-booked with a customer and misses a live meeting in which his team made key decisions for Q3. To catch up, Tyson turns to a tool that his company has come to depend on, Microsoft Copilot. It uses generative AI to summarize the meeting notes, extract important points, and make a list of next steps for Tyson to consider.
Microsoft’s executives have long emphasized that these tools are just that — tools, a “first draft,” not a replacement for human judgment — and that users need to engage in due diligence before using Copilot’s outputs. Hence the name, “Copilot.” Copilot is there to help you, not replace you.
Tyson’s busy, however, and trusts the AI summary, glancing at it quickly and coming away confident that he knows what happened in the meeting and what it means for his role. Sure, he could go back and listen to extracted parts of the meeting to capture the details and confirm his understanding — he could also message teammates who were there to get the human interpretation.
Instead, Tyson asks Copilot to pull last quarter’s performance numbers, make a spreadsheet that computes a trend line, and create a figure that he can share with the team. Copilot does as asked, generating and distributing the requested output. Did Tyson understand the need correctly? Did he request the correct thing, and did Copilot deliver it? Ultimately, did the business get value from the combination of Tyson’s unique knowledge and Copilot’s automated assistance? You don’t know. And Tyson doesn’t, either.
Tyson’s Problem Is Your Problem, And Soon
Applying generative AI to productivity software makes a lot of sense in principle. Microsoft demoed very promising applications of its Copilot technology, which should be available this year to enterprise customers. But the technology will only be as useful as the people using it choose to make it. Otherwise, as my colleague Rowan Curran blogs, despite the usefulness of generative AI-based tools, these solutions “can easily generate coherent nonsense instead.” (I invite Forrester clients to read Rowan’s full report for a fantastic overview.)
The solution to this rising problem is not better tech. It’s the preparation of the humans — your employees — and of their organizational context that will determine success or failure with these new tools. At Forrester, we measure this individual and organizational readiness via RQ, the Robotics Quotient, which helps you know how well prepared your employees are to collaborate with, and drive business results from, AI and automation tools like these.
In the end, this is about putting people first. People don’t come to AI with the skills, inclinations, or beliefs needed to succeed. They must be taught. In the case of Copilot, moving from determinative computing (“I do X, software does Y”) to generative computing (“I describe my problem and don’t know precisely what output Y will look like”) will require investing in assessing and increasing your RQ. That way, the Tysons who work for you will know which routines they can automate, which ones they should investigate, and which questions they should raise internally — and to whom — to avoid heading into a cloud of generative AI “coherent nonsense.”
We Can Help You Tread Carefully And Intentionally
We can’t yet make definitive statements about Microsoft’s or Google’s tools; we haven’t used them — only a small number of pilot customers have. But based on our RQ research, we can say that employers should proceed with intention. To take an example, RQ tells us that nearly three-fourths of employees say they don’t know when to question the results of automation or AI. We’ll be watching and researching how all of this evolves in coming years, but you and your company would be wise to take a careful — and human-centered — approach from the start.
I work with clients to help them understand the opportunities and pitfalls of generative AI in productivity software. Clients can request a guidance session with me. To go even deeper, we can conduct an advisory session, including workshops that assess your RQ and determine a path to boosting it. Expect lots more to come on the research side, too.
J. P. Gownder is a vice president and principal analyst on Forrester’s Future of Work team.
Thanks to Rowan Curran and James McQuivey for reviewing this blog post.