In the spring of 2023, as ChatGPT was taking the world by storm, Forrester set out to build its own generative AI tool — one that would deliver answers grounded in Forrester’s trusted research. There was no manual, not even a clear vision of what the final product would look like. But through rapid iteration, focused execution, and bold decision-making, the team built a working demo in under eight weeks. And on October 18, 2023, Izola launched in beta to a select group of clients.

Since then, Izola has evolved dramatically, shaped by continuous client feedback. A major milestone came last month with the launch of Forrester AI Access, a new offering that brings Forrester’s trusted insights to entire organizations via the Izola interface.

To mark Izola’s second anniversary, we sat down with Doug Washburn, VP of research product management, and Wadah Sayyed, director of machine learning engineering, to reflect on the journey — from the early build to the lessons learned.

Q: Take us back to the beginning. What prompted you to build Izola?

DW: That spring, [Forrester CEO George Colony] was talking about the transformative power of generative AI and urging companies to experiment with it. Internally, the question was: “What are we going to do?”

Our clients are busy. They had long wanted a way to get insights and advice from Forrester’s research and data that was faster than searching, reading, and summarizing. Generative AI gave us an opportunity to create something that would do just that while providing a level of trust they couldn’t get from ChatGPT. So we started exploring feasibility and building a proof of concept. And we wanted to move fast.

Q: What were the biggest challenges you and your team encountered?

WS: Today, there are easier ways to build genAI tools, but those didn’t exist back then. We had to figure it out ourselves, and we weren’t sure that it would work. We didn’t know if the RAG (retrieval-augmented generation) architecture we were using would scale. But we also knew we didn’t have the human capacity to train a large language model on Forrester content.
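
For readers unfamiliar with the pattern, here is a minimal sketch of retrieval-augmented generation: relevant passages are retrieved from a trusted corpus, and the model is asked to answer only from them. The function names, parameters, and prompt wording below are illustrative assumptions, not Izola’s actual implementation.

```python
# Minimal RAG sketch: retrieve passages from a trusted corpus, then ask the
# model to answer using only those passages. All names are hypothetical.
from typing import Callable

def answer_with_rag(
    question: str,
    retrieve: Callable[[str, int], list[str]],  # returns top-k corpus passages
    generate: Callable[[str], str],             # wraps whatever LLM is in use
    k: int = 5,
) -> str:
    passages = retrieve(question, k)
    context = "\n\n".join(passages)
    prompt = (
        "Answer the question using ONLY the passages below. "
        "If they do not contain the answer, say so.\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```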

We knew, too, that the stakes were high in terms of building a tool that clients could trust. Its answers had to be both accurate and sourced exclusively from Forrester IP. So before we even started coding, we enlisted analysts who cover different areas of our research to give us feedback on answer quality. We essentially let them stress-test the prototype in 30-minute sessions, throwing questions at it and giving candid feedback. That input really helped us fine-tune core components like embedding size, similarity search, and relevancy scoring — all of which helped us achieve our goal of delivering trusted answers.
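
As a rough illustration of the kind of similarity search and relevancy scoring being tuned here, the sketch below ranks passages by cosine similarity over precomputed embeddings and drops weak matches below a relevancy threshold. The threshold value and the shapes involved are placeholders, not Forrester’s actual settings.

```python
# Illustrative similarity search with a relevancy threshold: rank passages by
# cosine similarity to the query embedding and keep only strong matches.
import numpy as np

def top_k_relevant(query_vec: np.ndarray, passage_vecs: np.ndarray,
                   k: int = 5, min_score: float = 0.75) -> list[tuple[int, float]]:
    q = query_vec / np.linalg.norm(query_vec)
    p = passage_vecs / np.linalg.norm(passage_vecs, axis=1, keepdims=True)
    scores = p @ q                               # cosine similarity per passage
    ranked = np.argsort(scores)[::-1][:k]        # best k candidates
    return [(int(i), float(scores[i])) for i in ranked if scores[i] >= min_score]
```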

Q: Given the tight time frame, what mindset helped the team succeed?

WS: Empowering people to make decisions and just run with them was key. This was brand-new territory, and even though we had very seasoned engineers, I had to coach them to trust their instincts. We had to assume the project would succeed — and if we made mistakes, we would learn and improve.

We also had to stay laser-focused. People would ask us things like, “Did you see that ChatGPT can do this? Is your tool going to do that, too?” And we would have to say, “No, we’re not going to do that. We’re not going to deal with attachments like PowerPoints. We’re not generating voice.” While these are opportunities we’re exploring now, at the time, we had to block out those distractions and focus on one or two core use cases.

Looking back, I’m proud of how many decisions we made early on that are now best practices for building scalable genAI apps. A lot of that came from our experience building enterprise-level software. We were able to fail fast, and because we had sound architecture, we could pivot quickly.

Q: What was the earliest version of Izola like, and what did you learn from it?

WS: Our first demo was the Friday before Labor Day. Initially, Izola gave long, two-page answers because we had wanted it to provide as much insight as possible. Naturally, the feedback was that it was too much. But we’d anticipated that possibility and were able to make the answers more concise within a few hours.

Once we started getting feedback from early client betas, we saw some really compelling use cases. That pushed us to evolve Izola’s architecture to be more agentic — even though “agentic” wasn’t a thing yet. We realized that, depending on the type of question, the tool needed to respond differently. So we started building ways to detect intent and route questions accordingly.
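
Detecting intent and routing questions to different behaviors might look something like the sketch below; the intents, keywords, and handlers are hypothetical examples rather than Izola’s actual routing logic.

```python
# Hypothetical intent detection and routing: classify the question, then
# dispatch it to a pipeline suited to that kind of question.
from typing import Callable

def classify_intent(question: str) -> str:
    q = question.lower()
    if any(w in q for w in ("compare", "versus", " vs ")):
        return "comparison"
    if any(w in q for w in ("trend", "forecast", "outlook")):
        return "forward_looking"
    return "general"

HANDLERS: dict[str, Callable[[str], str]] = {
    "comparison": lambda q: f"[comparison pipeline] {q}",
    "forward_looking": lambda q: f"[forecast pipeline] {q}",
    "general": lambda q: f"[default RAG pipeline] {q}",
}

def route(question: str) -> str:
    return HANDLERS[classify_intent(question)](question)
```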

Q: When did you know that you were really on to something with clients?

DW: We demoed Izola at our Technology & Innovation Summit North America event later that month, before most Forresterites even knew what we were working on. It was our version of stealth mode: just me and my laptop at a high-top table, instantly overrun by attendees. It was awesome. One of the most informative pieces of feedback I got at that point was, “What you’ve created is more than good enough.”

There was so much enthusiasm for a generative AI tool whose responses were based on our trusted IP that it gave us a lot of confidence. We started testing it with employees, kept refining, and launched the beta to select clients in October. By the following spring, Izola was available to all Forrester clients.

Q: What have been some of the key lessons you’ve learned from how clients use Izola, and how have you applied them?

DW: One is the importance of fitting Izola into where and how clients work. Our reports are the most visited pages on the Forrester site, and clients told us they wanted an easier way to get key takeaways from them. So we built Izola into report pages so that clients could “converse” with reports and get instant summaries, key takeaways, and actions to take.

We also knew how important it was to keep delivering on the promise of trusted answers. Our clients use Izola to find data for major presentations and to justify business decisions — they need to have complete confidence in what Izola surfaces. To provide more specificity and transparency around Izola’s answers, we added a citations feature that lets clients see the Forrester sources behind each sentence or bullet point in an answer.
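
One way to picture per-answer citations is a simple structure that ties each sentence or bullet point back to the documents it was drawn from. The types and rendering below are an illustrative sketch, not Izola’s internal format.

```python
# Illustrative per-segment citations: each sentence or bullet in an answer
# carries references to the source documents it was drawn from.
from dataclasses import dataclass, field

@dataclass
class Citation:
    report_title: str
    url: str

@dataclass
class AnswerSegment:
    text: str                                   # one sentence or bullet point
    citations: list[Citation] = field(default_factory=list)

def render(segments: list[AnswerSegment]) -> str:
    lines = []
    for seg in segments:
        refs = "; ".join(c.report_title for c in seg.citations)
        lines.append(f"{seg.text} [{refs}]" if refs else seg.text)
    return "\n".join(lines)
```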

Another key learning from clients was that they wanted to “democratize” knowledge by making Forrester’s research and data available more broadly across their organizations. That led us to develop our new AI Access platform, which gives clients’ teams self-serve access to trusted Forrester insights via Izola.

Q: What’s next for Izola?

DW: We’re always working to improve the user experience and the quality of Izola’s answers. We continually monitor feedback from clients and Forresterites to spot improvement opportunities.

One enhancement on the near horizon is making Izola fully conversational. From there, we’ll enrich Izola’s responses with content it doesn’t have access to today: webinars, podcasts, and more of our survey data. We’re aiming for an integrated experience — imagine getting a great answer from Izola, then being invited to a related webinar, or seeing which analysts contributed to the response and following them for updates, or even scheduling a call with them through Izola.

We’re also exploring opportunities to bring Izola into the tools that clients already use to get work done. Stay tuned!

 

Want to see Izola in action? Sign up to get a demo of Forrester AI Access, powered by Izola.