As I move into my second six months of using AI daily, I’m convinced that its most overlooked role isn’t writing content (or even code) but creating tools.

We all know that generative AI has shaken up software development, writing code at scale and collapsing cycle times. But the story goes further:

AI lets individuals — not just well-funded teams — build analytical and decision-support tools that were once the province of specialized analysts or expensive consultancies.

A few years ago, if you wanted a system dynamics model tied to real organizational data, you hired a quant team or signed a six-figure contract. Today, with an AI assistant and some Python scaffolding, you can have a prototype running by Monday. Open-source ecosystems such as PySD, Neo4j, and Jupyter have matured, and orchestration frameworks like Model Context Protocol (MCP) are on the horizon. What used to take a team of PhDs is now practical for a single motivated professional.

From Idea To Prototype In Hours

Here’s my confession: I’m an intellectual dilettante. Over the years, I’ve brushed against a lot of analytical traditions: Stella for system dynamics, Monte Carlo for uncertainty modeling, factor and cluster analysis in statistical research. That last one is worth mentioning, as factor analysis was key to how DevOps was validated. Dr. Nicole Forsgren and her colleagues used it to cut through noise and identify what really drove software delivery performance. I’ve admired that rigor for years without being in a position to apply it myself — until now. What once required deep specialization is suddenly something I can attempt. My broad awareness, once a liability, feels like an advantage because AI fills the execution gap.

For years, I’ve suspected that technical debt (and other IT management dynamics) could be modeled with stock-and-flow approaches. At one point, I even bought Stella and its two thousand pages of documentation, which now sit on my subwoofer like a monument to unrealized intent.

This week, I asked Claude about that idea. A couple of hours later, we had a rough model expressing my hypothesis. It wasn’t a shortcut; it didn’t eliminate thinking. But it did collapse the journey from “idea in my head” to “working prototype” from months to hours.
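To make that concrete, here is a stripped-down sketch of the kind of stock-and-flow loop involved. It is illustrative only; the stocks, flows, and parameter values are placeholders I have invented, not the actual model.

```python
# Toy stock-and-flow model of technical debt (plain Python; no PySD needed).
# All names and constants below are illustrative placeholders.

DT = 1.0      # time step: one week
WEEKS = 104   # simulate two years

debt = 100.0            # stock: accumulated technical debt (arbitrary units)
base_velocity = 10.0    # feature work per week at zero debt
debt_drag = 0.005       # fraction of velocity lost per unit of debt
refactor_share = 0.2    # share of capacity spent paying debt down
debt_per_feature = 0.5  # new debt created per unit of feature work

for week in range(WEEKS):
    velocity = base_velocity * max(0.0, 1.0 - debt_drag * debt)
    feature_work = velocity * (1.0 - refactor_share)
    paydown = velocity * refactor_share
    # Flows update the stock (simple Euler integration).
    debt += (feature_work * debt_per_feature - paydown) * DT
    debt = max(debt, 0.0)
    if week % 13 == 0:
        print(f"week {week:3d}: debt={debt:7.1f}  velocity={velocity:5.2f}")
```

Even this toy exposes the feedback loop: debt drags velocity, and the refactoring share decides whether the stock grows or drains.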

Another recent example: I had to analyze Enterprise Architecture Awards submissions. Instead of asking AI to draft a blog post, I gave it a different job: write Python to parse the responses, highlight those aligned with my themes, and propose which examples to feature. It felt like working with my own postdoc, one who never gets tired and has a surprising gift for regex.
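For flavor, here is a simplified sketch of that kind of triage script. Everything in it is hypothetical: the submissions.csv file, its organization and response columns, and the theme patterns all stand in for the real inputs.

```python
# Hypothetical triage script: rank submissions by theme-keyword hits.
import csv
import re

THEMES = {
    "platform": re.compile(r"\bplatform(s|ing)?\b", re.I),
    "governance": re.compile(r"\bgovernance|guardrails?\b", re.I),
}

def score(text):
    """Count how often each theme's pattern appears in one submission."""
    return {name: len(pattern.findall(text)) for name, pattern in THEMES.items()}

with open("submissions.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Surface the five submissions with the most theme hits for human review.
ranked = sorted(rows, key=lambda r: sum(score(r["response"]).values()), reverse=True)
for row in ranked[:5]:
    print(row["organization"], score(row["response"]))
```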

This is what excites me. AI isn’t just a writer — it’s a toolsmith.

Beyond Prompt Obsession

Most AI conversations today orbit around prompting: context engineering, prompt engineering, call it what you like. It matters. But prompts without pipelines produce shallow wins. The bigger opportunity is in workflows.

AI can read PDFs, pull data from spreadsheets, or spin up a Jupyter notebook that benchmarks scenarios. Even something as simple as asking Claude to generate Python that creates a spreadsheet with complex formulae feels like discovering a new superpower.
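For instance, here is a minimal sketch of that pattern using openpyxl (one common library for the job; the scenario numbers and formulas are invented). The key detail is that the generated file contains live Excel formulas rather than precomputed values.

```python
# Generate a workbook whose cells hold real formulas (pip install openpyxl).
# Scenario data and formulas are illustrative.
from openpyxl import Workbook

wb = Workbook()
ws = wb.active
ws.append(["Scenario", "Cost", "Probability", "Expected cost"])

scenarios = [("Best case", 10000, 0.2), ("Base case", 25000, 0.6), ("Worst case", 60000, 0.2)]
for i, (name, cost, prob) in enumerate(scenarios, start=2):
    ws.append([name, cost, prob])
    ws[f"D{i}"] = f"=B{i}*C{i}"  # Excel evaluates this, so the sheet stays live

ws[f"D{len(scenarios) + 2}"] = f"=SUM(D2:D{len(scenarios) + 1})"
wb.save("scenarios.xlsx")
```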

My next leap was personal knowledge graphs. Commercial AI services like ChatGPT will never build a massive, bespoke graph for you. That’s not economical for them — and honestly, you wouldn’t want them to. But you can, and you can include information that will never exist on the open internet, giving you a differentiated point of view.

I downloaded Neo4j Community Edition and started small. Now my proof of concept has 15,000 nodes and 50,000 edges. When I feed unstructured text to Claude, it performs entity recognition and suggests what belongs in the graph. I review, curate, and refine. Yes, there’s an occasionally maddening learning curve. But once the graph exists, every new insight compounds in value. It feels like building a second brain.
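Here is a simplified sketch of that curation step, using the official neo4j Python driver (pip install neo4j). The connection details, labels, and relationship type are placeholders; the useful pattern is MERGE, which keeps a rerun of the load from duplicating nodes.

```python
# Load curated entities into Neo4j; URI, credentials, and names are placeholders.
from neo4j import GraphDatabase

# Entities and relations as reviewed and curated from the model's suggestions.
entities = [("Forrester", "Organization"), ("DevOps", "Concept")]
relations = [("Forrester", "WRITES_ABOUT", "DevOps")]

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    for name, label in entities:
        # MERGE is idempotent: rerunning the load cannot create duplicates.
        session.run(f"MERGE (n:{label} {{name: $name}})", name=name)
    for src, rel, dst in relations:
        session.run(
            f"MATCH (a {{name: $src}}), (b {{name: $dst}}) MERGE (a)-[:{rel}]->(b)",
            src=src, dst=dst,
        )
driver.close()
```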

Of course, these new capabilities bring responsibilities. Databases need backup, and while AI can write that script for you (a sketch follows below), you have to make sure it actually runs. AI-generated code still needs version control and traceability. (I use GitHub Pro to keep my work private.) If you’re using a model to influence decisions, you need an audit trail. And if you’re making bold claims from your shiny new factor analysis, have a statistician check your work, or at least prompt a large language model to critique it like a tenured professor.

When orchestration frameworks such as MCP start wiring everything together, security and identity become critical. Assume that any AI agent with access to a resource may inadvertently destroy it; LinkedIn is full of stories about vibe coders coming to grief. I’ve had setbacks but nothing dramatic, because I operate with that assumption.
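To make the first of those chores concrete, here is a minimal backup sketch in Python. The paths are placeholders, and it assumes the database is stopped while the copy runs; for a live Neo4j instance, the bundled neo4j-admin dump tool is the safer route.

```python
# Minimal timestamped-backup sketch; paths are placeholders.
# Schedule it (cron, Task Scheduler) and periodically test a restore.
import shutil
import time
from pathlib import Path

SOURCE = Path("data/neo4j")  # directory to back up (database stopped!)
DEST = Path("backups")       # where the archives accumulate
DEST.mkdir(exist_ok=True)

stamp = time.strftime("%Y%m%d-%H%M%S")
archive = shutil.make_archive(str(DEST / f"graph-{stamp}"), "zip", SOURCE)
print(f"wrote {archive}")
```

A backup you have never restored is a hope, not a backup, so test the archive occasionally.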

One blocker for some: the command line. In my experience, it’s still the most powerful way to get value from these tools. Pretty GUIs often add noise and hide what’s happening under the hood. Maybe that makes me a purist. And please, if you’re using Claude Code, run it in a container. Ask Claude to set that up for you — it’ll happily oblige.

These are extraordinary times. How are you using generative AI to extend your capabilities? Drop me a note — I’d love to hear your story. Now, if you’ll excuse me, there’s a new crop of MIDI MCP servers I need to check out.

Have any thoughts? Contact me at inquiry@forrester.com. Forrester clients can schedule a Forrester guidance session.