As I move into my second six months of using AI daily, I’m convinced that its most overlooked role isn’t writing content (or even code) but creating tools.

We all know how generative AI has shaken up software development, writing code at scale and collapsing cycle times. But there’s a further point:

AI lets individuals — not just well-funded teams — build analytical and decision-support tools that were once the province of specialized analysts or expensive consultancies.

A few years ago, if you wanted a system dynamics model tied to real organizational data, you hired a quant team or signed a six-figure contract. Today, with an AI assistant and some Python scaffolding, you can have a prototype running by Monday. Open-source ecosystems such as PySD, Neo4j, and Jupyter have matured, and Model Context Protocol (MCP) is ready for at least local, sandboxed POC use. What used to take a team of PhDs is now practical for a single motivated professional.

From Idea To Prototype In Hours

Confession: I’m an intellectual dilettante. Over the years, I’ve brushed against a lot of analytical traditions: system dynamics for nonlinear, feedback-driven systems, Monte Carlo for uncertainty modeling, and factor and cluster analysis in statistical research. That last one is worth mentioning, as factor analysis was key to how DevOps was validated. Dr. Nicole Forsgren and her colleagues used it to cut through noise and identify what really drove software delivery performance. I’ve admired that rigor for years without being in a position to apply it myself, until now. What once required deep specialization is now something I can attempt the next time I get my hands on some raw survey data. My broad awareness, once a liability, now feels like an advantage, because AI fills the execution gap.
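To make “uncertainty modeling” concrete, here’s a minimal Monte Carlo sketch of the kind of analysis that’s now an afternoon project rather than a specialist engagement. The project phases and duration ranges are invented for illustration; only the Python standard library is needed.

```python
import random
import statistics

def simulate_project(n_trials=100_000, seed=42):
    """Monte Carlo estimate of total project duration (weeks).

    Three phases, each with triangular uncertainty (low, high, mode).
    The numbers are illustrative, not real data.
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        design = rng.triangular(2, 6, 3)    # low, high, mode
        build = rng.triangular(4, 12, 6)
        test = rng.triangular(1, 5, 2)
        totals.append(design + build + test)
    totals.sort()
    return {
        "mean": statistics.mean(totals),
        "p90": totals[int(0.9 * n_trials)],  # 90th-percentile duration
    }

result = simulate_project()
print(f"mean ~ {result['mean']:.1f} weeks, p90 ~ {result['p90']:.1f} weeks")
```

The point isn’t the arithmetic; it’s that the distance between “I wonder what the tail risk looks like” and a defensible percentile estimate is now minutes.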

For years, I’ve suspected that technical debt (and other IT management dynamics) could be modeled with stock-and-flow approaches. Years ago, I even bought the system dynamics tool iThink (a variant of Stella). Its thousand pages of documentation now sit accusingly on my subwoofer.

This week, I asked Claude about that idea. A couple of hours later, we had a rough model expressing my hypothesis. It wasn’t a shortcut; it didn’t eliminate thinking. What it did was collapse the path from “idea in my head” to “working prototype”: from weeks (including wrestling with new tools) down to hours focused on iterating the core problem.
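The model Claude and I built is richer than this, but the core stock-and-flow idea fits in a few lines of plain Python. Every coefficient here, and the pressure feedback itself, is invented purely to illustrate the reinforcing loop; treat it as a toy, not my actual model.

```python
def simulate_debt(months=36, dt=1.0):
    """Toy stock-and-flow model of technical debt.

    One stock (debt) with an inflow (shortcuts taken under delivery
    pressure) and an outflow (refactoring). The feedback loop: higher
    debt slows delivery, raising pressure to take more shortcuts.
    All coefficients are invented for illustration.
    """
    debt = 10.0                 # initial stock, arbitrary units
    refactor_capacity = 2.0     # debt paid down per month, at best
    history = []
    for _ in range(months):
        pressure = 1.0 + 0.05 * debt      # debt amplifies pressure
        inflow = 1.5 * pressure           # shortcuts per month
        outflow = min(refactor_capacity, debt)
        debt += (inflow - outflow) * dt   # Euler integration step
        history.append(debt)
    return history

trajectory = simulate_debt()
```

With these made-up numbers, inflow outruns the fixed refactoring capacity and the stock grows on itself — exactly the runaway dynamic the stock-and-flow framing is meant to expose.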

Another recent example: I had to analyze Enterprise Architecture Awards submissions. I don’t trust AI summarization across long docs. No matter how I prompt it, the results never match the choices I would make, and there are always thoroughness issues. So instead of asking AI to draft a blog, I gave it a different job: write Python to parse the responses, highlight those aligned with my themes, and propose which examples might merit further analysis. I had much greater confidence about the thoroughness. It felt like working with my own postdoc, one who never gets tired.
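A sketch of that pattern, with invented theme keywords and file layout (my real criteria and corpus differ): the AI writes deterministic parsing code, and I trust the code’s thoroughness rather than the model’s summarization.

```python
from pathlib import Path

# Invented stand-ins for my actual award themes and keywords.
THEMES = {
    "platform": ["platform team", "internal developer platform", "golden path"],
    "governance": ["guardrail", "architecture review", "compliance"],
}

def score_submission(text, themes=THEMES):
    """Count case-insensitive keyword hits per theme for one document."""
    lower = text.lower()
    return {name: sum(lower.count(kw) for kw in kws)
            for name, kws in themes.items()}

def rank_submissions(folder):
    """Return submissions sorted by total theme hits, best first."""
    results = []
    for path in sorted(Path(folder).glob("*.txt")):
        scores = score_submission(path.read_text(encoding="utf-8"))
        results.append((path.name, sum(scores.values()), scores))
    return sorted(results, key=lambda r: r[1], reverse=True)
```

Crude keyword counts are just the starting point; the value is that every document gets touched, every time, and I can inspect exactly why something ranked where it did.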

This is what excites me. AI isn’t just a writer — it’s a toolsmith.

Beyond Prompt Obsession

Most AI conversations today orbit around prompting: context engineering, prompt engineering, call it what you like. It matters. But prompts without pipelines produce shallow wins. The bigger opportunity is in workflows.

AI can read PDFs, pull data from spreadsheets, or spin up a Jupyter notebook that benchmarks scenarios. Even something as simple as asking Claude to generate Python that creates a spreadsheet with complex formulae feels like discovering a new superpower.
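Even the spreadsheet trick needs nothing exotic. Spreadsheet apps such as Excel evaluate cells that begin with “=”, so even a plain CSV can carry live formulas; the columns and formulas below are invented for illustration.

```python
import csv

# Rows mix literal values with formula strings that the spreadsheet
# app evaluates on open. Layout and formulas are illustrative only.
rows = [
    ["Item", "Unit price", "Qty", "Line total"],
    ["Widget", 19.99, 3, "=B2*C2"],
    ["Gadget", 4.50, 10, "=B3*C3"],
    ["", "", "Total", "=SUM(D2:D3)"],
]

with open("order.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```

For anything beyond toy cases you’d graduate to a real spreadsheet library, but the principle is the same: ask the AI for code that builds the artifact, not for the artifact itself.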

I asked AI (as one does) for a list of techniques that might be newly accessible to interested professionals. It gave me this:

  • Optimization Techniques – Linear programming, mixed-integer programming, constraint programming, multi-objective optimization.

  • Queuing Theory and Network Models – Service capacity planning, congestion analysis, interconnected queue networks.

  • Markov Chains and Stochastic Processes – Reliability modeling, transition prediction, hidden Markov models.

  • Simulation Frameworks – Discrete event simulation, agent-based modeling, hybrid simulations.

  • Graph and Network Analytics – Bottleneck analysis, community detection, influence metrics.

  • Game Theory and Decision Analysis – Competitive dynamics, equilibrium modeling, probabilistic decision trees.

  • Statistical Forecasting & Time Series Models – State space models, vector autoregression, survival analysis.

  • Reliability and Risk Modeling – Fault tree analysis, reliability block diagrams, Bayesian networks.

  • Multi-Criteria Decision Analysis (MCDA) – Analytic hierarchy process, multi-criteria ranking methods.

  • Simulation–Optimization Hybrids – Combining modeling and optimization for complex systems.
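To show how accessible one of these really is, here’s a Markov chain for transition prediction in a few lines of plain Python. The states and probabilities are invented; iterating the chain drives any starting distribution toward its long-run (stationary) distribution.

```python
def step(distribution, transition):
    """One Markov step: multiply a row vector by the transition matrix."""
    n = len(transition)
    return [sum(distribution[i] * transition[i][j] for i in range(n))
            for j in range(n)]

# States: 0 = healthy, 1 = degraded, 2 = failed. Probabilities invented.
P = [
    [0.90, 0.08, 0.02],
    [0.30, 0.60, 0.10],
    [0.00, 0.20, 0.80],
]

dist = [1.0, 0.0, 0.0]   # start fully healthy
for _ in range(50):      # iterate toward the stationary distribution
    dist = step(dist, P)

print([round(p, 3) for p in dist])  # roughly [0.625, 0.208, 0.167]
```

Fifty lines of matrix algebra used to mean a textbook and a toolbox license; now it’s a conversation and a code review.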

I’ve been building a personal knowledge graph. Commercial AI services like ChatGPT will never build a massive graph of “all the things.” That’s not economical for them, and honestly, you wouldn’t want them to. Always remember that an LLM’s “deep research” is little more than a speedy, Googlized literature review, competently synthesized and enhanced with whatever the model “knows” from sources it read (sometimes with dubious legality, and those IP loopholes are closing fast). As content creators respond to the accelerating, AI-driven erosion of the internet business model, I predict that tomorrow’s LLMs will have less and less truly current information embedded in their training. And the LLM is, in any case, an imperfect parrot (hence GraphRAG).

You, on the other hand, can start building your own graph, and you can include information that will never exist on the open internet, giving you a differentiated point of view.

I downloaded Neo4j Community Edition and started small. Now my proof of concept has 15,000 nodes and 50,000 edges. When I feed unstructured text to Claude, it performs entity recognition and suggests what belongs in the graph. I review, curate, and refine iteratively with Claude, who does the final data entry. We’re working on proper graph data science approaches: embeddings (which turned out not to be that useful in my case), interest, relationship strength, and affinity analysis. The first analytic reports across the full graph were eye-opening. Yes, there’s an occasionally maddening learning curve. But once the graph exists, every new insight compounds in value. It feels like building a second brain.
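The curation loop boils down to turning extracted entities into Cypher that Neo4j can run. A minimal sketch: the Person/Topic labels and INTERESTED_IN relationship are my invented schema, not anything standard, and real code should pass values as driver parameters rather than formatting them into strings.

```python
def to_cypher(entity, rels):
    """Turn one extracted entity and its relationships into Cypher
    MERGE statements (idempotent upserts, so re-runs are safe).

    Schema (Person, Topic, INTERESTED_IN) is invented for illustration.
    Production code should use parameterized queries, not f-strings.
    """
    stmts = [f"MERGE (p:Person {{name: '{entity}'}})"]
    for topic, weight in rels:
        stmts.append(
            f"MERGE (t:Topic {{name: '{topic}'}})\n"
            f"MERGE (p)-[r:INTERESTED_IN]->(t) SET r.weight = {weight}"
        )
    return "\n".join(stmts)
```

Claude proposes the entities and relationships; I approve them; generated MERGE statements (rather than CREATE) mean the same fact can be re-ingested without duplicating nodes.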

Of course, these new capabilities bring responsibilities. If you’re using a model to influence decisions, you need an audit trail. If you’re making bold claims from your shiny new factor analysis, have a statistician check your work — or, at least, prompt a large language model to critique it like a grumpy tenured professor. Literally, that’s one of my prompts.

“You are a tenured Ivy League full professor of mathematics with an endowed seat. Some crank has sent this purported analysis to you and because your lunch date stood you up, you’ve decided to not ignore it but rather give it a critical read. You don’t expect much but amusement. Provide your observations.”

At an operational level, AI-generated code still needs version control and traceability. (I use GitHub Pro to keep my work private.) Databases need backups, and while AI can write the backup script for you, you have to make sure it actually runs. And when orchestration frameworks such as MCP start wiring everything together, security, identity, and risk management become critical. Assume that any AI agent with access to a resource may inadvertently destroy it; there are plenty of stories on LinkedIn about vibe coders coming to grief. I’ve had setbacks but nothing dramatic, because I operate with that assumption.

One blocker for some: the command line. In my experience, it’s still the most powerful way to get value from these tools. Pretty GUIs often add noise and obscure what’s happening under the hood. Maybe that makes me a purist. And please, if you’re using Claude Code, run it in a container. Ask Claude to set that up for you — it’ll happily oblige.

These are extraordinary times. How are you using generative AI to extend your capabilities? Drop me a note — I’d love to hear your story. Now, if you’ll excuse me, there’s a new crop of MIDI MCP servers I need to check out.

(Note: I realized yesterday that this blog is a variation on Charity Majors’ durable vs disposable code. It also reflects the idea of vibe analytics.)

Have any thoughts? Contact me at inquiry@forrester.com. Forrester clients can schedule a Forrester guidance session.