It’s AI strategy season in a tough economic climate. Cutting IT costs is a top priority even as chief data and analytics officers want to scale AI. That tension surfaced in a conversation I had with a services provider today about the cost of running AI models: several of its clients are seeking to retire AI models because cloud costs are too high. I thought, “What a horrible idea!” Then, ignoring my filter, I blurted it out. Under conditions of economic uncertainty, extending your AI footprint and building insights-driven capabilities ensure enterprise resilience. That was proven during the pandemic.

This is a classic “leopard ate my face” moment. If you aren’t familiar, r/LeopardsAteMyFace is a subreddit collecting stories of people who suffer ironic consequences from their own poorly considered decisions.

Retiring models based on cost alone is an avoidable catastrophe. It indicates a lack of ModelOps and AI governance, as well as a failure to track AI monetization by model and business value stream. Cost-based model retirement ignores AI’s impact on making money and saving money. And it ignores the question of what replaces the AI-driven intelligence and decision automation once the model no longer exists.

So if you must retire models, and cost is the key driver, be smart about it and provide insights that a non-data scientist can understand. Here are the tools you need to avoid hungry leopards:

  • A CxO-level business performance framework for AI. CxOs need to see AI’s overall speed to value and scale of value, as well as its cost to own and serve. AI business performance frameworks help CxOs interpret AI’s contribution to overall goals through metrics that quantify money made and money saved. For example, chief revenue officers care about the overall contribution of AI personalization to revenue generation (see the value-scorecard sketch after this list).
  • Audits of model performance and process stream performance over time. ModelOps tools help data scientists know when model performance degrades. Indications of data drift, bias, and overall model degradation are early warning signals (see the drift-audit sketch after this list). Business intelligence on AI, in the form of continuous audits, uncovers the decay trajectory to guide model optimization strategy. Because models are often interdependent, business intelligence on AI also extends ModelOps to surface model dependencies, helping business decision-makers tune models in the context of one another for a holistic assessment of model performance.
  • Data intelligence. Data intelligence capabilities (data observability tools, pipeline profiling and lineage, data catalogs and glossaries) bring fidelity to the state and value of a machine-learning model. New data and metadata capture is required, along with knowledge graph capabilities that link and describe the state and dependencies of the data, model performance, data and AI policies, domains, and business metrics (see the lineage-graph sketch after this list). While feature stores are all the rage and simplify model deployment, management, and reuse, they need integration with data intelligence capabilities to deliver closed-loop traceability for audits.
  • Model testing and lifecycle plans. Unlike traditional technologies, AI is not implemented and then forgotten. Continuous monitoring and optimization frequently mean multiple models performing the same task in production as part of testing plans, which has a multiplier effect on cost (see the champion/challenger sketch after this list). The strategy should not aim to limit in-production testing, however, but rather to maintain lifecycle best practices that update, replace, and retire degraded ML models.
  • Up-front plans for cost optimization. Self-service, citizen ML model development, and increased application and data-flow complexity all affect the efficiency of models. Poorly crafted transformations and queries can make the difference between milliseconds and seconds in a transaction, increasing compute and thus cost. In addition, edge use cases can add to cost with hybrid (cloud/edge) storage and compute requirements. Upskill data scientists on data engineering basics, integrate their activities with engineering and DevOps to properly test ML models before deployment, and make cost a KPI for testing and release within data engineering, ML engineering, and DevOps (see the latency-budget sketch after this list).
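
To make the CxO framework concrete, here is a minimal value-scorecard sketch in Python. It assumes you already attribute revenue lift, cost savings, and run costs to each model; every model name and dollar figure below is hypothetical.

```python
# Minimal sketch of a per-model value scorecard for a CxO view.
# Assumes revenue lift, cost savings, and run costs are already
# attributed per model; all names and figures are hypothetical.
from dataclasses import dataclass

@dataclass
class ModelValue:
    name: str
    revenue_attributed: float  # money made, e.g., personalization lift
    cost_savings: float        # money saved, e.g., automated decisions
    run_cost: float            # cloud cost to own and serve

    @property
    def net_value(self) -> float:
        return self.revenue_attributed + self.cost_savings - self.run_cost

portfolio = [
    ModelValue("personalization", 1_200_000, 150_000, 400_000),
    ModelValue("churn_risk", 300_000, 500_000, 120_000),
    ModelValue("demo_scoring", 40_000, 10_000, 90_000),
]

# Rank by net value so the retirement conversation starts with value, not cost.
for m in sorted(portfolio, key=lambda m: m.net_value, reverse=True):
    print(f"{m.name}: net value ${m.net_value:,.0f} on run cost ${m.run_cost:,.0f}")
```

Ranking by net value rather than run cost reframes the conversation around what a model earns, not just what it spends.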
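
For the audit bullet, one common early-warning signal is the population stability index (PSI), which compares what a model sees in production against its training baseline. This is a minimal sketch using synthetic numeric data; the thresholds in the comment are rules of thumb, not a standard, and your ModelOps tooling may compute drift differently.

```python
# Minimal drift-audit sketch using the population stability index (PSI).
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population stability index between a baseline and a current sample."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    current = np.clip(current, edges[0], edges[-1])  # fold outliers into end bins
    expected, _ = np.histogram(baseline, bins=edges)
    actual, _ = np.histogram(current, bins=edges)
    e = np.clip(expected / expected.sum(), 1e-6, None)  # avoid log(0)
    a = np.clip(actual / actual.sum(), 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(42)
baseline = rng.normal(0.0, 1.0, 10_000)  # what the model was trained on
current = rng.normal(0.4, 1.2, 10_000)   # what production sees today

score = psi(baseline, current)
# Common rule of thumb: < 0.1 stable, 0.1-0.25 investigate, > 0.25 act.
print(f"PSI = {score:.3f}")
```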
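
For data intelligence, the point of the knowledge graph is that retiring a model should start with a dependency walk, not a cost report. A minimal lineage-graph sketch, assuming a catalog or lineage tool has already captured the links; all node and edge names are hypothetical.

```python
# Minimal sketch of a knowledge graph linking a model to its data,
# policies, and business metrics. Names are hypothetical; a data
# catalog or lineage tool would populate these edges in practice.
edges = [
    ("model:churn_risk", "trained_on", "dataset:crm_events_v3"),
    ("model:churn_risk", "feature_from", "feature_store:customer_tenure"),
    ("dataset:crm_events_v3", "governed_by", "policy:pii_masking"),
    ("model:churn_risk", "drives", "metric:retention_revenue"),
]

def impact_of_retiring(node: str) -> list[str]:
    """Walk outgoing edges to show everything a model touches."""
    return [f"{rel} -> {dst}" for src, rel, dst in edges if src == node]

for line in impact_of_retiring("model:churn_risk"):
    print(line)
```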
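
For lifecycle testing, the cost multiplier usually comes from champion/challenger setups, where an incumbent and a candidate model serve the same task. A minimal routing sketch with stub models and an illustrative 10% traffic split; shadow scoring, where the challenger runs but never serves, is a cheaper variant.

```python
# Minimal champion/challenger sketch: two models serve the same task
# in production, with a small traffic slice routed to the challenger.
import random

def predict_champion(features: dict) -> float:
    return 0.7  # stand-in for the incumbent model

def predict_challenger(features: dict) -> float:
    return 0.9  # stand-in for the candidate replacement

def route(features: dict, challenger_share: float = 0.10) -> tuple[str, float]:
    """Send a share of traffic to the challenger; log which arm served."""
    if random.random() < challenger_share:
        return "challenger", predict_challenger(features)
    return "champion", predict_champion(features)

arm, score = route({"tenure_months": 18})
print(arm, score)  # logging the arm makes lifecycle decisions evidence-based
```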
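
Finally, for cost optimization, one way to make cost a release KPI is a latency-budget test in the deployment pipeline, since per-prediction latency is a direct proxy for compute cost. A minimal sketch with a hypothetical 50 ms budget and a stubbed predict(); wire in your real model and thresholds.

```python
# Minimal sketch of cost as a release gate: fail the build if average
# per-prediction latency exceeds its budget. Budget and stub are hypothetical.
import time

LATENCY_BUDGET_MS = 50.0

def predict(features: dict) -> float:
    time.sleep(0.002)  # stand-in for real inference work
    return 0.5

def test_latency_budget():
    start = time.perf_counter()
    for _ in range(100):
        predict({"tenure_months": 18})
    avg_ms = (time.perf_counter() - start) / 100 * 1000
    assert avg_ms <= LATENCY_BUDGET_MS, f"avg {avg_ms:.1f} ms over budget"

if __name__ == "__main__":
    test_latency_budget()
    print("latency budget met")
```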

The cost of AI matters. Reducing the number of ML models based only on cost, however, is a recipe for business latency, missed opportunity, and poor resilience. Manage AI value and cost together using the tools above, and your organization will be better positioned to ride out economic and market conditions without being eaten by the leopard.