Open-source models abound.

Alibaba’s Qwen3.5 series, DeepSeek-V3, Google’s Gemma models, Meta’s Llama 4 family, Mistral, and countless other models on HuggingFace claim openness. Many of them release model weights: the numerical parameters that define how the model “thinks.” That’s a start, but weights are just one piece of a much larger, more complex puzzle.

AI model openness exists on a spectrum, with each level offering potential value for different enterprise use cases. Yet without deeper transparency into training data, code, usage rights, and community dynamics, enterprises can face unexpected limitations in trust, deployment, compliance, and long-term value.

Introducing Forrester’s Open-Source Model Openness Framework

We got you.

Forrester developed the Model Openness Framework (MOF) to help you assess the real degree of openness in any AI model — whether it’s labeled open source or commercial. The framework lets you evaluate models against your specific needs and risk tolerance by looking at three key dimensions:

  • Reproducibility: Can the model be recreated from scratch? Reproducibility measures how openly a model is built. It ranges from full to useful partial access. The framework examines code for preprocessing, training, evaluation, and inference; access to training data or detailed source information; a clear training recipe covering algorithms, hyperparameters, and processes; and documentation of the hardware, software, and environment.
  • Usage rights: Can the model be used for production applications? Production readiness requires more than a permissive license. The MOF looks at licensing terms that control commercial rights and restrictions; usability through clear documentation, simplicity, and cloud options; reliable support such as vendor SLAs and expert help; and interoperability with existing enterprise systems.
  • Community momentum: How active and collaborative is the model community? A model’s long-term success depends heavily on its community. The framework evaluates momentum through regular updates and active development; responsiveness to bugs, questions, and feedback; breadth of participation from diverse contributors; and clear governance for decision-making and contributions.
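To make the structure concrete, here is a minimal sketch of how the three dimensions and their criteria could be modeled as a scorecard. The 12 criterion names are paraphrased from the list above; the 0–3 rating scale, the scoring arithmetic, and the example ratings are illustrative assumptions, not Forrester’s actual assessment scales.

```python
# Illustrative MOF-style scorecard. The 0-3 scale and example ratings
# are assumptions for demonstration, not Forrester's published scales.

MOF_CRITERIA = {
    "reproducibility": [
        "code availability",         # preprocessing, training, evaluation, inference
        "training data access",      # data or detailed source information
        "training recipe",           # algorithms, hyperparameters, processes
        "environment documentation", # hardware, software, environment
    ],
    "usage_rights": [
        "licensing terms",           # commercial rights and restrictions
        "usability",                 # documentation, simplicity, cloud options
        "support",                   # vendor SLAs, expert help
        "interoperability",          # fit with existing enterprise systems
    ],
    "community_momentum": [
        "active development",        # regular updates
        "responsiveness",            # bugs, questions, feedback
        "contributor breadth",       # diverse participation
        "governance",                # decision-making and contributions
    ],
}

def score_model(ratings: dict[str, int]) -> tuple[int, dict[str, int]]:
    """Sum per-criterion ratings (0-3 each) into dimension and total scores."""
    dimension_scores = {
        dim: sum(ratings[c] for c in criteria)
        for dim, criteria in MOF_CRITERIA.items()
    }
    return sum(dimension_scores.values()), dimension_scores

# Example: a model with strong usage rights and community momentum
# but limited reproducibility.
ratings = {c: 1 for c in MOF_CRITERIA["reproducibility"]}
ratings |= {c: 3 for c in MOF_CRITERIA["usage_rights"]}
ratings |= {c: 3 for c in MOF_CRITERIA["community_momentum"]}

total, by_dimension = score_model(ratings)
print(total, by_dimension["reproducibility"])  # 28 4
```

Even a toy scorecard like this makes the trade-offs visible at a glance: the hypothetical model above scores well overall while its low reproducibility sub-score flags it as a poor fit for regulated environments that demand deep transparency.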

Use The MOF To Evaluate Any AI Model

Assess the openness of any model.

Use Forrester’s MOF to tease out the meaningful differences in the degree of model openness not just for “open-source” models but for any model. Some deliver strong community momentum and permissive licensing, while others excel in usability and production readiness but offer more limited reproducibility. Applying the MOF helps you quickly see which models best align with your priorities — whether you need deep transparency for regulated environments, flexible licensing for commercial deployment, or an active community for ongoing innovation.

Forrester Clients Can Access The Full MOF

Forrester clients have access to two powerful resources to put the Model Openness Framework into practice.

The full MOF report, Forrester’s Open-Source AI Model Openness Framework, provides in-depth guidance, including detailed assessment scales for reproducibility, usage, and community, along with strategic recommendations for aligning model openness with your enterprise goals.

In addition, clients can use Forrester’s Open-Source AI Model Openness Framework Tool, an Excel-based template. Select the most accurate rating for your model on each of the 12 criteria from the drop-down menus, and instantly see a total score plus the types of use cases the model is best suited for.

Let’s talk. Forrester clients with questions related to this can connect with me by booking an inquiry or guidance session.