As AI adoption accelerates across the public sector, so do the questions from stakeholders and employees:

  • Can I trust this system to treat me fairly?
  • Will it help me do my job — or replace me?
  • Who’s accountable when it gets something wrong?
  • Who is controlling the answers?

These aren’t just technical questions. They’re human ones. And they demand a human-centered response.

The AI Trust Gap In Government

Government agencies face a unique trust challenge. Unlike private-sector firms, they cannot choose their customers: they must uphold empathy, transparency, and accountability for every constituent while navigating complex regulatory environments and diverse stakeholder needs. AI’s “black box” nature — its opacity, probabilistic logic, and tendency to reflect societal bias — only deepens the trust gap.

To bridge it, public agencies must go beyond compliance. They must build AI systems that are not only lawful but lovable: systems that people want to work with and believe in.

The Seven Levers Of Trust: A Framework For Government AI

Forrester’s seven levers of trust — accountability, competence, consistency, dependability, empathy, integrity, and transparency — offer a practical blueprint for building AI that earns confidence from both constituents and employees.

Let’s explore how each lever applies in a government context, along with action steps for building trust:

  • Accountability: the willingness to take responsibility for outcomes

Take ownership of AI outcomes. Establish ethics boards, audit systems regularly, and communicate openly when errors occur.
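For illustration, here is a minimal Python sketch of the kind of append-only decision log that makes regular audits and honest error reporting possible. The `DecisionRecord` fields and `log_decision` helper are hypothetical, not a reference to any agency’s actual system.

```python
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable record of an automated decision."""
    timestamp: str
    model_version: str
    input_hash: str  # hash rather than raw data, to protect constituent privacy
    decision: str
    confidence: float

def log_decision(model_version: str, raw_input: str, decision: str,
                 confidence: float, path: str = "audit_log.jsonl") -> None:
    """Append one decision record so auditors can trace outcomes later."""
    record = DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        input_hash=hashlib.sha256(raw_input.encode()).hexdigest(),
        decision=decision,
        confidence=confidence,
    )
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: record a benefits-eligibility decision for a future audit.
log_decision("eligibility-model-v3.1", "applicant-record-123", "approved", 0.91)
```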

  • Competence: the ability to do something effectively and reliably

Ensure that your AI is fit for purpose. Quantify uncertainty and adopt best practices such as model risk management.
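As one way to quantify uncertainty, the sketch below (using scikit-learn, toy data, and an assumed confidence threshold that a program office would set as policy) returns the model’s answer only when it is confident enough, and otherwise routes the case to a human caseworker.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data standing in for an agency's historical case records.
X_train = np.array([[0.2, 1.0], [0.9, 0.1], [0.4, 0.8], [0.8, 0.3]])
y_train = np.array([0, 1, 0, 1])
model = LogisticRegression().fit(X_train, y_train)

CONFIDENCE_FLOOR = 0.80  # illustrative policy threshold

def decide(case: np.ndarray) -> str:
    """Use the model's decision only when it is confident enough;
    otherwise refer the case to a human caseworker."""
    proba = model.predict_proba(case.reshape(1, -1))[0]
    confidence = proba.max()
    if confidence < CONFIDENCE_FLOOR:
        return f"refer to human review (confidence {confidence:.2f})"
    return f"class {proba.argmax()} (confidence {confidence:.2f})"

print(decide(np.array([0.5, 0.5])))  # a borderline case will likely be referred
```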

  • Consistency: the ability to deliver stable, repeatable results over time

Use ModelOps to monitor and retrain models. Standardize deployment protocols to ensure reliable performance.
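One common consistency check in ModelOps pipelines is the population stability index (PSI), which measures how far live inputs have drifted from the training baseline. The sketch below uses synthetic data and the conventional rule-of-thumb threshold of 0.2; both are illustrative.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray, live: np.ndarray,
                               bins: int = 10) -> float:
    """Compare a feature's live distribution to its training baseline.
    Rule of thumb: PSI above 0.2 signals meaningful drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    expected, _ = np.histogram(baseline, bins=edges)
    actual, _ = np.histogram(live, bins=edges)
    eps = 1e-6  # avoid division by zero and log(0) in empty bins
    expected = expected / expected.sum() + eps
    actual = actual / actual.sum() + eps
    return float(np.sum((actual - expected) * np.log(actual / expected)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)  # distribution at deployment time
live = rng.normal(0.4, 1.2, 5000)      # drifted production traffic
psi = population_stability_index(baseline, live)
if psi > 0.2:
    print(f"PSI = {psi:.3f}: drift detected; schedule review and retraining")
```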

  • Dependability: the assurance that systems will perform as expected under real-world conditions

Simulate AI outcomes before real-world use. Stress-test systems to uncover vulnerabilities.
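A simple stress test is to perturb inputs with small amounts of noise and measure how often decisions flip. The sketch below runs that check on a toy model; the noise scale and data are assumptions chosen only to illustrate the technique.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic labels for a toy model
model = LogisticRegression().fit(X, y)

def stability_under_noise(model, X, noise_scale=0.1, trials=100) -> float:
    """Fraction of cases whose decision survives small input perturbations,
    a crude proxy for dependable behavior on messy real-world data."""
    base = model.predict(X)
    stable = np.ones(len(X), dtype=bool)
    for _ in range(trials):
        noisy = X + rng.normal(scale=noise_scale, size=X.shape)
        stable &= model.predict(noisy) == base
    return float(stable.mean())

print(f"{stability_under_noise(model, X):.1%} of decisions are noise-stable")
```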

  • Empathy: the capacity to understand and reflect stakeholder needs and values

Involve stakeholders in design. Use “bias bounties” to crowdsource fairness checks.
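A bias bounty needs a measurable target. One of many possible fairness signals is the demographic parity gap, the difference in favorable-outcome rates between groups; the toy decisions and group labels below are purely illustrative.

```python
import numpy as np

def demographic_parity_gap(decisions: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in favorable-outcome rates between two groups.
    A gap near zero is one (imperfect) signal of even-handed treatment."""
    rate_a = decisions[group == 0].mean()
    rate_b = decisions[group == 1].mean()
    return float(abs(rate_a - rate_b))

decisions = np.array([1, 1, 1, 0, 1, 0, 0, 0])  # 1 = benefit approved
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])      # two demographic groups
print(f"approval-rate gap: {demographic_parity_gap(decisions, group):.2f}")
```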

  • Integrity: the commitment to act ethically and avoid harm

Appoint a chief trust officer. Proactively mitigate bias and uphold ethical standards.

  • Transparency: the openness to explain how decisions are made and why

Invest in explainable AI. Make decision-making traceable and communicate clearly with the public.
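Explainability tooling ranges from simple to sophisticated. As a lightweight example, the sketch below uses scikit-learn’s permutation importance on synthetic data to surface which inputs drive a model’s decisions; the feature names are invented for illustration.

```python
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
feature_names = ["income", "household_size", "prior_claims"]  # illustrative
X = rng.normal(size=(300, 3))
y = (X[:, 0] - 0.5 * X[:, 2] > 0).astype(int)  # synthetic target
model = LogisticRegression().fit(X, y)

# Permutation importance: how much does accuracy drop when one feature
# is randomly shuffled? Bigger drops mean the model leans on that feature.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>15}: {score:.3f}")
```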

From “Two Beers And A Puppy” To “Gaps And Discord”: A More Practical Trust Test

In workshops, I used to reference the “two beers and a puppy” test — a metaphor for likability and reliability. But in the context of AI in government, we need something more actionable. Trust isn’t just about how AI makes us feel; it’s about how it behaves in the real world.

Let’s reframe the trust test through two communication dynamics that consistently erode confidence in both people and systems:

  • Gaps in communication: silence or delayed responses, unclear expectations, missing context
  • Discord in communication: tense tone or defensiveness, misalignment of messaging, frequent conflict

When AI systems fail to explain themselves — or when their outputs contradict human expectations — they create gaps. When they deliver results that feel misaligned with values or tone, they create discord. Both erode trust.

Agencies must design AI systems that communicate clearly, consistently, and empathetically — just like a trusted colleague would.

NIST & CISA’s Role In Building AI Trust

The National Institute of Standards and Technology (NIST) AI Risk Management Framework gives agencies a shared vocabulary for governing, mapping, measuring, and managing AI risk. The Cybersecurity and Infrastructure Security Agency (CISA) is helping agencies operationalize these principles. Its AI roadmap emphasizes responsible use, assessment and assurance, and protection against malicious use, and its recent guidance on AI data security and trust calibration training provides actionable tools for agencies to build trustworthy systems from the ground up.

Building Trust With Employees

Employees aren’t just users of AI — they’re stewards of it, and agencies must equip them for that role.

As I often say in storytelling sessions, “documents we create today will be read by AI tomorrow.” That means we must say the quiet parts out loud — clarify our intent, surface our values, and help others understand where their curiosity can lead them.

Final Thought: Trust Is A Strategy

Trust isn’t a soft skill. It’s a strategic asset. Agencies that lead with trust will unlock AI’s full potential — serving constituents more equitably, empowering employees more effectively, and fulfilling their public mission with integrity.

To learn more about AI adoption, check out my research on curiosity velocity and schedule an inquiry session with me by emailing inquiry@forrester.com.