Can AI be trusted? Do people put too much faith in this nascent technology? These questions have persisted — and will continue to evolve — as AI tools become more sophisticated and their applications more complex. The answers aren’t straightforward. Trust in AI isn’t a one-size-fits-all metric. It varies significantly depending on the specific use case and is shaped by cultural, demographic, and technological contexts. Forrester’s 2025 “Consumer Insights: Trust In AI” reports explore these nuances, revealing how consumer attitudes differ across North America, Europe, and Asia Pacific.

Familiarity Drives Use — But Not Always Confidence

Across all three regions, AI usage is climbing. In North America, 38% of US online adults have used generative AI, with 60% of those using it weekly. In Europe, nearly a third of consumers have tried genAI tools. And in APAC, adoption is highest in metro India, where over half of online adults report using it.

Yet this growing familiarity doesn’t always translate into trust. Many consumers feel conflicted — adopting AI tools while simultaneously fearing their misuse. In fact, half of genAI users in both North America and Europe admit that they don’t tell others they use it, citing feelings of shame and uncertainty. Consumers are worried about misinformation, fraud, and the erosion of control — especially when they can’t tell whether AI is being used in their interactions with companies.

Knowledge Breeds Polarization

One of the most revealing insights from the reports is the role of AI literacy. Consumers who consider themselves knowledgeable about AI are both more trusting and more skeptical. They’re more likely to recognize its potential — but are also more attuned to its risks. In Europe, fewer than 30% of consumers feel knowledgeable about AI, with Gen Zers and students the most likely to say so. In North America, only 24% of US and 20% of Canadian adults say the same. And in APAC, knowledge varies widely: Just 5% of metro Indians say they’re not knowledgeable, compared to 34% of Australians.

This divide matters. Knowledgeable consumers are significantly more likely to trust AI-generated information — but they’re also more likely to believe that AI is biased or poses a societal threat. For businesses, this means that building trust isn’t just about transparency — it’s about education.

Consumers Want Transparency And Governance — But Don’t Trust Institutions To Deliver It

Across all three regions, consumers are calling for stronger oversight. Most want companies to disclose when AI is used in customer interactions. Many also support government regulation — especially in Europe and Canada, where privacy concerns are paramount. But trust in institutions is low. In Europe, fewer than one in five consumers trust public-sector bodies to manage AI risks. In North America, only 15% of US adults trust companies that use AI with customers. And in APAC, while demand for regulation is high, faith in both public and private institutions to deliver it is limited.

Consumers are clear about what they want: protection from fraud, misuse, and unethical behavior. But they’re not convinced that governments or corporations are up to the task.

Why This Matters

As AI becomes more pervasive, trust will be a defining factor in its success. These reports offer a timely and data-driven look at how consumers are thinking about AI — and what organizations must do to earn their confidence.

Download the full reports to explore:

  • Regional differences in AI trust and adoption.
  • Demographic insights by age, gender, and employment status.
  • Consumer expectations for AI governance and risk management.

Forrester clients can explore the Europe, North America, and APAC reports and connect with us via guidance sessions or inquiries.