Anthropic now requires select users to successfully complete a physical government-issued ID document verification (PIDV) process “for a few use cases,” although those use cases are not currently specified. Anthropic is the data controller in the process and will use IDV provider Persona Identities to conduct the identity verification. Identity verification prompts may be triggered when Claude users access certain capabilities, as part of Anthropic’s “routine platform integrity checks, or other safety and compliance measures.”

According to Anthropic, it is taking these steps as part of its broader, ongoing AI safety commitments to address risks of AI misuse. Facing growing abuse by cybercriminals and emerging regulations that demand stronger user accountability, the company has elected to take an enterprise-grade user identity verification approach.

If identity verification fails due to a blurry photo, an unreadable document, an expired ID, or a technical issue, users are permitted additional attempts. If a user exhausts all attempts, they may contact Anthropic through an online help form. In line with Anthropic’s intent to prevent abuse, enforce usage policies, and comply with legal obligations, accounts may be banned following verification for reasons including repeated policy violations, account creation from unsupported locations, terms-of-service breaches, or underage use. If a user believes their account was disabled in error, there is an appeals process.

From Forrester’s point of view, the expected benefits for Anthropic from introducing PIDV will include:

  • Better user verification, leading to more secure operations and fewer attacks against its models
  • Easier and more accurate user correlation and activity tracking
  • Deterrence of users from perpetrating hacking and abuse
  • The potential for enterprises to reuse existing B2C user verifications performed by Anthropic and other AI vendors for B2E employee verification processes

Forrester expects some of the potential challenges for Anthropic from introducing PIDV will include:

  • Defending the privacy safeguards the company has promised to users
  • User frustration and attrition resulting from the IDV customer experience or from opposition to IDV processes for simple search operations
  • Defending the fairness of the appeals process to large user populations

Identity verification for high-risk, high-value transactions, including those in the public sector, banking, insurance, and healthcare, has long required PIDV or other forms of strong identity verification/assurance. Requiring PIDV for certain use cases shows that Anthropic believes that generative AI and AI agents have become providers of high-risk, high-value transactions. Some simple queries (e.g., asking genAI to summarize a sports team’s strategy for an average spectator) are not high-risk, high-value and do not need high levels of identity assurance, much as simple web searches via a search engine do not require IDV or even authentication.

Anthropic’s move may also prompt other genAI and search bellwethers (Google, Microsoft, OpenAI) to further restrict and secure the use of their services. While this improves security, it may also reduce usability and access.

Users may respond to these new identity verification requirements by migrating to other genAI/LLM vendors that do not require IDV or by hosting and maintaining their own LLMs (e.g., via Ollama).