Powerful AI tools are now widely available, and many are free or low-cost. That lowers the barrier to adoption, but it also means the usual government safety checks, such as reviews by central IT departments, can be skipped, leaving risk distributed and harder to control. A recent EY survey found that 51% of public-sector employees use an AI tool daily. In the same survey, 59% of state and local government respondents said their agency made a tool available, compared with 72% at the federal level. But adoption comes with its own set of issues, and it doesn’t eliminate the use of “shadow AI,” even when authorized tools are available.

  • The first issue: the procurement workarounds for low-cost AI tools. In many cases, we can think of generative AI purchases as micro transactions. It’s $20 per month here, $30 per month there … and all of a sudden, the new tools fly under traditional budget authorization levels. In some state governments, that threshold is as low as $5,000 overall; ten $20-per-month seats, for example, cost just $2,400 per year. A director procuring generative AI for a small team wouldn’t come close to the levels where it would show up on procurement’s radar. Without delving too deeply into the minutiae of state-level procurement policies: California allows IT purchases between $100 and $4,999, as do other states, including Pennsylvania and New York.
  • The second issue: the painful processes in government. Employees often use AI tools to get around strict IT rules, slow purchasing, and long security reviews because they’re trying to work more efficiently and deliver the services that citizens rely on. But government systems hold large amounts of sensitive data, which makes unapproved AI use especially risky. These unofficial tools lack the monitoring, alerting, and reporting features that approved tools offer, which makes potential threats harder to track and manage.
  • The third issue: embedded (hard-to-avoid) generative AI. As AI becomes seamlessly integrated into everyday software — often designed to feel like personal apps — it blurs the line for employees between approved and unapproved use. Many government workers may not realize that using AI features such as grammar checkers or report editors could expose sensitive data to unvetted third-party services. These tools often bypass governance policies, and even unintentional use can lead to serious data breaches — especially in high-risk environments like government.

The use of “shadow AI” also creates new risks, including: 1) data breaches; 2) data exposure; and 3) data sovereignty issues (remember DeepSeek?). And those are just a few of the cyber issues. Governance problems include: 1) noncompliance with regulatory requirements; 2) operational friction from fragmented tool adoption; and 3) ethics and bias concerns.

Security and technology leaders need to enable the use of generative AI while mitigating these risks as much as possible. We recommend the following steps:

  1. Increase visibility as much as possible. Use cloud access security broker (CASB), data loss prevention (DLP), endpoint detection and response (EDR), and network analysis and visibility (NAV) tools to discover AI use across the environment. Use these tools to monitor, analyze, and, most importantly, report on the trends to peer leaders. Use blocking judiciously (if at all): if you remember the shadow IT lessons of the past, you know that blocking things just drives use further underground and you lose insight into what’s happening. (For a hypothetical illustration of log-based discovery, see the first sketch after this list.)
  2. Inventory AI applications. Using the data from the tools mentioned above and collaborating across departments, document where AI is being used and what it’s being used for. (The second sketch after this list shows one way to structure the records.)
  3. Adapt your review processes. Create a lightweight review process that accelerates approvals for smaller purchases, and roll out a third-party security review process that’s faster and easier for employees and contractors to navigate.
  4. Establish clear policies. Include use cases, approved tools, examples, and prompts. Use these policies to do more than articulate what’s approved: use them to educate employees on how to use the technology well.
  5. Train the workforce on what’s permitted and why. Explain why the policies exist and what risks they address, and use these sessions to show teams how to get the most out of approved tools: demonstrate configuration options, example prompts, and success stories.
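To make step 1 concrete, here’s a minimal sketch of log-based discovery. It assumes your proxy or CASB can export traffic as a CSV with host and department columns; the file name, column names, and domain list are illustrative assumptions, not any vendor’s actual schema.

```python
# Hypothetical sketch: flag generative AI traffic in an exported proxy/CASB log.
import csv
from collections import Counter

# Illustrative domain list; in practice, maintain this from your CASB
# vendor's app catalog.
GENAI_DOMAINS = {
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def summarize_ai_use(log_path: str) -> Counter:
    """Count generative AI requests per department from a CSV export
    with assumed 'host' and 'department' columns."""
    usage = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                usage[row.get("department", "unknown")] += 1
    return usage

if __name__ == "__main__":
    # "proxy_export.csv" is a placeholder file name.
    for dept, hits in summarize_ai_use("proxy_export.csv").most_common():
        print(f"{dept}: {hits} generative AI requests")
```

A commercial CASB will do this discovery and reporting out of the box; the point of the sketch is that even a basic log export can show which departments are using which AI services.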
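For step 2, here’s an equally minimal sketch of what an inventory record might capture. The field names are assumptions chosen for illustration, not a standard schema; adapt them to your own review criteria.

```python
# Hypothetical sketch: a minimal record for tracking discovered AI tools.
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    name: str                # e.g., "ChatGPT" (illustrative)
    vendor: str              # e.g., "OpenAI"
    departments: list[str]   # where it's in use
    use_cases: list[str]     # what it's being used for
    data_sensitivity: str    # e.g., "public", "internal", "restricted"
    approved: bool = False   # has it passed your review process?

inventory = [
    AIToolRecord(
        name="ChatGPT",
        vendor="OpenAI",
        departments=["communications"],
        use_cases=["drafting press releases"],
        data_sensitivity="internal",
    ),
]

# Simple triage: unapproved tools touching non-public data get reviewed first.
flagged = [t for t in inventory if not t.approved and t.data_sensitivity != "public"]
for tool in flagged:
    print(f"Review first: {tool.name} ({tool.vendor}), sensitivity={tool.data_sensitivity}")
```

Keeping approval status and data sensitivity in the same record makes triage straightforward and gives you the raw material for the reporting in step 1.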

Enabling the use of AI results in better outcomes for everyone involved. This is an excellent chance for security and technology leaders in government to drive innovation in both technology and process.

Need tailored guidance? Schedule an inquiry session to speak with me at inquiry@forrester.com.