AI couldn’t stay out of the headlines in 2023. Alongside the ongoing string of generative AI announcements, AI also drew legislative and legal scrutiny. Both AI technology companies and the enterprises that use these technologies faced investigations related to their AI use. All evidence points to 2024 showing no signs of slowing down on either innovation or legislation. We are already seeing social media companies, built on algorithms, answer to US legislators about the harms of these platforms and ways to legislate to protect children.
That’s the mile-high view. The street-level view is a patchwork of laws, executive orders, and pending legislation across federal, state, and local jurisdictions with which enterprises and technology vendors must contend. In 2023 alone, 190 bills were introduced at the state level to regulate AI, and 14 became law. At the federal level, the Federal Trade Commission has begun to enforce existing laws with new powers from executive orders, as well as greater attention from FTC leadership. This could have a chilling effect on enterprise AI innovation and strategy. In reality, regulation isn’t stopping AI leaders from pushing ahead, but it is changing the calculus on which AI use cases enterprises pursue and how.
In our new report, Navigate The Patchwork Of US AI Regulations, we answer the top six questions to help every organization navigate the US AI regulatory chaos. With this patchwork of regulations, there’s no clear path to an AI strategy that doesn’t overstep current regulatory requirements, especially as they evolve. Here are four things that every organization should know as it continues to develop and deploy AI in safe and legal ways:
- Don’t take your eyes off the regulatory horizon. Enterprises will find it increasingly difficult to stay abreast of, and ahead of, all the potential regulations to come. While watching the regulatory horizon, enterprises must focus on the existing US laws that already cover aspects of AI. They must also consider whether their industry has created specific standards or requirements for AI. When it comes to prioritization, enterprises should continue to leverage existing risk management and governance practices in light of current regulations.
- Upskill employees in technical and nontechnical roles. All roles within your enterprise are on the hook for regulatory compliance. Those in technical roles will need to incorporate new regulatory requirements into the models that they build, train, and oversee. Business roles, on the other hand, will need to learn how to use AI responsibly in their day-to-day tasks. Enterprises need to build processes and practices to educate all employees on how to ensure that their AI use does not violate enterprise policies or regulations.
- Double down on vetting and monitoring third-party AI providers. Assurances and indemnifications may ease buyers’ concerns when purchasing AI-enabled models, data sets, products, or services, but a certificate of “assurance” will not be enough to get them out of hot water when AI is used in a way that falls outside the covered terms. Additionally, any AI purchased or acquired from external entities (yes, even the free open-source AI) will require the same level of, if not greater, due diligence and ongoing monitoring as other third-party products and services.
- Continue to innovate with AI — just do it safely. Despite the regulatory chaos, enterprises should continue to find new use cases and ways to incorporate AI into their organization. To successfully innovate without exposing enterprises to undue risks, AI leaders need to partner with risk and compliance teams to navigate the regulatory landscape.
To learn how best to deliver on your AI ambitions despite regulatory uncertainty, read our report, and schedule an inquiry or guidance session with Michele Goetz to learn how this affects your organization specifically or with Alla Valente to understand AI’s risk and regulatory compliance implications.