The White House Puts Its Money Where Its AI Is — Enterprises Need To Act Now
The White House announced investments and actions to put the Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (AI RMF 1.0) to the test. Tackling AI risk head on, the Biden administration engaged with Alphabet, Anthropic, Microsoft, and OpenAI, with additional focus on the impact of generative AI. In addition, the Department of Justice, Federal Trade Commission (FTC), Consumer Financial Protection Bureau (CFPB), and Equal Employment Opportunity Commission (EEOC) established AI principles advocating robust data collection and analysis to mitigate bias and discrimination.
Conversations with AI leaders show that AI governance is still in its early days, yet the tsunami is coming, and its impact will be felt in every enterprise. Leaders need to be prepared, because they are accountable for how their organizations use AI. In the US, 17 states and the District of Columbia have pending AI legislation, as well as AI task forces reviewing existing laws on cyberattacks, surveillance, privacy, and discrimination in light of AI’s potential impacts. The time is now for enterprise AI governance to ensure:
- Evaluations of embedded AI in applications and platforms. Fifty-one percent of data and analytics decision-makers are buying applications with embedded AI capabilities, and 45% are leveraging pretrained AI models. Enterprises need AI policies to test for effectiveness, responsibility, and business and data processing risks. When a software-as-a-service vendor’s embedded AI conflicts with enterprise policies, the vendor must be able to demonstrate how it can move models on-premises, support shut-off configurations, and release updates.
- Controls around IP use and infringement. Foundation models and generative AI expose enterprises to entitlement and IP violations. US courts have recently affirmed that only humans, not AI, can be recognized as creators for IP purposes; other countries, such as Australia, have reached similar conclusions. Enterprises need a comprehensive understanding of data sources; a process for validating training data, algorithms, and code; and automated controls to avoid IP violations.
- Product safety standards on AI. AI leaders, such as Alphabet’s Sundar Pichai, have called for regulation rather than proactively addressing AI risk, even as harmful propaganda and misinformation have increased. The EU AI Act is an attempt to counteract that trend by extending product safety regulation to AI use. In the US, the CFPB and FTC are examining existing product safety, libel, and consumer protection laws. Legal teams need to prepare for regulatory compliance and potential class-action lawsuits as enterprise AI capabilities come under regulatory scrutiny.
- Inclusiveness as part of AI ethics. AI ethics that do not consider inclusivity are incomplete. With more black-box machine-learning models, such as large language models and neural nets, organizations will struggle to ensure that model behavior does not violate civil or human rights laws. Enterprises must take action to minimize bias in training data and model outcomes and also recognize that conversations about AI and ethics must involve a broad set of stakeholders.
- Data integrity and observability. Enterprises need to be able to trace and explain their data. New York State has a regulation under review that would require disclosure of data sources and any use of synthetic data. While most organizations track data sources and observe AI only once a model is in production, data governance must extend into data science processes and data sourcing to proactively address data transparency and usage rights throughout the AI lifecycle.
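One concrete way to operationalize the outcome-bias checks called for in the inclusiveness bullet above is the "four-fifths rule" disparate-impact ratio: the favorable-outcome rate for a protected group divided by that of the reference group, with values below 0.8 flagged for review. A minimal sketch in Python (function and data names are illustrative, not drawn from any specific regulation or vendor tool):

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Favorable-outcome rate per group, from (group, selected) pairs."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, selected in outcomes:
        totals[group] += 1
        favorable[group] += int(selected)
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes, protected, reference):
    """Protected group's selection rate over the reference group's.
    Ratios below 0.8 fail the common 'four-fifths' screen."""
    rates = selection_rates(outcomes)
    return rates[protected] / rates[reference]

# Illustrative audit: group "b" receives the favorable outcome 25% of
# the time vs. 40% for group "a".
outcomes = ([("a", True)] * 40 + [("a", False)] * 60 +
            [("b", True)] * 25 + [("b", False)] * 75)
ratio = disparate_impact_ratio(outcomes, protected="b", reference="a")
print(round(ratio, 3))  # 0.625 -- below 0.8, so this model would warrant review
```

A screen like this is only a starting point; it should feed the broader stakeholder conversations the bullet describes rather than replace them.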
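The traceability and disclosure demands in the data integrity bullet above can be met with per-source lineage records: a content hash for tamper-evident traceability plus the fields (origin, synthetic flag, usage rights) that emerging disclosure rules may require. A minimal sketch in Python, with hypothetical field and file names:

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(source_name, payload, synthetic=False, license_terms=None):
    """Build a minimal lineage record for one data source: a SHA-256
    content hash for traceability, plus disclosure fields for origin,
    synthetic-data use, and usage rights."""
    return {
        "source": source_name,
        "sha256": hashlib.sha256(payload).hexdigest(),
        "synthetic": synthetic,
        "license": license_terms,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

# Records like this can be emitted at each transformation step so a
# production model's training data can be traced back to its origins.
record = provenance_record("customer_survey_2023.csv", b"...raw bytes...",
                           synthetic=False, license_terms="internal-use-only")
print(json.dumps(record, indent=2))
```

Hashing the raw bytes, rather than trusting the file name, is what lets auditors confirm later that the data a model was trained on is the data that was disclosed.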
As regulators and courts start to scrutinize the use of AI, enterprises need to quickly build AI governance as a bulwark against risk. Expecting data science and AI teams to tackle AI governance alone is a recipe for failure. AI governance will require enterprisewide cooperation — including CEOs, leadership teams, and business stakeholders — to build effective processes and policies.