At the end of October, the Biden White House issued a comprehensive executive order on artificial intelligence, touching on topics such as advancing American AI leadership and encouraging AI innovation. Much of the 100-page-plus document focuses on mitigating AI risk and promoting responsibility. Whether your organization is preparing for impending regulation or trying to bridge the AI trust gap with customers and employees, the executive order (EO) can help you prioritize your efforts. Here are four responsible AI practices to begin implementing today:
- Create mechanisms for accountability and remediation. The executive order provides new ways to report harmful AI practices and hold irresponsible managers of AI systems accountable. For example, the EO indicates that the Department of Health & Human Services will establish a safety program to receive reports of unsafe healthcare practices involving AI. It also directs the Department of Justice and federal civil rights offices to coordinate on best practices for investigating and prosecuting AI-related civil rights violations. Companies should hold themselves accountable by developing feedback mechanisms for identifying and resolving unsafe AI practices. Don’t wait for the government to do it for you!
- Demand greater transparency from AI vendors. A lack of transparency is a major concern for buyers of AI systems. The National Institute of Standards and Technology plans to set red-team testing standards for foundation models, and developers of these models will be required to share their results with the federal government before making their systems publicly available. Companies have an opportunity to better understand the safety and trustworthiness of AI models by requesting these test results in their procurement process. In response to the demand for greater transparency, some vendors such as Twilio are offering AI “nutrition labels” with details about model provenance.
- Engage authentically, not deceitfully. With reports of deepfake-enabled election meddling springing up, it’s only natural that Americans are wary of developments in AI. The EO aims to protect Americans from AI-enabled deception by enlisting the Department of Commerce to develop standards for labeling AI-generated content and authenticating communications that Americans receive from the government. Businesses using AI to engage with customers can’t wait on the sidelines. Sports Illustrated recently stumbled into the uncanny valley when it allegedly published articles from fake AI authors. Customers don’t like being deceived — let them know when they are dealing with a robot.
- Advance equity by reducing harmful bias. When used irresponsibly, AI perpetuates societal bias — for instance, a nonprofit research organization found that Facebook’s ad algorithm showed job advertisements to different genders disproportionately. The EO takes steps to promote responsible AI through guidance to government agencies. For example, the order directs agencies to provide guidance to landlords, federal benefits programs, and federal contractors on preventing AI algorithms from producing discriminatory outcomes. The EO also includes plans to develop best practices for responsible AI use in the criminal justice system. Companies should employ best practices for ensuring fairness across the AI lifecycle, such as evaluating their models using a range of fairness metrics.
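To make the last point concrete, here is a minimal sketch of what "a range of fairness metrics" can look like in practice. The group data and decision outcomes below are entirely hypothetical; the two metrics shown — demographic parity difference (gap in favorable-decision rates) and equal opportunity difference (gap in true-positive rates) — are standard starting points, not an exhaustive audit.

```python
# Hypothetical example: comparing model decisions across two demographic groups.

def demographic_parity_diff(preds_a, preds_b):
    """Difference in favorable-decision rates between group A and group B."""
    rate_a = sum(preds_a) / len(preds_a)
    rate_b = sum(preds_b) / len(preds_b)
    return rate_a - rate_b

def equal_opportunity_diff(preds_a, labels_a, preds_b, labels_b):
    """Difference in true-positive rates: of the people who actually
    qualified (label == 1), how often did each group get a favorable call?"""
    def tpr(preds, labels):
        qualified = [p for p, y in zip(preds, labels) if y == 1]
        return sum(qualified) / len(qualified)
    return tpr(preds_a, labels_a) - tpr(preds_b, labels_b)

# Hypothetical model outputs (1 = favorable decision) and ground-truth labels:
group_a_preds  = [1, 1, 0, 1, 0, 1]
group_a_labels = [1, 1, 0, 1, 1, 0]
group_b_preds  = [1, 0, 0, 0, 1, 0]
group_b_labels = [1, 1, 0, 1, 1, 0]

print(demographic_parity_diff(group_a_preds, group_b_preds))
print(equal_opportunity_diff(group_a_preds, group_a_labels,
                             group_b_preds, group_b_labels))
```

A near-zero value on one metric does not guarantee fairness on another — the metrics can disagree — which is why evaluating across several, on data representative of real users, matters.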
Don’t wait for the EO to manifest in regulations and standards before investing in responsible AI. By employing these best practices today, you will build trust in AI, mitigate risk, and perhaps even unlock new market opportunities. Schedule an inquiry with me today.