AI-based discrimination, even when unintentional, can have dire regulatory, reputational, and revenue impacts. While most organizations embrace fairness in AI as a principle, putting processes in place to practice it consistently is challenging. Multiple criteria exist for evaluating the fairness of AI systems, and the right approach depends on the use case and its societal context. Technology architecture and delivery leaders in charge of AI development efforts should therefore adopt best practices that ensure fairness throughout the AI lifecycle, from problem definition through model development to deployment and ongoing performance monitoring.
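To make the point about multiple fairness criteria concrete, the sketch below computes two widely used group metrics, demographic parity difference and equal opportunity difference, on invented data. The group names, predictions, and outcomes are illustrative assumptions, not from the source; real evaluations would use production model outputs and protected-attribute data handled under appropriate governance.

```python
# Hedged sketch: two common group-fairness criteria on invented data.
# A score of 0.0 on either metric means the two groups are treated identically
# by that criterion; the two metrics can disagree, which is why the right
# choice depends on the use case.

def demographic_parity_diff(preds, groups):
    """Gap in positive-prediction rates between groups "A" and "B"."""
    rate = lambda g: sum(p for p, grp in zip(preds, groups) if grp == g) / groups.count(g)
    return abs(rate("A") - rate("B"))

def equal_opportunity_diff(preds, labels, groups):
    """Gap in true-positive rates (recall) between groups "A" and "B"."""
    def tpr(g):
        # Predictions for members of group g whose true outcome was positive.
        hits = [p for p, y, grp in zip(preds, labels, groups) if grp == g and y == 1]
        return sum(hits) / len(hits)
    return abs(tpr("A") - tpr("B"))

# Hypothetical binary predictions (1 = approved), true outcomes, and groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 1, 0, 1, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

dp = demographic_parity_diff(preds, groups)
eo = equal_opportunity_diff(preds, labels, groups)
print(f"demographic parity difference: {dp:.2f}")   # 0.50
print(f"equal opportunity difference:  {eo:.2f}")   # 0.50
```

On this toy data both criteria flag a gap, but on other data one can be satisfied while the other is violated, which is one reason selecting a fairness criterion requires understanding the use case and its societal context rather than applying a single default metric.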