2023 Year of #GirlhoodThe Power of Women in 2023

Building Ethical AI: 4 Strategies for Mitigating Bias in Systems Design

Last Updated: April 24th, 2024
By Amy Chivavibul

AI is not only shaping our present, but also our future.

We are at an exciting time in AI development. 49% of the general population uses generative AI, and 34% uses it every day, according to a 2023 Salesforce survey. Businesses are leveraging AI to obtain competitive advantage by automating processes, supporting cybersecurity and fraud management, and enhancing customer experiences. With growing demand, companies like OpenAI, Google, and Minstral are racing to release new and advanced LLMs (Large Language Models) that outperform the competition.

AI is a part of our present and is the future. Yet as we navigate this pivotal era in AI development, we also face existential and ethical concerns. How can we ensure we’re creating the “right” future? Who gets to decide what this future looks like? 

This discussion builds upon topics from my previous blog posts on AI, 3 Myths of Artificial Intelligence and How to Use AI Responsibly. In this exploration, we dive deeper into actionable strategies that developers, engineers, designers, and organizations can employ to design AI systems that prioritize bias reduction and ethical implementation.

abstract illustration of an ai/machine and human facing eachother1. Prioritize Transparency and Explainability

Transparency in AI refers to the clarity and openness with which AI systems operate, providing insight into their functioning. Explainability complements transparency by ensuring that the rationale behind AI decisions can be comprehended and trusted by humans. Together, these principles allow developers, engineers, and end-users alike to peer into the “black box” of AI systems and identify potential biases.

Leverage LLM-to-LLM Evaluation: Enhance AI explainability by using LLMs to evaluate the outputs of other LLMs. This approach utilizes specific metrics like context relevance, groundedness, and answer relevance, leveraging the evaluators’ capacity to interpret complex language patterns efficiently. Employing one LLM to assess another not only improves the accuracy of explanations, but also ensures they are meaningful and contextually appropriate for technical and non-technical audiences.

Model Introspection: Use self-attention mechanisms to reveal how the model prioritizes internal computations, providing insights into its reasoning. Implement probing with auxiliary models to analyze and explain LLM’s internal activities, helping clarify how decisions are formed.

Use Open-Source Explainability Tools: Implement tools and techniques like LIME (Local Interpretable Model-agnostic Explanations) or SHAP (SHapley Additive exPlanations) to help understand how input features influence predictions. These tools can break down the decision-making process, offering a detailed breakdown of each feature’s contribution to the final decision. LIME and SHAP, however, have limitations. These more traditional methods can be computationally intensive and may not fully capture the complex interactions within LLMs.

Enhance Data Transparency: Ensure that all data used in training the AI is well-documented, clearly describing the source, type, and characteristics of the data. This transparency helps  developers, engineers, and relevant stakeholders understand the data’s influence on the AI decision-making and behavior.

Develop Clear Documentation and Visualizations: Create comprehensive documentation and visual explanations of how the AI system works. This includes visualizing a model’s decision-making process, which can make the AI’s outputs more accessible to non-experts and facilitate easier identification of potential bias.

2. Perform Continuous Bias Monitoring and Assessment.

Given the dynamic and fast-paced nature of AI development, continuous monitoring and assessment is essential for identifying “bias creep” and addressing issues proactively. This ongoing vigilance helps ensure AI systems remain fair and effective over time.

Develop Tests to Measure Implicit Bias: To effectively monitor bias, particularly in LLMs, it is crucial to establish robust tests that measure and reveal hidden prejudices that are not immediately apparent through standard performance metrics. Consider using the LLM Implicit Association Test (IAT) Bias and the LLM Decision Bias as systematic measurements of bias. The LLM IAT Bias utilizes a prompt-based approach to reveal underlying biases, while the LLM Decision Bias specifically detects subtle discriminatory patterns in decision-making tasks. Inspired by the psychological insights from Project Implicit, these measures provide a comprehensive framework for assessing both explicit and implicit biases in AI systems. Consider testing your own implicit bias by taking one of the many assessments.

Use Open-Source Bias Evaluation Tools: Leverage open-source tools to facilitate regular and systematic bias assessments. The Evaluate library by Hugging Face, for example, is an excellent resource for metrics on toxicity, polarity, and hurtfulness that are specifically designed for bias detection. Similarly, responsibility metrics by DeepEval, such as the BiasMetric and ToxicityMetric, allow developers to identify and mitigate potential biases within LLMs.

Implement Automated Monitoring Software: Consider implementing automated monitoring software that regularly reviews the AI system’s performance and benchmarks it against a breadth of bias indicators – such as LLM IAT Bias, LLM Decision Bias, DeepEval’s Responsibility metrics, and measurements within in the Hugging Face Evaluate library. Configure software to flag deviation in real-time, alerting teams to potential biases as they arise. Automated tools not only facilitate a more consistent review process, but also enable teams to directly respond to issues, ensuring that AI systems continue to operate within ethical bounds.

3. Apply “Blind Taste Tests.”

Another practical approach to bias reduction is the application of “blind taste tests,” a method adapted from consumer marketing research to the realm of AI decision-making processes.

The concept of blind taste tests dates back to marketing strategies like the Pepsi Challenge of the 1970s, where consumers were asked to taste a product without knowing the brand, thus eliminating preconceived biases towards a particular label. This approach has proven effective in revealing true preferences and overcoming bias based on brand perception alone.

We can employ a similar strategy by withholding certain information from an AI model during the training phase. This method involves initially training the model on all available data and then selectively retraining it without the data suspected of introducing bias.

Such forms of data redaction force AI systems to function “blindly” – just like consumers in the Pepsi Challenge – and break the self-perpetuating effect of bias.

4. Commit to Interdisciplinary Collaboration & Diversity in Teams.

Responsible AI development requires more than a team of intelligent engineers – it demands collaboration among teams with different backgrounds, experiences, identities, and skill sets.

Incorporating insights from various disciplines enhances our understanding of bias and fairness in AI systems. For example, social scientists and ethicists can provide crucial insights into the social impacts and ethical considerations of AI, while legal experts ensure compliance with regulations and foster accountability.

Diverse teams are also instrumental in making better decisions, questioning the status quo, and challenging assumptions. According to a 2017 study by Cloverpop, teams with age, gender, and geographic diversity make better decisions up to 87% of the time, in comparison to individual decision makers.

As we design the AI of tomorrow, it is essential that we leverage diversity of thought. A collaborative approach will pave the way for transformative AI solutions that are not only technically sound, but also socially responsible.

Conclusion

These strategies represent a starting point for organizations to develop rigorous regulations to mitigate bias in AI system design.

To truly benefit from AI, we must commit to continuous learning and improvement, engage with new research, embrace regulatory changes, and listen to the global community affected by new technologies.

Every developer, engineer, designer and organization must recognize their role in (re)-shaping the future with AI. It is not just about building AI – it is about building AI that builds a better world for everyone.

2023 Year of #GirlhoodThe Power of Women in 2023

Build Better Lines of Connection.