{"id":3774,"date":"2023-11-13T21:35:28","date_gmt":"2023-11-14T02:35:28","guid":{"rendered":"https:\/\/dfpdigital.com\/?p=3774"},"modified":"2023-12-07T09:43:21","modified_gmt":"2023-12-07T14:43:21","slug":"how-to-use-ai-responsibly-5-ways-to-confront-ai-bias","status":"publish","type":"post","link":"https:\/\/dfpdigital.com\/how-to-use-ai-responsibly-5-ways-to-confront-ai-bias\/","title":{"rendered":"How to Use AI Responsibly: 5 Ways to Confront AI Bias"},"content":{"rendered":"
By Amy Chivavibul

AI is not neutral – but there are ways to keep its biases in check.

My previous blog post, 3 Myths of Artificial Intelligence: The Hidden Bias Behind AI Tools, dispelled the myths that AI is neutral, that it’s only about the future, and that it’s completely automated.

AI is already here: Amazon uses generative AI to summarize product reviews, and OpenAI has announced new capabilities that allow ChatGPT to “see, hear, and speak.” Gartner even predicts that conversational AI will be embedded within 40% of enterprise applications by 2024 – a huge increase from less than 5% in 2020.

AI, while imperfect, is a powerful tool that can improve efficiency – from writing emails and creating Excel formulas to organizing a messy computer desktop.

AI bias must be confronted to create a more equitable world in the Information Age. Follow these strategies to use AI in more ethical and responsible ways:

1. Fact-check and request citations.

Generative AI can instantly create content like emails, poetry, images, and even code. Despite their near-human fluency, AI outputs are unpredictable.

AI can ‘hallucinate’ and create outright falsehoods: ChatGPT has fabricated citations and made definitive statements about uncertain historical events, and biased text-to-image models applied to suspect sketching could lead to wrongful arrests.

When using chatbots and other generative AI tools, fact-checking is essential.

Jeff Hunter, a leader of AI innovation at Ankura, describes one way his team keeps misinformation and bias at bay when interacting with chatbots: “We wrote our own bot capability so that every time there’s a response it lists off the citations at the end, like a book would do, and then we have a validator engine that verifies all those links are real and can find representative data of the answers it gave back to us.”

What does this look like for the average user? Requesting citations in prompts as often as possible.
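The same habit carries over when a model is called through an API rather than a chat window. The snippet below is only a minimal sketch, assuming the OpenAI Python SDK (v1 or later), an API key in the environment, and an illustrative model name; all it does is fold the citation request directly into the prompt.

```python
# Minimal sketch: bake a citation request into the prompt itself.
# Assumes the OpenAI Python SDK (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Write me a list of 3 dinner ideas for someone diagnosed with "
    "high blood pressure. Each idea must include a citation with a full URL."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name; substitute whichever model you use
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```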

But fact-checking doesn’t end with just a citation, as Hunter is quick to point out: “Always check your citations. If you can’t verify the source, you can’t verify the answer.”

For example, when I prompted ChatGPT to “Write me a list of 3 dinner ideas for someone diagnosed with high blood pressure. Ideas must include a citation,” the chatbot returned a detailed list with accurate-sounding sources.

\"Write<\/picture><\/span><\/div>
\"\"<\/picture><\/span><\/div>

However, only one of the three links was legitimate. The link to the Mayo Clinic’s DASH Diet took me to a random image on the Mayo Clinic website, and the American Heart Association’s “Vegetarian Chickpea Curry” recipe returned a 404 error.

\"Screenshot<\/picture><\/span><\/div><\/div><\/div>
\"\"<\/picture><\/span><\/div><\/div><\/div><\/div>

The takeaway? ChatGPT and other generative AI tools are not always trustworthy.

2. Educate yourself (and others) on AI bias.

AI can contribute to systemic bias in several ways – from flawed facial recognition technology to models that target Black and Latin American neighborhoods for daily police patrols.

It is important to consider the different social contexts in which AI has disproportionately impacted Black, Indigenous, and People of Color (BIPOC). Learn more about AI bias here.

With this knowledge base, users can empower themselves, becoming active learners and critical thinkers while interacting with AI tools.

Encourage your peers and teams to avoid taking AI outputs at face value and to resist becoming passive users.

Consider discussing the following questions: