{"id":3774,"date":"2023-11-13T21:35:28","date_gmt":"2023-11-14T02:35:28","guid":{"rendered":"https:\/\/dfpdigital.com\/?p=3774"},"modified":"2023-12-07T09:43:21","modified_gmt":"2023-12-07T14:43:21","slug":"how-to-use-ai-responsibly-5-ways-to-confront-ai-bias","status":"publish","type":"post","link":"https:\/\/dfpdigital.com\/how-to-use-ai-responsibly-5-ways-to-confront-ai-bias\/","title":{"rendered":"How to Use AI Responsibly: 5 Ways to Confront AI Bias"},"content":{"rendered":"
My previous blog post, 3 Myths of Artificial Intelligence: The Hidden Bias Behind AI Tools<\/a>, dispelled the myths that AI is neutral, that it’s only about the future, and that it’s completely automated.<\/p>\n AI is already here: Amazon uses generative AI to summarize product reviews<\/a>, and OpenAI has announced new capabilities that allow ChatGPT to “see, hear, and speak<\/a>.” Gartner<\/a> even predicts that conversational AI will be embedded within 40% of enterprise applications by 2024 – a huge increase from less than 5% in 2020.<\/p>\n AI, while imperfect, is a powerful tool that can improve efficiency<\/a> – from writing emails and creating Excel formulas to organizing a messy computer desktop.<\/p>\n AI bias must be confronted to create a more equitable world in the Information Age. Follow these strategies to use AI in more ethical and responsible ways:<\/p>\n<\/div>1. Fact-check and request citations.<\/h2>\n Generative AI<\/a> can instantly create content like emails, poetry, images, and even code. Despite their near-human fluency, AI outputs are unpredictable.<\/p>\n AI can ‘hallucinate<\/a>’ and create outright falsehoods: ChatGPT has fabricated citations<\/a> and made definitive statements about uncertain historical events; and applying biased text-to-image models<\/a> for suspect sketching could lead to wrongful arrests.<\/p>\n When using chatbots and other generative AI tools, fact-checking is essential.<\/p>\n Jeff Hunter, a leader of AI innovation at Ankura, describes one way his team keeps misinformation and bias at bay when interacting with chatbots: “We wrote our own bot capability so that every time there’s a response it lists off the citations at the end, like a book would do, and then we have a validator engine that verifies all those links are real and can find representative data of the answers it gave back to us.”<\/p>\n What does this look like for the average user? Request citations in your prompts as often as possible.<\/p>\n But fact-checking doesn’t end with just a citation, as Hunter is quick to point out: “Always check your citations. If you can’t verify the source, you can’t verify the answer.”<\/p>\n For example, when I prompted ChatGPT to “Write me a list of 3 dinner ideas for someone diagnosed with high blood pressure. Ideas must include a citation,” the chatbot returned a detailed list with accurate-sounding sources.<\/p>\n<\/div>
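The first half of the workflow Hunter describes, checking that cited links are real, can be automated. The sketch below is a minimal illustration in Python, not Ankura’s actual validator engine: it assumes the chatbot’s response embeds plain http(s) URLs, and it only confirms that each link resolves to a live page.

```python
import re
import urllib.error
import urllib.request

# Matches plain http(s) URLs; trailing punctuation is stripped afterward.
URL_PATTERN = re.compile(r"https?://[^\s)\]>\"']+")


def extract_urls(text):
    """Pull every http(s) URL out of a chatbot response."""
    return [u.rstrip(".,;") for u in URL_PATTERN.findall(text)]


def url_resolves(url, timeout=5):
    """Return True only if the URL answers with a non-error HTTP status."""
    try:
        req = urllib.request.Request(
            url, method="HEAD", headers={"User-Agent": "citation-checker"}
        )
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, ValueError, OSError):
        return False


def validate_citations(response_text):
    """Map each cited URL to whether the link actually resolves."""
    return {url: url_resolves(url) for url in extract_urls(response_text)}
```

Even when every link resolves, a human still has to read the sources: a live URL can still fail to support the claim it is attached to, which is why Hunter pairs his validator with manual review.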