Khan Academy presents its AI tutors on its website as tools for “long-term learning,” but the reality is a bit more complicated. What the site doesn’t mention upfront is that the service allows students to chat with historical figures like Genghis Khan, Montezuma, Abigail Adams, and Harriet Tubman. Currently, the service is not available to everyone; it is limited to a few school districts and volunteer product testers.
Similar to ChatGPT, the avatars mine knowledge from the web to build the “vocabulary” of the bot a user is talking to.
The Washington Post tested the limits of this technology with Harriet Tubman’s avatar, to see whether the AI would mimic Tubman’s speech and patterns of thought or whether it would come across as an offensive impression or a regurgitation of Wikipedia information.
According to the article, the tool is designed to help educators spark students’ interest in historical figures, but there are limitations in how the bot is programmed, resulting in avatars that fail to embody the people they are meant to represent.
These AI interviews raised questions, not only about the ethics of the fledgling artificial intelligence field, but also about the ethics of conducting such an “interview” in the name of journalism. Many Black Twitter users were horrified at the idea of digitally exhuming a respected icon and ancestor like Harriet Tubman. These concerns seem rooted in the conviction that the creators of these apps and bots have no regard for the dignity of the spirits of the dead, since they don’t seem to care much about living Black people, whom they continually fail to do right by.
Even the Washington Post acknowledges that the bot fails basic fact checks, and Khan Academy maintains that the bot is not meant to serve as a historical record of events. Why introduce such a technology if it cannot even be trusted to mimic an accurate “version” of historical figures?
UNESCO outlines on its website a set of fundamental principles and ethical recommendations for the field of artificial intelligence. The organization created the world’s first global standard on the ethics of artificial intelligence, which was adopted by its 193 member states in 2021.
Its four pillars are human rights and human dignity; living in peaceful, just societies; ensuring diversity and inclusiveness; and the environment. Even a quick look at those pillars suggests that Khan Academy’s bots posing as historical figures, who cannot consent to their likenesses and names being used, amount to a blatant violation of those values and, some would say, of ethical guidelines.
If the dead have dignity, digging them up for what amounts to AI training is an affront to their wishes and a failure to reflect those ethical principles. In its discussion of fairness and non-discrimination, UNESCO writes: “AI actors should promote social justice, fairness, and non-discrimination while taking an inclusive approach to ensure that the benefits of AI are available to all.”
It remains to be seen whether Khan Academy will take those words to heart, because at present it doesn’t seem that social justice, equity, and accessibility are at the center of this project. Reactions to the experiment on social media tell that story clearly.
RELATED CONTENT: Redman says he doesn’t need any artificial intelligence assistance: ‘Don’t let technology ruin hip-hop’