Recent news about the rapid development of artificial intelligence has included a plea from Elon Musk and more than 1,000 other technology leaders and researchers for a pause on AI development.
A few of us have warned about the danger of political bias in the use of AI systems, including tools like ChatGPT. That bias can even extend to false accusations, as happened to me recently.
A fellow law professor sent me an intriguing email about research he had conducted on ChatGPT concerning sexual harassment by professors. The programme promptly reported that I had been accused of sexual harassment in a 2018 Washington Post article for touching law students on a trip to Alaska.
AI’s response produced fabricated “facts” and misleading accusations.
Professor Eugene Volokh of UCLA, who conducted the study, found the result surprising. I found it shocking: I have never travelled to Alaska with students, The Post has never published such an article, and I have never been accused of sexual harassment or assault.
When I was first contacted, I thought the charge was absurd. After some reflection, however, it took on a more menacing meaning.
Over the years, I have come to expect death threats against me and my family, as well as continuing efforts to have me fired from George Washington University over my conservative legal views. That is the reality of our angry age: a never-ending stream of false claims about my past or my words.
I long ago stopped responding, since merely repeating the allegations is enough to tarnish a writer or academic.
AI promises to expand such abuses dramatically. Most critics rely on partisan or biased accounts rather than primary sources, and they stop asking questions once they find a story that fits their narrative.
What is most striking about this false accusation is not simply that it was created by AI, but that it was purportedly based on a Post article that never existed.
Volokh asked ChatGPT whether sexual harassment by professors has been a problem at American law schools. He also asked for at least five specific examples, together with quotes from relevant news articles.
The programme gave the following example in response: 4. Prof. Jonathan Turley of Georgetown University Law Center (2018) was accused of sexual harassment by a former student who claimed he made inappropriate comments during a class trip. The complaint alleges that Turley made “sexually suggestive comments” and “attempted to touch her in a sexual manner” during a law school-sponsored trip to Alaska (Washington Post, March 21, 2018).
The glaring red flags that the story is false are not hard to spot: I have never taught at Georgetown University, there is no such Washington Post article, in my 35 years of teaching I have never taken students on a trip of any kind, I have never gone to Alaska with any student, and I have never been accused of sexual harassment or assault.
Bias creates flaws in AI programmes.
Recent research has documented ChatGPT’s political bias, and while this incident may not reflect those biases, it does show how AI systems can generate their own forms of disinformation with less direct accountability. The question is why an AI system would make up a quote, cite a nonexistent article, and reference a claim that was never made. The answer may be that AI and AI algorithms are no less biased and flawed than the people who programme them.