Legal expert issues AI warning after being ‘defamed by ChatGPT’
Report cited claim never made on a trip that didn’t happen
Artificial intelligence has been advancing at light speed in recent weeks and months, to the point that some of the technology’s own top experts are insisting on a halt – because of the threat it poses to mankind.
It’s been used, controversially, by tech companies to suppress reasonable questions and concerns about COVID treatments, election security and more, and there even have been schemes in the Biden administration to use it to suppress opinions that differ from the administration’s.
Now there’s new confirmation that AI has moved beyond any boundaries of control.
It comes from the experience of Jonathan Turley, a constitutional expert, George Washington University law professor and frequent witness before Congress on legal issues.
At his website, he has a column called, “Defamed by ChatGPT: My own bizarre experience with artificiality of ‘artificial intelligence.'”
There, he explains that a false report about him surfaced, via AI, in response to a query about sexual harassment.
“Yesterday, President Joe Biden declared that ‘it remains to be seen’ whether Artificial Intelligence (AI) is ‘dangerous.’ I would beg to differ. I have been writing about the threat of AI to free speech. Then recently I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me on a trip that never occurred while I was on a faculty where I never taught.”
He pointed out that the AI, ChatGPT, “relied on a cited Post article that was never written and quotes a statement that was never made by the newspaper. When the Washington Post investigated the false story, it learned that another AI program ‘Microsoft’s Bing, which is powered by GPT-4, repeated the false claim about Turley.'”
“It appears that I have now been adjudicated by an AI jury on something that never occurred,” Turley warned.
He also described the companies’ total rejection of any responsibility.
“When contacted by the Post, ‘Katy Asher, Senior Communications Director at Microsoft, said the company is taking steps to ensure search results are safe and accurate.’ That is it and that is the problem. You can be defamed by AI and these companies merely shrug that they try to be accurate. In the meantime, their false accounts metastasize across the Internet. By the time you learn of a false story, the trail is often cold on its origins with an AI system. You are left with no clear avenue or author in seeking redress. You are left with the same question of Reagan’s Labor Secretary, Ray Donovan, who asked ‘Where do I go to get my reputation back?'”
Turley explained he became aware of the catastrophic AI failure when he got an email from a fellow professor who was doing research.
That professor was told, via AI, that Turley had been accused of sexual harassment on a trip to Alaska with students, citing a 2018 Washington Post article.
The facts, he said, are these: he has never gone to Alaska with students, the Post never published such an article, and he has never been accused of sexual harassment.
The result, he found, was “menacing.”
While his normal response to “death threats” and the like is to not respond, he said now, “AI promises to expand such abuses exponentially.”
He explained the AI involved “appears to have manufactured baseless accusations.”
“So the question is why would an AI system make up a quote, cite a nonexistent article and reference a false claim? The answer could be because AI and AI algorithms are no less biased and flawed than the people who program them. Recent research has shown ChatGPT’s political bias, and while this incident might not be a reflection of such biases, it does show how AI systems can generate their own forms of disinformation with less direct accountability.”
He warned of the political agenda that “some high-profile leaders” are pushing for the flawed technology.
“The most chilling involved Microsoft founder and billionaire Bill Gates, who called for the use of artificial intelligence to combat not just ‘digital misinformation’ but ‘political polarization.'” He noted Gates has called for “unleashing AI to stop ‘various conspiracy theories’ and to prevent certain views from being ‘magnified by digital channels.'”
“The most obvious explanation for what occurred to me and the other professors is the algorithmic version of ‘garbage in, garbage out.’ However, this garbage could be replicated endlessly by AI into a virtual flood on the internet,” Turley warned.
He also warned, “Some Democratic leaders have pushed for greater use of algorithmic systems to protect citizens from their own bad choices or to remove views deemed ‘disinformation.'” And he cited arguments by Sen. Elizabeth Warren, D-Mass., that people were not listening to the “right people” regarding COVID, and her call for using “enlightened algorithms to steer citizens away from bad influences.”