
Guns are not hazardous to children, according to studies that ChatGPT made up. How much autonomy will we give AI?

One of my colleagues received a worrying email earlier this year. An old acquaintance wrote to tell him she had been experimenting with ChatGPT, the new artificial-intelligence chatbot that has been making headlines.

She asked the software to produce an essay claiming that having access to firearms did not increase the risk of child mortality.

In response, ChatGPT created a well-written piece that included references to scholarly works by top experts, including my colleague, a recognised authority on gun violence.

The issue? None of the studies cited in the footnotes actually exist.

To defend the completely false premise that guns aren’t dangerous to children, ChatGPT fabricated an entire universe of bogus papers, borrowing the names of real academic journals and real firearms researchers.

ChatGPT then attempted to explain away its error.
Pressed further, the chatbot insisted, “I can tell you that the references I provided are legitimate and originate from peer-reviewed scientific journals.” That was untrue.

This example gives me chills. I can see why people are so excited about software like ChatGPT, which generates fresh language based on patterns it “learns” by ingesting billions of words from the web. But this potent technology carries very serious risks, including for public health.

The fact that the chatbot can “hallucinate” facts—that is, make them up—is known to both OpenAI, which developed ChatGPT, and Microsoft, which is integrating the technology into its search engine Bing. It can also be used to spread false information that is incredibly persuasive.

These growing pains are part of the companies’ strategy: they need users to test the tool even though they know it is “slightly broken,” as OpenAI CEO Sam Altman recently acknowledged. In their view, large-scale testing is essential to improving the product.

Unfortunately, this approach overlooks the consequences of a “beta test” that reaches more than 100 million consumers each month. The businesses will get their data. In the interim, though, they risk igniting a fresh wave of misinformation, fuelling confusion and eroding already fragile social trust.

For the most part, they don’t seem to grasp how big a concern this is. Snapchat, for instance, released its new AI tool last month with a cheery “sorry in advance” and the disclaimer that it “is prone to delusion and can be persuaded into saying just about anything.” That approach only worries me more.

As I see it, there are two related hazards. Because ChatGPT asserts its “facts” with such confidence, it is easy to fall for its delusions, and that can be dangerous when the subject is health and safety.

A ChatGPT-generated health article contained 18 mistakes.
For instance, ChatGPT “wrote” an article about low testosterone that was published in the consumer magazine Men’s Magazine. Another publication enlisted a renowned endocrinologist to review it, and he found 18 errors.

Readers who relied on the article for health advice would be seriously misled. Given that 80 million people in the United States have low or limited health literacy, and that young people may not bother to double-check AI-produced “facts,” this is a serious worry.

In a world where anyone with a Twitter account can pose as an authentic news source, complete with a blue checkmark, ChatGPT’s remarkable capacity to create content that has the appearance of truth could allow bad actors to disseminate fake information quickly and cheaply. These perpetrators might even mount “injection attacks” to seed misleading information into AI programmes, spreading the lies even further. The possible knock-on effects are concerning.

Federal regulators ought to lead this effort. Regrettably, though, these agencies have a poor track record of keeping up with developments such as cryptocurrencies and social media. In both cases, a laissez-faire attitude allowed serious harm to consumers’ financial and mental well-being.

But such disappointments in the past do not mean we should give up. The moment has come for agencies to create plans to protect vulnerable groups from the negative effects of generative AI while still facilitating and supporting innovation.

Even if governmental regulators fall short, businesses themselves ought to recognise the benefit of approaching beta testing with more caution and consideration. Rolling out technology that isn’t ready for prime time can be detrimental to both a company’s financial health and the general public’s health. For instance, a prominent error by Google’s new AI tool last month wiped $100 billion off its parent company’s market value.

It is far better, for both the company and the consumer, for technological advances to proceed slowly than to go dramatically wrong.
