To combat “woke” AI, Elon Musk is working on a ChatGPT competitor.
According to The Information, he has approached AI researchers about establishing a research centre and is in talks to develop a rival to OpenAI’s ChatGPT.
Musk has repeatedly raised alarms about the “woke mind virus” and woke AI.
Musk has called the prospect of anti-conservative bias in ChatGPT a significant worry. The danger of training AI to be woke, and in effect to lie, is deadly, he wrote in a December tweet.
On Tuesday, Musk posted a meme featuring a “Based AI” dog facing off against “Woke AI” and “Closed AI” monsters. “Based” is internet slang for being anti-woke.
What is ChatGPT?
Musk has a history of investing in AI as a backer of DeepMind and OpenAI. He co-founded OpenAI to conduct nonprofit research but severed ties with it in 2018.
After going live late last year, ChatGPT quickly captured the public’s attention. Millions were astounded by its ability to answer challenging questions conversationally while sounding like a real person.
As AI becomes more prevalent, conservatives worry that the chatbot’s responses to questions about affirmative action, diversity, and transgender rights betray a liberal bias.
Google and Microsoft also have AI chatbots.
Microsoft, a financial supporter of OpenAI, recently released a new version of Bing that uses OpenAI technology. Google is getting ready to introduce Bard, a tool that is similar to ChatGPT.
Does ChatGPT harbour any anti-conservative bias?
Conservatives have long charged left-leaning technology leaders with stifling conservative voices and viewpoints. They now worry that this new technology is displaying unsettling anti-conservative prejudice.
Conservatives believe OpenAI staffers are behind ChatGPT’s liberal responses on issues like affirmative action, diversity, and transgender rights. ChatGPT is trained on huge amounts of data gathered from the internet, and human trainers then shape how it responds to queries.
ChatGPT exhibits “bias-related issues”
OpenAI CEO Sam Altman concedes that ChatGPT, like other AI systems, has “bias-related limitations.”
According to Mark Riedl, a computing professor and associate director of the Georgia Tech Machine Learning Institute, ChatGPT is trained to avoid politically contentious issues and to be careful about how it replies to questions regarding marginalised or vulnerable groups of people.
OpenAI is also trying to avoid a repeat of what happened to Microsoft in 2016, when the company launched Tay, a Twitter chatbot that began spouting offensive language. Microsoft apologised and shut it down.