Musk, an OpenAI cofounder who has since parted ways with the company, has repeatedly criticized it for adding safeguards that prevent the chatbot from producing potentially harmful responses.
“We made a mistake: the system we built did not reflect the values we wanted to be in there,” said Brockman, OpenAI’s president. “And I think we were not fast enough to address that. And so I think that’s a legitimate criticism of us.”
Users have complained that ChatGPT’s answers exhibit political bias.
Last month, screenshots of a ChatGPT exchange circulated on Twitter showing the chatbot refusing to write a complimentary poem about Donald Trump, saying it was not programmed to generate “partisan, biased, or political” content. When given the same prompt with Joe Biden substituted for Trump, the chatbot produced a flattering poem. Musk called the chatbot’s refusal to write a poem about Trump a serious concern.
Musk has taken aim at the technology, warning that “the danger of training AI to be woke – in other words, lie – is deadly.”
As more consumers flock to AI chatbots powered by OpenAI’s technology, such as ChatGPT and Bing’s recently announced chatbot, their limitations and shortcomings have become clear, and companies have responded by adding safeguards to the technology.
In the month after its introduction, Microsoft placed conversation limits on its AI-powered Bing chatbot, capping users at 50 chats per day and five per session. It has since relaxed those restrictions.
ChatGPT, too, is a work in progress, and Brockman’s comments suggest the platform will continue to evolve.
“Our goal is not to have an AI that is biased in any particular direction,” Brockman told The Information. “We want the default personality of OpenAI to be one that treats all sides equally. Exactly what that means is hard to operationalize, and I think we’re not quite there.”