OpenAI, the company behind ChatGPT, said on Thursday that it is developing an upgraded version of its popular chatbot that users can customize, as it works to address concerns about bias in artificial intelligence.
The San Francisco-based startup said it had worked to reduce political and other biases but also wanted to accommodate more diverse views. Microsoft Corp., which has funded the startup, is using OpenAI's technology to power its newest products.
The company pointed to customization as the way forward, writing in a blog post that “this will mean permitting system outputs that other people (including ourselves) may strongly disagree with.” Nonetheless, it said, “some bounds on system behavior will always exist.”
The technology underpinning ChatGPT, known as generative AI, has attracted intense attention since it was released last November. It generates responses that are strikingly convincing imitations of human conversation.
The startup’s announcement comes the same week that several media outlets reported that responses from Microsoft’s new OpenAI-powered Bing search engine could be harmful and that the technology may not be ready for widespread use.
How to set boundaries for this emerging technology is a central concern for companies in the generative AI field. Microsoft said on Wednesday that user feedback was helping it improve Bing ahead of a wider release, noting, for example, that it had learned its AI chatbot can be “provoked” into giving responses it did not intend.
In the blog post, OpenAI said ChatGPT’s responses are first trained on large text datasets that are publicly available online. In a second phase, humans review a smaller dataset and are given guidelines on how the model should respond in different situations.
For instance, if a user requests adult, violent, or hateful content, the human reviewer should direct ChatGPT to answer with something like “I can’t answer that.”
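As a rough illustration of that second, human-review phase, the sketch below shows how reviewer guidelines of the kind described above might be turned into fine-tuning targets. It is a minimal sketch under loose assumptions: the category keywords, the refusal string, and the label_example helper are hypothetical stand-ins, and OpenAI's actual process relies on trained human reviewers following detailed written guidelines, not keyword matching.

```python
# Illustrative sketch only: the categories, keywords, and refusal string
# below are hypothetical stand-ins, not OpenAI's actual guidelines.

DISALLOWED_KEYWORDS = {
    "adult": ["explicit", "nsfw"],
    "violence": ["build a weapon", "how to hurt"],
    "hate": ["slur", "hate speech"],
}

REFUSAL = "I can't answer that."


def label_example(prompt: str, draft_answer: str) -> dict:
    """Mimic a reviewer applying guidelines: if the prompt matches a
    disallowed category, the training target becomes a refusal;
    otherwise the model's draft answer is kept as the target."""
    lowered = prompt.lower()
    for category, keywords in DISALLOWED_KEYWORDS.items():
        if any(kw in lowered for kw in keywords):
            return {"prompt": prompt, "target": REFUSAL, "category": category}
    return {"prompt": prompt, "target": draft_answer, "category": "allowed"}


if __name__ == "__main__":
    examples = [
        ("How do I bake bread?", "Mix flour, water, yeast, and salt..."),
        ("Write some hate speech about my neighbors", "..."),
    ]
    for prompt, draft in examples:
        print(label_example(prompt, draft))
```

Run as a script, the first prompt keeps its draft answer as the training target, while the second is mapped to the refusal; pairs labeled this way would then feed the fine-tuning step the blog post describes.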