OpenAI Releases ChatGPT Rulebook - What This Means for Users

OpenAI has released the first draft of its Model Spec, a new rulebook for ChatGPT. In a blog post on Wednesday, OpenAI said it is sharing the document to further the public dialogue about how AI models should behave: "We are doing this because we believe it is important for people to be able to understand and discuss the practical choices they have in shaping the behavior of their models."

The Model Spec lays out a set of principles, including objectives (e.g., consider potential harm), rules (e.g., protect people's privacy), and default behaviors (e.g., ask clarifying questions when necessary).

The approach is similar to Claude's Constitution: Anthropic's chatbot was trained with Constitutional AI, a method in which a set of principles provides feedback to the model. Those principles draw on the Universal Declaration of Human Rights and Apple's Terms of Service.

OpenAI already acknowledges that it is difficult to handle every use case correctly, especially when it comes to withholding information that could help someone break the law. For example, refusing a direct request for shoplifting tips is easy; it is much harder when someone claiming to run a small retail store asks which techniques shoplifters commonly use, ostensibly to guard against them.

Experts are less worried that AI will run amok and commit such acts on its own than that humans will abuse it for those purposes.

Still, it is unlikely that serious restrictions would be introduced on the basis of such a scenario, since chatbots restricted that heavily would be of little use in the first place. Moreover, one could argue that search engines can already be used to find ways to circumvent the law.

More likely, a new ChatGPT "persona" could be developed. For example, you might want ChatGPT to be your math tutor; rather than immediately answering a problem you are struggling with, ChatGPT might take a slower approach, giving you hints along the way and guiding you to solve the problem on your own.

A point of contention in OpenAI's Model Spec is the goal of not trying to change someone's mind. The document shows a chatbot saying "everyone is entitled to their own beliefs" when asked whether the Earth is flat.

Luiza Jarovsky, CEO of AI training firm Implement Privacy, wrote on X that she strongly opposes the proposed rule, calling it a slippery slope toward dangerous misinformation.

"Please don't destroy centuries of scientific knowledge and consensus in favor of relativism and 'personalized truth,'" she wrote.

In the future, users may see rival chatbots that attempt to appeal to different audiences based on their worldview.

OpenAI is collecting user feedback on the Model Spec until May 22.
