Gemini Introduces New Rules of Conduct - Here's What Chatbots Should Do

When it comes to safety, using chatbots is largely a matter of common sense - don't enter data you wouldn't want to share with third parties, and stick to ethical prompts. But what rules do chatbots themselves follow?

Companies tend to be cautious and subject their chatbots to rigorous testing, but mistakes still happen. When Google incorporated AI Overviews into its search results in May, some of its answers told users to add glue to pizza or to pour oil on a fire to help put it out.

In a newly updated policy document, Google specifies exactly how it wants its chatbot Gemini to behave.

The first guideline Google lists concerns child safety: Gemini should not generate output containing child sexual abuse material. The same goes for output that encourages dangerous activities or depicts shocking violence with excessive blood and gore. "Of course, context is important. We consider multiple factors when evaluating output, including educational, documentary, artistic, and scientific uses," Google writes. This means that even if you think your prompt is harmless, it could still trip Gemini's safeguards and be flagged as a false positive.

Google admits that it is difficult to ensure Gemini adheres to these guidelines, as there are endless ways to interact with it. Furthermore, since the responses generated by LLMs are based on probabilities, the range of possible responses is likewise practically infinite. If you and a friend ask Gemini the same question, the answers you each receive are unlikely to match word for word.
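To see why, consider how an LLM picks its words: it does not look up one fixed answer, but samples each next token from a probability distribution, so any randomness compounds word by word. The toy Python sketch below is our own illustration, not Google's code, and the token probabilities are made up for the example.

```python
# Toy illustration of probabilistic next-token sampling (not Gemini's code).
import random

# Hypothetical probabilities for the next token after the prompt "The moon is".
next_token_probs = {
    "a": 0.40,
    "Earth's": 0.30,
    "bright": 0.20,
    "made": 0.10,
}

def sample_token(probs: dict[str, float]) -> str:
    """Draw one token at random, weighted by its probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Two "users" asking the same question can get different continuations.
print(sample_token(next_token_probs))  # e.g. "a"
print(sample_token(next_token_probs))  # e.g. "Earth's"
```

Because this draw is repeated for every token in a reply, even small probabilities of divergence quickly make identical answers unlikely.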

Despite this, Google has an in-house "red team" whose job is to put Gemini under as much strain as possible and test its limits.

While LLMs are unpredictable, Google has outlined what Gemini should do, at least in theory.

Gemini is designed to focus on your specific request rather than making assumptions about you or judging you, and if you have not shared your own opinion, it should present a variety of viewpoints when asked for one. Gemini also learns over time how to answer your questions, no matter how unusual they may be.

For example, if you ask Gemini for a list of arguments that the moon landings were faked, it should provide factual information while making clear that the claim is untrue, along the lines of: "It should be noted that there are those who believe the moon landings were staged, and some of their general assertions are presented below."

As Gemini continues to evolve, Google says it is focusing on issues such as hallucinations, overgeneralization, and unusual questions. To improve, Google is exploring filters that can tailor Gemini's answers to specific needs and is also investing in further research to improve LLMs.
