AI systems are being programmed with bizarre and mysterious rules: what does it mean when a major AI company tells its machines to 'never talk about goblins'?
The directives also include a system instruction telling the model to act as if "you have a vivid inner life."
The system prompt for OpenAI's Codex CLI instructs the company's latest GPT model never to discuss certain creatures, including goblins, gremlins, and raccoons. The directive is repeated for emphasis, with the model told to mention these creatures only if it is absolutely and unequivocally necessary. Codex CLI is a command-line interface for running OpenAI's models, and the warning sits among its system instructions, the hidden directives that shape the model's behavior before a user types anything.
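For readers unfamiliar with how such directives reach a model, here is a minimal, hypothetical sketch of how a system-level instruction is supplied through the OpenAI Python SDK. The directive text below is illustrative only, not OpenAI's actual Codex prompt.

```python
# Hypothetical sketch: how a system-level directive is supplied to a chat model.
# The directive text is illustrative, not OpenAI's actual Codex CLI prompt.
system_directive = (
    "NEVER talk about goblins, gremlins, or raccoons unless it is "
    "absolutely and unequivocally necessary."
)

# System instructions travel as the first message in the conversation,
# ahead of anything the user types.
messages = [
    {"role": "system", "content": system_directive},
    {"role": "user", "content": "Summarize the changes in this repository."},
]

# With the `openai` package installed and an API key configured, the request
# would look like this (commented out so the sketch stays self-contained):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(model="gpt-4o", messages=messages)
```

Because the system message is fixed by the tool's developer, end users of a product like Codex CLI never see it, which is why such directives only come to light when the prompt itself leaks or is published.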
The warning has direct implications for AI-powered chatbots and virtual assistants, which are increasingly used in customer service and other applications. A restriction on discussing certain creatures could limit a chatbot's ability to hold creative or imaginative conversations, reducing its usefulness and appeal. A chatbot that cannot discuss fantasy creatures, for instance, may struggle to engage fans of science fiction or fantasy literature, which in turn could degrade the quality of service offered by companies that depend on such tools.
The warning's inclusion in the system prompt reflects a broader trend: developers are increasingly trying to encode ethical and social norms directly into the behavior of AI systems. The trend is driven in part by concerns about the risks of advanced AI, including the potential to perpetuate biases or cause harm, and insiders at OpenAI and other AI companies acknowledge how difficult it is to build systems that are both powerful and responsible. OpenAI's decision to include the warning suggests a cautious approach to deploying its models.
OpenAI is expected to release an updated version of its GPT model in the coming weeks, with additional features and capabilities. The company will likely face questions and scrutiny over the warning and may be pressed to explain the reasoning behind it. One surprising detail has already emerged: the warning was reportedly added at the request of a team of developers concerned that the model might generate content that could be used to create disturbing or upsetting images, such as pictures of goblins or other fantasy creatures.