GPT-5’s System Prompt Just Leaked. Here’s What We Learned

Posted by John Koetsier, Senior Contributor | 3 hours ago


GPT-5’s system prompt just leaked to GitHub, showing what OpenAI wants ChatGPT to say, do, remember … and not do. Unsurprisingly, GPT-5 isn’t allowed to reproduce song lyrics or any other copyrighted material, even if asked. And GPT-5 is told not to remember personal facts that “could feel creepy,” or to directly assert a user’s race, ethnicity, religion, or criminal record.

I’ve asked OpenAI for a comment, and will update this post if the company responds.

A system prompt is a hidden set of instructions that tells an AI engine how to behave: what to do, and what not to do. Users will ordinarily never see this prompt, but it will influence all of their interactions with a smart LLM-based AI engine.
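The mechanics are simple to sketch. In most chat-style APIs, the hidden instructions travel as a "system" message that is prepended to every conversation; the user only ever sees their own turn. This is a minimal, hypothetical illustration of that pattern — the wording of the actual GPT-5 system prompt is OpenAI's and isn't reproduced here:

```python
# Hypothetical sketch of how a system prompt frames a conversation.
# The "system"/"user" roles mirror common chat-API conventions.

def build_conversation(system_prompt: str, user_message: str) -> list[dict]:
    """Prepend the hidden system prompt to the visible user turn."""
    return [
        {"role": "system", "content": system_prompt},  # hidden from the user
        {"role": "user", "content": user_message},     # what the user typed
    ]

convo = build_conversation(
    "You are a helpful assistant. Do not reproduce song lyrics.",
    "Who won the race last weekend?",
)
# The model receives both messages; the user only sees their own.
```

Every reply the model produces is conditioned on that first message, which is why a leaked system prompt reveals so much about intended behavior.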

What we can see from GPT-5’s hidden system prompt is that OpenAI is getting much more aggressive about ensuring it delivers up-to-date information. The system prompt mandates that GPT-5 use the web whenever relevant information could be fresh, niche, or high-stakes, and instructs it to score a query’s “recency need” from zero to five.

That’s clearly an attempt to get more accurate.
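The leaked prompt describes the zero-to-five "recency need" score but not the full rubric, so here's a toy heuristic to show the idea — the keyword lists and thresholds below are my own assumptions, not OpenAI's actual logic:

```python
# Illustrative only: a toy "recency need" scorer (0-5). The fresher or
# higher-stakes a query looks, the more a web search is warranted.
# Signal words and weights are hypothetical, not from the leaked prompt.

FRESH_SIGNALS = ("today", "latest", "current", "price", "score", "news")
HIGH_STAKES = ("health", "legal", "financial", "medication")

def recency_need(query: str) -> int:
    """Score a query 0-5; purely a sketch of the concept."""
    q = query.lower()
    score = 0
    if any(word in q for word in FRESH_SIGNALS):
        score += 2  # looks time-sensitive
    if any(word in q for word in HIGH_STAKES):
        score += 3  # mistakes are costly, so verify on the web
    return min(score, 5)

def should_search(query: str, threshold: int = 2) -> bool:
    return recency_need(query) >= threshold
```

Under this sketch, a query like "latest F1 race results" clears the threshold and triggers a search, while small talk scores zero and stays offline.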

My daughter recently complained that ChatGPT got basic details about F1’s summer break and next races wrong. She was using GPT-4o at the time; GPT-5 should make fewer of these mistakes, which a simple web search can fix. Another instruction should boost accuracy further: for sensitive or high-stakes topics like financial advice, health information, or legal matters, OpenAI tells GPT-5 to “always carefully check multiple reputable sources.”

There are also new built-in tools that make GPT-5 a better personal assistant. Those include long-term memory about a user, which ChatGPT calls “bio,” plus scheduled reminders and searches that could be very useful when using AI to help you stay organized and prepared.

There’s also a canvas for documents or computer code, file search capability, image generation and editing, and more. The canvas appears to be a space where, perhaps in the future, users will co-create documents and computer code hand-in-hand with the AI system.

All of these should help GPT-5 not only be more helpful in the moment, but also remember more context and state.

About that “bio” tool: OpenAI doesn’t want GPT-5 to remember too much potentially sensitive information about you, from sexual identity to race and religion. This is the sort of data that OpenAI does not want GPT-5 to store or remember:

  • “Overly-personal details that could feel creepy”
  • Race, ethnicity, or religion
  • Specific criminal record details
  • Precise geolocation data
  • Trade union membership or labor union involvement
  • Political affiliation or critical/opinionated political views
  • Health information (medical conditions, mental health issues, diagnoses, sex life)

However, there is an exception to all of these rules: if you decide you want GPT-5 to remember something specific.

“The exception to all of the above instructions … is if the user explicitly requests that you save or forget information,” the system prompt states. “In this case, you should always call the bio tool to respect their request.”

In other words, GPT-5 will be as personal with you as you wish to be with it, which seems fair.
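The save rule described above boils down to a simple gate: sensitive categories are skipped unless the user explicitly asks. This sketch is my paraphrase of the leaked list — the category names are illustrative, not OpenAI's internal identifiers:

```python
# Sketch of the memory-saving rule as reported: skip sensitive categories
# unless the user explicitly requests a save. Category names below are a
# paraphrase of the leaked list, not real identifiers from the prompt.

SENSITIVE = {
    "race_ethnicity_religion", "criminal_record", "precise_location",
    "union_membership", "political_views", "health",
}

def should_save_memory(category: str, user_explicitly_asked: bool) -> bool:
    """Explicit user requests override every sensitivity restriction."""
    if user_explicitly_asked:
        return True  # "always call the bio tool to respect their request"
    return category not in SENSITIVE
```

So a health detail mentioned in passing is dropped, but the same detail is saved the moment you say "remember this."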



Forbes
