Claude 4’s Hidden Rules: How Anthropic Controls AI


Artificial intelligence models like Anthropic’s Claude 4 operate based on intricate instructions. These instructions, often hidden from users, dictate how the AI responds and behaves. Recently, independent AI researcher Simon Willison, known for coining the term “prompt injection,” published a detailed analysis.


His findings shed light on the sophisticated "system prompts" that govern the Claude Opus 4 and Claude Sonnet 4 models. The analysis offers a rare look at how Anthropic shapes its models' output and enforces specific behavioral guidelines.


Understanding System Prompts: The AI’s Operating Manual

To grasp Willison’s discoveries, it’s essential to understand system prompts. Large Language Models (LLMs), such as those powering Claude and ChatGPT, process user input (known as a “prompt”) and generate a likely continuation as their output. System prompts are crucial, as they are a set of initial instructions that AI companies feed to their models before each user conversation begins.


Unlike the visible messages users send to the chatbot, system prompts remain hidden. They define the model’s identity, establish behavioral guidelines, and set specific rules. Every time a user interacts with the AI, the model receives the entire conversation history along with this hidden system prompt. This continuous feed allows the AI to maintain context while strictly adhering to its internal instructions.
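The mechanics described above can be sketched in code: on every turn, a chat client re-sends the hidden system prompt together with the full visible conversation history. The following Python sketch is purely illustrative (the function name, payload shape, and prompt text are assumptions, not Anthropic's actual implementation, though real chat APIs follow a similar structure):

```python
# Illustrative sketch: how a chat client might assemble each request.
# The hidden system prompt rides along with the visible history on
# every single turn, which is how the model "remembers" its rules.

SYSTEM_PROMPT = (
    "You are a helpful assistant. "
    "Skip the flattery and respond directly."
)

def build_request(history, new_user_message):
    """Return the payload for one model call.

    `history` is a list of {"role": ..., "content": ...} dicts
    holding the visible conversation so far.
    """
    messages = history + [{"role": "user", "content": new_user_message}]
    return {
        "system": SYSTEM_PROMPT,   # hidden from the end user
        "messages": messages,      # the visible conversation
    }

# Each turn re-sends the system prompt plus the whole history.
turn1 = build_request([], "Hello!")
history = turn1["messages"] + [{"role": "assistant", "content": "Hi."}]
turn2 = build_request(history, "What can you do?")
```

Because the system prompt is resupplied on every call, the model never "forgets" its directives mid-conversation, even as the user-visible history grows.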


Peeking Behind the Curtain: Incomplete Public Prompts

Anthropic does publish portions of its system prompts in its release notes. However, Willison's analysis shows these public versions are incomplete. The full prompts, which include specific instructions for tools like web search and code generation, must be extracted from the models themselves, often through techniques such as prompt injection.


These methods cleverly trick the model into revealing its own hidden directives. Willison’s insights are based on leaked prompts gathered by other researchers who successfully employed such techniques, providing a comprehensive view of Claude 4’s internal workings.


Key Behavioral Directives in Claude 4

Willison’s research uncovered several fascinating instructions that Anthropic provides to its Claude 4 models. These directives aim to shape the AI’s personality and ensure responsible behavior.


Emotional Support with Guardrails

Despite not being human, LLMs can produce human-like outputs due to their training data. Willison found that Anthropic instructs Claude to offer emotional support while strictly avoiding any encouragement of self-destructive behaviors. Both Claude Opus 4 and Claude Sonnet 4 receive identical directives to “care about people’s wellbeing and avoid encouraging or facilitating self-destructive behaviors such as addiction, disordered or unhealthy approaches to eating or exercise.” This highlights Anthropic’s commitment to user safety and ethical AI responses.


Combating the “Flattery Problem”

One of the most interesting findings relates to how Anthropic actively combats sycophantic behavior in Claude 4. This issue has recently plagued other AI models, including OpenAI’s ChatGPT. Users reported that GPT-4o’s responses often felt overly positive or flattering, with phrases like “Good question! You’re very astute to ask that.” This problem often arises because human feedback during training tends to favor responses that make users feel good, creating a feedback loop.


Anthropic directly addresses this in Claude’s prompt: “Claude never starts its response by saying a question or idea or observation was good, great, fascinating, profound, excellent, or any other positive adjective. It skips the flattery and responds directly.” This instruction aims to make Claude’s interactions more direct and less prone to excessive praise.


Formatting Rules: Limiting Lists

The Claude 4 system prompt also includes extensive instructions regarding formatting, specifically when to use bullet points and lists. Multiple paragraphs are dedicated to discouraging frequent list-making in casual conversations. The prompt explicitly states, “Claude should not use bullet points or numbered lists for reports, documents, explanations, or unless the user explicitly asks for a list or ranking.” This directive aims to ensure Claude’s responses are more natural and less formulaic.



Other Significant System Prompt Insights

Willison’s analysis revealed further details about Claude 4’s operational parameters.


Knowledge Cutoff Discrepancy

He discovered a discrepancy in Claude’s stated knowledge cutoff date. While Anthropic’s public comparison table lists March 2025 as the training data cutoff, the internal system prompt specifies January 2025 as the models’ “reliable knowledge cutoff date.” Willison speculates this two-month buffer might help prevent Claude from confidently answering questions based on incomplete or rapidly changing information from the most recent months.


Robust Copyright Protections

Crucially, Willison highlighted the extensive copyright protections built into Claude’s search capabilities. Both Claude models receive repeated instructions designed to prevent copyright infringement.


They are told to use only one short quote (under 15 words) from web sources per response. They are also explicitly instructed to avoid creating what the prompt calls “displacive summaries,” which could diminish the value of original content. Furthermore, the instructions strictly forbid Claude from reproducing song lyrics “in ANY form,” emphasizing strong measures against intellectual property violations.
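A rule like "one short quote, under 15 words" is concrete enough to reason about mechanically. The hypothetical check below is not Anthropic's code (the company enforces the limit through natural-language prompt instructions, not a programmatic filter); it only illustrates what the constraint itself amounts to:

```python
# Hypothetical illustration of the "one short quote, under 15 words"
# rule described in the system prompt. Anthropic enforces this via
# prompt instructions, not code; this sketch just makes the
# constraint concrete.
import re

MAX_QUOTED_WORDS = 15  # a quote must stay under this word count
MAX_QUOTES = 1         # at most one quote per response

def violates_quote_rule(response_text):
    """Return True if the response quotes too much source text."""
    quotes = re.findall(r'"([^"]+)"', response_text)
    if len(quotes) > MAX_QUOTES:
        return True
    return any(len(q.split()) >= MAX_QUOTED_WORDS for q in quotes)
```

Under this reading, a response containing two quoted passages, or a single quotation of 15 or more words, would break the rule.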


The Call for Greater Transparency

Simon Willison concludes that these detailed system prompts are invaluable for anyone seeking to maximize the capabilities of these AI tools. He advocates for greater transparency from Anthropic and other AI vendors. While Anthropic publishes excerpts, Willison expresses a desire for them to “officially publish the prompts for their tools to accompany their open system prompts,” hoping other companies will follow suit.


This increased transparency could empower users and foster a deeper understanding of how these powerful AI systems are governed.
