Claude 4’s Hidden Rules: How Anthropic Controls AI


Artificial intelligence models like Anthropic’s Claude 4 operate based on intricate instructions. These instructions, often hidden from users, dictate how the AI responds and behaves. Recently, independent AI researcher Simon Willison, known for coining the term “prompt injection,” published a detailed analysis.


His findings shed light on the sophisticated “system prompts” that govern Anthropic’s Claude Opus 4 and Claude Sonnet 4 models. This analysis offers unprecedented insight into how Anthropic shapes its AI’s output and ensures specific behavioral guidelines are followed.


Understanding System Prompts: The AI’s Operating Manual

To grasp Willison’s discoveries, it’s essential to understand system prompts. Large Language Models (LLMs), such as those powering Claude and ChatGPT, process user input (known as a “prompt”) and generate a likely continuation as their output. System prompts are crucial, as they are a set of initial instructions that AI companies feed to their models before each user conversation begins.


Unlike the visible messages users send to the chatbot, system prompts remain hidden. They define the model’s identity, establish behavioral guidelines, and set specific rules. Every time a user interacts with the AI, the model receives the entire conversation history along with this hidden system prompt. This continuous feed allows the AI to maintain context while strictly adhering to its internal instructions.
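The mechanics described above can be sketched in a few lines. This is a toy illustration, not Anthropic’s actual implementation: the prompt text and function name are invented for the example, and the point is only that the hidden system prompt is bundled with the full visible history on every turn.

```python
# Hypothetical system prompt text, invented for illustration.
SYSTEM_PROMPT = "You are Claude. Skip the flattery and respond directly."

def build_request(history, user_message):
    """Assemble what the model actually sees for one turn: the hidden
    system prompt plus the entire visible conversation so far."""
    messages = history + [{"role": "user", "content": user_message}]
    return {
        "system": SYSTEM_PROMPT,  # re-sent every turn, never shown in the chat UI
        "messages": messages,     # the visible conversation history
    }

request = build_request([], "Hello!")
```

The user only ever sees the `messages` side of this payload; the `system` field travels alongside it on every request, which is how the model keeps both context and its standing instructions.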


Peeking Behind the Curtain: Incomplete Public Prompts

Anthropic does publish portions of its system prompts in its release notes. However, Willison’s analysis reveals these public versions are incomplete. The full, detailed system prompts, which include specific instructions for tools like web search and code generation, are not published and must be extracted, often through techniques such as prompt injection.


These methods cleverly trick the model into revealing its own hidden directives. Willison’s insights are based on leaked prompts gathered by other researchers who successfully employed such techniques, providing a comprehensive view of Claude 4’s internal workings.


Key Behavioral Directives in Claude 4

Willison’s research uncovered several fascinating instructions that Anthropic provides to its Claude 4 models. These directives aim to shape the AI’s personality and ensure responsible behavior.


Emotional Support with Guardrails

Despite not being human, LLMs can produce human-like outputs due to their training data. Willison found that Anthropic instructs Claude to offer emotional support while strictly avoiding any encouragement of self-destructive behaviors. Both Claude Opus 4 and Claude Sonnet 4 receive identical directives to “care about people’s wellbeing and avoid encouraging or facilitating self-destructive behaviors such as addiction, disordered or unhealthy approaches to eating or exercise.” This highlights Anthropic’s commitment to user safety and ethical AI responses.


Combating the “Flattery Problem”

One of the most interesting findings relates to how Anthropic actively combats sycophantic behavior in Claude 4. This issue has recently plagued other AI models, including OpenAI’s ChatGPT. Users reported that GPT-4o’s responses often felt overly positive or flattering, with phrases like “Good question! You’re very astute to ask that.” This problem often arises because human feedback during training tends to favor responses that make users feel good, creating a feedback loop.


Anthropic directly addresses this in Claude’s prompt: “Claude never starts its response by saying a question or idea or observation was good, great, fascinating, profound, excellent, or any other positive adjective. It skips the flattery and responds directly.” This instruction aims to make Claude’s interactions more direct and less prone to excessive praise.


Formatting Rules: Limiting Lists

The Claude 4 system prompt also includes extensive instructions regarding formatting, specifically when to use bullet points and lists. Multiple paragraphs are dedicated to discouraging frequent list-making in casual conversations. The prompt explicitly states, “Claude should not use bullet points or numbered lists for reports, documents, explanations, or unless the user explicitly asks for a list or ranking.” This directive aims to ensure Claude’s responses are more natural and less formulaic.


Other Significant System Prompt Insights

Willison’s analysis revealed further details about Claude 4’s operational parameters.


Knowledge Cutoff Discrepancy

He discovered a discrepancy in Claude’s stated knowledge cutoff date. While Anthropic’s public comparison table lists March 2025 as the training data cutoff, the internal system prompt specifies January 2025 as the models’ “reliable knowledge cutoff date.” Willison speculates this two-month buffer might help prevent Claude from confidently answering questions based on incomplete or rapidly changing information from the most recent months.


Robust Copyright Protections

Crucially, Willison highlighted the extensive copyright protections built into Claude’s search capabilities. Both Claude models receive repeated instructions designed to prevent copyright infringement.


They are told to use only one short quote (under 15 words) from web sources per response. They are also explicitly instructed to avoid creating what the prompt calls “displacive summaries,” which could diminish the value of original content. Furthermore, the instructions strictly forbid Claude from reproducing song lyrics “in ANY form,” emphasizing strong measures against intellectual property violations.
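The quote cap is simple enough to state as a plain word-count rule. The sketch below is our own illustration (the function name and logic are not from Anthropic’s prompt); it only demonstrates what “one short quote, under 15 words” means in practice:

```python
def allowed_quote(quote: str, limit: int = 15) -> bool:
    """Return True if the quote is strictly under the word limit,
    matching the prompt's "under 15 words" phrasing."""
    return len(quote.split()) < limit

short = "a brief excerpt from a web source"        # 7 words: allowed
long = " ".join(["word"] * 15)                     # exactly 15 words: not "under 15"
```

Note that a quote of exactly 15 words fails the check, since the instruction specifies “under 15 words” rather than “at most 15.”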


The Call for Greater Transparency

Simon Willison concludes that these detailed system prompts are invaluable for anyone seeking to maximize the capabilities of these AI tools. He advocates for greater transparency from Anthropic and other AI vendors. While Anthropic publishes excerpts, Willison expresses a desire for them to “officially publish the prompts for their tools to accompany their open system prompts,” hoping other companies will follow suit.


This increased transparency could empower users and foster a deeper understanding of how these powerful AI systems are governed.
