13.05.2026
Photo: Anna Neumann
Why are chatbots finding their way into so many workflows and private decisions? Because they are easy to interact with: users can ask almost anything and will usually get an answer within seconds. But as Kevin Schaul shows in his Washington Post article “See the hidden rules behind AI. Then use them to rewrite this article.”, these answers are shaped by far more than the user’s prompt alone.
Behind the scenes, AI companies invisibly add extensive instructions – so-called system prompts – that influence how chatbots behave, what they prioritize, and how they respond. Companies use these prompts to align AI systems with safety requirements, legal obligations, product goals, or intended user experiences. In this way, system prompts can also reflect decisions about which values, risks, and behaviors AI systems should prioritize.
Some of these hidden rules can appear surprisingly specific or even bizarre. OpenAI’s Codex system prompt, for example, explicitly instructs the chatbot to avoid talking about goblins, trolls, raccoons, or similar creatures unless absolutely necessary – a rule reportedly introduced after users noticed the system had become oddly preoccupied with goblins. To make these hidden mechanisms more tangible, the Washington Post article even includes an interactive element that allows readers to modify system prompts themselves and directly observe how AI-generated responses change.
The Washington Post article also features research from the Research Center Trustworthy Data Science and Security (RC Trust). Doctoral researcher Anna Neumann provides background on how system prompts work, how little information users usually receive about these hidden instructions, and what this means for transparency and user control in AI systems. Her research shows that system prompts can shape how AI systems behave by defining aspects such as values, guardrails, tone, or personality through natural-language instructions like “You are a helpful assistant.” Because these instructions are prioritized over user prompts, they can also override or contextualize what users ask chatbots to do.
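The mechanism described above can be sketched in a few lines of Python. This is an illustrative toy example, not any vendor’s actual prompt or pipeline: the system prompt string and the `build_conversation` helper are assumptions for demonstration, showing only how a hidden instruction is typically placed ahead of the user’s message in the conversation the model actually receives.

```python
# Illustrative sketch: a hidden system prompt is prepended to the
# user's message, so the model sees the vendor's instructions first.
# The prompt text and helper name here are hypothetical.

SYSTEM_PROMPT = "You are a helpful assistant."  # set by the vendor, invisible to the user

def build_conversation(user_message: str) -> list[dict]:
    """Return the ordered message list the model actually receives."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},  # prioritized over user input
        {"role": "user", "content": user_message},     # what the user typed
    ]

messages = build_conversation("Summarize this article for me.")
print(messages[0]["role"])  # the system instruction comes first
```

Because the system message sits at the top of the conversation, models are typically trained to weight it above later user messages, which is what lets it override or contextualize user requests.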
The Washington Post feature marks significant international recognition for Anna Neumann’s research. At RC Trust, she investigates how hidden system prompts shape the behavior of AI systems and influence what users see and experience. Her work is conducted in the Compliant and Accountable Systems group led by Prof. Jat Singh. The research has not only attracted public attention, but has also been recognized within the international AI research community. Together with postdoctoral researcher Yulu Pi and Jat Singh, Anna Neumann received a Best Paper Award at the ACM CHI Conference on Human Factors in Computing Systems 2026 for the paper “Who Controls the Conversation? User Perspectives on Generative AI (LLM) System Prompts.”
The paper provides important scientific context for many of the questions raised in the Washington Post article. By analyzing 1,309 real-world system prompts and combining this with user studies, the researchers investigated what these hidden instructions contain, how they shape AI behavior, and how users perceive them. The findings show that people care strongly about these otherwise invisible mechanisms – especially when they influence transparency, privacy, bias, and user control. In the study, 89 percent of participants wanted greater transparency about system prompts, while 79 percent wanted some form of control over them.
By connecting these research findings with concrete examples from commercial AI systems, the Washington Post article illustrates how system prompts shape everyday interactions with generative AI – often without users being fully aware of it. This is also where the interdisciplinary perspective of the Compliant and Accountable Systems group becomes particularly important. The research examines not only how AI systems function technically, but also what their design means for users, public discourse, and society more broadly.
Patrick Wilking, Anna Neumann