08.05.2026
Photo: Elisabeth Kirsten
For decades, web search has worked in a simple way: users enter a query and receive a ranked list of links. They decide which sources to trust, compare perspectives, and piece together information themselves. With the rise of large language models, this process is changing. Increasingly, users are presented with a single, synthesized answer instead of a list of results. But what does this shift mean for the way information is selected, presented, and understood?
Looking beyond traditional search
This question is at the center of the paper Characterizing Web Search in the Age of Generative AI, by Elisabeth Kirsten, Jost Grosse Perdekamp, Mihir Upadhyay, Krishna P. Gummadi, and Muhammad Bilal Zafar.
The researchers compare traditional web search with emerging generative search systems. While conventional search engines like Google return ranked lists of web pages, generative systems retrieve information and combine it into a single, coherent response. To better understand these differences, the study analyzes multiple systems from Google and OpenAI across a range of queries – from general knowledge to politics and science.
What changes when AI generates answers
The results show that generative search systems do not simply replicate traditional search in a new format. Instead, they fundamentally reshape how information is selected and presented.
Compared to traditional search, generative systems often:
- draw on a broader and more diverse set of sources
- vary in how much they rely on retrieved content versus internal model knowledge
- present information in condensed, synthesized form rather than as separate perspectives
At the same time, these systems can struggle in certain situations. For example, traditional search often performs better on ambiguous queries, where multiple interpretations or perspectives are important. These differences are subtle but significant. Even when overall topic coverage appears similar, the choice of sources and the way information is combined can influence which perspectives users are exposed to.
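The paper's exact measurements are not reproduced here, but the idea of comparing source diversity between a ranked result list and a generative answer's citations can be sketched with a toy metric: count the unique domains among the cited URLs and compute the Shannon entropy of the domain distribution. Everything below (the URLs, the function name) is a hypothetical illustration, not the study's methodology.

```python
from collections import Counter
from math import log2
from urllib.parse import urlparse

def domain_diversity(urls):
    """Return (number of unique domains, Shannon entropy in bits)
    for a list of source URLs. Higher entropy = sources spread
    more evenly across distinct domains."""
    domains = [urlparse(u).netloc for u in urls]
    counts = Counter(domains)
    total = sum(counts.values())
    entropy = -sum((c / total) * log2(c / total) for c in counts.values())
    return len(counts), entropy

# Hypothetical example: top links from a traditional engine
# vs. sources cited in a generative answer.
ranked_list = [
    "https://en.wikipedia.org/wiki/Topic",
    "https://en.wikipedia.org/wiki/Related",
    "https://news.example.com/story",
    "https://blog.example.org/post",
]
generative_sources = [
    "https://en.wikipedia.org/wiki/Topic",
    "https://journal.example.edu/article",
    "https://gov.example.gov/report",
    "https://news.example.com/story",
]

print(domain_diversity(ranked_list))        # 3 domains, entropy 1.5 bits
print(domain_diversity(generative_sources)) # 4 domains, entropy 2.0 bits
```

In this toy example the generative answer draws on more distinct domains, mirroring the study's observation that generative systems often cite a broader set of sources; a real evaluation would of course use richer measures than domain counts.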
Why this matters beyond research
As more people rely on AI-generated answers, the role of search is shifting – from navigating information to directly delivering it.
This raises important questions:
How transparent are the sources behind an answer?
How diverse are the perspectives included?
And how can users assess what they are being shown?
The study highlights that existing evaluation methods for search – originally designed for lists of links – no longer capture these changes. New approaches are needed to assess not only accuracy, but also diversity, grounding, and reliability. In this sense, generative search is not just a technical evolution. It reshapes how knowledge is accessed and understood in everyday life.
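To make the notion of "grounding" concrete: one naive way to check whether an answer is supported by its retrieved sources is to measure, sentence by sentence, how many content words also appear in the source text. This is a deliberately simple sketch of what such a metric could look like, not the paper's evaluation method; real attribution checks use far more robust techniques, such as entailment models. The answer and source strings are invented for illustration.

```python
import re

def grounding_score(answer, sources, threshold=0.5):
    """Fraction of answer sentences whose words mostly appear in the
    retrieved sources. A toy word-overlap heuristic for illustration only."""
    source_words = set(re.findall(r"[a-z]+", " ".join(sources).lower()))
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", answer.strip()) if s]
    grounded = 0
    for sentence in sentences:
        words = re.findall(r"[a-z]+", sentence.lower())
        # A sentence counts as grounded if at least `threshold` of its
        # words occur somewhere in the retrieved source text.
        if words and sum(w in source_words for w in words) / len(words) >= threshold:
            grounded += 1
    return grounded / len(sentences) if sentences else 0.0

# Hypothetical answer and retrieved snippet: the first sentence overlaps
# heavily with the source, the second introduces unsupported content.
answer = "The river is 200 km long. It was named by early settlers."
sources = ["The river stretches roughly 200 km through the valley."]
print(grounding_score(answer, sources))  # 0.5 — one of two sentences grounded
```

A metric like this would flag answers that drift away from their retrieved evidence, which is exactly the kind of property, alongside diversity and reliability, that new evaluation approaches for generative search would need to capture.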
From Bochum to the international NLP community
The paper has been accepted to Findings of ACL 2026, part of the Annual Meeting of the Association for Computational Linguistics, taking place July 2–7, 2026, in San Diego, USA – one of the leading international conferences in natural language processing.
For the Chair of Artificial Intelligence and Society, the work reflects a central research goal: understanding how AI systems behave in real-world settings and how they can be designed to be transparent, reliable, and aligned with human needs.
Patrick Wilking