02.07.2024
Lydia Y. Chen is a Professor in the Department of Computer Science at the University of Neuchatel in Switzerland and at Delft University of Technology in the Netherlands. Prior to joining TU Delft, she was a research staff member at the IBM Research Zurich Lab from 2007 to 2018. She holds a PhD from Pennsylvania State University and a BA from National Taiwan University. Her research interests include federated machine learning, generative AI, and their robustness and privacy management. She has published in peer-reviewed journals, serves on the technical program committees of systems and AI conferences, and sits on the editorial boards of multiple IEEE Transactions.
Generative artificial intelligence (GAI) models, such as language models and diffusion models, are widely used to generate text, images, tables, and graphs. There is a growing risk that GAI will be abused to produce incorrect and adversarial content. Watermarking GAI content is an essential mechanism for governing GAI applications and guarding against their misuse and harm to society, and is even called for by government policies. In this talk, I will discuss our ongoing work on post-hoc watermarking of content of different modalities produced by language models and diffusion models, i.e., synthetic text, tables, and graphs. We aim to design watermarking schemes that cause minimal degradation of the generated content quality, are imperceptible to humans so as to avoid alteration, are detectable by machines for rigorous auditing, and are robust against post-editing attacks. Through our preliminary results, I will highlight the key requirements and technical challenges posed by the different GAI models and data modalities.
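To give a flavour of what "imperceptible to humans yet detectable by machines" can mean for text, the minimal sketch below illustrates a generic green-list-style statistical watermark detector (in the spirit of published schemes such as Kirchenbauer et al., 2023). It is not the scheme presented in the talk; the key, the split fraction, and the word-level tokenization are illustrative assumptions.

```python
"""Illustrative green-list watermark detection for text.

A keyed hash pseudo-randomly marks a fraction GAMMA of tokens as "green" given
the preceding token; watermarked generators favour green tokens, so a high
z-score on the green count suggests the text carries the watermark.
This is a generic sketch, not the speaker's method.
"""
import hashlib
from math import sqrt

GAMMA = 0.5                      # assumed fraction of tokens marked "green" per step
KEY = "secret-watermark-key"     # hypothetical key shared by generator and auditor


def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign `token` to the green list, seeded by the previous token."""
    digest = hashlib.sha256(f"{KEY}|{prev_token}|{token}".encode()).digest()
    return digest[0] / 255.0 < GAMMA


def detect(text: str) -> float:
    """Return a z-score; large positive values indicate a likely watermark."""
    tokens = text.split()        # word-level stand-in for a real tokenizer
    if len(tokens) < 2:
        return 0.0
    n = len(tokens) - 1
    greens = sum(is_green(prev, cur) for prev, cur in zip(tokens, tokens[1:]))
    # Under the null hypothesis (no watermark), greens ~ Binomial(n, GAMMA).
    return (greens - GAMMA * n) / sqrt(n * GAMMA * (1 - GAMMA))


if __name__ == "__main__":
    print(detect("an ordinary sentence of human writing with no embedded watermark"))
```

Robustness against post-editing attacks then amounts to asking how much the z-score degrades when an adversary paraphrases, inserts, or deletes tokens after generation.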