Research

Transparency of AI-generated content when AI is the norm

CAISA - Research brief

Through six interventions from leading European scholars in their field, this research brief examines the challenges of governing AI-generated content in an information environment where such content is rapidly becoming the norm. Drawing on interdisciplinary perspectives, the contributions assess the effectiveness and limitations of emerging AI transparency governance, particularly labelling requirements under the EU AI Act and the forthcoming Code of Practice on marking and labelling of AI-generated content. While transparency labels are normatively important for informing users about content provenance, research suggests that labelling alone is unlikely to mitigate manipulation, restore trust, or empower citizens. The research brief therefore argues for a broader transparency ecosystem that combines labelling with governance infrastructure, organisational accountability, and ongoing research to develop adaptive, evidence-based approaches to AI transparency.

Read the research brief in English here
AI seminar at Marienborg

Artificial intelligence has moved into the very core of Danish politics. During the government negotiations at Marienborg in April 2026, the negotiations were temporarily paused so that leading politicians could attend a seminar on AI and its significance for society.

Here, CAISA's centre director Rebecca Adler-Nissen, together with Professor Abraham Newman (Georgetown University), contributed research-based perspectives on AI's role in geopolitics, the economy, and democracy. Their presentations addressed, among other things, how artificial intelligence affects security, the labour market, education, and Europe's strategic position.

According to TV2, there was great interest among the politicians, who engaged with questions about both technological development and societal consequences. Rebecca Adler-Nissen herself highlighted the high degree of engagement and the demand for knowledge about AI's significance across policy areas.

CAISA's participation underscores the centre's role in bringing research-based knowledge into political decision-making processes and contributing to the responsible development of AI in society.

Research
Digital Suverænitet: Fra begreb til strategisk ramme (Digital Sovereignty: From concept to strategic framework)

This brief is currently only available in Danish.

Summary (Translated)

Digital sovereignty is multidimensional and requires prioritisation

In a time of geopolitical instability and rapid AI development, control over digital infrastructure and data has become critical. While there is broad agreement on the need for action at the national, Nordic, and EU levels, a shared language around digital sovereignty is still lacking. This lack of alignment leads either to inaction or to narrow technical solutions without strategic direction.

The core argument of the brief is that digital sovereignty is a multidimensional concept, involving both principled positions and pragmatic choices. Reducing it to technical solutions risks overlooking the values and trade-offs that determine who controls and benefits from these systems. Conversely, focusing solely on values leads to abstract principles without practical implementation or real impact.

Digital sovereignty is rarely about choosing between full self-sufficiency and total dependence. Rather, it is about balancing often competing demands for openness, security, competitiveness, growth, values, and rights in a world where capabilities are unevenly distributed. This means that it is necessary to define who or what is to be protected or promoted, within the domains of security, economic growth, or citizens' rights, and to recognize that choices in one domain may strengthen or undermine another.

The brief focuses on AI as the area where digital sovereignty is most acutely at stake, but the concepts apply more broadly to digital infrastructure and data. It provides decision-makers with tools to navigate these dilemmas by presenting:

- A conceptual framework for identifying who or what should be digitally sovereign.
- An overview of how digital sovereignty is prioritized around the world.
- An understanding that sovereignty can be exercised through three control regimes: ownership, expertise, or regulation – but that none of these is sufficient on its own.

The central implication of the brief is that digital sovereignty requires an integrated strategy that combines ownership, expertise, and regulation, while managing the interdependencies and trade-offs between security, economic growth, and citizens’ rights through clear objectives. Without this holistic approach, there is a risk of ineffective regulation, unusable infrastructure, or a lack of capacity to develop, maintain, and apply solutions in practice, potentially undermining security, growth, or rights.
