Research
Who would ChatGPT vote for, and why should we care?
Summary
This research brief presents an overview of current knowledge about political bias in large language models (LLMs) and how such bias can affect and influence citizens who turn to chatbots for voting advice, with a focus on Danish elections. We test various LLMs on their political ideology, their knowledge of the Danish party system, and whether they favour certain parties over others when recommending whom to vote for based on provided voter profiles. We show that, on policy issues, LLMs align with centrist parties (Moderaterne, Radikale Venstre, Alternativet, Socialdemokratiet), and that, based on candidate responses, LLMs disproportionately recommend specific parties (Moderaterne, Liberal Alliance, Dansk Folkeparti, and Enhedslisten). The purpose of this brief is to raise awareness that chatbots should not be considered a reliable source of voting advice in light of the Danish parliamentary election in March 2026, the first national election in Denmark since the release of ChatGPT in November 2022. A survey by the Digital Democracy Centre suggests that young voters in particular might have turned to chatbots when deciding whom to vote for. We argue that this could be problematic with respect to information quality, democratic participation, and digital critical thinking.
The Use of Chatbots in the Public Sector

Summary
This research brief presents a systematic literature review of what the current research literature conveys about the implementation and use of chatbots in public sector workflows and in interactions with citizens. The purpose of this research brief is to identify and analyze both the opportunities and challenges within this domain through a systematic synthesis of existing empirical research on the implementation and use of, as well as citizens’ attitudes toward and experiences with, chatbots in the public sector. The brief shows that chatbots can contribute effectively to certain tasks in the public sector; however, they also generate new work and shifts in responsibilities for workers. From the citizens’ perspective, research finds that well-educated, younger, and resourceful citizens are more likely to trust and have positive experiences with chatbots when interacting with public authorities, whereas for others, e.g., citizens with disabilities or citizens with more complex requests and challenges, chatbots create new friction in their encounters with the public sector. This may reinforce existing social and digital inequalities within the population.
Use of artificial intelligence (AI) among Danish SMEs (Translated) (Only available in Danish)

Summary
(Translated) Danish small and medium-sized enterprises (SMEs) play a central role in the national economy and labour market. This nationwide study shows that two-thirds of SMEs have adopted artificial intelligence (AI), but the integration of AI into internal processes and external services or products remains limited in most companies. AI is primarily used for text generation and routine tasks, while strategic integration and more advanced applications are less common.
A lack of knowledge and skills - among both employees and management - is identified as the main barrier, alongside uncertainty around regulation and ethics. Adoption is highest among SMEs in urban areas and knowledge-intensive sectors, while a significant share of SMEs do not consider AI relevant to their business.
The brief highlights the need for initiatives that strengthen AI competencies, provide regulatory clarity, and support a broader and more responsible use of AI across SMEs.
Summary
(Translated) Digital sovereignty is fundamentally about control - but control can take many forms and be achieved in different ways. It is therefore not always clear where to begin. Sovereignty is also about capacity: the ability to develop, operate, and adapt digital solutions independently, without relying heavily on a few dominant actors.
In the field of AI, this requires a modular approach for a country like Denmark if ambitions around control and capacity are to be realised in practice. Considerations such as security, economic value, and rights must be translated into concrete, technically implementable elements - AI building blocks - that enable decision-makers to assess costs, benefits, and risks at each stage.
This approach allows for targeted progress towards greater digital sovereignty without locking into rigid solutions or overlooking future challenges and opportunities. It also provides the flexibility to adapt continuously as technology and society evolve.
Here, we present one proposal for such a module: a national AI orchestration layer that enables the management of immediate needs while maintaining long-term strategic goals, and provides both public and private actors with a shared technical foundation to build upon.
Summary
(Translated) This research brief examines the use of and attitudes toward generative artificial intelligence (AI) in Denmark in 2025, based on data from a large international online survey conducted in collaboration with the Reuters Institute for the Study of Journalism at the University of Oxford. Overall, the findings show a rapid increase in the use of generative AI in Denmark - approximately twice as fast as the growth of internet adoption in the late 1990s - along with relatively high levels of trust in many widely used AI tools and more optimism than pessimism regarding their societal impact.
At the same time, many Danes still have limited hands-on experience with generative AI. A significant share of the population remains uncertain about its consequences, and there are notable concerns about its use in sectors such as public authorities, news media, and especially among politicians and political parties. Despite the rapid uptake, it is striking that nearly half of respondents do not feel well informed about generative AI.
Artificial General Intelligence and its Societal Implications (Only available in English)

Summary
This is CAISA's first official brief.
The prospect of artificial general intelligence (AGI) has led some researchers and entrepreneurs to foresee an imminent “intelligence explosion,” leading either to humanity’s near-extinction or to a future of abundance. This brief examines the philosophical and technical premises underpinning such predictions, and their implications for governance. There are two such premises: first, that AI systems can be intelligent and become more intelligent over time; and second, that advanced AI can direct its behavior towards autonomously set goals. Both premises, however, rest on relatively weak foundations. CAISA recommends a clear-headed approach to AGI for as long as the concept, and the governance suggestions attached to it, remain vague. The rush to prioritize AGI development - and/or its governance - risks diverting attention from the tangible risks posed by existing AI technologies, such as algorithmic bias, privacy erosion, and unchecked concentrations of power. At the same time, a more meaningful discussion of AGI requires further research spanning philosophy, cognitive science, and cybersecurity, as well as the societal and political impact of AGI narratives.



