Highlights

Collaboration and networks

At CAISA, we collaborate with the public sector, industry, and civil society on a wide range of current projects that bridge the gap between research and real-world practice. We also establish networks that bring together researchers and stakeholders to exchange knowledge, share experiences, and advance the responsible development and application of artificial intelligence.
Are you interested in entering a strategic or professional partnership? Or would you like to learn more about our existing collaborations and networks? You are always welcome to contact us.

Please use the contact form via the button below, and we will get back to you as soon as possible.
Contact us for collaboration

Current collaborations

CAISA has received a grant of DKK 45 million from the research reserve for a collaboration with Statistics Denmark. The partnership aims to strengthen both research in responsible artificial intelligence and the infrastructure that enables such research through two key initiatives:

  • The first initiative will modernize Statistics Denmark’s infrastructure for AI research, allowing researchers to work securely with complex data sources and train large AI models on Danish register data.
  • The second initiative will generate new knowledge on the responsible use of AI algorithms, ensuring they meet requirements for fairness, transparency, and trust - while maintaining their functionality over time.

The CPH Tech Policy Committee brings together researchers and professionals from the public sector, industry, and civil society to address key challenges in technology and digital policy, grounded in the latest research.

The committee works to connect Danish experiences with international best practices and to foster new, long-term global partnerships.

The Tech Policy Youth Committee (TPYC) is a student-led initiative that brings together engaged students to discuss and influence the future of technology policy. The committee explores critical topics including digitalization, welfare, mental well-being, cybersecurity, geopolitics, disinformation, inequality, and the green transition.

As a member, you become part of an active network, participate in meetings and events, and collaborate with key stakeholders from civil society, industry, and the public sector. The Tech Policy Youth Committee aims to amplify young people's voices in debates on digitalization and technology, contributing to a more inclusive and equitable digital future.


CAISA is part of the Danish government’s strategic initiative for artificial intelligence (AI) and one of four new initiatives aimed at advancing responsible AI. The other initiatives include:

  • The Digital Taskforce for Artificial Intelligence, established in collaboration with KL and Danish Regions
  • The development of a platform to accelerate secure and transparent Danish language models
  • The provision of Danish text data as open-source resources

CAISA prioritizes international engagements and welcomes opportunities to present and discuss our interdisciplinary research approach

CAISA actively prioritises international engagement and welcomes opportunities to present our distinctive interdisciplinary AI research model.

We have met with delegations from countries including Norway, Estonia, and Germany. Most recently, we hosted Nigeria's Minister of Communications, Innovation and Digital Economy, H.E. Dr Bosun Tijani, and his delegation. The Nigerian delegation shared their strategic plans to install 90,000 km of fibre-optic cable to strengthen national digital infrastructure, as well as the strong enthusiasm for AI among Nigeria’s young population.

Among other things, CAISA highlighted the importance of research on how artificial intelligence can be developed and applied in a responsible and democratic way.

CAISA Deputy Head of Centre Thomas Moeslund appointed to the Danish Data Ethics Council

CAISA is proud to announce that our Deputy Head of Centre, Thomas Moeslund, has been appointed as a new member of the Danish Data Ethics Council. The appointment reflects his longstanding contributions to research in artificial intelligence and computer vision, as well as his strong commitment to responsible AI and ethical technology development.

As a professor at Aalborg University and an internationally recognised researcher, Thomas has worked extensively at the intersection of advanced algorithmic methods and their societal implications. His research spans foundational methodological development and applied AI solutions, with a focus on transparency, fairness, autonomy, and long-term impact.

Data ethics as the foundation for responsible AI

At a time when developments in artificial intelligence are advancing faster than both regulation and society’s shared understanding, the need for strong data ethics and responsible AI governance is becoming increasingly urgent. Manipulated content, automated decision-making, and new applications of generative AI are creating significant challenges for citizens, businesses, and policymakers alike.

Thomas Moeslund highlights the importance of a robust ethical foundation:

“Data ethics, for me, is not an afterthought, but an integral part of the research, development and implementation of technology.” (Translated)

His perspective emphasizes that responsible AI cannot be separated from technical development, but must be embedded from the outset - from datasets and model design to implementation and real-world use.

The role of the Council in a complex technological landscape

As a member of the Danish Data Ethics Council, Thomas Moeslund will play a key role in addressing the ethical challenges arising from the rapid development of artificial intelligence (AI). This includes issues related to misinformation, algorithmic bias, and the impact of AI systems on democratic processes and societal structures.

On the Council's role, Thomas explains:

“The Council can act as a bridge between technical experts, policymakers, businesses, and citizens - both by establishing shared ethical standards and proactive solutions before problems escalate, and by communicating these issues to the broader public.” (Translated)

His appointment brings a strong technological and research-based perspective to the Council, helping to ensure a responsible and human-centred development and use of AI in Denmark.

CAISA's perspective

At CAISA, we work to advance human-centred and responsible AI, and the appointment of Thomas Moeslund reflects exactly the type of expertise needed to develop AI solutions that are both technically advanced and ethically robust.

We look forward to contributing to this work through research-based insights and interdisciplinary perspectives from CAISA - and to following Thomas's important role in shaping Denmark's national data ethics agenda.

The Use of Chatbots in the Public Sector

This research brief presents a systematic literature review of what current research conveys about the implementation and use of chatbots in public-sector workflows and in interactions with citizens. It identifies and analyzes both the opportunities and challenges in this domain by synthesizing existing empirical research on how chatbots are implemented and used in the public sector, and on citizens' attitudes toward and experiences with them.

The brief shows that chatbots can contribute effectively to certain public-sector tasks; however, they also generate new work and shift responsibilities for employees. From the citizens' perspective, research finds that well-educated, younger, and resourceful citizens are more likely to trust chatbots and have positive experiences when interacting with public authorities, whereas for others, such as citizens with disabilities or citizens with more complex requests and challenges, chatbots create new friction in their encounters with the public sector. This may reinforce existing social and digital inequalities within the population.
