Research

A modular approach to digital sovereignty: From ambition to action with a national AI orchestration layer (Translated; only available in Danish)

CAISA - Brief

Digital sovereignty is fundamentally about control - but control can take many forms and be achieved in different ways. It is therefore not always clear where to begin. Sovereignty is also about capacity: the ability to develop, operate, and adapt digital solutions independently, without relying heavily on a few dominant actors.

In the field of AI, a country like Denmark needs a modular approach if ambitions around control and capacity are to be realised in practice. Considerations such as security, economic value, and rights must be translated into concrete, technically implementable elements - AI building blocks - that enable decision-makers to assess costs, benefits, and risks at each stage.

This approach allows for targeted progress towards greater digital sovereignty without locking into rigid solutions or overlooking future challenges and opportunities. It also provides the flexibility to adapt continuously as technology and society evolve.

Here, we present one proposal for such a module: a national AI orchestration layer that enables the management of immediate needs while maintaining long-term strategic goals, and provides both public and private actors with a shared technical foundation to build upon.

Read the research brief here
CAISA prioritises international engagement and welcomes opportunities to present and discuss our interdisciplinary research approach


We have met with delegations from countries including Norway, Estonia, and Germany. Most recently, we hosted Nigeria's Minister of Communications, Innovation and Digital Economy, H.E. Dr Bosun Tijani, and his delegation. The Nigerian delegation presented their strategic plans to install 90,000 km of fibre-optic cables to strengthen national digital infrastructure and described the strong enthusiasm for AI among Nigeria's young population.

Among other things, CAISA highlighted the importance of research on how artificial intelligence can be developed and applied in a responsible and democratic way.

CAISA Deputy Head of Centre Thomas Moeslund appointed to the Danish Data Ethics Council

CAISA is proud to announce that our Deputy Director, Thomas Moeslund, has been appointed as a new member of the Danish Data Ethics Council. The appointment reflects his longstanding contributions to research in artificial intelligence and computer vision, as well as his strong commitment to responsible AI and ethical technology development.

As a professor at Aalborg University and an internationally recognised researcher, Thomas has worked extensively on the intersection between advanced algorithmic methods and their societal implications. His research spans from foundational methodological development to applied AI solutions, with a focus on transparency, fairness, autonomy, and long-term impact.

Data ethics as the foundation for responsible AI

At a time when developments in artificial intelligence are advancing faster than both regulation and society's shared understanding, the need for strong data ethics and responsible AI governance is becoming increasingly urgent. Manipulated content, automated decision-making, and new applications of generative AI are creating significant challenges for citizens, businesses, and policymakers alike.

Thomas Moeslund highlights the importance of a robust ethical foundation:

"Data ethics, for me, is not an afterthought, but an integral part of the research, development and implementation of technology." (Translated)

His perspective emphasizes that responsible AI cannot be separated from technical development, but must be embedded from the outset - from datasets and model design to implementation and real-world use.

The role of the Council in a complex technological landscape

As a member of the Danish Data Ethics Council, Thomas Moeslund will play a key role in addressing the ethical challenges arising from the rapid development of artificial intelligence (AI). This includes issues related to misinformation, algorithmic bias, and the impact of AI systems on democratic processes and societal structures.

On the Council's role, Thomas explains:

"The Council can act as a bridge between technical experts, policymakers, businesses, and citizens - both by establishing shared ethical standards and proactive solutions before problems escalate, and by communicating these issues to the broader public." (Translated)

His appointment brings a strong technological and research-based perspective to the Council, helping to ensure a responsible and human-centred development and use of AI in Denmark.

CAISA's perspective

At CAISA, we work to advance human-centred and responsible AI, and the appointment of Thomas Moeslund reflects exactly the type of expertise needed to develop AI solutions that are both technically advanced and ethically robust.

We look forward to contributing to this work through research-based insights and interdisciplinary perspectives from CAISA - and to following Thomas's important role in shaping Denmark's national data ethics agenda.

Research
The Use of Chatbots in the Public Sector

This research brief presents a systematic literature review of what current research conveys about the implementation and use of chatbots in public sector workflows and in interactions with citizens. Its purpose is to identify and analyze both the opportunities and challenges within this domain through a systematic synthesis of existing empirical research on the implementation and use of, as well as citizens' attitudes toward and experiences with, chatbots in the public sector. The brief shows that chatbots can contribute effectively to certain tasks in the public sector; however, they also generate new tasks and shift responsibilities for workers. From the citizens' perspective, research finds that well-educated, younger, and resourceful citizens are more likely to trust and have positive experiences with chatbots when interacting with public authorities, whereas for others, e.g., citizens with disabilities or citizens with more complex requests and challenges, chatbots create new friction in their encounters with the public sector. This may reinforce existing social and digital inequalities within the population.
