Research

Artificial General Intelligence and its Societal Implications (Only available in English)

CAISA - Brief

CAISA's first official brief "Artificial General Intelligence and its Societal Implications."

The prospect of artificial general intelligence (AGI) has led some researchers and entrepreneurs to foresee an imminent “intelligence explosion” leading to either humanity’s near-extinction or a future of abundance. This brief examines the philosophical and technical premises underpinning such predictions and their implications for governance. There are two such premises: first, that AI systems can be intelligent and become more intelligent over time; and second, that advanced AI can direct its behavior towards autonomously set goals. Both premises rest on relatively weak foundations. CAISA therefore recommends a clear-headed approach to AGI for as long as the concept and the associated governance proposals remain vague. The rush to prioritize AGI development, and/or its governance, risks diverting attention from the tangible risks posed by existing AI technologies, such as algorithmic bias, privacy erosion, and unchecked concentrations of power. At the same time, a more meaningful discussion of AGI requires further research spanning philosophy, cognitive science, and cybersecurity, as well as the societal and political impact of AGI narratives.

Read the brief in English here
CAISA prioritizes international engagements and welcomes opportunities to present and discuss our interdisciplinary research approach

CAISA actively prioritises international engagement and welcomes opportunities to present our distinctive interdisciplinary AI research model.

We have met with delegations from countries including Norway, Estonia, and Germany. Most recently, we hosted Nigeria's Minister of Communications, Innovation and Digital Economy, H.E. Dr Bosun Tijani, and his delegation. The Nigerian delegation presented their strategic plans to install 90,000 km of fibre-optic cable to strengthen national digital infrastructure, and described the strong enthusiasm for AI among Nigeria’s young population.

Among other things, CAISA highlighted the importance of research on how artificial intelligence can be developed and applied in a responsible and democratic way.

CAISA Deputy Head of Centre Thomas Moeslund appointed to the Danish Data Ethics Council

CAISA is proud to announce that our Deputy Head of Centre, Thomas Moeslund, has been appointed as a new member of the Danish Data Ethics Council. The appointment reflects his longstanding contributions to research in artificial intelligence and computer vision, as well as his strong commitment to responsible AI and ethical technology development.

As a professor at Aalborg University and an internationally recognised researcher, Thomas has worked extensively on the intersection between advanced algorithmic methods and their societal implications. His research spans from foundational methodological development to applied AI solutions, with a focus on transparency, fairness, autonomy, and long-term impact.

Data ethics as the foundation for responsible AI

At a time when developments in artificial intelligence are advancing faster than both regulation and society’s shared understanding, the need for strong data ethics and responsible AI governance is becoming increasingly urgent. Manipulated content, automated decision-making, and new applications of generative AI are creating significant challenges for citizens, businesses, and policymakers alike.

Thomas Moeslund highlights the importance of a robust ethical foundation:

“Data ethics, for me, is not an afterthought, but an integral part of the research, development and implementation of technology.” (Translated)

His perspective emphasizes that responsible AI cannot be separated from technical development, but must be embedded from the outset - from datasets and model design to implementation and real-world use.

The role of the Council in a complex technological landscape

As a member of the Danish Data Ethics Council, Thomas Moeslund will play a key role in addressing the ethical challenges arising from the rapid development of artificial intelligence (AI). This includes issues related to misinformation, algorithmic bias, and the impact of AI systems on democratic processes and societal structures.

On the Council's role, Thomas explains:

“The Council can act as a bridge between technical experts, policymakers, businesses, and citizens - both by establishing shared ethical standards and proactive solutions before problems escalate, and by communicating these issues to the broader public.” (Translated)

His appointment brings a strong technological and research-based perspective to the Council, helping to ensure a responsible and human-centred development and use of AI in Denmark.

CAISA's perspective

At CAISA, we work to advance human-centred and responsible AI, and the appointment of Thomas Moeslund reflects exactly the type of expertise needed to develop AI solutions that are both technically advanced and ethically robust.

We look forward to contributing to this work through research-based insights and interdisciplinary perspectives from CAISA - and to following Thomas's important role in shaping Denmark's national data ethics agenda.

Research
The Use of Chatbots in the Public Sector

This research brief presents a systematic review of the current research literature on the implementation and use of chatbots in public sector workflows and in interactions with citizens. Its purpose is to identify and analyze both the opportunities and challenges in this domain, drawing on existing empirical research on how chatbots are implemented and used in the public sector, and on citizens’ attitudes toward and experiences with them. The brief shows that chatbots can contribute effectively to certain tasks in the public sector; however, they also generate new work and shift responsibilities for workers. From citizens’ perspective, research finds that well-educated, younger, and resourceful citizens are more likely to trust and have positive experiences with chatbots when interacting with public authorities, whereas for others, e.g., citizens with disabilities or citizens with more complex requests and challenges, chatbots create new friction in their encounters with the public sector. This may reinforce existing social and digital inequalities within the population.
