Highlights

CAISA Fellowships

The CAISA fellowship program is an integral part of CAISA’s ambition to be a leading national and international center for interdisciplinary research and strategic advice on AI in society. The fellowship allows researchers working in Denmark to join the CAISA community for a duration of one to three months to synthesize existing knowledge in an area related to AI in society. As part of a fellowship, new knowledge may be generated, but it should supplement and build on a researcher’s existing knowledge and field.

As a bridge between CAISA and the national community, fellows are expected to be physically present at one of our hubs in Aalborg and Copenhagen. In dialogue with one of the CAISA chief scientists, who will also act as their academic point of contact, fellows produce and publish a concise and cogent research-based brief for decision-makers and other interested parties. Fellows can apply for a stipend to cover their salary for the duration of the fellowship, and/or a supplement for potential travel and accommodation costs.

Expected deadline for the next call: May 1, 2026

Research
Artificial General Intelligence and its Societal Implications

CAISA has now published its first research brief, Artificial General Intelligence and its Societal Implications. The brief concerns artificial general intelligence (AGI) in society. Read a summary below.

Summary

The prospect of artificial general intelligence (AGI) has led some researchers and entrepreneurs to foresee an imminent “intelligence explosion,” leading either to humanity’s near-extinction or to a future of abundance. This brief examines the philosophical and technical premises underpinning such predictions, and their implications for governance. There are two such premises: first, that AI systems can be intelligent and become more intelligent over time; and second, that advanced AI can direct its behavior toward autonomously set goals. However, these premises rest on relatively weak foundations. CAISA recommends a clear-headed approach to AGI as long as the concept and the governance suggestions remain vague. The rush to prioritize AGI development—and/or its governance—risks diverting attention from the tangible risks posed by existing AI technologies, such as algorithmic bias, privacy erosion, and unchecked concentrations of power. At the same time, a more meaningful discussion of AGI requires further research spanning philosophy, cognitive science, and cybersecurity, as well as the societal and political impact of AGI narratives.

Read more
News
Rebecca Adler-Nissen: Don’t believe everything you hear about superintelligence

CAISA’s center director Rebecca Adler-Nissen writes in a commentary in Børsen about superintelligence, symbolic politics, and Silicon Valley’s diversionary tactics.

“For big tech, the fascination with an imminent superintelligence serves as a commercial smokescreen, deflecting attention from what artificial intelligence already means here and now.”

The commentary builds on CAISA’s latest research brief, which concerns the prospects of artificial superintelligence, or Artificial General Intelligence (AGI).

Read more