Current and Past Working Groups

Current Working Groups

As social media platforms increasingly take on roles and responsibilities traditionally associated with nation states, new frameworks are needed to evaluate their fragility. Using the Fund for Peace's Fragile States Index as a model, Haythornthwaite, Mai & Gruzd (2024) articulated the Social Media Fragility Indicators, a set of indicators for measuring and evaluating the fragility of social media platforms. Building on this work, the working group will discuss and refine the proposed indicators. The overarching goal is to develop a robust framework that can provide forward-looking insights into the long-term viability of platforms, inform strategic interventions, and highlight cross-platform issues.
To advance this work, the working group will convene international experts from diverse fields to evaluate the sources of social media fragility, refine the initial set of indicators, and devise measures for assessing the fragility of social media platforms based on these indicators.
The workshop outcomes will be shared as a summary report on the working group's page. We will also pursue a panel submission to disseminate the results at relevant conferences such as the International Conference on Social Media & Society and Trust & Safety.

This project researches online gendered hate, such as digital sexisms and gender-based online violence. The project team is particularly interested in visual misogynistic practices that "fly under the radar". These include the content moderation of visual gender violence; the visual performance of gender identities, i.e. stereotyping, diminishing, branding, and reinforcing imaginaries; and the role of aesthetics and design in (re)generating gender violence (Özkula et al., 2024).
The meeting at CAIS provides an opportunity to contrast and combine our collective methods and to develop a methodology that better captures the complexity and diversity of cases of Platformed Visual Misogyny through comparative and multi-modal approaches, building on the groundwork laid in Özkula, Prieto-Blanco, Tan, & Mdege (2024).

How is artificial intelligence reshaping the way political parties operate, engage with citizens, and lead in democratic systems? Our working group, Rethinking Party Politics: The Impact of AI on Governance, Membership, and Leadership, tackles this critical question. We bring together leading scholars in party politics, AI ethics, and democracy studies to explore AI's transformative influence on political strategy, grassroots mobilization, and decision-making processes. Through interdisciplinary dialogue, expert presentations, and hands-on scenario planning, we'll investigate how AI affects political parties using Katz and Mair's (1993) three-level framework: the party in public office, the party on the ground, and the party in central office. Despite the rapid growth of research on AI and democracy, few studies have focused on its impact on political parties, key actors in representative systems. By addressing this gap, we aim to make a significant contribution to understanding the future of political organizations in the age of AI and to inspire further scholarship.
Past Working Groups

Our working group aims to bring together leading scholars from different disciplines and countries to study how industrial policy is actually made in Europe and beyond, and what specific role technology plays in it. This promises not only to significantly advance the existing and rapidly growing literature on industrial policy and digital policy-making, but also to have practical relevance for the effectiveness and legitimacy of industrial and digital policy.

The discourse on artificial intelligence (AI) focuses heavily on text-to-text generators such as ChatGPT. Far less attention is paid to generative visual communication, which is currently gaining popularity: with tools such as Stable Diffusion, AI images can be generated effortlessly and are already being used in many contexts without their origin being apparent to viewers. These images have the potential to fundamentally change how images are produced, used, and received. The planned working group aims to close this gap and examine AI-generated images from a multidisciplinary perspective. In five workshops (three in person, two virtual), we will analyze the characteristics and challenges of generative images (object-centered perspective), the contexts of their production and presentation (communicator-centered perspective), and their reception and users' media literacy (reception- and usage-oriented perspective). The results will be presented in scholarly publications and at an academic conference. In addition, we will produce a guide on the ethical use of generative images and seek to establish the collaboration on a lasting basis and to secure further funding.

Since the 1990s, politicians, policymakers, scholars, technical experts and representatives of the private sector and civil society have been discussing and struggling over the role of the state in the […]

We aim to produce a joint publication in the form of a special issue in an international academic journal. We also aim to publish a short article for the general public and the professional community of police organizations.

Our project makes a critical intervention in this debate by examining how four groups of actors – the tech and academic community, non-profit advocacy organisations, states, and corporations – have influenced how we conceptualise internet freedom, and by evaluating the real-world consequences of their ideas.

Search engines have become an indispensable part of everyday life. Yet web search raises numerous problems, including a monopolised market, a lack of transparency, missing privacy, and ethical and geopolitical risks. Artificial intelligence amplifies these problems, and information diversity, a cornerstone of democracy, is under threat.
At CAIS, we will explore and deepen ethical questions surrounding internet search. In doing so, we build on previous results of the Open Search Foundation's Ethics working group as well as on interim results of the EU project OpenWebSearch.EU.

This workgroup, drawing on the linguistic and cultural diversity of its members, will analyze media coverage of generative AI in three different countries and three different languages: in Italian in Italy, in French in Switzerland, and in English in the United States.

These three days of collective work (March 19-21) aim to bring together a diverse group of IBSA experts, including representatives of civil society, academia, and policymaking as well as survivors, to elaborate fresh definitions and data on online gender-based violence and to envision more inclusive and gender-sensitive technological, political, and educational solutions.