Research program: Design of trustworthy artificial intelligence

Prof. Hendrik Heuer's team works at the intersection of artificial intelligence and human-computer interaction.

As part of the research program, principles of trustworthy artificial intelligence are developed, technically implemented and tested together with users in practical fields of application. A particular focus is on social media and the question of how they can be improved.

The research program follows the tradition of participatory and user-centered software development and extends it with a focus on artificial intelligence and machine learning. Participatory software development offers numerous methods for actively involving users in the development process. Our goal is to further develop these methods for artificial intelligence and machine learning as a new form of co-creation.

One area in which the research program will explore methods of human-AI interaction is social media. This application context is particularly suitable as AI systems play a central role on platforms such as Facebook, YouTube, TikTok, Instagram and Twitter. Up to 70% of videos viewed on YouTube were selected by an AI system. Research into human-AI interaction is socially relevant, as users generally have no control over AI systems and are often unaware of the systems’ existence.

Prof. Dr. Hendrik Heuer
Head of Research Program
hendrik.heuer@cais-research.de

Research areas

Trust in AI through understanding

This research focus aims to measure and analyze users' understanding of AI systems (user beliefs, folk theories). We extend preliminary work on how users understand AI systems, in particular machine-learning-based video recommendation systems. A particular focus is on the transparency and explainability of AI systems.

Trust in AI through control

This research focus develops methods for controlling the output of ML-based systems through systematic audits. An important goal is the prevention of discrimination. The focus builds on preliminary work on online radicalization on YouTube.
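
As a rough illustration of what such an output audit can look like, the following sketch compares how often different user groups are shown particular content categories and flags categories with large exposure gaps. The data, group labels, and threshold are purely hypothetical and serve only to show the shape of the comparison, not the research program's actual audit method.

from collections import defaultdict

def audit_recommendations(recommendations, threshold=0.1):
    # Minimal sketch of a systematic output audit.
    # `recommendations` is a list of (user_group, item_category) pairs,
    # e.g. gathered from simulated accounts (a hypothetical data source).
    # Categories whose exposure rates differ between any two user groups
    # by more than `threshold` are flagged for closer inspection.
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for group, category in recommendations:
        counts[group][category] += 1
        totals[group] += 1
    categories = {c for per_group in counts.values() for c in per_group}
    findings = []
    for category in categories:
        rates = {g: counts[g][category] / totals[g] for g in totals}
        gap = max(rates.values()) - min(rates.values())
        if gap > threshold:
            findings.append((category, rates, gap))
    return findings

# Hypothetical example: one content category is recommended far more
# often to group "B" than to group "A".
sample = ([("A", "news")] * 6 + [("A", "fringe")] * 1 +
          [("B", "news")] * 3 + [("B", "fringe")] * 4)
for category, rates, gap in audit_recommendations(sample):
    print(f"{category}: exposure rates {rates}, gap {gap:.2f}")

Such a comparison is deliberately simple; real audits additionally require careful data collection and statistical testing, but the basic pattern of contrasting system outputs across groups stays the same.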

Trust in AI through co-design

We also develop methods for the comprehensive inclusion of users in the design of AI systems. Our focus on deep user involvement builds on preliminary work such as an interdisciplinary collaboration with social scientists and an extensive literature review of different approaches to user involvement. This work is situated in the context of the democratization of machine learning.
