FakeXplain

Development of transparent and meaningful explanations in the context of disinformation detection

FakeXplain is a BIFOLD Agility Project that pursues three goals:

(1) Development of different explanations for the AI-based disinformation detection process to improve intelligent decision support for citizens and journalists. This includes developing explanations that can be represented at different levels of abstraction to account for the individual needs of different user groups.

(2) Development of different evaluation criteria for the explanations, which will be empirically assessed in crowd-based user studies and qualitative interviews with journalists. To mitigate limitations of user studies regarding representativeness, bias, and other sources of error, the explanations will also be evaluated in supplementary studies with synthetically generated data and virtual participants.

(3) Development of an evaluation framework for AI-generated explanations that takes into account both objective and subjective evaluation components.

Partners: Prof. Konrad Rieck, Prof. Wojciech Samek, Prof. Joachim Meyer