Research and development of tools for detecting and mitigating epistemic injustice in AI systems for identifying misleading information

Principal investigator: Assoc. Prof. Laura Candiotto, Ph.D.
Period: 1 August 2025 – 31 December 2028
Funding agency: TAČR (Technology Agency of the Czech Republic)

Abstract

The primary aim of this project is to enhance the reliability and fairness of AI tools used to detect misleading information by investigating their epistemic biases. We will achieve this by designing, curating, and rigorously testing novel datasets for bias identification, and by developing methodologies for bias mitigation. These tools will make it possible to detect biases and the marginalization of certain perspectives, ensuring an inclusive and transparent evaluation of information. The project adopts an interdisciplinary approach, drawing on AI ethics, philosophy, and religious studies, with particular emphasis on the analysis of knowledge-production mechanisms, natural language processing, and data analytics. The team brings together experts in disinformation, media literacy, and the development of AI systems.

Team members

Laura Candiotto
Ondřej Krása
Lucie Valentinová