Jing Yang

MAR 2052, Marchstrasse 23, Berlin, Germany

I am a postdoctoral researcher in the XplaiNLP group at TU Berlin, which is led by Dr. Vera Schmitt, and I am supervised by Prof. Sebastian Möller. I am also a guest researcher at the German Research Center for Artificial Intelligence (DFKI). My current research project, FakeXplain, is a BIFOLD agility project on generating natural language explanations for AI-based disinformation detection.

I completed my Bachelor’s degree in Information and Computing Science at the Hubei University of Technology in China, followed by a Master’s degree in Computer Science at Hunan University, China. My Master’s dissertation focused on identifying 3D-printed objects and printers with digital forensics and machine learning. After obtaining my Master’s degree in 2019, I pursued a PhD at the RECOD.ai lab at the University of Campinas in Brazil, under the supervision of Prof. Anderson Rocha. From 2022 to 2023, I did a research internship at the Ubiquitous Knowledge Processing (UKP) Lab at TU Darmstadt. My PhD thesis focused on improving fact-checking efficiency and explainability with few-shot learning and large language models.

My research interests are:

  • Natural language explanation generation
  • Synthetic text evaluation
  • Human preference learning for text generation
  • Applications and social impacts of large language models

news

Jul 16, 2025 Three co-authored papers were accepted recently!

Comparing LLMs and BERT-based Classifiers for Resource-Sensitive Claim Verification in Social Media. Max Upravitelev, Nicolau Duran-Silva, Christian Woerle, Giuseppe Guarino, Salar Mohtaj, Jing Yang, Veronika Solopova and Vera Schmitt. Scholarly Document Processing workshop at ACL 2025.

Exploring Semantic Filtering Heuristics For Efficient Claim Verification. Max Upravitelev, Premtim Sahitaj, Arthur Hilbert, Veronika Solopova, Jing Yang, Nils Feldhus, Tatiana Anikina, Simon Ostermann and Vera Schmitt. FEVER workshop at ACL 2025.

dfkinit2b at CheckThat! 2025: Leveraging LLMs and Ensemble of Methods for Multilingual Claim Normalization. Tatiana Anikina, Van Vykopal, Sebastian Kula, Ravi Kiran Chikkala, Natalia Skachkova, Jing Yang, Veronika Solopova, Vera Schmitt, and Simon Ostermann. CLEF 2025: Conference and Labs of the Evaluation Forum.
Apr 03, 2025 Our TACL paper is now published by MIT Press: Self-Rationalization in the Wild: A Large-scale Out-of-Distribution Evaluation on NLI-related tasks (Open Access). Feel free to check it out!
Jan 09, 2025 Our TACL paper was recently accepted! A big thank you to the co-authors: Max Glockner, Anderson Rocha, and Iryna Gurevych.
Jan 08, 2025 Posting the first update of my website!

selected publications

  1. ICASSP
    Explainable Fact-checking through Question Answering
    Jing Yang, Didier Vega-Oliveros, Taís Seibt, and 1 more author
    In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022
  2. WIFS
    Scalable Fact-checking with Human-in-the-Loop
    Jing Yang, Didier Vega-Oliveros, Taís Seibt, and 1 more author
    In 2021 IEEE International Workshop on Information Forensics and Security (WIFS), 2021
  3. WIFS
    Take It Easy: Label-Adaptive Self-Rationalization for Fact Verification and Explanation Generation
    Jing Yang, and Anderson Rocha
    In 2024 IEEE International Workshop on Information Forensics and Security (WIFS), 2024
  4. TACL
    Self-Rationalization in the Wild: A Large-scale Out-of-Distribution Evaluation on NLI-related tasks
    Jing Yang, Max Glockner, Anderson Rocha, and 1 more author
    Transactions of the Association for Computational Linguistics, 2025