Jing Yang


MAR 2052, Marchstrasse 23, Berlin, Germany

I am a postdoctoral researcher at the XplaiNLP group at TU Berlin, led by Dr. Vera Schmitt, and at the Berlin Institute for the Foundations of Learning and Data (BIFOLD). My current research project, FakeXplain, is a BIFOLD agility project on generating natural language explanations for AI-based disinformation detection.

I completed my Bachelor’s degree in Information and Computing Science at the Hubei University of Technology in China, followed by a Master’s degree in Computer Science at Hunan University, China. My Master’s dissertation focused on identifying 3D-printed objects and printers using digital forensics and machine learning. After obtaining my Master’s degree in 2019, I pursued a PhD at the RECOD.ai lab at the University of Campinas in Brazil, under the supervision of Prof. Anderson Rocha. During 2022-2023, I did a research internship at the Ubiquitous Knowledge Processing (UKP) Lab at TU Darmstadt. My PhD thesis focused on improving fact-checking efficiency and explainability with few-shot learning and large language models.

My research interests are:

  • Natural language explanation generation
  • Large language model tuning/prompting
  • Synthetic text evaluation
  • Human preference studies on text generation
  • Applications and social impacts of large language models

news

Apr 03, 2025 Our TACL paper is now published by MIT Press (open access): Self-Rationalization in the Wild: A Large-scale Out-of-Distribution Evaluation on NLI-related tasks. Feel free to check it out!
Jan 09, 2025 Our TACL paper was recently accepted! A big thank-you to the co-authors: Max Glockner, Anderson Rocha, and Iryna Gurevych.
Jan 08, 2025 Posting the first update of my website!

selected publications

  1. ICASSP
    Explainable Fact-checking through Question Answering
    Jing Yang, Didier Vega-Oliveros, Taís Seibt, and 1 more author
    In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022
  2. WIFS
    Scalable Fact-checking with Human-in-the-Loop
    Jing Yang, Didier Vega-Oliveros, Taís Seibt, and 1 more author
    In 2021 IEEE International Workshop on Information Forensics and Security (WIFS), 2021
  3. WIFS
    Take It Easy: Label-Adaptive Self-Rationalization for Fact Verification and Explanation Generation
    Jing Yang, and Anderson Rocha
    In 2024 IEEE International Workshop on Information Forensics and Security (WIFS), 2024
  4. TACL
    Self-Rationalization in the Wild: A Large-scale Out-of-Distribution Evaluation on NLI-related tasks
    Jing Yang, Max Glockner, Anderson Rocha, and 1 more author
    Transactions of the Association for Computational Linguistics, 2025