CV
Basics
Name | Jing Yang |
Label | Post-doctoral Researcher |
Email | jing.yang@tu-berlin.de |
Url | https://jingyng.github.io/ |
Summary | I am a post-doctoral researcher focusing on AI applications for social good, including fact-checking, misinformation detection, and social media analysis. |
Work
-
2024.11 - 2027.10 Postdoctoral Researcher
XplaiNLP group, Quality and Usability Lab, TU Berlin
Working as a postdoctoral researcher at TU Berlin, focusing on generating natural language explanations for AI-based disinformation detection.
- Natural Language Processing
- Explainable AI
- Fact-checking
Education
-
2019.08 - 2024.11 Campinas, Brazil
PhD
University of Campinas, Campinas, Brazil
Computer Science
- Machine Learning
- Natural Language Processing
- Parallel Computing
- Computer Vision
- Reinforcement Learning
-
2016.09 - 2019.06 Changsha, China
Master
Hunan University, Changsha, China
Computer Science
- Digital Forensics
- Information Security
- Image Processing
-
2012.09 - 2016.06 Wuhan, China
Bachelor
Hubei University of Technology, Wuhan, China
Information and Computing Science
- Mathematics
- Computer Science
- Statistics
Awards
- 2020
Best Master Thesis Award
Hunan University
Awarded for the master's thesis '3D Printed Objects Authentication and Source Attribution Based on Printing Distortion'.
Languages
Chinese (Mandarin) | Native speaker |
English | Fluent |
Portuguese | Basic |
Projects
- 2024.11 - 2027.10
FakeXplain
FakeXplain is a BIFOLD Agility Project that pursues three goals: (1) development of different explanations of the AI-based disinformation detection process, to improve intelligent decision support for citizens and journalists; (2) development of evaluation criteria for these explanations, investigated empirically through crowd-based user studies and qualitative interviews with journalists; (3) development of an evaluation framework for AI-generated explanations that takes both objective and subjective evaluation components into account.
- Explainable fact-checking
- Human-AI Interaction