👋 Hello! I’m Wonje Jeung, a first-year Master’s student in Artificial Intelligence at Yonsei University, where I am fortunate to be advised by Professor Albert No. Previously, I gained valuable research experience working with Professor Jonghyun Choi at both Yonsei University and Seoul National University. My research focuses on privacy, safety, and reasoning, as well as efficient model architectures. 🤝 If these interests resonate with you, feel free to reach out—I’m always open to collaboration!


News

  • May 2025: Four new papers are now on arXiv! These include work on safety alignment in reasoning models (SAFEPATH), unlearning benchmarks (DUSK and R-TOFU), and unlearning fragility (SEPS).
  • May 2025: Two papers—LTF-TEST and RepBend—have been accepted to ACL 2025. See you in Austria 🇦🇹!
  • Dec 2024: Our paper on sample-based privacy auditing for final model-only scenarios will appear at the NeurIPS SoLaR Workshop. See you in Canada 🇨🇦!
  • July 2024: One paper accepted to ECCV: REALFRED.
  • May 2024: We introduced a new information-theoretic metric for evaluating machine unlearning.
  • March 2024: One paper accepted to CVPR: EARL.