👋 Hello! I’m Wonje Jeung, a first-year Master’s student in Artificial Intelligence at Yonsei University, where I am fortunate to be advised by Professor Albert No. Previously, I gained valuable research experience working with Professor Jonghyun Choi at both Yonsei University and Seoul National University. My research focuses on privacy, safety, reasoning, and efficient model architectures. 🤝 If these interests resonate with you, feel free to reach out; I’m always open to collaboration!
News
- September 2025: Our NeurIPS 2025 paper SAFEPATH: Preventing Harmful Reasoning in Chain-of-Thought via Early Alignment has been accepted. See you in San Diego 🇺🇸
- August 2025: Two papers, SEPS and R-TOFU, have been accepted to EMNLP 2025. See you in Suzhou 🇨🇳
- June 2025: Gave a talk at the MLSYS group on using reasoning to achieve safety.
- May 2025: Two papers, LTF-TEST and RepBend, have been accepted to ACL 2025. See you in Austria 🇦🇹
- December 2024: Our paper on sample-based privacy auditing for final model-only scenarios will appear at the NeurIPS SFLLM Workshop. Additionally, Large Language Models Still Exhibit Bias in Long Text has been accepted to the NeurIPS SoLaR Workshop. See you in Canada 🇨🇦
- July 2024: One paper, REALFRED, accepted to ECCV 2024.
- March 2024: One paper, EARL, accepted to CVPR 2024.