My name is Chenghao Yang. I am currently a (x-2022)-year Ph.D. student at the University of Chicago. I am fortunate to be advised by Prof. Allyson Ettinger. My research is generously supported by the Eckhardt Scholarship.

My research interests focus on natural language processing (NLP) and machine learning (ML). My goal is to design practical NLP systems and to understand the human intelligence underlying natural language. Recently, I have worked on pragmatics (check out our ToM 2023 workshop), word-level semantics, robustness, question answering, and continuous-time event stream modeling.

I am fortunate to have been guided and mentored by so many great researchers, and I deeply appreciate their help: Prof. Jason Eisner at JHU, Prof. He He at NYU, Prof. Kai-Wei Chang at UCLA, Prof. Xuezhe Ma at USC, Prof. Smaranda Muresan at Columbia University, Prof. Zhiyuan Liu at Tsinghua University, and Dr. Mo Yu at WeChat Research.

Before joining UChicago, I was an applied scientist at AWS AI, led by Andrew O. Arnold. My full-time work at AWS AI was mostly about building and evaluating large-scale language models for code generation (check out the great AWS CodeWhisperer!). I obtained my M.S. in Computer Science (ML track) from Columbia University and my bachelor's degree from the Software College at Beihang University.

In my spare time, I enjoy listening to music, playing the guitar, and watching movies and anime. I have recently become fascinated by cooking Chinese dishes.

I believe in the famous quote "The best way to learn is to teach." I often make slides or give chalk talks to explain novel concepts or technical ideas. Recent topics: External Tools + LLMs (WebGPT, Toolformer), a 100-page walkthrough of the GPT-4 Technical Report, Emergent Abilities of LLMs, and a Unified View of Parameter-Efficient Tuning.

Feel free to send me any comments or feedback! I am happy to chat about various topics in ML and NLP, and I am open to various forms of research collaboration.

Personal News

  • [Jan, 2024] Excited to share that my first collaboration with Chaoqi, Yibo, Han, and Yuxin on DPO has been accepted to ICLR 2024 as a SPOTLIGHT! We propose f-DPO, which allows a flexible trade-off among generation diversity, calibration, and alignment. Read more [here]
  • [Oct, 2023] Excited to share that my paper with Allyson has been accepted at EMNLP 2023! In this paper, we propose a new synthetic environment to test situational understanding in ChatGPT, and we find that ChatGPT has non-persistent in-context memory. Read more [here]
  • [July, 2023] Successfully organized our workshop ToM 2023 @ ICML 2023! Thanks for all the great efforts from our speakers, panelists, authors, reviewers, co-organizers, and advisory board members! See you at the next workshop!
  • [June, 2023] Started my internship at Google as a student researcher! I will work on building new large language models.
  • [May, 2023] Excited to share that two of my works, mainly done at AWS, have been officially accepted! Big thanks to all my collaborators!
    • One (Amortized-Interpretation) is on Shapley Values explanations, with my great mentors Prof. He He and Prof. Kai-Wei Chang. We identify new stability-efficiency trade-off issues in commonly used Shapley Values settings and develop a fast amortized model that achieves a 60-600x efficiency boost while maintaining good faithfulness and performance on downstream tasks.
    • The other paper (ReCode) is a comprehensive robustness benchmark for code generation models (the first robustness benchmark for generation models!), where I mentored/co-mentored non-NLP Ph.D. students on LLM evaluation for the first time.
  • [Apr, 2023] We are organizing the First Workshop on Theory of Mind in Communicating Agents [website]! If you are interested in how we can model other agents' beliefs and thoughts, we highly encourage you to submit a paper and join our workshop this summer in Hawaii! We welcome under-review, accepted, and new submissions (thought pieces, position papers, and empirical papers) of 2 to 8 pages, and all accepted papers are non-archival, so it is relatively easy to submit and share your great work!
  • [Nov, 2022] Gave a talk on Predicting and Explaining Message-Level Disclosures of Opioid Use Disorder at the University of Maryland, College Park.
  • [Oct, 2022] Excited to share that my work in collaboration with Prof. Xuezhe (Max) Ma at USC has been accepted to EMNLP 2022! Shout out to my great mentor Max, and thanks to Marius Mosbach for his help!
  • [Sept, 2022] Officially started my Ph.D. at UChicago!
  • [Aug, 2022] Last days at AWS AI. Thanks to my great mentors, colleagues, managers, and leaders. I learned a lot. See my official goodbye tweet. Looking forward to more exciting news from this excellent team!
  • [July, 2022] Attended NAACL'22 to present my TACL'21 paper on NarrativeQA. Say hi if you are in Seattle as well!
  • [June, 2022] Visited USC and UCLA in LA. Thanks to Robin, Xuezhe, Muhao, Swabha, and Kai-Wei for hosting me and chatting with me!
  • [April, 2022] Officially accepted the UChicago CS Ph.D. offer. I will work with my great advisors Prof. Allyson Ettinger and Prof. Chenhao Tan. My research will also be generously supported by the UChicago Eckhardt Scholarship. Thanks, UChicago! Also excited to host my first social panel "Better Developing Pretraining-based Models and Beyond" at ICLR 2022!
  • [Jan, 2022] Excited to share that my work in collaboration with Prof. Jason Eisner and Prof. Hongyuan Mei has been accepted to ICLR 2022! It builds a neural-symbolic hybrid on top of the Transformer architecture for event stream modeling. Please take a look at our full paper and our codebase!
  • [Oct, 2021] Officially invited to serve as a reviewer for ACL Rolling Review. Will serve in September (as an emergency reviewer), October, and November.
  • [June, 2021] Officially joined AWS AI as an applied scientist intern. Looking forward to exploring the Robustness + QA project! Feel free to reach out if you are also at AWS!
  • [May, 2021] Three important pieces of news:
    • My paper on NarrativeQA has been officially accepted by TACL (work done during my internship at IBM). Thanks to my great mentor Mo Yu and my co-authors!
    • My paper on suicide risk assessment has been accepted to ACL 2021 as a short paper! Thanks to my great advisor Smara and my supportive co-author Yudong!
    • Also, our collaborative work with the Columbia School of Social Work on COVID-19 social media analysis has been officially accepted by the Journal of Addiction Medicine. Thanks to all my great collaborators! Very excited to contribute my efforts to COVID-19-related research.
  • [April, 2021] Officially graduated with a Master's degree in Computer Science from Columbia. Thanks to my great research advisor Smaranda Muresan, my awesome and patient lecturers and TAs, and my classmates for their company!
  • [Oct, 2020] I will start as a visiting research assistant at JHU CLSP in Spring 2021, working remotely with Prof. Jason Eisner and his Ph.D. advisee Hongyuan Mei.
  • [Jun, 2020] Started my internship at IBM Watson as a Sr. Cognitive Software Developer. I will work with Dr. Mo Yu on NarrativeQA projects. Feel free to connect if you are also at IBM!
  • [Jan, 2020] Started working as a Research Assistant at Columbia University with Prof. Smaranda Muresan on NLP for health and social good.
  • [Dec, 2019] Finished my visit at Tsinghua University as a Visiting Student Research Assistant. Great thanks to my advisor Prof. Zhiyuan Liu and my great collaborators Hao Zhu, Ruobin Xie, Fanchao Qi, Yuan Zang, and Junjie Huang.

Publication

(“*” indicates equal contribution)

Journal Papers

  1. {Chenghao Yang*, Xiangyang Mou*, Mo Yu*}, Bingsheng Yao, Xiaoxiao Guo, Saloni Potdar, Hui Su., Narrative Question Answering with Cutting-Edge Open-Domain QA Techniques: A Comprehensive Study, TACL 2021 [paper]
  2. Nabila El-Bassel, Karli R Hochstatter, Melissa Slavin, {Chenghao Yang*, Yudong Zhang*}, Smaranda Muresan., Harnessing the Power of Social Media to Understand the Impact of COVID-19 on People Who Use Drugs During Lockdown and Social Distancing. Journal of Addiction Medicine [PubMed Paper]

Conference Papers

  1. Chaoqi Wang, Yibo Jiang, Chenghao Yang, Han Liu, Yuxin Chen., Beyond Reverse KL: Generalizing Direct Preference Optimization with Diverse Divergence Constraints, ICLR 2024 Spotlight [paper]
  2. Chenghao Yang, Allyson Ettinger., Can You Follow Me? Testing Situational Understanding in ChatGPT, EMNLP 2023 [paper][code]
  3. Chenghao Yang, Fan Yin, He He, Kai-Wei Chang, Xiaofei Ma and Bing Xiang., Efficient Shapley Values Estimation by Amortization for Text Classification, ACL 2023 [paper][code][video]
  4. {Shiqi Wang*, Zheng Li*}, Haifeng Qian, Chenghao Yang, Zijian Wang, Mingyue Shang, Varun Kumar, Samson Tan, Baishakhi Ray, Parminder Bhatia, Ramesh Nallapati, Murali Krishna Ramanathan, Dan Roth and Bing Xiang., ReCode: Robustness Evaluation of Code Generation Models, ACL 2023 [paper][codebase] [DL4C @ ICLR 2023 Version]
  5. Chenghao Yang, Xuezhe Ma., Improving Stability of Fine-Tuning Pretrained Language Models via Component-Wise Gradient Norm Clipping, EMNLP 2022 [paper] [codebase]
  6. Chenghao Yang, Hongyuan Mei, Jason Eisner., Transformer Embeddings of Irregularly Spaced Events and Their Participants, ICLR 2022 [full paper] [codebase]
  7. Chenghao Yang, Yudong Zhang, Smaranda Muresan., Weakly-Supervised Methods for Suicide Risk Assessment: Role of Related Domains, ACL 2021 (Short) [paper] [codebase]
  8. {Chenghao Yang*, Yuan Zang*, Fanchao Qi*}, Zhiyuan Liu, Meng Zhang, Qun Liu, Maosong Sun., Word-level Textual Adversarial Attacking as Combinatorial Optimization, ACL 2020 (Long) [paper] [codebase]
  9. {Fanchao Qi*, Junjie Huang*}, Chenghao Yang, Zhiyuan Liu et al., Modeling Semantic Compositionality with Sememe Knowledge, ACL 2019 (Long & Oral) [paper] [codebase]

Workshop Papers

  1. Chaoqi Wang, Yibo Jiang, Chenghao Yang, Han Liu, Yuxin Chen., Beyond Reverse KL: Generalizing Direct Preference Optimization with Diverse Divergence Constraints, SoLAR@NeurIPS 2023, Instruction@NeurIPS 2023 [paper]
  2. Chenghao Yang*, Yuhui Zhang*, Zhengping Zhou*, Zhiyuan Liu., Enhancing Transformer with Sememe Knowledge, RepL4NLP@ACL 2020 [paper]
  3. Xiangyang Mou, Mo Yu, Bingsheng Yao, Chenghao Yang, Xiaoxiao Guo, Saloni Potdar, Hui Su., Frustratingly Hard Evidence Retrieval for QA Over Books, NUSE@ACL 2020 [paper]

Service

  • Workshop Organizer: ToM 2023 @ ICML 2023
  • ARR Reviewer: {Sept, Oct, Nov} 2021, {March, October} 2022, {October, December} 2023, {February} 2024
  • Conference Reviewer: COLM 2024, ICLR 2024, NeurIPS 2023, EMNLP {2021, 2022}, ACL {2020, 2021}, NAACL 2021, COLING {2020, 2022}, NLPCC 2020
  • Workshop Reviewer: TL4NLP @ NeurIPS 2022
  • Social Panel Host: "Better Developing Pretraining-based Models and Beyond" @ ICLR 2022