Yekun Chai
Contact:
chaiyekun (at) gmail.com
I am a staff research engineer working on large language models (LLMs) at Baidu NLP. Before that, I was affiliated with the Institute of Automation, Chinese Academy of Sciences (CASIA). I graduated from the School of Informatics at the University of Edinburgh in 2018, under the supervision of Adam Lopez and Naomi Saphra.
My research centers on the generative pre-training paradigm for NLP, with particular emphasis on:
- General language model pre-training, prompting, instruction tuning, and their variants across tasks, languages, and modalities;
- LLM alignment with human preferences;
- Augmenting LLMs with non-parametric priors.
News
- Feb 20, 2024: One paper on HumanEval-XL, a multilingual code generation benchmark, has been accepted to LREC-COLING 2024. We’ve released the code and data!
- Jan 16, 2024: One paper on reward models with tool-augmented feedback has been accepted to ICLR 2024 (spotlight). Dive into our research and code now!
- Sep 23, 2023: One paper on XAI has been accepted to the NeurIPS 2023 Datasets and Benchmarks Track. Code is available here.
- May 02, 2023: ERNIE-Code, our work on multilingual text-and-code pre-training, has been accepted to ACL 2023 Findings. Check out our code and models.