Yujia Qin (秦禹嘉)
yujiaqin16 <at> gmail.com

I focus on LLM/VLM-based agents. I received my Ph.D. in Computer Science from Tsinghua University in 2024 (advisor: Zhiyuan Liu) and my B.E. in Electronic Information Science and Technology in 2020 (advisor: Ji Wu).

Google Scholar  /  Twitter

Education

Ph.D. Computer Science, Tsinghua University, 2020-2024

B.E. Electronic Information Science and Technology, Tsinghua University, 2016-2020

Selected Publications
(* indicates equal contribution)
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Yujia Qin*, Shihao Liang*, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, et al.
ICLR 2024 (spotlight)
paper  /  code
Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models
Ning Ding*, Yujia Qin*, Guang Yang, Fuchao Wei, Zonghan Yang, Yusheng Su, et al.
Nature Machine Intelligence (cover paper)
paper  /  code
WebCPM: Interactive Web Search for Chinese Long-form Question Answering
Yujia Qin, Zihan Cai, Dian Jin, Lan Yan, Shihao Liang, Kunlun Zhu, et al.
ACL 2023
paper  /  code
Recyclable Tuning for Continual Pre-training
Yujia Qin*, Cheng Qian*, Xu Han, Yankai Lin, Huadong Wang, Ruobing Xie, et al.
Findings of ACL 2023
paper  /  code
Tool Learning with Foundation Models
Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, et al.
preprint
paper  /  code
Exploring Mode Connectivity for Pre-trained Language Models
Yujia Qin*, Cheng Qian*, Jing Yi*, Weize Chen, Yankai Lin, Xu Han, et al.
EMNLP 2022
paper  /  code
Moderate-fitting as a Natural Backdoor Defender for Pre-trained Language Models
Biru Zhu*, Yujia Qin*, Ganqu Cui, Yangyi Chen, Weilin Zhao, Chong Fu, et al.
NeurIPS 2022
paper  /  code
Exploring Universal Intrinsic Task Subspace via Prompt Tuning
Yujia Qin*, Xiaozhi Wang*, Yusheng Su, Yankai Lin, Ning Ding, Zhiyuan Liu, et al.
IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP)
paper  /  code
ELLE: Efficient Lifelong Pre-training for Emerging Data
Yujia Qin*, Jiajie Zhang*, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou
Findings of ACL 2022
paper  /  code
Knowledge Inheritance for Pre-trained Language Models
Yujia Qin, Yankai Lin, Jing Yi, Jiajie Zhang, Xu Han, Zhengyan Zhang, et al.
NAACL 2022 (oral)
paper  /  code
ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning
Yujia Qin, Yankai Lin, Ryuichi Takanobu, Zhiyuan Liu, Peng Li, Heng Ji, et al.
ACL-IJCNLP 2021 (oral)
paper  /  code
Learning from Explanations with Neural Execution Tree
Ziqi Wang*, Yujia Qin*, Wenxuan Zhou, Jun Yan, Qinyuan Ye, Leonardo Neves, et al.
ICLR 2020
paper  /  code  /  Project NExT
Open-source Projects

(showing only those where I'm the project lead):

  • XAgent: An autonomous AI agent that can accomplish various tasks.
  • ToolBench: A platform for training and evaluating large language models in tool learning.
  • BMTools: A tool learning framework for large language models.
  • WebCPM: Interactive web search for Chinese long-form question answering.
  • ToolLearningPapers: A collection of papers on tool learning for large language models.
Awards and Professional Service

Awards

  • Beijing Excellent Graduate (2024)
  • Baidu Scholarship (2023)
  • First Prize Scholarship of Tencent Rhino-Bird Program (2021)

Professional Service

  • Reviewer for ACL, ICLR, NeurIPS, COLM, EMNLP, SIGIR, AAAI, COLING, etc.

Previous Research Focus

  • Knowledge-Integrated Language Models (2019-2020): Integrating human-expert knowledge into neural language models.
  • Efficient Pre-training (2020-2021): Improving the pre-training efficiency of large models, aiming to train more capable models with less computation.
  • Efficient Fine-tuning (2021-2022): Investigating ways to update only a minimal number of parameters when adapting large models to downstream tasks, reducing computational and storage costs during adaptation.
  • Tool Learning (2022-): Exploring how to endow large models with higher-order cognitive abilities, allowing them to use complex tools in a manner similar to humans.
Work Experience

Seed, ByteDance, 2024.7 - Now

Founder of SeqAI Inc., 2024.1 - 2024.7

Pattern Recognition Group, WeChat, Tencent, Research Intern hosted by Dr. Peng Li and Dr. Yankai Lin, 2020.5 - 2024.1

Laboratory of Multimedia and Information Processing, Tsinghua University, Research Intern (undergraduate thesis) advised by Prof. Ji Wu, 2019.9 - 2020.6

Intelligence and Knowledge Discovery Research Lab, USC, Research Intern hosted by Prof. Xiang Ren, 2019.6 - 2019.9

Pacific Century CyberWorks, Hong Kong, Summer 2018


The design of this website is borrowed from Jon Barron, last updated: 2024.9