Kun Zhou (周昆)

Kun Zhou is a postdoctoral researcher at UC San Diego, working with Zhiting Hu and Biwei Huang. His research interests lie in Representation Learning for Foundation Models, with a focus on Multimodal Foundation Models and Unified Representation Learning, and their applications in complex planning and reasoning tasks. He obtained his Ph.D. at Renmin University of China under the supervision of Wayne Xin Zhao and Ji-Rong Wen, and his master's degree from Peking University. His work has received 14,000+ citations so far.

Research Opportunities: I am always seeking highly motivated students to work with me on various research projects. If you are interested in my current research topics on improving the advanced capabilities of LLMs and World Models (Complex Reasoning, Multimodal Understanding and Generation), feel free to email me.

News

We released PAN, a World Model for General, Interactable, and Long-Horizon World Simulation!

Our Vision-G1 has been accepted by AAAI 2026! See you in Singapore!

Three of our papers have been accepted by NeurIPS 2025, with one Spotlight! Thanks to my collaborators!

Research Interests

Currently, my primary research lies in building Large-scale Multimodal Foundation Models. Concretely, I focus on training World Models and Multimodal Agents, improving their Width (Universal Memory) and Depth (Reasoning and Planning Capabilities).

Highlighted Projects

Most of my research work is open-source. Here are some of my favorite projects!

  • PAN
    • A World Model with 21B Parameters for General, Interactable, and Long-Horizon World Simulation!
  • JiuZhang3.0
A series of LLMs achieving new SOTA performance on mathematical reasoning tasks, at only 1/4 of the cost for training and data synthesis!
  • YuLan-Chat
An LLM pre-trained on over 1.6T tokens, and then post-trained via curriculum learning.
  • LLMBook
A Chinese-language book for everyone to master the knowledge of large language models.
  • LLMSurvey
    • A collection of papers and resources related to Large Language Models. The organization of papers refers to our survey “A Survey of Large Language Models”.

Experiences

2022.4 - 2023.6, Research Intern, NLC Group, MSRA.

Mentors: Yeyun Gong, Nan Duan

2021.9 - 2022.4, Research Intern, iFLYTEK Research.

Mentors: Jing Sha, Shijin Wang

2019.12 - 2021.5, Research Intern, NLP Center, Meituan Dianping.

Mentors: Sirui Wang, Fuzheng Zhang

2018.8 - 2019.6, Research Intern, XiaoIce, Microsoft Asia.

Mentors: Kai Zhang, Yu Wu

Awards and Honors

  • EACL 2024 Evaluation and Model Insight Award
  • Six highly-cited papers selected among the most influential KDD/WWW/CIKM papers by PaperDigest
  • 2022 Baidu Scholarship (10 PhD Students) Link
  • 2022 Bytedance Scholarship (10 PhD Students in China Mainland) Link
  • 2022 MSRA Fellowship (12 PhD Students in Asia-Pacific region) Link
  • 2022 Baosteel Scholarship (12 Students in RUC).
  • 2022 National Scholarship.
  • LIC2021 Multi-Skill Dialog Challenge Link
    • Ranked 1st in Automatic Metrics Track, 3rd in Human Evaluation Track
  • LIC2020 Conversational Recommendation Challenge Link
    • Ranked 2nd in Automatic Metrics Track, 4th in Human Evaluation Track
  • 2018 Jane Street Electronic Trading Challenge
    • Ranked 1st
  • 2018 The Data Open Challenge, Citadel Datathon
    • Ranked 2nd
  • 2016 American Mathematical Contest in Modeling
    • Honorable Mention
  • 2015 China Undergraduate Mathematical Contest in Modeling
    • National Second Prize
  • 2015 National Zhou Peiyuan College Student Mechanics Competition
    • National Third Prize
  • 2015 Jiangsu Province Undergraduate Mechanical Competition
    • First Prize

Service

  • IJCAI 2021
    • Senior PC Reviewer
  • AAAI, IJCAI, KDD, SIGIR, WWW, WSDM, ACL, EMNLP, COLING, TOIS, TORS
    • PC Reviewer