Runjia Zeng
RIT Research Assistant

Hi! I am Runjia (Ruhn-jah).ツ

I am currently a first-year Ph.D. student at RIT, advised by Prof. Dongfang Liu.

University email: rz4545@rit.edu


Education
  • Rochester Institute of Technology
    Ph.D. Student in Electrical and Computer Engineering
    Sep. 2024 - present
  • Guangdong University of Technology
    B.Eng. in Computer Science and Technology
    Sep. 2020 - Jul. 2024
Experience
  • Institute of Software, Chinese Academy of Sciences
    NLP Intern
    Apr. 2023 - Sep. 2023
Honors & Awards
  • National Scholarship Award, PRC
    2021
  • China Robot Competition & RoboCup China Open, National Champion
    2022
News
2024
  • Oct 01: [ICLR] One first-author paper on Large Language Model Fusion, under review for ICLR.
  • Sep 25: [NeurIPS] One paper accepted to NeurIPS 2024, congratulations to all the authors, myself included!
  • Sep 18: [IEEE Virtual Reality] One paper on Prompt Tuning, under review for IEEE Virtual Reality.
  • Sep 16: [IJCV] One third-author paper on Vision-Language Models, under review for IJCV.
Selected Publications
Visual Fourier Prompt Tuning

Runjia Zeng, Cheng Han, Qifan Wang, Chunshu Wu, Tong Geng, Lifu Huang, Ying Nian Wu, Dongfang Liu

Conference on Neural Information Processing Systems (NeurIPS) 2024

To tackle performance drops caused by data differences between pretraining and fine-tuning, we propose Visual Fourier Prompt Tuning (VFPT), which leverages the Fast Fourier Transform to combine spatial- and frequency-domain information, achieving better results with fewer parameters.
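To give a flavor of the idea, here is a minimal, illustrative sketch (not the paper's implementation): learnable prompt tokens whose Fourier-transformed copies are prepended alongside the spatial prompts before a frozen Transformer backbone. All names, shapes, and the use of the real part of a 2D FFT are assumptions made for this toy example.

```python
# Toy sketch of Fourier-augmented visual prompts (assumed shapes and names).
import torch
import torch.nn as nn

class FourierPromptLayer(nn.Module):
    def __init__(self, num_prompts: int = 5, dim: int = 768):
        super().__init__()
        # Learnable spatial-domain prompt tokens: the only trainable parameters here.
        self.prompts = nn.Parameter(torch.randn(num_prompts, dim) * 0.02)

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (batch, num_patches, dim) embeddings from a frozen backbone.
        b = patch_tokens.size(0)
        spatial = self.prompts.unsqueeze(0).expand(b, -1, -1)
        # 2D FFT over the (prompt, embedding) axes; keeping the real part so the
        # frequency-domain prompts stay compatible with real-valued tokens
        # (an assumption of this sketch, not the paper's exact formulation).
        fourier = torch.fft.fft2(spatial, dim=(-2, -1)).real
        # Prepend both spatial and frequency prompts to the patch tokens.
        return torch.cat([spatial, fourier, patch_tokens], dim=1)

if __name__ == "__main__":
    layer = FourierPromptLayer(num_prompts=5, dim=768)
    x = torch.randn(2, 196, 768)   # e.g. ViT-B/16 patch embeddings
    print(layer(x).shape)          # torch.Size([2, 206, 768])
```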

All publications