I am Yu Zhang (张彧), currently a Research Scientist at ByteDance. If you are interested in any form of academic collaboration, please feel free to email me at aaron9834@icloud.com.

I earned my PhD at the College of Computer Science and Technology, Zhejiang University (浙江大学计算机科学与技术学院), under the supervision of Prof. Zhou Zhao (赵洲). Before that, I graduated from Chu Kochen Honors College, Zhejiang University (浙江大学竺可桢学院), with dual bachelor's degrees in Computer Science and Automation. I have also been a visiting scholar at the University of Rochester (with Prof. Zhiyao Duan) and the University of Massachusetts Amherst (with Prof. Przemyslaw Grabowicz).

My research interests focus on Multi-Modal Generative AI, particularly Spatial Audio, Music, Singing Voice, and Speech. I have published 10+ first-author papers at top international AI conferences, including NeurIPS, ACL, and AAAI.

🔥 News

📝 Publications

*denotes co-first authors

🔊 Spatial Audio

ACM-MM 2025

ISDrama: Immersive Spatial Drama Generation through Multimodal Prompting
Yu Zhang, Wenxiang Guo, Changhao Pan, et al.

Demo Hugging Face

  • MRSDrama is the first multimodal recorded spatial drama dataset, containing binaural drama audio, scripts, videos, geometric poses, and textual prompts.
  • ISDrama is the first immersive spatial drama generation model through multimodal prompting.
  • Our work has been featured by multiple media outlets and forums, including WeChat and Zhihu.

🎼 Music

EMNLP 2025

Versatile Framework for Song Generation with Prompt-based Control
Yu Zhang, Wenxiang Guo, Changhao Pan, et al.

Demo

  • VersBand is a multi-task song generation framework that synthesizes high-quality, aligned songs with prompt-based control.
  • Our work has been featured by multiple media outlets and forums, including WeChat and Zhihu.

🎙️ Singing Voice

ACL 2025

TCSinger 2: Customizable Multilingual Zero-shot Singing Voice Synthesis
Yu Zhang, Ziyue Jiang, Ruiqi Li, et al.

Demo

  • TCSinger 2 is a multi-task, multilingual zero-shot SVS model supporting style transfer and style control based on diverse prompts.
  • Our work has been featured by multiple media outlets and forums, including WeChat and Zhihu.
EMNLP 2024

TCSinger: Zero-Shot Singing Voice Synthesis with Style Transfer and Multi-Level Style Control
Yu Zhang, Ziyue Jiang, Ruiqi Li, et al.

Demo

  • TCSinger is the first zero-shot SVS model for cross-lingual style transfer between speech and singing styles, along with multi-level style control.
NeurIPS 2024 Spotlight

GTSinger: A Global Multi-Technique Singing Corpus with Realistic Music Scores for All Singing Tasks
Yu Zhang, Changhao Pan, Wenxiang Guo, et al.

Demo Hugging Face

  • GTSinger is a large Global, multi-Technique, free-to-use, high-quality singing corpus with realistic music scores, designed to support all singing tasks.
  • Our work has been featured by multiple media outlets and forums, including WeChat and Zhihu.
AAAI 2024

StyleSinger: Style Transfer for Out-of-Domain Singing Voice Synthesis
Yu Zhang, Rongjie Huang, Ruiqi Li, et al.

Demo

  • StyleSinger is the first singing voice synthesis model for zero-shot style transfer from out-of-domain reference singing voice samples.

💬 Speech

💡 Others

📖 Education

💻 Industry Experience

  • 2025.08–Present, Research Scientist at ByteDance.

🔍 Research Experience

🎖 Honors and Awards

  • 2024.09, Outstanding PhD Student Scholarship of Zhejiang University.
  • 2020.06, Outstanding Graduate of Zhejiang University (Undergraduate).
  • 2019.09, First-Class Academic Scholarship of Zhejiang University (Undergraduate).

📚 Academic Services

  • Conference Reviewer: NeurIPS (2024, 2025), ICLR (2025, 2026), CVPR (2026), ACL (2024, 2025), AAAI (2026), ACM-MM (2025), EMNLP (2024, 2025), AACL (2025), EACL (2026).
  • Journal Reviewer: IEEE TASLP.