Papers
arxiv:2512.22905

JavisGPT: A Unified Multi-modal LLM for Sounding-Video Comprehension and Generation

Published on Dec 28, 2025
· Submitted by KAI LIU on Jan 1
Abstract

This paper presents JavisGPT, the first unified multimodal large language model (MLLM) for Joint Audio-Video (JAV) comprehension and generation. JavisGPT adopts a concise encoder-LLM-decoder architecture, featuring a SyncFusion module for spatio-temporal audio-video fusion and synchrony-aware learnable queries to bridge a pretrained JAV-DiT generator. This design enables temporally coherent video-audio understanding and generation from multimodal instructions. We design an effective three-stage training pipeline consisting of multimodal pretraining, audio-video fine-tuning, and large-scale instruction-tuning, to progressively build multimodal comprehension and generation from existing vision-language models. To support this, we further construct JavisInst-Omni, a high-quality instruction dataset with over 200K GPT-4o-curated audio-video-text dialogues that span diverse and multi-level comprehension and generation scenarios. Extensive experiments on JAV comprehension and generation benchmarks show that JavisGPT outperforms existing MLLMs, particularly in complex and temporally synchronized settings.
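The abstract describes an encoder-LLM-decoder flow in which a SyncFusion module fuses time-aligned audio and video features, and a set of synchrony-aware learnable queries bridges the fused representation to the pretrained JAV-DiT generator. The following is a minimal, framework-free sketch of that data flow; the function names and the mean-pooling approximation of learnable queries are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the SyncFusion + query-bridging flow from the abstract.
# Assumptions (not from the paper's code): features are plain Python lists,
# both streams are pre-sampled at the same rate, and learnable queries are
# approximated here by mean pooling over equal temporal chunks.

def sync_fusion(video_feats, audio_feats):
    """Fuse per-timestep video and audio features by concatenation."""
    assert len(video_feats) == len(audio_feats), "streams must be time-aligned"
    return [v + a for v, a in zip(video_feats, audio_feats)]

def bridge_with_queries(fused, num_queries=4):
    """Pool fused features into a fixed number of query slots that a
    generator could attend to (mean pooling stands in for learned queries)."""
    chunk = max(1, len(fused) // num_queries)
    slots = []
    for i in range(0, len(fused), chunk):
        group = fused[i:i + chunk]
        dim = len(group[0])
        slots.append([sum(g[d] for g in group) / len(group) for d in range(dim)])
    return slots[:num_queries]

# Toy example: 8 timesteps with 2-dim video and 2-dim audio features.
video = [[float(t), 0.0] for t in range(8)]
audio = [[0.0, float(t)] for t in range(8)]
fused = sync_fusion(video, audio)      # 8 timesteps, 4-dim fused features
queries = bridge_with_queries(fused)   # 4 fixed-size slots for the generator
```

The fixed number of query slots is what lets a variable-length audio-video input drive a generator expecting a fixed-size conditioning signal.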

Community

Paper submitter

🔥🔥🔥 JavisGPT

🌟 We introduce JavisGPT, a multimodal LLM that understands audiovisual inputs and generates synchronized sounding videos within a single unified model.

🤠 We contribute JavisInst-Omni, an instruction-tuning dataset covering diverse and complex comprehension and generation tasks on sounding videos.

๐Ÿ“ Paper: https://arxiv.org/abs/2503.23377
๐ŸŽ‰ Project: https://javisverse.github.io/JavisGPT-page/
โœจ Code: https://github.com/JavisVerse/JavisGPT




Models citing this paper 1

Datasets citing this paper 4

Spaces citing this paper 0


Collections including this paper 1