DynamicVLA: A Vision-Language-Action Model for Dynamic Object Manipulation
Abstract
DynamicVLA addresses dynamic object manipulation with a compact vision-language-action model that combines temporal reasoning and closed-loop adaptation, supported by a new benchmark for dynamic manipulation tasks.
Manipulating dynamic objects remains an open challenge for Vision-Language-Action (VLA) models, which, despite strong generalization in static manipulation, struggle in dynamic scenarios requiring rapid perception, temporal anticipation, and continuous control. We present DynamicVLA, a framework for dynamic object manipulation that integrates temporal reasoning and closed-loop adaptation through three key designs: 1) a compact 0.4B VLA that uses a convolutional vision encoder for spatially efficient, structurally faithful encoding, enabling fast multimodal inference; 2) Continuous Inference, which overlaps reasoning and execution for lower latency and timely adaptation to object motion; and 3) Latent-aware Action Streaming, which bridges the perception-execution gap by enforcing temporally aligned action execution. To fill the gap in dynamic manipulation data, we introduce the Dynamic Object Manipulation (DOM) benchmark, built from scratch with an automatic data collection pipeline that efficiently gathers 200K synthetic episodes across 2.8K scenes and 206 objects and enables fast collection of 2K real-world episodes without teleoperation. Extensive evaluations demonstrate substantial improvements in response speed, perception, and generalization, positioning DynamicVLA as a unified framework for general dynamic object manipulation across embodiments.
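The abstract describes the two inference-time mechanisms only at a high level. As a rough, hypothetical sketch (not the authors' implementation), the Python below shows one way overlapping reasoning and execution (Continuous Inference) and time-aligned action execution in the spirit of Latent-aware Action Streaming could be wired together. The `policy.predict_chunk`, `camera.latest_frame`, and `robot.apply` interfaces, as well as the 20 Hz control rate, are assumptions made for illustration only.

```python
# Hypothetical sketch: overlap VLA inference with action execution, and index into
# each predicted action chunk by wall-clock time so stale actions are skipped.
import threading
import time
from queue import Queue

CONTROL_DT = 0.05  # assumed 20 Hz control rate (not specified in the paper)


def inference_loop(policy, camera, chunk_queue):
    """Continuous Inference: keep running the VLA on the latest frame while the robot moves."""
    while True:
        obs = camera.latest_frame()              # hypothetical camera API
        t_obs = time.monotonic()                 # timestamp of this observation
        actions = policy.predict_chunk(obs)      # hypothetical: list of actions, one per CONTROL_DT
        chunk_queue.put((t_obs, actions))        # hand the new chunk to the executor


def execution_loop(robot, chunk_queue):
    """Execute the freshest chunk, skipping the portion that is already stale."""
    t_obs, actions = chunk_queue.get()           # block until the first prediction arrives
    while True:
        while not chunk_queue.empty():           # always switch to the newest available chunk
            t_obs, actions = chunk_queue.get()
        idx = int((time.monotonic() - t_obs) / CONTROL_DT)
        if idx < len(actions):                   # only execute the action aligned with the current time
            robot.apply(actions[idx])            # hypothetical robot API
        time.sleep(CONTROL_DT)


def run(policy, camera, robot):
    """Run inference in a background thread and execution in the foreground, so the two overlap."""
    chunk_queue = Queue()
    threading.Thread(target=inference_loop,
                     args=(policy, camera, chunk_queue),
                     daemon=True).start()
    execution_loop(robot, chunk_queue)
```

In this reading, overlapping the two loops is what reduces reaction latency, while indexing the chunk by the elapsed time since its observation keeps executed actions aligned with the world the policy actually saw; the paper's own mechanism may differ in its details.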
Community
TL;DR: DynamicVLA enables open-ended dynamic object manipulation by pairing a compact 0.4B VLM with low-latency Continuous Inference and Latent-aware Action Streaming, evaluated at scale on the new DOM benchmark in both simulation and the real world.
- GitHub: https://github.com/hzxie/DynamicVLA
- Project Page: https://haozhexie.com/project/dynamic-vla
- Spotlight Video: https://www.youtube.com/watch?v=NmJnHcI04_Q
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- PosA-VLA: Enhancing Action Generation via Pose-Conditioned Anchor Attention (2025)
- PALM: Progress-Aware Policy Learning via Affordance Reasoning for Long-Horizon Robotic Manipulation (2026)
- Video2Act: A Dual-System Video Diffusion Policy with Robotic Spatio-Motional Modeling (2025)
- Spatial-Aware VLA Pretraining through Visual-Physical Alignment from Human Videos (2025)
- ReViP: Reducing False Completion in Vision-Language-Action Models with Vision-Proprioception Rebalance (2026)
- Robotic VLA Benefits from Joint Learning with Motion Image Diffusion (2025)
- Clutter-Resistant Vision-Language-Action Models through Object-Centric and Geometry Grounding (2025)