3D Gaussian Splatting (3DGS) has emerged as a state-of-the-art method for novel view synthesis. However, its performance relies heavily on dense, high-quality input imagery, an assumption that is often violated in real-world applications, where data is typically sparse and motion-blurred. These two issues create a vicious cycle: sparse views lack the multi-view constraints necessary to resolve motion blur, while motion blur erases the high-frequency details crucial for aligning the limited views. As a result, reconstruction often fails catastrophically, yielding fragmented geometry and a low-frequency bias. To break this cycle, we introduce CoherentGS, a novel framework for high-fidelity 3D reconstruction from sparse and blurry images. Our key insight is to address these compound degradations with a dual-prior strategy. Specifically, we combine two pre-trained generative models: a specialized deblurring network that restores sharp details and provides photometric guidance, and a diffusion model that offers geometric priors to fill in unobserved regions of the scene. This strategy is supported by several key techniques, including a consistency-guided camera exploration module that adaptively steers the generative process, and a depth regularization loss that ensures geometric plausibility. We evaluate CoherentGS through quantitative and qualitative experiments on synthetic and real-world scenes with as few as 3, 6, and 9 input views. Our results demonstrate that CoherentGS significantly outperforms existing methods, setting a new state of the art for this challenging task.
Overview of CoherentGS. Our framework synergizes two generative priors to resolve sparse-view ambiguity and motion blur. (Left) We initialize the Gaussian primitives using poses estimated by COLMAP. (Top) Photometric Restoration via Deblurring Priors: this branch models the physical exposure trajectory to supervise blurry synthesis via $\mathcal{L}_{\text{blurry}}$ and distills sharp high-frequency details from a pretrained deblurring network using a perceptual loss $\mathcal{L}_{\text{pr}}$. (Bottom) Geometric Guidance via Repair Diffusion Priors: this branch utilizes a diffusion model to repair structural defects in explorative viewpoints. A score distillation loss $\mathcal{L}_{\text{geo}}$ and a depth regularization loss $\mathcal{L}_{\text{reg}}$ are applied to guide geometry completion and ensure consistency.
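The two branches above can be sketched in a few lines. Below is a minimal, illustrative NumPy sketch (not the paper's implementation): blurry supervision is approximated by averaging sharp renders sampled along a linearly interpolated exposure trajectory, and the four losses are combined with placeholder weights. The function names, the linear pose interpolation (real systems typically interpolate on SE(3)), and the weight values are all assumptions for illustration.

```python
import numpy as np

def synthesize_blur(render_fn, pose_start, pose_end, n_samples=8):
    """Approximate a motion-blurred image by averaging sharp renders
    sampled along the exposure trajectory from pose_start to pose_end.
    `render_fn` (hypothetical interface) maps a pose to an HxWx3 image.
    Linear interpolation is a simplification of trajectory modeling."""
    ts = np.linspace(0.0, 1.0, n_samples)
    renders = [render_fn((1.0 - t) * pose_start + t * pose_end) for t in ts]
    return np.mean(renders, axis=0)  # physical blur = temporal average

def total_loss(l_blurry, l_pr, l_geo, l_reg,
               w_pr=0.1, w_geo=0.05, w_reg=0.01):
    """Weighted sum of the four losses named in the figure; the
    weights here are illustrative placeholders, not the paper's."""
    return l_blurry + w_pr * l_pr + w_geo * l_geo + w_reg * l_reg
```

The synthesized blurry image would be compared against the captured blurry input to form $\mathcal{L}_{\text{blurry}}$, while the individual sharp renders feed the deblurring-prior and diffusion-prior branches.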
Drag the slider to compare methods.
@article{feng2025coherentgs,
author = {Feng, Chaoran and Xu, Zhankuo and Li, Yingtao and Zhao, Jianbin and Yang, Jiashu and Yu, Wangbo and Yuan, Li and Tian, Yonghong},
title = {Breaking the Vicious Cycle: Coherent 3D Gaussian Splatting from Sparse and Motion-Blurred Views},
journal = {arXiv preprint},
year = {2025},
}