EvaGaussians: Event Stream Assisted Gaussian Splatting from Blurry Images

Comparison results of novel view synthesis on selected scenes.

Abstract

3D Gaussian Splatting (3D-GS) has demonstrated exceptional capabilities in synthesizing novel views of 3D scenes. However, its training relies heavily on high-quality images and precise camera poses. Meeting these criteria can be challenging in non-ideal real-world conditions, where motion-blurred images frequently occur due to high-speed camera movements or low-light environments. To address these challenges, we introduce Event Stream Assisted Gaussian Splatting (EvaGaussians), a novel approach that harnesses event streams captured by event cameras to facilitate the learning of high-quality 3D-GS from blurry images. Capitalizing on the high temporal resolution and dynamic range offered by event streams, we seamlessly integrate them into the initialization and optimization of 3D-GS, thereby enhancing the acquisition of high-fidelity novel views with intricate texture details. We also contribute two novel datasets comprising RGB frames, event streams, and corresponding camera parameters, covering a wide variety of scenes and camera motions. Comparison results show that our approach not only excels in generating high-fidelity novel views, but also offers faster training and inference than prior approaches.

Method Overview

Harnessing the exceptional temporal resolution and dynamic range offered by event streams, we first use them to assist in the initialization of 3D-GS and camera poses. We then jointly optimize the 3D-GS parameters and recover the exposure-time camera trajectories in a bundle-adjustment manner, using a blur formation model together with an event reconstruction loss (a minimal sketch follows the pipeline figure below). Leveraging the continuously recorded event streams, we further introduce two event-assisted geometry regularization terms that remain effective beyond the exposure time and stabilize the geometry of 3D-GS.


Figure: overview of the EvaGaussians pipeline.

Comparison Results

Quantitative comparisons of novel view synthesis across large-scale, medium-scale, object-level, and real-world scenes. The table reports the average performance at each scale, showing that our method consistently surpasses previous state-of-the-art approaches across all metrics. Best results are highlighted in bold; second-best results are underlined.



Interactive Examples


Image comparisons between our method and the state-of-the-art method EvDNeRF. The first three examples report PSNR (higher is better); the last reports RankIQA, a no-reference image quality metric (lower is better).

Example 1 (PSNR ↑): Ours 23.71 dB vs. EvDNeRF 22.23 dB
Example 2 (PSNR ↑): Ours 24.88 dB vs. EvDNeRF 21.62 dB
Example 3 (PSNR ↑): Ours 30.26 dB vs. EvDNeRF 29.69 dB
Example 4 (RankIQA ↓): Ours 5.09 vs. EvDNeRF 5.25