Event3DGS: Event-Based 3D Gaussian Splatting for High-Speed Robot Egomotion
CoRL 2024

University of Maryland, College Park
*Equal Contribution

Left: Conventional (frame-based) 3D Gaussian Splatting fails to reconstruct geometric details due to motion blur caused by high-speed robot egomotion. Right: By exploiting the high temporal resolution of event cameras, Event3DGS can effectively reconstruct structure and appearance in the presence of fast egomotion.

Abstract

By combining differentiable rendering with explicit point-based scene representations, 3D Gaussian Splatting (3DGS) has demonstrated breakthrough 3D reconstruction capabilities. However, to date 3DGS has had limited impact on robotics, where high-speed egomotion is pervasive: Egomotion introduces motion blur and leads to artifacts in existing frame-based 3DGS reconstruction methods.

To address this challenge, we introduce Event3DGS, an event-based 3DGS framework. By exploiting the exceptional temporal resolution of event cameras, Event3DGS can reconstruct high-fidelity 3D structure and appearance under high-speed egomotion. Extensive experiments on multiple synthetic and real-world datasets demonstrate the superiority of Event3DGS compared with existing event-based dense 3D scene reconstruction frameworks; Event3DGS substantially improves reconstruction quality (+3 dB) while reducing computational costs by 95%. Our framework also allows one to incorporate a few motion-blurred frame-based measurements into the reconstruction process to further improve appearance fidelity without loss of structural accuracy.

Method



The proposed Event3DGS efficiently reconstructs a 3D scene representation from a sequence of events (from either a grayscale or a color event camera) under high-speed robot egomotion and low-light conditions.

  • Accumulating & Sampling Event Frames: First, we use a neutralization-aware accumulator and a sparsity-aware sampling strategy to process the input event stream into frames (see the accumulation sketch after this list).
  • Event-based Reconstruction: Then, the sampled event frames serve as differential supervision on pairs of corresponding rendered views, optimizing the 3D Gaussians to reconstruct sharp structure and appearance under fast egomotion (see the supervision-loss sketch below).
  • Progressive Training: As point initialization, which is essential for Gaussian Splatting reconstruction, is challenging to derive directly from event streams, we instead filter high-density regions from a pretrained Event3DGS model and then train in a progressive manner to enhance reconstruction of structural details (see the point-filtering sketch below).
  • (Optional) Blur-aware Appearance Refinement: As an optional component, we use a few motion-blurred RGB images to fine-tune the appearance-related parameters of Event3DGS, further improving visual fidelity while preserving the sharp structures obtained from event sequences (see the refinement sketch below).
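
The following is a minimal sketch of the accumulation and sampling step, assuming events arrive as (t, x, y, p) records with polarity in {-1, +1}; the function names and the noise-count heuristic are illustrative and not taken from the released code.

import numpy as np

def accumulate_event_frame(events, t0, t1, height, width):
    # Sum signed polarities per pixel over the window [t0, t1).
    mask = (events["t"] >= t0) & (events["t"] < t1)
    frame = np.zeros((height, width), dtype=np.float32)
    # np.add.at accumulates repeated (y, x) indices instead of overwriting them.
    np.add.at(frame,
              (events["y"][mask], events["x"][mask]),
              events["p"][mask].astype(np.float32))
    return frame

def sample_event_windows(event_counts, num_windows, min_events):
    # Toy sparsity-aware sampling: drop windows dominated by sensor noise
    # (too few events), then spread the remaining picks evenly in time.
    valid = [i for i, c in enumerate(event_counts) if c >= min_events]
    picks = np.linspace(0, len(valid) - 1, num=min(num_windows, len(valid)))
    return [valid[int(i)] for i in picks]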
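
Below is a minimal sketch of the differential supervision between rendered views, assuming grayscale renders in [0, 1] and a single global contrast threshold; the L1 penalty and the threshold value are placeholder choices, and the paper's exact loss may differ.

import torch
import torch.nn.functional as F

def event_supervision_loss(render_start, render_end, event_frame,
                           contrast_threshold=0.25, eps=1e-6):
    # Event cameras respond to changes in log intensity, so compare the two
    # rendered views in log space rather than linear intensity.
    log_diff = torch.log(render_end + eps) - torch.log(render_start + eps)
    # Each event nominally corresponds to one contrast-threshold step, so the
    # accumulated event frame scales to a predicted log-intensity change.
    target = contrast_threshold * event_frame
    return F.l1_loss(log_diff, target)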
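
A sketch of the point re-initialization used for progressive training, under the assumption that high-density regions can be proxied by high-opacity Gaussians from the coarse pass; the threshold and subsampling are illustrative only.

import torch

def filter_high_density_points(means, opacities,
                               opacity_thresh=0.5, max_points=100_000):
    # Keep the centers of Gaussians that contribute strongly after the coarse
    # pass and reuse them as the point initialization for the next stage.
    keep = opacities > opacity_thresh
    points = means[keep]
    if points.shape[0] > max_points:
        # Randomly subsample so the next stage starts from a manageable cloud.
        idx = torch.randperm(points.shape[0])[:max_points]
        points = points[idx]
    return points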
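
Finally, a sketch of one step of the optional blur-aware appearance refinement. It assumes the blurred frame can be modeled as the average of sharp renders along the exposure's camera trajectory and that only appearance parameters are registered with the optimizer; the argument names and the blur model are assumptions, not the released API.

import torch
import torch.nn.functional as F

def appearance_refinement_step(gaussians, render_fn, exposure_poses,
                               blurred_image, optimizer):
    # Geometry parameters stay frozen; `optimizer` holds only appearance
    # parameters (e.g. SH colors, opacity).
    renders = [render_fn(gaussians, pose) for pose in exposure_poses]
    # Assumed blur model: average the sharp renders over the exposure window.
    synthetic_blur = torch.stack(renders).mean(dim=0)
    loss = F.l1_loss(synthetic_blur, blurred_image)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()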

Results


Visualization on synthetic scenes (event-only). Event sequences were generated using Blender and an event simulator. Event3DGS excels at reconstructing sharp structures and appearance details, such as the ficus leaves (2nd row) and drum racks (3rd row).

Visualization on real-world scenes (event-only). Event sequences were emulated from experimentally captured frame-based data using v2e. Event3DGS effectively captures fine details (e.g., the grass behind the bicycle) and preserves 3D consistency.

Visualization on low-light experimental scenes (event-only). Event sequences were captured with a DAVIS-346C event camera. Event3DGS exhibits superior performance in accurately reconstructing object edges and suppressing noise on non-event background pixels.

BibTeX

@inproceedings{xiongevent3dgs,
    title={Event3DGS: Event-Based 3D Gaussian Splatting for High-Speed Robot Egomotion},
    author={Xiong, Tianyi and Wu, Jiayi and He, Botao and Fermuller, Cornelia and Aloimonos, Yiannis and Huang, Heng and Metzler, Christopher},
    booktitle={8th Annual Conference on Robot Learning},
    year={2024}
}