ABSTRACT: This paper introduces a self-supervised, end-to-end architecture that jointly learns part-level implicit shape and appearance models and optimizes motion parameters, without requiring any 3D supervision, motion, or semantic annotation. The training process is similar to that of the original NeRF, but extends the ray-marching and volumetric-rendering procedure to compose the two fields.
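A minimal sketch of how two radiance fields might be composed inside the volumetric-rendering step: densities are added, and each sample's color is blended in proportion to each field's density. The function name, the additive-density rule, and all variable names are my assumptions for illustration, not the paper's actual API.

```python
import numpy as np

def composite_two_fields(sigma_a, rgb_a, sigma_b, rgb_b, deltas):
    """Render one ray by compositing two fields' densities and colors.

    sigma_a, sigma_b: (N,) densities of the two fields at the ray samples
    rgb_a, rgb_b:     (N, 3) colors of the two fields at the ray samples
    deltas:           (N,) distances between adjacent samples
    """
    sigma = sigma_a + sigma_b                       # densities compose additively
    w = sigma_a / (sigma + 1e-10)                   # per-sample blend weight for field A
    rgb = w[:, None] * rgb_a + (1.0 - w[:, None]) * rgb_b
    alpha = 1.0 - np.exp(-sigma * deltas)           # opacity of each sample
    T = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))  # transmittance to each sample
    weights = T * alpha                             # contribution of each sample
    return (weights[:, None] * rgb).sum(axis=0)     # composited ray color

# Example: 4 samples along one ray; field A is red, field B is blue
sigma_a = np.array([0.5, 0.5, 0.5, 0.5])
sigma_b = np.array([0.0, 0.0, 2.0, 2.0])
rgb_a = np.tile([1.0, 0.0, 0.0], (4, 1))
rgb_b = np.tile([0.0, 0.0, 1.0], (4, 1))
deltas = np.full(4, 0.25)
color = composite_two_fields(sigma_a, rgb_a, sigma_b, rgb_b, deltas)
print(color)  # a (3,) RGB color mixing red and blue contributions
```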
Reading: "LATITUDE: Robotic Global Localization with Truncated Dynamic Low-pass Filter in City-scale NeRF"
This paper proposes a two-stage localization mechanism for city-scale NeRF scenes.
Reading: "NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis"
This is a summary of the paper "NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis". Keywords: scene representation, view synthesis, image-based rendering, volume rendering, 3D deep learning.
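The volume-rendering step at the core of NeRF is the standard quadrature C = Σᵢ Tᵢ (1 − exp(−σᵢδᵢ)) cᵢ, with transmittance Tᵢ = exp(−Σⱼ<ᵢ σⱼδⱼ). A minimal NumPy sketch (variable names are mine, not from the paper's code):

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Numerical quadrature of the volume-rendering integral along one ray.

    sigmas: (N,) volume densities at the sample points
    colors: (N, 3) RGB emitted at the sample points
    deltas: (N,) distances between adjacent samples
    """
    alpha = 1.0 - np.exp(-sigmas * deltas)                      # per-sample opacity
    T = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))   # transmittance to each sample
    weights = T * alpha                                         # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)              # expected ray color

# A nearly opaque first sample dominates the rendered color
sigmas = np.array([50.0, 1.0, 1.0])
colors = np.array([[0.0, 1.0, 0.0],   # green
                   [1.0, 0.0, 0.0],   # red
                   [1.0, 0.0, 0.0]])  # red
deltas = np.full(3, 0.5)
ray_rgb = volume_render(sigmas, colors, deltas)
print(ray_rgb)  # green channel ≈ 1, red channel ≈ 0
```

Because later samples are attenuated by the accumulated transmittance, an opaque first sample occludes everything behind it, which is exactly the behavior the alpha-compositing form is meant to capture.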