Urban Radiance Fields

CVPR 2022


Paper

arXiv

Video

Abstract

The goal of this work is to perform 3D reconstruction and novel view synthesis from data captured by scanning platforms commonly deployed for world mapping in urban outdoor environments (e.g., Street View). Given a sequence of posed RGB images and lidar sweeps acquired by cameras and scanners moving through an outdoor scene, we produce a model from which 3D surfaces can be extracted and novel RGB images can be synthesized. Our approach extends Neural Radiance Fields, which has been demonstrated to synthesize realistic novel images for small scenes in controlled settings, with new methods for leveraging asynchronously captured lidar data, for addressing exposure variation between captured images, and for leveraging predicted image segmentations to supervise densities on rays pointing at the sky. Each of these three extensions provides significant performance improvements in experiments on Street View data. Our system produces state-of-the-art 3D surface reconstructions and synthesizes higher quality novel views in comparison to both traditional methods (e.g., COLMAP) and recent neural representations (e.g., Mip-NeRF).
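To make the three extensions concrete, below is a minimal JAX sketch of the kinds of auxiliary loss terms they suggest: an expected-depth penalty for rays with an associated lidar return, a per-image affine color correction applied before the photometric loss to absorb exposure differences, and a penalty on accumulated opacity along rays whose pixels a segmentation model labels as sky. This is an illustration under simplified assumptions, not the authors' implementation; all function names are placeholders, and the exact formulations in the paper differ in detail (for example, the lidar supervision also includes a line-of-sight term).

```python
import jax.numpy as jnp


def volumetric_weights(sigma, deltas):
    """Standard NeRF compositing weights w_i = T_i * (1 - exp(-sigma_i * delta_i))."""
    alpha = 1.0 - jnp.exp(-sigma * deltas)                       # per-sample opacity
    trans = jnp.cumprod(1.0 - alpha + 1e-10, axis=-1)            # running transmittance
    trans = jnp.concatenate([jnp.ones_like(trans[..., :1]),      # shift so T_1 = 1
                             trans[..., :-1]], axis=-1)
    return alpha * trans


def lidar_depth_loss(sigma, deltas, t_mid, lidar_depth, has_lidar):
    """Pull the expected ray termination depth toward the lidar depth on rays
    that have a lidar return (has_lidar is a 0/1 mask per ray)."""
    weights = volumetric_weights(sigma, deltas)                  # [num_rays, num_samples]
    expected_depth = (weights * t_mid).sum(axis=-1)
    return jnp.mean(has_lidar * (expected_depth - lidar_depth) ** 2)


def exposure_corrected_rgb_loss(rendered_rgb, gt_rgb, affine):
    """Map the rendered color through a per-image affine transform (3x4 matrix per ray)
    before the L2 loss, so global exposure/white-balance shifts are not baked into
    the radiance field."""
    corrected = jnp.einsum('nij,nj->ni', affine[..., :3, :3], rendered_rgb) + affine[..., :3, 3]
    return jnp.mean((corrected - gt_rgb) ** 2)


def sky_loss(sigma, deltas, is_sky):
    """Penalize any accumulated opacity on rays whose pixels are segmented as sky,
    pushing densities along those rays toward zero."""
    weights = volumetric_weights(sigma, deltas)
    opacity = weights.sum(axis=-1)                               # total alpha per ray
    return jnp.mean(is_sky * opacity ** 2)
```

In training, terms like these would simply be added, with scalar weights, to the usual photometric reconstruction loss of the underlying NeRF model.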

Video

Novel View Synthesis

Click on a city to visualize a novel camera trajectory (it may take a moment to load).

Mesh Reconstruction

We use our method to extract colored meshes and visualize them in the browser (they may take a moment to load). A rough sketch of this kind of extraction is shown below.
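As one plausible way to pull a colored mesh out of a trained radiance field for such a viewer, the sketch below samples the density on a regular grid, runs marching cubes, and colors the resulting vertices by querying the color network. The names `density_fn` and `color_fn`, the grid resolution, and the iso-level are all assumptions for illustration; the paper's actual mesh-extraction pipeline may differ.

```python
import numpy as np
from skimage import measure
import trimesh


def extract_colored_mesh(density_fn, color_fn, bounds, resolution=256, level=25.0):
    """Query a learned density field on a regular grid, run marching cubes, and
    color the vertices. `density_fn` maps (N, 3) points to (N,) densities and
    `color_fn` maps (N, 3) points to (N, 3) RGB values in [0, 1]; both are
    placeholders for the trained model."""
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    axes = [np.linspace(lo[i], hi[i], resolution) for i in range(3)]
    grid = np.stack(np.meshgrid(*axes, indexing='ij'), axis=-1).reshape(-1, 3)
    sigma = density_fn(grid).reshape(resolution, resolution, resolution)

    # Marching cubes on the density grid; `level` is the iso-surface threshold.
    verts, faces, _, _ = measure.marching_cubes(sigma, level=level)
    verts = lo + verts / (resolution - 1) * (hi - lo)   # voxel indices -> world coordinates

    colors = (np.clip(color_fn(verts), 0.0, 1.0) * 255).astype(np.uint8)
    return trimesh.Trimesh(vertices=verts, faces=faces, vertex_colors=colors)
```

The resulting mesh can then be exported (e.g. to glTF) and loaded by a browser viewer such as three.js.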

Citation

Acknowledgements

The video was made by the authors using Blender and Adobe Premiere Pro. The interactive world map is based on d3.js, and the mesh visualization uses three.js.