NeRF_presentation_report_20231023_RunyiYang.pdf
Oct 10, 2023
[Runyi Yang](<https://runyiyang.github.io/>) / [Runyi’s Blogs](<https://runyiyang.notion.site/Runyi-s-Blogs-f52d6bf73e104c51a4f5e80529b6a9b6>)
Original Paper
ECCV 2020 Best Paper Finalist
NeRF — Neural Radiance Fields
Abstract
- We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location (x, y, z) and viewing direction (θ, φ)) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons.
1 Introduction
Basic Idea
Since the beginnings of computer graphics, we have hoped to use computers to render photorealistic images. To discuss this, we first need to briefly review the physics of light propagation and a few related terms.
- Light reaches the human eye through physical processes such as reflection, refraction, transmission, absorption, and scattering. It is very hard for a computer to simulate this entire process.
- Reflection: When light hits a surface, it can be reflected. The laws of reflection dictate that the angle of incidence is equal to the angle of reflection.
- Refraction: This is the bending of light as it passes from one medium into another with a different refractive index.
- Transmission: Some materials allow light to pass through them, with or without some degree of refraction.
- Absorption: Materials can also absorb light, converting the energy into other forms like heat.
- Scattering: Light can be deflected in many directions when encountering particles or rough surfaces.
- NeRF instead models 3D points in space as emitting (rather than reflecting) light; the radiance varies with viewing direction, and points also obstruct light passing through them. NeRF’s principle is to reconstruct this radiance field to describe a 3D scene.
- Do the points in front block the points behind?
- Each point has a density; when light passes through dense (solid) points, its intensity decreases.
- Because of lighting and scattering, the light observed from different views should differ (e.g., highlights and shadows).
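The attenuation idea above can be sketched numerically. This is a minimal illustration (not the paper's code) assuming the exponential Beer–Lambert attenuation model that NeRF's volume rendering is built on; the function name and sample values are my own:

```python
import numpy as np

# Transmittance along a ray: light is attenuated exponentially by the
# accumulated density it has passed through (Beer-Lambert law).
# sigmas: density at each sample; deltas: distance between adjacent samples.
def transmittance(sigmas, deltas):
    # T_i = exp(-sum_{j<i} sigma_j * delta_j); T_0 = 1 (nothing blocks yet)
    optical_depth = np.cumsum(sigmas * deltas)
    return np.exp(-np.concatenate([[0.0], optical_depth[:-1]]))

sigmas = np.array([0.0, 5.0, 0.0])   # a dense "solid" point in the middle
deltas = np.array([0.1, 0.1, 0.1])
T = transmittance(sigmas, deltas)
# T[0] = T[1] = 1 (free space before the solid point), T[2] < 1 (occluded)
```

The dense middle sample reduces the transmittance for everything behind it, which is exactly how points in front block points behind.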
2 Neural Radiance Field Scene Representation
Synthesize images by sampling 5D coordinates (location and viewing direction) along camera rays, feeding those coordinates into an MLP to produce a color $(R, G, B)$ and a volume density $\sigma$, and using volume rendering techniques to composite these values into an image.
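The compositing step can be sketched with the classic discrete volume rendering sum, $\hat{C} = \sum_i T_i (1 - e^{-\sigma_i \delta_i}) c_i$. A minimal numpy sketch (array names and sample values are illustrative, not the paper's implementation):

```python
import numpy as np

# Composite per-sample colors and densities along one ray into a pixel color.
def composite(colors, sigmas, deltas):
    alphas = 1.0 - np.exp(-sigmas * deltas)                         # opacity of each sample
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]  # T_i: light surviving to sample i
    weights = trans * alphas                                        # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)                  # weighted sum of RGB values

colors = np.array([[1.0, 0.0, 0.0],   # red sample nearest the camera
                   [0.0, 0.0, 1.0]])  # blue sample behind it
sigmas = np.array([10.0, 10.0])
deltas = np.array([0.1, 0.1])
rgb = composite(colors, sigmas, deltas)  # mostly red: the near sample occludes the far one
```

Because every operation here is differentiable, gradients from an image-space loss can flow back to the per-sample colors and densities, which is what makes NeRF trainable from posed images alone.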
Break NeRF into 3 key factors: shape, appearance and rendering
- NeRF: $f: (x, y, z, \theta, \phi) \rightarrow (R, G, B, \sigma)$
- Appearance: $(x, y, z, \theta, \phi) \rightarrow (R, G, B)$
- Shape: $(x, y, z) \rightarrow \sigma$
- $\sigma(x, y, z)$: Volume Density
- The density is invariant to viewing direction, so it depends only on the position.
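This factorization can be sketched as a tiny forward pass. The sketch below uses random, untrained weights purely to show the structure: $\sigma$ is computed from position features alone, while the color also consumes the viewing direction. Layer sizes, activations, and names are my own assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative random weights (not a trained model).
W1 = rng.normal(size=(3, 64))       # position (x, y, z) -> hidden features
W_sigma = rng.normal(size=(64, 1))  # features -> density (no direction input)
W2 = rng.normal(size=(64 + 2, 3))   # features + direction (theta, phi) -> RGB

def nerf_forward(xyz, direction):
    h = np.tanh(xyz @ W1)                    # shared position features
    sigma = np.log1p(np.exp(h @ W_sigma))    # softplus keeps sigma >= 0
    rgb_in = np.concatenate([h, direction])  # direction enters only here
    rgb = 1.0 / (1.0 + np.exp(-(rgb_in @ W2)))  # sigmoid keeps RGB in [0, 1]
    return rgb, sigma

xyz = np.array([0.1, -0.2, 0.3])
rgb_a, sigma_a = nerf_forward(xyz, np.array([0.0, 0.0]))
rgb_b, sigma_b = nerf_forward(xyz, np.array([1.0, 0.5]))
# sigma is identical for both directions; rgb changes with the viewing direction
```

Querying the same point from two directions yields the same density but different colors, which is how NeRF represents view-dependent effects like highlights while keeping geometry consistent.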