Xin Jin1,2,*,#, Pengyi Jiao1,*, Zheng-Peng Duan1, Xingchao Yang2, Chun-Le Guo1, Bo Ren1, Chongyi Li1
1VCIP, CS, Nankai University,
2MEGVII Technology,
*Equal Contribution
#This project is done during Xin Jin's Internship @ MEGVII Technology
Because LE3D renders in real time, the reconstructed scene can be explored interactively as soon as reconstruction finishes. Moreover, since LE3D reconstructs the scene in an HDR color space, the renders support exposure adjustment as well as local and global tone-mapping operations. Our interactive viewer lets users edit the scene in real time, and will later add support for building video storyboards from keyframes.
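The HDR edits mentioned above boil down to simple per-pixel operations on the linear radiance values. A minimal sketch (the helper names, the Reinhard operator, and the 1/2.2 gamma are illustrative assumptions, not LE3D's actual viewer code):

```python
import numpy as np

def adjust_exposure(hdr, ev):
    """Scale a linear HDR image by 2**ev stops (hypothetical helper)."""
    return hdr * (2.0 ** ev)

def reinhard_tonemap(hdr):
    """Simple global Reinhard operator mapping [0, inf) into [0, 1)."""
    return hdr / (1.0 + hdr)

def to_srgb(linear):
    """Approximate sRGB encoding with a 1/2.2 gamma curve."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / 2.2)

# A toy 2x2 linear HDR "render" with radiances above 1.0.
hdr = np.array([[0.25, 1.0],
                [4.0, 16.0]])

# Darken by one stop, compress the dynamic range, then encode for display.
ldr = to_srgb(reinhard_tonemap(adjust_exposure(hdr, ev=-1.0)))
```

Because every step is a cheap elementwise map, re-tonemapping after each edit costs far less than re-rendering, which is what makes interactive HDR editing practical once real-time rendering is available.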
RawGS* denotes a 3DGS trained with RawNeRF's weighted loss and multi-exposure strategy.
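RawNeRF's weighted loss divides each pixel's error by a stop-gradient of the prediction, approximating a loss on log radiance so that dark regions are not drowned out by bright ones. A NumPy sketch under that reading (the epsilon value is an assumption; in a real training loop the weight would be detached from the autograd graph):

```python
import numpy as np

def rawnerf_weighted_loss(pred, target, eps=1e-3):
    """Weighted L2 in the style of RawNeRF: dividing the residual by
    (pred + eps) approximates an error on log radiance, boosting the
    gradient signal from dark pixels."""
    w = 1.0 / (pred + eps)  # treat as stop-gradient during training
    return np.mean((w * (pred - target)) ** 2)
```

With this weighting, an error of fixed absolute size is penalized far more in a dark region than in a bright one, which is what lets the loss supervise noisy low-light raw captures effectively.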
Comparison with RawNeRF
Comparison with RawGS
Compared to RawNeRF (left), LE3D renders at 2K resolution in real time, which in turn enables near-real-time post-processing in our interactive viewer. Compared with RawGS, LE3D recovers scene structure more faithfully and leaves fewer residual floaters in the air.
LE3D exhibits a natural noise resistance similar to RawNeRF's. This follows from the noise2noise assumption: the noise lies entirely on each camera's image plane rather than being distributed in 3D space, so fitting a single scene to many independently noisy views averages the noise out.
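The averaging argument above can be illustrated with a toy 1D experiment (synthetic data, not LE3D's pipeline): when several observations share the same clean signal but carry independent noise, the least-squares fit to all of them is their mean, whose error shrinks with the number of views.

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" per-pixel radiances and 16 independently noisy observations of them,
# mimicking multiple noisy captures of the same static scene.
clean = rng.uniform(0.0, 1.0, size=10_000)
views = clean + rng.normal(0.0, 0.2, size=(16, 10_000))

# The MSE-optimal fit to all views of a shared signal is the per-pixel mean;
# its error is far below that of any single noisy view.
fit = views.mean(axis=0)
mse_single = np.mean((views[0] - clean) ** 2)
mse_fit = np.mean((fit - clean) ** 2)
```

For zero-mean noise and N views, the averaged estimate's MSE drops by roughly a factor of N, which is why the reconstruction looks cleaner than any individual input frame.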
@article{jin2024le3d,
  title={Lighting Every Darkness with 3DGS: Fast Training and Real-Time Rendering for HDR View Synthesis},
  author={Jin, Xin and Jiao, Pengyi and Duan, Zheng-Peng and Yang, Xingchao and Guo, Chun-Le and Ren, Bo and Li, Chong-Yi},
  journal={arXiv preprint},
  year={2024}
}
Feel free to contact us at xjin[AT]mail.nankai.edu.cn!