Learning to See Everything in Ultra High Dynamic Range Scenes
Yuang Meng*    Xin Jin*    Lina Lei    Chunle Guo    Chong-Yi Li   
*Equal contribution     VCIP, CS, Nankai University

arXiv 2025

🔥 Want to See Everything in UHDR Scenes? Try UltraLED! 🔥

UltraLED reconstructs UHDR results from a single short-exposure RAW image. As a result, it avoids hallucination in highlight regions and, unlike bracketing-based approaches, eliminates the need for alignment processing.

Here are some results with UltraLED.


🤖️ Readily Deployable on Videos!

Even when trained only on images, UltraLED can be rapidly adapted to videos.

Abstract

Ultra-high dynamic range (UHDR) scenes exhibit pronounced exposure disparities between bright and dark regions. Such conditions are common in nighttime scenes with light sources. Even with standard exposure settings, the intensity distribution is often bimodal, with peaks at both ends, making it challenging to preserve highlight and shadow details simultaneously. RGB-based bracketing methods can capture details at both ends using short/long exposure pairs, but are susceptible to misalignment and ghosting artifacts. A short-exposure image, however, already retains sufficient highlight detail; the main challenge lies in denoising and recovering information in dark regions. RAW images, thanks to their higher bit depth and more predictable noise characteristics, offer greater potential for addressing this challenge. This raises a key question: can we learn to see everything in a UHDR scene using only a single short-exposure RAW image? Relying solely on one short-exposure frame, our method inherently avoids ghosting and motion blur, making it particularly robust in dynamic scenes. To achieve this, we introduce a two-stage framework: exposure correction via a ratio map to balance dynamic range, followed by brightness-aware noise modeling to enhance detail recovery in dark regions. To support training, we design a 9-stop bracketing pipeline to synthesize realistic UHDR images and construct a corresponding dataset of static scenes, using only the shortest exposure as the input for reconstruction. Experiments show that our method significantly outperforms existing single-frame approaches. The code will be released publicly.
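To make the two-stage idea above concrete, here is a minimal PyTorch sketch of the data flow: a first network predicts a per-pixel ratio map used to correct exposure, and a second network denoises the brightened RAW conditioned on that brightness. The module names (RatioMapNet, BrightnessAwareDenoiser) and the toy architectures are assumptions for illustration only, not the released implementation.

```python
import torch
import torch.nn as nn

class RatioMapNet(nn.Module):
    """Hypothetical stage-1 network: predicts a positive per-pixel exposure ratio map."""
    def __init__(self, in_ch=4, hidden=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, 3, padding=1), nn.Softplus(),  # keep ratios positive
        )

    def forward(self, raw):
        return self.body(raw)

class BrightnessAwareDenoiser(nn.Module):
    """Hypothetical stage-2 network: denoises the exposure-corrected RAW,
    conditioned on the predicted ratio map as a brightness cue."""
    def __init__(self, in_ch=4, hidden=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch + 1, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, in_ch, 3, padding=1),
        )

    def forward(self, corrected_raw, ratio_map):
        x = torch.cat([corrected_raw, ratio_map], dim=1)
        return corrected_raw + self.body(x)  # residual denoising

def reconstruct_uhdr(raw_short):
    """Two-stage sketch: exposure correction via a ratio map,
    then brightness-aware denoising of the amplified dark regions."""
    stage1 = RatioMapNet()
    stage2 = BrightnessAwareDenoiser()
    ratio = stage1(raw_short)        # per-pixel gain to balance dynamic range
    corrected = raw_short * ratio    # brighten shadows while highlights stay intact
    return stage2(corrected, ratio)  # recover detail where noise was amplified

if __name__ == "__main__":
    raw = torch.rand(1, 4, 128, 128) * 0.1  # packed Bayer RAW, short exposure (dark)
    out = reconstruct_uhdr(raw)
    print(out.shape)  # torch.Size([1, 4, 128, 128])
```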

BibTex


@article{jin2025learning,
    title={Learning to See Everything in Ultra High Dynamic Range},
    author={Meng, Yuang and Jin, Xin and Lei, Lina and Guo, Chunle and Li, Chongyi},
    journal={arXiv preprint},
    year={2025}
}
                

Contact

Feel free to contact us at xjin[AT]mail.nankai.edu.cn!
