Abstract
Ultra-high dynamic range (UHDR) scenes exhibit pronounced exposure disparities between bright and dark regions, a condition common in nighttime scenes with strong light sources. Even under standard exposure settings, the intensity distribution is often bimodal with peaks at both ends, making it difficult to preserve highlight and shadow details simultaneously. RGB-based bracketing methods can capture details at both ends using short-long exposure pairs, but they are susceptible to misalignment and ghosting artifacts. A short-exposure image, however, already retains sufficient highlight detail; the main challenge lies in denoising and recovering information in the dark regions. RAW images, with their higher bit depth and more predictable noise characteristics, offer greater potential for this task. This raises a key question: can we learn to see everything in UHDR scenes from a single short-exposure RAW image? Relying on only one short-exposure frame, our method inherently avoids ghosting and motion blur, making it particularly robust in dynamic scenes. To this end, we introduce a two-stage framework: exposure correction via a ratio map to balance the dynamic range, followed by brightness-aware noise modeling to enhance detail recovery in dark regions. To support training, we design a 9-stop bracketing pipeline that synthesizes realistic UHDR images and construct a dataset of static scenes accordingly, using only the shortest exposure as input for reconstruction. Experiments show that our method significantly outperforms existing single-frame approaches. Code will be released publicly.
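To make the two-stage design concrete, the following is a minimal, self-contained PyTorch sketch of the inference path: a network predicts a per-pixel ratio map that rebalances the short-exposure RAW, and a second network denoises the corrected image conditioned on that map. All names (RatioMapNet, DenoiseNet, reconstruct) and architectural details are illustrative assumptions, not the paper's actual networks.

```python
import torch
import torch.nn as nn

class RatioMapNet(nn.Module):
    """Stage 1 (hypothetical): predict a per-pixel exposure ratio map that
    brightens dark regions while leaving well-exposed highlights untouched."""
    def __init__(self, channels=4):  # packed Bayer RAW: RGGB -> 4 channels
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Softplus(),  # ratios >= 0
        )

    def forward(self, raw):
        return self.net(raw)

class DenoiseNet(nn.Module):
    """Stage 2 (hypothetical): brightness-aware denoising of the corrected
    image; here the ratio map serves as the brightness condition."""
    def __init__(self, channels=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, corrected, ratio):
        return self.net(torch.cat([corrected, ratio], dim=1))

def reconstruct(raw_short, ratio_net, denoise_net):
    """Two-stage UHDR reconstruction from one short-exposure RAW frame."""
    ratio = ratio_net(raw_short)              # stage 1: exposure ratio map
    corrected = raw_short * ratio             # rebalance the dynamic range
    restored = denoise_net(corrected, ratio)  # stage 2: recover dark detail
    return restored

out = reconstruct(torch.rand(1, 4, 64, 64), RatioMapNet(), DenoiseNet())
print(out.shape)  # torch.Size([1, 4, 64, 64])
```

Conditioning the denoiser on the ratio map reflects one plausible reading of "brightness-aware": regions that were amplified more carry more amplified noise and so warrant stronger denoising.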