Digital cameras can only capture a limited range of real-world scenes' luminance, producing images with saturated pixels. Existing single image high dynamic range (HDR) reconstruction methods attempt to expand the range of luminance, but are not able to hallucinate plausible textures, producing results with artifacts in the saturated areas. In this paper, we present a novel learning-based approach to reconstruct an HDR image by recovering the saturated pixels of an input LDR image in a visually pleasing way. Previous deep learning-based methods apply the same convolutional filters on well-exposed and saturated pixels, creating ambiguity during training and leading to checkerboard and halo artifacts. To overcome this problem, we propose a feature masking mechanism that reduces the contribution of the features from the saturated areas. Moreover, we adapt the VGG-based perceptual loss function to our application to be able to synthesize visually pleasing textures. Since the number of HDR images for training is limited, we propose to train our system in two stages. Specifically, we first train our system on a large number of images for the image inpainting task and then fine-tune it on HDR reconstruction. Finally, since most of the HDR examples contain smooth regions that are simple to reconstruct, we propose a sampling strategy to select challenging training patches during the HDR fine-tuning stage.
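The feature masking idea above can be illustrated with a minimal sketch: a soft mask derived from pixel intensity down-weights features in saturated regions before convolution, and the output is renormalized by the convolved mask so well-exposed regions keep their magnitude. This is a partial-convolution-style illustration, not the paper's implementation; the function names, the mask formula, and the `threshold` parameter are all assumptions for demonstration.

```python
import numpy as np

def saturation_mask(ldr, threshold=0.95):
    """Soft mask: ~1 for well-exposed pixels, falling to 0 at saturation.
    The linear ramp and threshold value are illustrative assumptions."""
    return np.clip((threshold - ldr) / threshold, 0.0, 1.0)

def masked_conv2d(feat, mask, kernel):
    """Convolve mask-weighted features, then renormalize by the convolved
    mask so saturated pixels contribute little to nearby responses."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    fpad = np.pad(feat * mask, ((ph, ph), (pw, pw)))
    mpad = np.pad(mask, ((ph, ph), (pw, pw)))
    out = np.zeros_like(feat, dtype=float)
    norm = np.zeros_like(feat, dtype=float)
    for i in range(feat.shape[0]):
        for j in range(feat.shape[1]):
            out[i, j] = np.sum(fpad[i:i + kh, j:j + kw] * kernel)
            norm[i, j] = np.sum(mpad[i:i + kh, j:j + kw] * kernel)
    return out / np.maximum(norm, 1e-6)

# Toy input: a well-exposed 0.5 region next to a saturated column of 1.0.
ldr = np.array([[0.5, 0.5, 1.0],
                [0.5, 0.5, 1.0],
                [0.5, 0.5, 1.0]])
mask = saturation_mask(ldr)
box = np.ones((3, 3)) / 9.0  # simple averaging filter for the demo
out = masked_conv2d(ldr, mask, box)
```

With an unmasked box filter, the center pixel would be pulled upward by its saturated neighbors; with the mask and renormalization, it stays at the well-exposed value of 0.5, which is the ambiguity-reduction effect the mechanism targets.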