Publication

Non-Local Image Dehazing. Berman, D., Treibitz, T., and Avidan, S., CVPR 2016

paper


Source Code

The software code of Non-Local Image Dehazing is provided under this license agreement: matlab source code


Abstract

Haze limits visibility and reduces image contrast in outdoor images. The degradation is different for every pixel and depends on the distance of the scene point from the camera. This dependency is expressed in the transmission coefficients, which control the scene attenuation and the amount of haze in every pixel. Previous methods solve the single image dehazing problem using various patch-based priors. We, on the other hand, propose an algorithm based on a new, non-local prior. The algorithm relies on the assumption that colors of a haze-free image are well approximated by a few hundred distinct colors, which form tight clusters in RGB space. Our key observation is that pixels in a given cluster are often non-local, i.e., they are spread over the entire image plane and are located at different distances from the camera. In the presence of haze these varying distances translate to different transmission coefficients. Therefore, each color cluster in the clear image becomes a line in RGB space, which we term a haze-line. Using these haze-lines, our algorithm recovers both the distance map and the haze-free image. The algorithm is linear in the size of the image, deterministic, and requires no training. It performs well on a wide variety of images and is competitive with other state-of-the-art methods.

Slides of a 20-minute talk given at the Vision Day.


Non-Local Prior

Our method is based on the observation that the number of distinct colors in an image is orders of magnitude smaller than the number of pixels.
This observation holds for haze-free images. In the presence of haze, however, scene points that belong to the same color cluster acquire different colors, since they are spread over the image and lie at different distances from the camera. The prior therefore implies that pixels which cluster tightly in a haze-free image form a line in RGB space in the hazy image. By the haze imaging model (Eq. (1) of the paper), the two end points of this line are the original color J and the airlight A. We term these lines haze-lines.
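For reference, the haze imaging model referred to as Eq. (1) is the standard formation model:

    I(x) = t(x) J(x) + (1 - t(x)) A,    t(x) = exp(-β d(x))

where I is the observed hazy image, J is the haze-free scene radiance, A is the global airlight, d(x) is the distance of the scene point at pixel x, and t(x) is the transmission coefficient that controls both the scene attenuation and the amount of haze at that pixel.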
This prior is demonstrated in Fig: Non-Local Prior. A haze-free image is clustered using K-means into 500 clusters. The pixels belonging to four of these clusters are marked by different color markers in (a), and their RGB coordinates are plotted in (b), demonstrating tight clusters. Note that each cluster includes pixels distributed over the entire image, coming from objects at different distances from the camera.
A synthetic hazy image (c) was generated from the clear image, and the same pixels as in (a) are marked. Now, however, the colors of pixels that belonged to the same cluster are no longer similar. This is shown in RGB space in (d), where the coordinates of these pixels are distributed along a haze-line spanned by the original color and the airlight. The pixels marked by purple circles (from the sand patch) lie at similar distances, so their distribution along the haze-line is rather tight. The pixels marked by orange triangles (grassy areas), on the other hand, come from different locations in the scene, so they are spread along the haze-line.

Fig: Non-Local Prior
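The released MATLAB code implements the full method. As an illustration only, the following is a simplified Python sketch of the haze-lines idea, assuming the airlight A is already estimated. The paper assigns pixels to a fixed, pre-sampled set of directions on the unit sphere and regularizes the per-pixel transmission; this sketch substitutes K-means clustering of the directions, skips the regularization, and uses an assumed lower bound t_min.

import numpy as np
from sklearn.cluster import KMeans

def dehaze_haze_lines(I, A, n_lines=500, t_min=0.1):
    # I: HxWx3 hazy image with values in [0, 1]; A: length-3 airlight (assumed known).
    A = np.asarray(A, dtype=float)
    h, w, _ = I.shape
    I_A = I.reshape(-1, 3) - A                      # work relative to the airlight
    r = np.linalg.norm(I_A, axis=1)                 # distance from A in RGB space
    dirs = I_A / (r[:, None] + 1e-9)                # unit directions; one haze-line per direction
    labels = KMeans(n_clusters=n_lines, n_init=4).fit_predict(dirs)
    r_max = np.zeros(n_lines)
    np.maximum.at(r_max, labels, r)                 # largest radius per haze-line, assumed haze-free (t = 1)
    t = np.clip(r / (r_max[labels] + 1e-9), t_min, 1.0)
    J = I_A / t[:, None] + A                        # invert Eq. (1): J = (I - A) / t + A
    return np.clip(J.reshape(h, w, 3), 0.0, 1.0), t.reshape(h, w)

Given an RGB image I in [0, 1] and an airlight estimate A, the call J, t = dehaze_haze_lines(I, A) returns the dehazed image and the per-pixel transmission map.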


Results

This section contains additional results which complement the paper:
1) Various images before and after color-quantization, to support our prior. All the images are from the Berkeley Segmentation Dataset.
2) A comparison of our single image dehazing method to state-of-the-art algorithms. We compare:
    a) Natural Images
    b) Synthetic Images
    c) Noisy Synthetic Images

A complete list of references is given at the end.


Color Quantized Images

Our method is based on the observation that the number of distinct colors in an image is orders of magnitude smaller than the number of pixels. The same assumption underlies storing color images with indexed colormaps. We validate and quantify it on the Berkeley Segmentation Dataset: we clustered the RGB pixel values using K-means into at most 500 clusters and replaced every pixel with its cluster center. The result is an image with at most 500 distinct RGB values, two orders of magnitude fewer than the number of pixels. Below is a random selection of outdoor scenes, before and after color quantization.
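As a minimal sketch of this quantization (assuming scikit-learn and an RGB image loaded as a float array; not the code used to generate the figures):

import numpy as np
from sklearn.cluster import KMeans

def quantize_colors(img, n_colors=500):
    # Replace every pixel with the center of its RGB cluster.
    h, w, _ = img.shape
    pixels = img.reshape(-1, 3).astype(np.float32)
    km = KMeans(n_clusters=n_colors, n_init=4).fit(pixels)
    quantized = km.cluster_centers_[km.labels_]     # one of at most n_colors RGB values per pixel
    return quantized.reshape(h, w, 3)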

Fig: Outdoor scenes from the Berkeley Segmentation Dataset, shown before and after color quantization.

Natural Images

Each scene is shown as a hazy input together with the dehazed outputs of our method and the compared algorithms; for several scenes the recovered transmission map is shown as well.

Lviv: hazy input, dehazed output, transmission map
Stadium: hazy input, dehazed output
Wheat Field: hazy input, dehazed output
Florence: hazy input, dehazed output
Forest: hazy input, dehazed output, transmission map
Hazy Day: hazy input, dehazed output
New York: hazy input, dehazed output
Pumpkins: hazy input, dehazed output, transmission map
Flags: hazy input, dehazed output, transmission map
Swan: hazy input, dehazed output
Train: hazy input, dehazed output
House: hazy input, dehazed output

Synthetic Images

A synthetic dataset of hazy images of natural scenes was introduced by [Fattal 2014] and is available online. The dataset contains eleven haze-free images, synthetic distance maps, and the corresponding simulated hazy images. The table below summarizes the L1 errors, computed on non-sky pixels (the same metric used in [Fattal 2014]), of both the transmission maps and the dehazed images. Our method is compared to [Fattal 2014] and to the implementation of [He et al. 2009] provided by [Fattal 2014].

Table: L1 errors of transmission maps and dehazed images
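The metric can be computed as in the following sketch, assuming a boolean sky mask is available (an illustration, not the paper's evaluation code):

import numpy as np

def l1_error(estimate, ground_truth, sky_mask):
    # Mean absolute error over non-sky pixels; sky_mask is True where the pixel belongs to the sky.
    valid = ~sky_mask
    return np.mean(np.abs(estimate[valid] - ground_truth[valid]))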


The transmission maps are displayed along with the images. They are color-mapped: warm colors indicate high values, while cold colors indicate low values.
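The exact colormap used on this page is not specified; a jet-style map reproduces the same warm-high, cold-low reading, for example:

import matplotlib.pyplot as plt

def show_transmission(t):
    # Warm colors for high transmission, cold colors for low transmission.
    plt.imshow(t, cmap='jet', vmin=0.0, vmax=1.0)
    plt.colorbar(label='transmission')
    plt.axis('off')
    plt.show()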

Flower 1: hazy input, dehazed output, transmission map
Lawn 2: hazy input, dehazed output, transmission map
Road 2: hazy input, dehazed output, transmission map

Noisy Synthetic Images

Each image is shown with noise of standard deviation σ added to the hazy input, together with the dehazed output and the recovered transmission map.

Church, σ=0.05: noisy hazy input, dehazed output, transmission map
Lawn 1, σ=0.025: noisy hazy input, dehazed output, transmission map
Road 1, σ=0.05: noisy hazy input, dehazed output, transmission map
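For completeness, a noisy hazy test image of this kind can be simulated from a haze-free image, a distance map, and an airlight as sketched below; the attenuation coefficient beta and the additive Gaussian noise model are assumptions for illustration.

import numpy as np

def simulate_noisy_haze(J, d, A, beta=1.0, sigma=0.05, rng=None):
    # J: HxWx3 haze-free image in [0, 1]; d: HxW distance map; A: length-3 airlight.
    rng = np.random.default_rng() if rng is None else rng
    t = np.exp(-beta * d)[..., None]              # transmission, as in Eq. (1)
    I = t * J + (1 - t) * np.asarray(A)           # hazy image
    I += rng.normal(scale=sigma, size=I.shape)    # additive Gaussian noise of standard deviation sigma
    return np.clip(I, 0.0, 1.0)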

References

[Tan 2008] TAN, R. 2008. Visibility in bad weather from a single image. In CVPR.
[Kopf et al. 2008] KOPF, J., NEUBERT, B., CHEN, B., COHEN, M., COHEN-OR, D., DEUSSEN, O., UYTTENDAELE, M., AND LISCHINSKI, D. 2008. Deep photo: Model-based photograph enhancement and viewing. ACM Trans. Graph. 27, 5, 116.
[Tarel and Hautière 2009] TAREL, J. P., AND HAUTIÈRE, N. 2009. Fast visibility restoration from a single color or gray level image. In ICCV.
[He et al. 2009] HE, K., SUN, J., AND TANG, X. 2009. Single image haze removal using dark channel prior. In CVPR.
[Nishino et al. 2012] NISHINO, K., KRATZ, L., AND LOMBARDI, S. 2012. Bayesian defogging. IJCV, 98, 3, 263-278.
[Ancuti et al. 2013] ANCUTI, C. O., AND ANCUTI, C. 2013. Single image dehazing by multi-scale fusion. IEEE Trans. on Image Processing, 22, 8, 3271-3282.
[Gibson et al. 2013] GIBSON, K. B., AND NGUYEN, T. Q. 2013. An analysis of single image defogging methods using a color ellipsoid framework. EURASIP Journal on Image and Video Processing.
[Tang et al. 2014] TANG, K., YANG, J., AND WANG, J. 2014. Investigating haze-relevant features in a learning framework for image dehazing. In CVPR.
[Fattal 2014] FATTAL, R. 2014. Dehazing using color-lines. ACM Trans. Graph. 34, 1, 13.
[Luzón-González et al. 2014] LUZÓN-GONZÁLEZ, R., NIEVES, J. L., AND ROMERO, J. 2014. Recovering of weather degraded images based on RGB response ratio constancy. Appl. Opt.
[Bahat and Irani 2016] BAHAT, Y., AND IRANI, M. 2016. Blind dehazing using internal patch recurrence. In ICCP.