TY - JOUR
T1 - A two-stage density-aware single image deraining method
AU - Cao, Min
AU - Gao, Zhi
AU - Ramesh, Bharath
AU - Mei, Tiancan
AU - Cui, Jinqiang
PY - 2021
AB - Although advanced single image deraining methods have been proposed, one main challenge remains: the available methods usually perform well on specific rain patterns but can hardly handle scenarios with dramatically different rain densities, especially when rain streaks and the veiling effect caused by rain accumulation are heavily coupled. To tackle this challenge, we propose a two-stage density-aware single image deraining method with gated multi-scale feature fusion. In the first stage, a physics model that better matches real rain scenes is leveraged for initial deraining, and a network branch is trained for rain density estimation to guide the subsequent refinement. The second stage, a model-independent refinement, is realized with a conditional Generative Adversarial Network (cGAN) and aims to eliminate artifacts and improve restoration quality. In particular, dilated convolutions are applied to extract rain features at multiple scales, and gated feature fusion is exploited to better aggregate multi-level contextual information in both stages. Extensive experiments have been conducted on representative synthetic rain datasets and real rain scenes. Quantitative and qualitative results demonstrate the effectiveness and generalization ability of our method, which outperforms the state of the art.
UR - https://hdl.handle.net/1959.7/uws:65477
DO - 10.1109/TIP.2021.3099396
M3 - Article
SN - 1057-7149
VL - 30
SP - 6843
EP - 6854
JF - IEEE Transactions on Image Processing
M1 - 9499966
ER -