DAMFFN: Depth-Wise Convolution Attention and Multi-Scale Feature Fusion Network for Low-Light Image Enhancement

26 Pages Posted: 7 Aug 2024


Xinlin Yuan

affiliation not provided to SSRN

Yong Wang (Contact Author)

Chongqing University of Technology (CQUT)

Yang Boran

affiliation not provided to SSRN

Yan Li

affiliation not provided to SSRN

Yu Chen

affiliation not provided to SSRN

Yiling Li

Chongqing University of Technology (CQUT)

There are 3 versions of this paper

Abstract

Images produced by existing low-light image enhancement methods still suffer from poor visibility, low contrast, and loss of spatial detail. This paper proposes a depth-wise convolution attention and multi-scale feature fusion network for low-light image enhancement (DAMFFN) to address these problems. In DAMFFN, this paper first designs a low-light attention block (LLAB) consisting of a low-light multi-head self-attention block (LL-MSAB), a dual-branch equalization block (DBEB), and two normalization layers. The LL-MSAB balances feature weights across channels by computing attention maps between channels, thereby improving the image's visibility and contrast. Second, this paper leverages the DBEB to further enhance the image's contrast. Finally, the paper proposes a multi-scale feature compensation block (MSFCB), which reduces the loss of spatial detail in the LLAB and downsampling stages and, in addition, fuses deep spatial information from images at different scales. For the loss function, this paper adopts a multi-scale frequency-domain loss (Mult-SFD), which reduces the difference between the reference image and the enhanced low-light image in the frequency domain. Extensive qualitative and quantitative experiments on five public datasets show that the method outperforms many other low-light image enhancement methods in both visual quality and metric scores (PSNR: peak signal-to-noise ratio; SSIM: structural similarity index): PSNR = 24.87 dB and SSIM = 0.856 on the LOw-Light (LOL) dataset, and PSNR = 25.70 dB and SSIM = 0.912 on the MIT-Adobe FiveK dataset.
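As an informal illustration of two of the ideas described in the abstract, the following PyTorch sketch shows (i) a channel-wise multi-head self-attention layer that uses depth-wise convolutions and computes attention maps between channels rather than between pixels, and (ii) a multi-scale frequency-domain loss that compares the enhanced and reference images in the Fourier domain at several resolutions. All names (ChannelSelfAttention, multiscale_frequency_loss), the choice of scales, and the exact operations are assumptions made for illustration; they are not the paper's actual LL-MSAB or Mult-SFD definitions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelSelfAttention(nn.Module):
    # Hypothetical sketch of channel-wise multi-head self-attention with
    # depth-wise convolutions; attention is computed between channels (C x C),
    # so the cost grows linearly with image size. Not the paper's exact LL-MSAB.
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.num_heads = num_heads
        self.qkv = nn.Conv2d(dim, dim * 3, kernel_size=1)
        self.qkv_dw = nn.Conv2d(dim * 3, dim * 3, kernel_size=3, padding=1, groups=dim * 3)
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.qkv_dw(self.qkv(x)).chunk(3, dim=1)
        # reshape to (batch, heads, channels per head, pixels)
        q = q.reshape(b, self.num_heads, c // self.num_heads, h * w)
        k = k.reshape(b, self.num_heads, c // self.num_heads, h * w)
        v = v.reshape(b, self.num_heads, c // self.num_heads, h * w)
        q = F.normalize(q, dim=-1)
        k = F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)).softmax(dim=-1)  # channel-to-channel attention map
        out = (attn @ v).reshape(b, c, h, w)
        return self.proj(out)

def multiscale_frequency_loss(enhanced, reference, scales=(1.0, 0.5, 0.25)):
    # Hypothetical multi-scale frequency-domain loss: average L1 distance between
    # the 2-D FFTs of the enhanced and reference images at several resolutions.
    loss = 0.0
    for s in scales:
        if s != 1.0:
            e = F.interpolate(enhanced, scale_factor=s, mode='bilinear', align_corners=False)
            r = F.interpolate(reference, scale_factor=s, mode='bilinear', align_corners=False)
        else:
            e, r = enhanced, reference
        fe = torch.fft.fft2(e, norm='ortho')
        fr = torch.fft.fft2(r, norm='ortho')
        loss = loss + torch.mean(torch.abs(fe - fr))
    return loss / len(scales)

In practice, a frequency-domain term of this kind is usually combined with a pixel-domain loss (e.g., L1) when training an enhancement network.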

Keywords: Low-light image enhancement, Low-light multi-head self-attention, Multi-scale feature compensation, Dual-branch equalization, Frequency domain

Suggested Citation

Yuan, Xinlin and Wang, Yong and Boran, Yang and Li, Yan and Chen, Yu and Li, Yiling, DAMFFN: Depth-Wise Convolution Attention and Multi-Scale Feature Fusion Network for Low-Light Image Enhancement. Available at SSRN: https://ssrn.com/abstract=4919042
