Abstract: Image classification is an important application of polarimetric synthetic aperture radar (PolSAR) image interpretation. Traditional convolutional neural network (CNN) classifiers label pixels one by one, which leads to large amounts of redundant convolution computation. PolSAR images carry rich information, including polarimetric coherency features and polarimetric decomposition features, so fusing these features is crucial for efficient classification. This paper proposes a dual-channel feature-fusion encoder-decoder network based on an analysis of polarimetric scattering features and the U-Net architecture. The network uses an attention-based feature fusion mechanism to integrate polarimetric coherency features and polarimetric decomposition features within a semantic segmentation framework, guides the training of a deep CNN classifier toward high-precision pixel-level labeling, and incorporates a spatial pyramid structure to extract multi-scale features efficiently. Because the whole image is segmented in a single pass, the network avoids the redundant computation of pixel-by-pixel classification and markedly improves computational efficiency. Experiments are conducted on AIRSAR PolSAR data over the San Francisco area and airborne PolSAR data over the Boao area of Hainan; the overall accuracies (OA) for the two areas are 97.11% and 99.97%, respectively, which verifies the effectiveness and practical value of the proposed classification method.
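To make the architectural idea concrete, the following is a minimal sketch, not the authors' implementation, of a dual-channel encoder with attention-based feature fusion and a dilated-convolution spatial pyramid, written in PyTorch. The module names, channel counts (a 6-channel coherency-matrix input and a 3-channel decomposition input), network width, and layer choices are assumptions made for illustration only.

```python
# Minimal sketch of the dual-channel attention-fusion idea (assumed shapes and layers).
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with BatchNorm and ReLU, as in a U-Net stage."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class AttentionFusion(nn.Module):
    """Channel-attention weights that blend the two feature streams."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 1), nn.Sigmoid(),
        )

    def forward(self, coh_feat, dec_feat):
        w = self.gate(torch.cat([coh_feat, dec_feat], dim=1))
        return w * coh_feat + (1 - w) * dec_feat


class SpatialPyramid(nn.Module):
    """Parallel dilated convolutions that gather multi-scale context."""
    def __init__(self, channels, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=r, dilation=r) for r in rates
        )
        self.project = nn.Conv2d(channels * len(rates), channels, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))


class DualChannelSegNet(nn.Module):
    """Two encoders -> attention fusion -> spatial pyramid -> decoder head."""
    def __init__(self, coh_ch=6, dec_ch=3, n_classes=5, width=32):
        super().__init__()
        self.enc_coh = conv_block(coh_ch, width)   # coherency-feature branch
        self.enc_dec = conv_block(dec_ch, width)   # decomposition-feature branch
        self.pool = nn.MaxPool2d(2)
        self.fuse = AttentionFusion(width)
        self.pyramid = SpatialPyramid(width)
        self.decoder = conv_block(width, width)
        self.head = nn.Conv2d(width, n_classes, 1)

    def forward(self, x_coh, x_dec):
        f = self.fuse(self.pool(self.enc_coh(x_coh)), self.pool(self.enc_dec(x_dec)))
        f = self.pyramid(f)
        f = F.interpolate(f, scale_factor=2, mode="bilinear", align_corners=False)
        return self.head(self.decoder(f))  # per-pixel class scores in one pass


if __name__ == "__main__":
    net = DualChannelSegNet()
    coh = torch.randn(1, 6, 128, 128)    # hypothetical coherency-matrix features
    dec = torch.randn(1, 3, 128, 128)    # hypothetical decomposition features
    print(net(coh, dec).shape)           # torch.Size([1, 5, 128, 128])
```

Because the network produces a dense label map for the whole tile in a single forward pass, it avoids the repeated convolutions of patch-wise, pixel-by-pixel classification, which is the efficiency argument made in the abstract.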