Abstract: To address issues such as insufficient network focus, weak synergy between modules, and the loss of deep feature representations in image super-resolution, a multi-level residual aggregation super-resolution reconstruction model is presented. The model integrates hierarchical interactive dynamic attention with sequence learning units, and features a network structure with multi-level feature fusion and skip connections that captures information at diverse levels more richly and accurately. Residual connections prevent gradient vanishing, ensuring smooth and flexible enhancement in deep networks. The dynamic hierarchical fusion attention module dynamically assigns importance weights to each feature for selective fusion, complemented by sequence learning units that broaden the contextual scope. A multi-scale feature fusion module combines features from different receptive fields to explore deeper representations. Finally, a lightweight, parameter-free attention mechanism adaptively weights the feature maps to restore high-frequency details. Experimental results demonstrate that this model surpasses mainstream algorithms in 3x super-resolution reconstruction across multiple public datasets (Set5, Set14, BSD100, Urban100, Manga109), with average improvements of about 0.47 dB in PSNR and 0.0068 in SSIM, showcasing its superiority in the domain and its potential for practical remote sensing applications.
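The abstract does not specify how the parameter-free attention is computed; as a hedged illustration only, the sketch below implements a SimAM-style energy weighting, a well-known parameter-free attention, as a plausible stand-in. The function name, the `lam` regularizer, and the NumPy formulation are assumptions, not the paper's actual module.

```python
import numpy as np

def parameter_free_attention(x, lam=1e-4):
    """SimAM-style parameter-free attention over a (C, H, W) feature map.

    Hypothetical stand-in for the paper's lightweight attention: each
    activation is gated by how much it deviates from its channel mean,
    with no learnable parameters.
    """
    _, h, w = x.shape
    n = h * w - 1
    mu = x.mean(axis=(1, 2), keepdims=True)          # per-channel spatial mean
    d = (x - mu) ** 2                                # squared deviation
    var = d.sum(axis=(1, 2), keepdims=True) / n      # per-channel variance
    energy = d / (4.0 * (var + lam)) + 0.5           # higher for salient neurons
    weight = 1.0 / (1.0 + np.exp(-energy))           # sigmoid gating in (0.5, 1)
    return x * weight

feat = np.random.default_rng(0).normal(size=(4, 8, 8))
out = parameter_free_attention(feat)
print(out.shape)  # (4, 8, 8)
```

Because the gate is a sigmoid of a non-negative energy, every weight lies between 0.5 and 1, so the map re-scales activations toward salient positions without ever flipping their sign.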