Abstract: To address common issues in image super-resolution, such as insufficient network focus, weak synergy between modules, and the vanishing of deep features, this paper proposes a multi-level residual super-resolution reconstruction model that combines hierarchical dynamic attention with sequence learning units. The model uses multi-level feature fusion and skip connections to capture information at different levels, improving reconstruction accuracy, and employs residual connections to mitigate gradient vanishing. A dynamic attention module selectively fuses features, while sequence learning units capture extended context. A multi-scale fusion module merges features from different receptive fields to deepen the feature representation, and a lightweight, parameter-free attention mechanism at the end of the module adaptively enhances the feature maps to restore image details. Experiments on standard benchmark datasets show that the proposed model outperforms mainstream algorithms in PSNR and SSIM, demonstrating its potential for remote-sensing image super-resolution.
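The abstract does not specify which parameter-free attention mechanism is used. A SimAM-style energy-based reweighting is one common parameter-free choice; the NumPy sketch below (function name and the `eps` regularizer are illustrative assumptions, not the paper's implementation) shows how such a module can adaptively enhance a feature map without any learnable parameters:

```python
import numpy as np

def parameter_free_attention(x, eps=1e-4):
    """SimAM-style parameter-free attention (illustrative sketch, not
    the paper's exact module).

    x: feature map of shape (C, H, W). For each channel, a pixel's
    squared deviation from the channel mean defines an "energy";
    pixels that stand out receive larger sigmoid weights. No
    learnable parameters are involved.
    """
    _, h, w = x.shape
    n = h * w - 1
    mu = x.mean(axis=(1, 2), keepdims=True)        # per-channel mean
    d = (x - mu) ** 2                              # squared deviation per pixel
    var = d.sum(axis=(1, 2), keepdims=True) / n    # per-channel variance
    energy = d / (4.0 * (var + eps)) + 0.5         # SimAM energy term
    weight = 1.0 / (1.0 + np.exp(-energy))         # sigmoid gating in (0, 1)
    return x * weight

# Toy usage: a 2-channel 4x4 feature map keeps its shape after attention.
feat = np.random.default_rng(0).standard_normal((2, 4, 4))
out = parameter_free_attention(feat)
```

Because the gate is a sigmoid of a non-negative energy, each output value keeps the sign of its input and is attenuated by a factor strictly between 0 and 1, so the module reweights rather than replaces features.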