Objective Traditional example-based super-resolution methods adopt image-gradient features for low-resolution images and are therefore unable to characterize the low-resolution space satisfactorily. To address this issue, this paper proposes a novel unified framework for image super-resolution that effectively combines example-based methods with deep learning models. Method The proposed method consists of three main stages: low- and high-resolution similarity learning, high-resolution patch-dictionary learning, and high-resolution patch generation. In the first stage, two different convolutional neural networks are proposed for learning a novel similarity metric between high- and low-resolution image patches. In the second stage, high-resolution patch dictionaries are learned from the training sets. In the last stage, high-resolution patches are generated based on the learned similarities between the input low-resolution patch and the atoms of the high-resolution patch dictionary. Result Experimental results on several commonly used datasets show that the proposed two-channel model achieves improved performance, both quantitatively and qualitatively, compared with other methods. Conclusion The proposed two-channel model preserves more detailed information and reduces ringing artifacts in the resulting images.
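The abstract does not specify the network architecture, so the following is only a minimal sketch of how a two-channel similarity CNN of the kind described in the first stage might look: the low- and high-resolution patches are stacked as two input channels and mapped to a single similarity score that can then be used to rank the atoms of a high-resolution patch dictionary. The layer widths, patch size, and dictionary size below are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class TwoChannelSimilarityNet(nn.Module):
    """Illustrative two-channel CNN: an (LR, HR) patch pair is stacked as two
    input channels and mapped to a scalar similarity score.
    All layer sizes are assumptions, not taken from the paper."""
    def __init__(self, patch_size=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        feat_dim = 64 * (patch_size // 4) ** 2
        self.score = nn.Sequential(
            nn.Flatten(),
            nn.Linear(feat_dim, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 1),  # similarity score for the (LR, HR) patch pair
        )

    def forward(self, lr_patch, hr_patch):
        # lr_patch is assumed to be upsampled to the HR patch size beforehand
        x = torch.cat([lr_patch, hr_patch], dim=1)  # (N, 2, H, W)
        return self.score(self.features(x))


# Usage sketch: score one LR patch against every atom of an HR patch
# dictionary and keep the most similar atoms for reconstruction.
if __name__ == "__main__":
    net = TwoChannelSimilarityNet(patch_size=32)
    lr = torch.rand(1, 1, 32, 32)             # upsampled LR patch (assumed grayscale)
    dictionary = torch.rand(500, 1, 32, 32)    # 500 HR dictionary atoms (illustrative size)
    scores = net(lr.expand(500, -1, -1, -1), dictionary).squeeze(1)
    topk = scores.topk(5).indices              # indices of the most similar HR atoms
```

A single-stream network over stacked channels is only one plausible realization of a "two-channel model"; a Siamese design with two separate feature branches would be an equally valid reading of the abstract.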