Blind image sharpness metric based on edge and texture features

Document Type

Conference Proceeding

Department or Administrative Unit

Computer Science

Publication Date



Video is fast becoming the most common medium for media content. It is especially helpful in security settings for detecting criminal or threat-related activity, and police routinely use videos as evidence when analyzing criminal cases. In such applications it is important to obtain a high-quality still image from the video. However, images extracted from moving video are often blurred and contain artifacts. A practical solution is to sharpen these images using advanced processing techniques to obtain higher display quality. Given the vast amount of data involved, any such enhancement technique must satisfy real-time processing constraints in order to be usable by the end user.

In this paper, a blind image sharpness metric is proposed using a combination of edge and texture features. Edges can be detected with methods such as Canny, Sobel, Prewitt, and Roberts, which are well established in the image processing literature. The Canny edge detector typically provides better results because of its additional processing steps and can be used effectively as a model feature extractor for the image. Wavelet processing based on the db2, sym4, and Haar wavelets is also used to extract texture features. The normalized luminance coefficients of natural images are known to follow a generalized Gaussian probability distribution, and this property is exploited to extract statistical features in regions of interest (ROI) and regions of non-interest.

The extracted features are then merged to obtain the sharpened image. The principle behind image formation is to merge the wavelet decompositions of the two original images using fusion methods applied to the approximation and detail coefficients. The two images must be the same size and are assumed to be indexed images on a common colormap.
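The coefficient-level fusion described above can be sketched compactly. The following is a minimal illustration, not the authors' implementation: it uses a one-level Haar transform (one of the wavelets named in the abstract), averages the approximation coefficients, and keeps the larger-magnitude detail coefficients. The function names and the specific fusion rules are assumptions for illustration.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar wavelet decomposition (even-sized input)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Exact inverse of haar_dwt2."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

def fuse(img1, img2):
    """Fuse two equal-sized images: average the approximation
    coefficients, keep the larger-magnitude detail coefficients."""
    c1, c2 = haar_dwt2(img1), haar_dwt2(img2)
    LL = (c1[0] + c2[0]) / 2.0
    details = [np.where(np.abs(d1) >= np.abs(d2), d1, d2)
               for d1, d2 in zip(c1[1:], c2[1:])]
    return haar_idwt2(LL, *details)
```

Averaging the approximation band preserves overall luminance, while the max-magnitude rule on the detail bands favors whichever input carries stronger edges and texture at each location; these are common default choices for wavelet image fusion.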
It is worth noting that the image fusion results are more consistent with human subjective visual perception of image quality, for which ground truth data are obtained from publicly available databases. Popular standard test images such as Cameraman and Lena are used in the experiments. Results also show that the proposed method provides better objective quality than competing methods.
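The generalized Gaussian property of normalized luminance coefficients is typically exploited by fitting a GGD and using its parameters as features. The sketch below shows one standard estimator of the GGD shape parameter, the moment-matching (ratio) method of Sharifi and Leon-Garcia, as a hedged illustration; it is not the paper's exact procedure, and the function name and search grid are assumptions.

```python
import numpy as np
from math import gamma

def ggd_shape(x):
    """Estimate the shape parameter alpha of a zero-mean generalized
    Gaussian distribution by moment matching: the ratio
        r = E[x^2] / (E[|x|])^2
    has the closed form
        r(alpha) = gamma(1/alpha) * gamma(3/alpha) / gamma(2/alpha)**2,
    so we pick the alpha on a grid whose theoretical ratio is closest
    to the empirical one. alpha = 2 recovers the Gaussian case,
    alpha = 1 the Laplacian case.
    """
    alphas = np.arange(0.2, 10.0, 0.001)
    r_theory = np.array([gamma(1 / a) * gamma(3 / a) / gamma(2 / a) ** 2
                         for a in alphas])
    r_emp = np.mean(x ** 2) / np.mean(np.abs(x)) ** 2
    return alphas[np.argmin(np.abs(r_theory - r_emp))]
```

In a blind quality metric, such shape (and scale) estimates computed over ROI and non-ROI coefficients can serve as the statistical features that feed the final quality score.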


This article was originally published in Real-Time Image and Video Processing 2018. The full-text article is available from the publisher.

Due to copyright restrictions, this article is not available for free download from ScholarWorks @ CWU.




© (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).