Cite as: Zhi-Heng Wang, Qin-Feng Song, Hong-Min Liu and Zhan-Qiang Huo. Absence Importance and Its Application to Feature Detection and Matching. International Journal of Automation and Computing, vol. 13, no. 5, pp. 480-490, 2016. doi: 10.1007/s11633-015-0925-7

Absence Importance and Its Application to Feature Detection and Matching

Author Biography:
  • ORCID iD: 0000-0002-3241-0720
  • E-mail: songqf1989@126.com
  • E-mail: hzq@hpu.edu.cn
  • Corresponding author: Hong-Min Liu, E-mail: hongminliu@hpu.edu.cn
  • Received: 2014-04-11
  • Accepted: 2014-09-10
  • Published Online: 2016-07-25
Fund Project:
  • National Natural Science Foundation of China (61201395, 61472119, 61472373)
  • Program for Science & Technology Innovation Talents in Universities of Henan Province (13HASTIT039)
  • Program for Young Backbone Teachers in Universities of Henan Province (2013GGJS-052, 2012GGJS-057)




Abstract: Feature detection and matching play important roles in many fields of computer vision, such as image understanding, feature recognition, 3D-reconstruction, video analysis, etc. Extracting features is usually the first step of feature detection or matching, and the gradient feature is one of the most commonly used choices. In this paper, a new image feature, the absence importance (AI) feature, which can directly characterize local structure information, is proposed. Distinctly different from most existing features, the proposed absence importance feature is based on the consideration that the absence of an important pixel has a great effect on the local structure. Two absence importance features, mean absence importance (MAI) and standard deviation absence importance (SDAI), are defined and subsequently used to construct new algorithms for feature detection and matching. Experiments demonstrate that the proposed absence importance features can serve as an important complement to the gradient feature and can be applied successfully to feature detection and matching.

  • Feature detection and matching are classic problems in computer vision and are of great significance in object recognition, 3D-reconstruction, image registration, video understanding and many other areas. Over the years, a large number of important algorithms have sprung up in the field of feature detection, and feature matching has also seen a number of breakthroughs, marked by the scale-invariant feature transform (SIFT) technique[1]. However, almost no algorithm is universal, owing to the complexity of imaging conditions and the diversity of scenes. Thus, feature detection and matching remain hot, difficult and challenging research topics. We first briefly review the existing feature detection and matching methods in the literature according to the features they use:

    1) Color feature

    Color is one of the most direct image features. Gray thresholds and gray histograms[2] were among the earliest features used directly for feature detection and matching. The maximally stable extremal region (MSER) detector[3] proposed by Matas et al. is a popular region feature detector based on gray information, and the gray correlation technique widely used in feature matching[4, 5] also adopts gray values as the feature for similarity measurement. Color features have gradually substituted for gray features. In addition to the common RGB (red, green, blue) color space[6, 7], other color spaces such as HSI (hue, saturation, intensity), HSV (hue, saturation, value), CIE (Commission Internationale de l'Eclairage) and YUV (Y denotes luminance, U and V denote chrominance)[2] are also widely used for feature detection and matching. For example, the impurity detection method for cotton images proposed by Gao et al.[8] uses the H and S components as features in the HSI space. Bay et al.[9] also proposed a line segment matching algorithm based on the HSV color histogram. In addition, works based on other color features have also made important progress[10, 11].

    2) Gradient feature

    The gradient feature is the most widely used image feature, and the Gaussian gradient forms the core of the mainstream classical algorithms. First-order and second-order gradients are used for feature detection, e.g., in the Sobel, Roberts, Prewitt and Laplacian edge detectors. The Gaussian gradient is also used for corner detection, as in the Harris and CSS (curvature scale space) detectors[12], and for edge detection, as in the Canny edge detector[13]. As for feature matching, the Gaussian gradient is used as the feature in almost all the popular descriptors, such as the scale-invariant feature transform (SIFT)[1], gradient location and orientation histogram (GLOH)[14], speeded up robust features (SURF)[15], Weber local descriptor (WLD)[16], mean-standard deviation line descriptor (MSLD)[17], etc. Moreover, the multidirectional Gabor feature used in texture matching[18] can also be categorized as a gradient feature. In addition, Wang et al.[19] proposed a wide-baseline image matching approach based on line segments. Under their framework, feature matching is robust not only against affine distortion but also against a considerable range of 3D viewpoint changes for non-planar surfaces.

    3) Sequence feature

    To overcome the decrease in descriptor resolution caused by complex illumination changes in feature matching, the idea of using the gray sequence as the matching feature was proposed in the literature. Compared with traditional descriptors based on gray values or gradients, the gray sequence descriptor uses the relative relationship between pixel gray values as the feature. Since this relative relationship is invariant under monotonic gray-level changes, gray-sequence based methods show great stability to illumination changes. Gupta and Mittal[20] proposed a matching algorithm that is robust to monotonic gray-level changes by penalizing gray-sequence inconsistencies. Tang et al.[21] proposed a 2D histogram of location and gray sequence to handle complex illumination changes. Heikkilä et al.[22] substituted the center-symmetric local binary pattern feature (CS-LBP) for the local gray sequence to construct a descriptor, which shows better performance than the latter under complex lighting conditions. Gupta et al.[23] further improved the CS-LBP descriptor by using a local 3D coding mode and the relative gray histogram. Fan et al.[24, 25] enhanced the distinguishing ability of the histogram by mapping gray orders into high dimensional features.

    4) Spectral feature

    A variety of spectral features are also widely used in feature detection and matching. For example, Shi et al.[26] proposed a method that extracts line segment features using Fourier transform coefficients. Qian et al.[27] used the discrete cosine transform (DCT) to obtain features for edge detection. In addition, wavelet analysis has been applied to images, with descriptors constructed at all levels of the wavelet decomposition for feature matching[28-30].

    5) Other features

    Similar to the sequence feature, a feature named univalue segment assimilating nucleus (USAN), based on gray similarity, was proposed by Smith and Brady[31]. This method defines the region having the same or similar gray value as the center point as the USAN, and detects edges or corners according to the area of the USAN. However, this method is sensitive to noise. Wu et al.[32] proposed the feature vector field (FVF) by introducing the mathematical definitions of the inner product and outer product, and it has been used successfully not only for feature detection but also for feature matching. However, the accuracy of the method is not high due to the multiplicative effect of the inner and outer products, and it is also sensitive to image deformation. Fan et al.[33] proposed a line matching method based on line-point invariants, which encode local geometric information between a line and its neighboring points.

    In this paper, a new image feature called absence importance (AI) is proposed, which is distinctly different from all the existing features above. Absence importance is motivated by the fact that the absence of an important pixel will cause a significant change in the local structure. The importance can be measured by the change of the local statistics before and after the pixel is removed. Two specific absence importance features, mean absence importance (MAI) and standard deviation absence importance (SDAI), are defined in this paper, and new feature detection and matching algorithms combining them with classical algorithms are developed. The proposed absence importance features can be used as an important supplement to the gradient feature. Experiments demonstrate the successful application of the absence importance features to feature detection and matching.

    This paper is organized as follows. Section 2 introduces the basic idea of absence importance and then describes how to construct absence importance features. Sections 3 and 4 demonstrate the applications of absence importance to feature detection and matching, respectively. Finally, conclusions are drawn in Section 5.

  • For an image consisting of a great number of pixels, which pixels are more important than others? How can the importance of a pixel be measured? Are pixels with higher importance helpful for image analysis and comprehension? Moreover, what is the relationship between these pixels and the types of local structures? Intuitively, an important object is one that is distinctive and plays a significant role within a certain scope, and whose absence would lead to an obvious change within that scope. In other words, the importance of an object can be evaluated by the extent of change before and after it is removed. Inspired by this idea, we introduce the concept of absence importance (AI), which indicates the effect of a pixel measured before and after its absence, and then apply it to feature detection and matching. Two local statistics, the mean and the standard deviation, are selected as examples to quantitatively analyze the absence importance of pixels.

    Fig. 1 shows 3 kinds of local structures in a ${3\times3}$ region, in which the center pixel is located in a flat region, on an edge, and at a corner, respectively. Table 1 gives the means and standard deviations of the local regions in Fig. 1 before and after the center pixel is removed. It can be seen that: 1) The absence of the center pixel in the flat region has no effect on the local statistics; 2) The absence of an edge pixel or corner pixel causes a large change in the local statistics; 3) The change caused by the absence of the corner pixel is greater than that caused by the absence of the edge pixel. Obviously, different types of pixels, such as edge pixels and corner pixels, can be distinguished using the proposed definition of absence importance. The following work is developed based on this basic idea.

    Figure 1.  Regions of different types: (a) Flat region; (b) Edge; (c) Corner

    Table 1.  The local means (MEs) and standard deviations (SDs) of the regions in Fig. 1

  • For a pixel X in an image, a circular neighborhood region with radius ${\bf R}$ centered at X is defined as its supporting region and denoted as G(X). ${\bf R}$ is generally set to 1 or 2 empirically. Here, the mean is selected as one of the measures of local statistical information.

    Firstly, the mean of region G(X) is calculated as

    $\begin{align} M(X)=\frac{1}{N}\times\sum_{X_i\in G(X)}I(X_i) \end{align}$

    (1)

    where N denotes the number of pixels in G(X), and I(X$_{i}$) is the gray value of pixel X$_{i}$ in G(X).

    The mean of the region G${'}$(X) after removing the center pixel X can be calculated as

    $\begin{align} M'(X)=\frac{1}{N-1}\times\sum_{X_i\in G'(X)}I(X_i). \end{align}$

    (2)

    The mean absence importance of pixel X is then defined as

    $\begin{align} MAI(X)=|{M(X)-M'(X)}|. \end{align} $

    (3)
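    To make the definition concrete, the following Python sketch (our own illustration, not code from the paper) computes the MAI of a single pixel; a square $(2R+1)\times(2R+1)$ window is used as a simple stand-in for the circular supporting region G(X), which is an assumption of the sketch.

```python
import numpy as np

def mai(image, r, c, R=1):
    """Mean absence importance (MAI) of pixel (r, c), per equations (1)-(3).

    Note: a square window approximates the circular supporting region G(X).
    """
    patch = image[r - R:r + R + 1, c - R:c + R + 1].astype(np.float64)
    n = patch.size                                   # N, number of pixels in G(X)
    m = patch.sum() / n                              # M(X), equation (1)
    m_prime = (patch.sum() - patch[R, R]) / (n - 1)  # M'(X), equation (2), center removed
    return abs(m - m_prime)                          # MAI(X), equation (3)
```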
  • The standard deviation is also introduced to construct a second measure for evaluating the absence importance. Similar to the definition of the mean absence importance, the standard deviation error after removing the center pixel is defined as

    $\begin{align} \begin{aligned} SAI(X)&=|S(X)-S'(X)|\\ S(X)&={\rm Std}(G(X))\\ S'(X)&={\rm Std}(G'(X))\\ \end{aligned} \end{align}$

    (4)

    where ${\rm Std}(\cdot)$ denotes the standard deviation of the gray values in the local region.

    From Table 1, it can be seen that the differences in the standard deviations between an edge and a corner are indistinctive, while the differences in the means are obvious. However, for corner pixels at an obtuse angle, MAI(X) is unable to distinguish them from edge pixels. Fortunately, it was found in our experiments that the ratio of MAI(X) to SAI(X) overcomes this shortcoming. In our method, this ratio is used instead of SAI(X) to define the standard deviation absence importance (SDAI), which is expressed as

    $\begin{align} SDAI(X)=\frac{MAI(X)}{SAI(X)}. \end{align}$

    (5)
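    Continuing the sketch above, SAI (4) and the ratio-based SDAI (5) can be evaluated in the same way; the small epsilon guarding the division is our own addition, since the paper does not discuss the case SAI(X) = 0.

```python
def sdai(image, r, c, R=1, eps=1e-12):
    """Standard deviation absence importance (SDAI), per equations (4)-(5)."""
    patch = image[r - R:r + R + 1, c - R:c + R + 1].astype(np.float64)
    values = patch.ravel()
    absent = np.delete(values, (2 * R + 1) * R + R)  # G'(X): center pixel removed
    mai_val = abs(values.mean() - absent.mean())     # MAI(X), equation (3)
    sai_val = abs(values.std() - absent.std())       # SAI(X) = |Std(G(X)) - Std(G'(X))|
    return mai_val / (sai_val + eps)                 # SDAI(X) = MAI(X) / SAI(X)
```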
  • For an image, a pixel can be located in a flat region, on an edge, or at a corner (the intersection point of edges). We now analyze the absence importance of pixels of different types. Fig. 2 shows 8 different types of image structures, where (a)-(c) are ridge edges, (d)-(e) are step edges and (f)-(h) are corners. Fig. 3 shows the MAIs and SDAIs of the center pixels in Figs. 2(a)-(h) in the form of histograms. As seen from Fig. 3(a), all 3 types of structure have a strong MAI response, and the corresponding values differ among types but are the same within one type. As for the SDAIs, shown in Fig. 3(b), the values increase gradually across the 3 types, and the largest value is attained for the corner structure.

    Figure 2.  Local regions of different types

    Figure 3.  MAI and SDAI of the regions in Fig. 2

    Compared with the commonly used gradient magnitude, absence importance has 2 obvious advantages: 1) The gradient magnitude has a single response to a step edge but a bilateral response to a ridge edge, whereas absence importance gives a single response regardless of the edge type, step or ridge. This may provide a uniform measure for detecting different types of edges. 2) The gradient magnitude often has a weak response at corner positions, which makes edge detectors based on gradient magnitude (such as Canny) generally produce non-continuous results at corners. In contrast, absence importance can be large at corners, and thus it can produce continuous results when used in feature detection. Fig. 4 shows energy maps of absence importance corresponding to different types of edges and corners, obtained by computing the MAI or SDAI of each pixel in the images. Obviously, edges are highlighted and the maxima are obtained at corners.

    Figure 4.  MAI and SDAI energy maps of regions of different types
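    In principle, the energy maps of Fig. 4 can be reproduced by evaluating MAI (or SDAI) at every interior pixel. A minimal sketch, reusing the helper functions above and ignoring the R-pixel image border, is:

```python
def mai_energy_map(image, R=1):
    """MAI energy map: MAI evaluated at every interior pixel of a gray image."""
    h, w = image.shape
    emap = np.zeros((h, w), dtype=np.float64)
    for r in range(R, h - R):
        for c in range(R, w - R):
            emap[r, c] = mai(image, r, c, R)
    return emap
```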

  • In this section, the defined absence importance is used to construct new feature detection algorithms, which are compared with classic detectors. To test the performance of absence importance applied to edge detection, we first compute the MAI (3) energy map instead of the Gaussian gradient magnitude of the image. Then, in order to obtain a better edge detection result, we perform post-processing thinning on the resulting edges based on the MAI energy map to obtain single-pixel edges. As for corner detection, after calculating the SDAI (5) of each pixel in the image, corners are obtained by thresholding the SDAI map and then detecting the local maxima. In the following, synthetic images and real images are used to evaluate the performance of the developed algorithms for edge and corner detection.
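    As a hedged illustration of the corner detection step just described (thresholding the SDAI map and keeping local maxima), one possible implementation is sketched below; the threshold value and the 3x3 non-maximum-suppression window are our assumptions, since the paper does not specify them.

```python
from scipy.ndimage import maximum_filter

def detect_corners(image, R=1, threshold=1.0):
    """Corner detection by thresholding the SDAI map and keeping local maxima."""
    h, w = image.shape
    smap = np.zeros((h, w), dtype=np.float64)
    for r in range(R, h - R):
        for c in range(R, w - R):
            smap[r, c] = sdai(image, r, c, R)
    # keep pixels that exceed the threshold and are 3x3 local maxima
    is_max = smap == maximum_filter(smap, size=3)
    rows, cols = np.nonzero(is_max & (smap > threshold))
    return list(zip(rows.tolist(), cols.tolist()))
```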

    Fig. 5(a) gives an example image containing a step edge and ridge edges at the same time. Figs. 5(b) and (c) show the edge detection results based on Canny and absence importance, respectively. Obviously, the Canny detector generates a single edge for the step edge but double edges for the ridge edge, and breakpoints are produced at the junctions of edges. In contrast, the absence importance based method gives a unique response to both types of edges, and the detected edges are continuous and complete.

    Figure 5.  Edge detection: (a) Original image; (b) Edge detection by Canny; (c) Edge detection based on AI

    A classic synthetic image for edge and corner detection is used to evaluate the performance of the proposed AI method, as shown in Fig. 6(a). The proposed method is compared with the smallest univalue segment assimilating nucleus (SUSAN) and Canny detectors on this image, and the corresponding results are shown in Figs. 6(b)-6(d). It can be seen that both SUSAN and the AI based method generate continuous edges at the junctions. However, for the short edge marked in the circled area, SUSAN produces a bending phenomenon, which can also be observed in the result provided by Canny. Comparatively, the proposed AI based method does not produce this phenomenon; instead, it detects the edge accurately, as Fig. 6(d) shows, i.e., the AI based method locates the edge more precisely. This is another advantage of the absence importance based method.

    Figure 6.  Edge detection results of different methods: (a) Original image; (b) SUSAN; (c) Canny; (d) AI

    Fig. 7 gives a clearer example comparing SUSAN with AI for edge detection. When the central pixel of the SUSAN template is located in the red region shown in Fig. 7(a), the area of the USAN is 21 or 25, and the pixels in the red region are identified as edge pixels. When the central pixel of the template is located in the yellow region, the area of the USAN is 11 or 15, and the pixels marked in yellow are identified as corners. Obviously, SUSAN produces a bent edge near the corner. We now analyze the proposed AI based method. When the central pixel of the template is located in the red region shown in Fig. 7(b), the MAI is 0 and these pixels are identified as flat-region pixels. When the central pixel of the template is located in the blue region, the MAI is larger and these pixels are identified as edge pixels. Obviously, no bending phenomenon occurs in the detection result of the absence importance based method.

    Figure 7.  Compare SUSAN with AI for edge detection: (a) SUSAN; (b) AI (please refer to the electronic version)

    Fig. 8 gives several edge detection results of the Canny detector and the proposed AI based method on real images. It further demonstrates that the AI based method has two obvious advantages over the Canny detector: 1) The AI based method gives a single response to both the ridge edge and the step edge, while the Canny detector gives a double response to the ridge edge and a single response to the step edge, as marked in the red rectangle areas of Fig. 8. 2) The AI based method provides more continuous edges than the Canny detector, as marked in the green rectangle areas of Fig. 8.

    Figure 8.  Edge detection results: (a) Original images; (b) Results based on Canny detector; (c) Results based on MAI (please refer to the electronic version)

    Jiang et al.[34] showed that the accuracy of the SUSAN corner detector is the highest among many corner detectors. Therefore, to verify the performance of the proposed AI based method for corner detection, a comparative experiment with the SUSAN corner detector is performed on the classic synthetic image shown in Fig. 6(a).

    Fig. 9 gives the corner detection results. It can be seen that the SUSAN corner detector misses one corner and produces several redundant corners, while the AI based method detects all the corners with no redundant corners, except that one corner is located incorrectly. Thus the AI based method displays superior performance in terms of detection accuracy.

    Figure 9.  Corner detection: (a) Result by SUSAN; (b) Result by AI

    Table 2 gives the comparative results on time cost for the SUSAN corner detector and the AI based methods, denoted MAI and SDAI, respectively.

    Table 2.  Comparative results on time cost for SUSAN corner detector (SUSAN) and AI based methods (MAI and SDAI)

    The time cost is obtained by averaging the time over a sufficient number of runs. It can be seen that the MAI based algorithm costs the least time, the SUSAN detector comes second, and the SDAI based algorithm is the most time consuming. Thus, to some extent, the AI based methods can be superior in runtime to the SUSAN corner detector.

    Generally, the proposed AI based method shows better results than the SUSAN corner detector in terms of accuracy and time cost.

    More images are used to evaluate the performance of the proposed method, as Fig. 10(a) shows. The four sets of images are a checkerboard, a fence, a wall painting and a two-dimensional code, respectively. The edges detected based on MAI are shown in Fig. 10(b). Figs. 10(c), 10(d) and 10(e) display the corners detected based on MAI, SDAI and SUSAN, respectively. Table 3 gives statistics on the numbers of detected corners and the average accuracy of corner detection. It can be seen that both the MAI and SDAI based methods can be used for corner detection, with the latter showing slightly better performance, and the average accuracies of corner detection based on MAI and SDAI are greater than that of SUSAN. In addition, the MAI based method performs edge detection well and accurately.

    Figure 10.  Edge and corner detection results using absence importance based method and SUSAN: (a) Original images; (b) Edge detection results based on MAI; (c) Corner detection results based on MAI; (d) Corner detection results based on SDAI; (e) Corner detection results based on SUSAN

    Table 3.  Statistical results of the detected corner based on MAI, SDAI and SUSAN corner detector for the images in Fig. 10

  • In this section, the proposed AI feature is used for feature matching. Since point matching has made a great deal of progress in recent years, curve matching, which has made relatively little progress, is selected here as the application of the AI feature. We propose a new curve descriptor called the absence importance mean-standard deviation descriptor (AIMSD), which is obtained by using the two proposed absence importance features instead of the gradient feature in the mean-standard deviation curve descriptor (MSCD)[32]. The proposed AIMSD is constructed as follows (a sketch of steps 3)-5) is given after the construction):

    1) For a pixel X$_{i}$ on a curve C, the point support region (PSR) of X$_{i}$ is defined as a rectangular region centered at X$_{i}$ and aligned with the gradient direction of X$_{i}$, as Fig. 11 shows. Denote the PSRs of the pixels on curve C as G$_{1}$, G$_{2}$, $\cdots$, G$_{N}$ (assuming C consists of N pixels). Divide each PSR G$_{i}$ into M non-overlapping sub-regions of the same size along the gradient direction: G$_i$=G$_{i1} \bigcup$ G$_{i2} \bigcup \cdots \bigcup$ G$_{iM}$, with M=9 in this paper.

    Figure 11.  Curve support region and its sub-regional division

    2) Calculate the signed mean absence importance (SMAI) and the signed standard deviation absence importance (SSAI) of each pixel X according to the following equations, which are slightly different from the MAI (3) and SAI (4):

    $\begin{align} SMAI(X)=M(X)-M'(X) \end{align}$

    (6)

    $\begin{align} SSAI(X)=S(X)-S'(X). \end{align}$

    (7)

    3) Accumulate the positive and negative SMAI and SSAI values separately to obtain a 4-dimensional feature vector for the sub-region G$_{ij}$($i=1$, 2, $\cdots$, N; $j=1, 2, \cdots$, M):

    $\begin{align} \begin{aligned} AIFV_{ij}&=(V_{ij}^{1}, V_{ij}^{2}, V_{ij}^{3}, V_{ij}^{4})\in {\bf R}^4, \\ V_{ij}^{1}&=\sum_{X\in G_{ij} \& SMAI(X)>0}SMAI(X)\\ V_{ij}^{2}&=\sum_{X\in G_{ij} \& SMAI(X)<0}-SMAI(X)\\ V_{ij}^{3}&=\sum_{X\in G_{ij} \& SSAI(X)>0}SSAI(X)\\ V_{ij}^{4}&=\sum_{X\in G_{ij} \& SSAI(X)<0}-SSAI(X). \end{aligned} \end{align}$

    (8)

    4) Construct the description matrix of the curve by stacking the description vectors of all the sub-regions associated with curve C:

    $ \begin{align} \begin{aligned} {DM}({C})&=\left[\begin{array}{cccc} {AIFV}_{11} & {AIFV}_{21} & \cdots & {AIFV}_{N1} \\ {AIFV}_{12} & {AIFV}_{22} & \cdots & {AIFV}_{N2} \\ \vdots & \vdots & \ddots & \vdots \\ {AIFV}_{1M} & {AIFV}_{2M} & \cdots & {AIFV}_{NM} \end{array}\right]\triangleq \\ &\qquad [{V}_1, {V}_2, \cdots, {V}_N]. \end{aligned} \end{align}$

    (9)

    5) Compute the mean and standard deviation vector of the curve, and then combine the two vectors into a single vector and normalize it to the unit norm:

    $\begin{align} \textbf{Mean}(C)={\rm Mean}(V_1, {V}_2, \cdots, {V}_N) \end{align}$

    (10)

    $\begin{align} \textbf{Std}(C)={\rm Std}(V_1, V_2, \cdots, V_N) \end{align}$

    (11)

    $\begin{align} {AIMSD}(C)=\left[\frac{{\rm Mean}(C)}{\parallel {\rm Mean}(C)\parallel}, \frac{{\rm Std}(C)}{\parallel {\rm Std}(C)\parallel}\right]. \end{align}$

    (12)
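    The aggregation in steps 3)-5) can be summarized by the following sketch; it assumes the SMAI and SSAI values of the pixels in every sub-region G$_{ij}$ have already been computed (steps 1) and 2) are omitted), and all function and variable names are ours.

```python
def aimsd(smai_subregions, ssai_subregions):
    """AIMSD descriptor of a curve, per equations (8)-(12).

    smai_subregions / ssai_subregions: nested lists indexed [i][j], where
    element [i][j] is a 1-D array of SMAI / SSAI values of the pixels in
    sub-region G_ij of the i-th point support region (i = 1..N, j = 1..M).
    """
    N, M = len(smai_subregions), len(smai_subregions[0])
    columns = []                                  # V_1, ..., V_N, each 4M-dimensional
    for i in range(N):
        col = []
        for j in range(M):
            sm = np.asarray(smai_subregions[i][j], dtype=np.float64)
            ss = np.asarray(ssai_subregions[i][j], dtype=np.float64)
            # AIFV_ij of equation (8): positive / negative parts summed separately
            col.extend([sm[sm > 0].sum(), -sm[sm < 0].sum(),
                        ss[ss > 0].sum(), -ss[ss < 0].sum()])
        columns.append(col)
    dm = np.asarray(columns).T                    # description matrix DM(C), equation (9)
    mean_v = dm.mean(axis=1)                      # Mean(C), equation (10)
    std_v = dm.std(axis=1)                        # Std(C), equation (11)
    return np.concatenate([mean_v / np.linalg.norm(mean_v),
                           std_v / np.linalg.norm(std_v)])   # AIMSD(C), equation (12)
```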

    Fig. 12 gives the curve matching results on 5 image pairs. Table 4 gives the comparison results of MSCD, the intensity order mean-standard deviation descriptor (IOMSD)[35] and AIMSD. In the experiments, all experimental conditions are kept the same when matching curves with the different methods. It can be seen that, although the total number of matches obtained by AIMSD is slightly less than those obtained by MSCD and IOMSD, the accuracies of the three are approximately equivalent, with the accuracy of AIMSD being slightly better than that of IOMSD. This proves that the proposed absence importance features can be successfully applied to feature matching.

    Figure 12.  Curve matching results using AIMSD

    Table 4.  Curve matching results (The number outside the bracket denotes the total matches, and the number in the bracket denotes the incorrect matches)
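    The matching criterion behind Table 4 is not detailed in this section; a common choice, shown below purely as an assumption, is nearest-neighbor matching of the AIMSD descriptors under Euclidean distance with a distance-ratio test.

```python
def match_curves(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbor matching of curve descriptors with a ratio test.

    desc_a, desc_b: lists of descriptor vectors (e.g., AIMSD) from two images.
    Returns candidate matches as (index_in_a, index_in_b) pairs.
    """
    matches = []
    for i, da in enumerate(desc_a):
        dists = np.array([np.linalg.norm(da - db) for db in desc_b])
        order = np.argsort(dists)
        if len(order) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```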

  • Based on the fact that the absence of an important pixel has a great effect on the local structure, a new feature named absence importance (AI) is proposed in this paper. Two specific features, MAI and SDAI, are defined and applied successfully to feature detection and matching. In a sense, the absence importance proposed in this paper can be used as an important supplement to the classic gradient feature. Our future work will focus on defining absence importance using other statistics and on its application to other fields.
