2.1 Introduction
In the last few years, remote sensing image scene classification has received remarkable attention due to its importance, and researchers are working hard to classify remote sensing images correctly. For this reason, we have studied different topics related to this research. The rest of this chapter describes some important hand-engineered feature extraction methods along with deep learning techniques. In section 2.2, we will discuss some hand-engineered feature extraction methods; in section 2.3, we will discuss some deep learning models; and finally, in section 2.4, we will draw a conclusion.
2.2 Hand-Engineered Feature Extraction Methods
Features play a very important role in the area of image processing. Different feature extraction methods have been proposed to describe image content.
LBP [1] is very simple to understand and easy to implement. To compute LBP, the image is first divided into cells, e.g. 16×16 pixels. For each pixel in a cell, the neighbouring pixels around it are considered: if a neighbouring pixel value is greater than or equal to the centre pixel, it is assigned 1, otherwise 0. The binary digits are then read in a fixed order (clockwise or anti-clockwise), interpreted as a binary number, and converted to a decimal value that is assigned to the centre pixel. A histogram is computed for each 16×16 cell and the histograms are finally concatenated; the concatenated histogram gives the feature vector for the entire window. An example of LBP feature extraction is given in Figure 2.1.
Figure 2.1: Finding decimal value for central pixel using LBP
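To make the procedure concrete, the following minimal NumPy sketch computes the basic 8-neighbour LBP code and the concatenated per-cell histograms described above. It assumes a grayscale image whose sides are multiples of the cell size and, for simplicity, skips the border pixels of each cell; the function names are illustrative:

import numpy as np

def lbp_code(cell, r, c):
    """Compute the 8-neighbour LBP code of the pixel at (r, c)."""
    center = cell[r, c]
    # Clockwise neighbour offsets, starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for dr, dc in offsets:
        code = (code << 1) | (1 if cell[r + dr, c + dc] >= center else 0)
    return code  # decimal value assigned to the centre pixel

def lbp_histogram(image, cell_size=16):
    """Concatenate per-cell LBP histograms into one feature vector."""
    h, w = image.shape
    histograms = []
    for i in range(0, h - cell_size + 1, cell_size):
        for j in range(0, w - cell_size + 1, cell_size):
            cell = image[i:i + cell_size, j:j + cell_size]
            codes = [lbp_code(cell, r, c)
                     for r in range(1, cell_size - 1)
                     for c in range(1, cell_size - 1)]
            hist, _ = np.histogram(codes, bins=256, range=(0, 256))
            histograms.append(hist)
    return np.concatenate(histograms)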
LBP has some limitations that reduce its range of application: the basic operator is not rotation invariant, and the size of the feature grows exponentially with the number of neighbours, which increases the computational complexity in terms of both time and space.
2.2.1 Noise Adaptive Binary Pattern (NABP)
Noise adaptive binary pattern [12] is a modification of the local binary pattern. Though LBP is one of the most powerful methods for feature extraction, it lacks discriminative power and is sensitive to noise: LBP may produce the same code for structurally different neighbourhoods.
These 16 features included 12 features calculated from the 6 multispectral bands, namely the mean value and standard deviation of each band. In addition, we chose intensity, texture variance, texture mean, and NDVI (Normalized Difference Vegetation Index) for classification. Finally, training samples were selected for each classification category based on the previously segmented and merged objects.
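A minimal NumPy sketch of the per-band statistics and the NDVI component of this feature vector, assuming a (bands, height, width) multispectral array; the red and NIR band indices, and the omission of the texture features, are illustrative simplifications rather than the study's exact setup:

import numpy as np

def band_statistics(stack, red_band=2, nir_band=3):
    """Per-object features: mean/std of each band, intensity and NDVI."""
    features = []
    for band in stack:                       # 12 features: mean and std
        features.extend([band.mean(), band.std()])
    features.append(stack.mean())            # intensity (overall brightness)
    red = stack[red_band].astype(float)
    nir = stack[nir_band].astype(float)
    ndvi = (nir - red) / (nir + red + 1e-9)  # NDVI, guarded against /0
    features.append(ndvi.mean())
    # texture-mean and texture-variance would be appended similarly
    return np.array(features)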
Thus our proposed optimal feature subset selection based on multi-level feature subset selection produced better results in terms of the number of subset features produced and classifier performance. The future scope of the work is to use these features to annotate image regions, so that an image retrieval system can retrieve relevant images based on image semantics.
In addition to binary images, the proposed method may also be tested on discrete color images. These types of
The goal of feature extraction and selection is to reduce the dimension of the data. In this experiment the dimension of the AVIRIS and HYDICE images was reduced to 20 from 220 and 191 bands, respectively, using PCA. From the PCA analysis we can see that the image of principal component 1 is brighter and sharper than the other principal-component images, as illustrated in Figure 2.
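A minimal sketch of this reduction with scikit-learn, assuming the hyperspectral cube is stored as a (height, width, bands) array; the function name is illustrative:

import numpy as np
from sklearn.decomposition import PCA

def reduce_bands(cube, n_components=20):
    """Reduce a (height, width, bands) hyperspectral cube to
    n_components spectral features per pixel using PCA."""
    h, w, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(float)  # one row per pixel
    pca = PCA(n_components=n_components)
    reduced = pca.fit_transform(pixels)             # (h*w, n_components)
    return reduced.reshape(h, w, n_components)

# e.g. an AVIRIS scene: 220 bands -> 20 principal components
# reduced = reduce_bands(aviris_cube, 20)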
are the pixels used in the feature detection. The pixel at C is the centre of a detected feature.
The basic principle of this algorithm is to recognize the input paper currency. First, the image is acquired from a particular source; in this thesis, reference images are used. The system reads the image and resizes it. The colour separator then converts the image from RGB to grayscale and from grayscale to a binary image, after which a median filter removes colour noise. The currency length detector measures the length of the note. Feature extraction techniques detect the distinctive features of that currency, and a pattern matching algorithm matches those features against the database images; the best match identifies the currency. In this way, the thesis designs an automatic system for recognizing paper currency.
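A minimal OpenCV sketch of this pipeline is given below; the resize dimensions, filter size and the use of normalised template matching are illustrative assumptions rather than the thesis's exact design:

import cv2

def recognize_currency(path, templates):
    """Sketch of the pipeline above.  `templates` is assumed to map a
    denomination label to a reference grayscale image."""
    image = cv2.imread(path)                        # acquire the image
    image = cv2.resize(image, (400, 200))           # illustrative size
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # colour -> grayscale
    gray = cv2.medianBlur(gray, 5)                  # median noise filter
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    ys, xs = binary.nonzero()                       # currency length detector
    note_length = int(xs.max() - xs.min()) if xs.size else 0
    best_label, best_score = None, -1.0
    for label, template in templates.items():       # pattern matching
        template = cv2.resize(template, (gray.shape[1], gray.shape[0]))
        score = cv2.matchTemplate(gray, template,
                                  cv2.TM_CCOEFF_NORMED).max()
        if score > best_score:
            best_label, best_score = label, score
    return best_label, note_length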
Texture is one of the crucial primitives in human vision, and texture features have been used to identify the contents of images; examples are identifying crop fields and mountains in the aerial image domain. Moreover, texture can be used to describe the contents of images, such as clouds, bricks, hair, etc. Both identifying and describing characteristics of texture become easier when texture is integrated with color, since this provides the details of the important features of image objects for human vision. One crucial distinction between color and texture features is that color is a point, or pixel, property, whereas texture is a local-neighbourhood property. The main motivation for using texture is this ability to identify and describe image content.
Face recognition applications are computation-intensive. They try to extract every small detail from the provided image for accurate face detection and recognition, and they can recognize faces with accuracy of up to 90% [1]. The accuracy of the recognized objects is a crucial factor and depends on the recognition algorithms implemented. The object needs to be recognized quickly without compromising the level of accuracy [2].
Feature extraction basically deals with the ABCD rule. This method is able to provide a more objective and reproducible diagnosis of skin cancers, in addition to its speed of calculation. It is based on four parameters: A (Asymmetry) evaluates the asymmetry of the lesion, B (Border) estimates the character of the lesion border, C (Color) identifies the number of colors present in the investigated lesion, and D (Diameter) measures the diameter of the lesion.
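As an illustration of the A parameter, one common way to score asymmetry (an assumption for illustration, not necessarily the exact ABCD formulation) is to flip a binary lesion mask about its bounding-box axes and measure the non-overlapping area:

import numpy as np

def asymmetry_score(mask):
    """Asymmetry of a binary lesion mask (nonzero = lesion): flip the
    cropped mask left-right and top-bottom and measure the fraction of
    non-overlapping area.  0 means perfectly symmetric."""
    ys, xs = np.nonzero(mask)
    crop = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1] > 0
    area = crop.sum()
    h_diff = np.logical_xor(crop, np.fliplr(crop)).sum()  # left-right
    v_diff = np.logical_xor(crop, np.flipud(crop)).sum()  # top-bottom
    return (h_diff + v_diff) / (2.0 * area)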
• Feature Extraction: this is the most important stage for automated markerless capture systems, whether for gait recognition, activity classification or other applications.
As shown in Figure 1 above, the input neonatal image from the database is preprocessed using techniques such as noise removal, inhomogeneity correction and partial volume correction. Texture features are then extracted from the preprocessed image through the Empirical Wavelet Transform (EWT), and the most relevant features, such as entropy, correlation, contrast, homogeneity and dissimilarity, are computed from the Gray Level Co-occurrence Matrix (GLCM). Based on the selected features the image is segmented by means of SOM-DCNN, and the tissues are then classified from the segmented image with the help of a sparse autoencoder.
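The GLCM step can be sketched with scikit-image, which provides the co-occurrence matrix and most of the listed properties directly; entropy is computed by hand. The distance and angle choices below are illustrative assumptions, not the paper's exact settings:

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray):
    """Texture features from the GLCM of a uint8 grayscale image
    (e.g. one EWT sub-band)."""
    glcm = graycomatrix(gray, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    feats = {prop: graycoprops(glcm, prop)[0, 0]
             for prop in ('contrast', 'correlation',
                          'homogeneity', 'dissimilarity')}
    p = glcm[:, :, 0, 0]
    p = p[p > 0]                              # non-zero probabilities
    feats['entropy'] = float(-(p * np.log2(p)).sum())
    return feats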
Embedded method: The embedded method combines the qualities of the filter and wrapper methods. It decreases the computational cost compared with the wrapper approach and captures feature dependencies. It searches locally for features that allow better discrimination, and considers the relationship between the input features and the target. It involves the learning algorithm itself, which is used to select the optimal subset among the candidate features.
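As an illustration, a classic embedded selector is L1-regularised logistic regression, where features whose learned coefficients shrink to zero are discarded during training itself. A minimal scikit-learn sketch on synthetic data (the regularisation strength C=0.1 is an arbitrary illustrative choice):

from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression

# Synthetic data: 100 samples, 20 features, only 4 of them informative.
X, y = make_classification(n_samples=100, n_features=20,
                           n_informative=4, random_state=0)

# L1 regularisation embeds selection in training: features whose
# learned coefficients shrink to zero are discarded.
model = LogisticRegression(penalty='l1', solver='liblinear', C=0.1)
selector = SelectFromModel(model).fit(X, y)
X_selected = selector.transform(X)
print('kept features:', selector.get_support().nonzero()[0])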
The first step aims to identify those locations and scales that are identifiable from different views of the same object. Various techniques can then be used to detect stable keypoint locations in the scale-space. The Difference of Gaussians (DoG) is one such technique, obtained as the difference between Gaussian blurrings of an image at two different scales, one with scale $k$ times the other: $D(x, y, \sigma) = L(x, y, k\sigma) - L(x, y, \sigma)$, where $L(x, y, \sigma) = G(x, y, \sigma) * I(x, y)$ denotes the image $I$ convolved with a Gaussian of scale $\sigma$.
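A minimal sketch of the DoG computation, assuming a grayscale image array; $\sigma = 1.6$ and $k = \sqrt{2}$ are the commonly quoted SIFT defaults, used here only as illustrative values:

import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma=1.6, k=np.sqrt(2)):
    """D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma),
    where L is the image blurred with a Gaussian of the given scale."""
    image = image.astype(float)
    return gaussian_filter(image, k * sigma) - gaussian_filter(image, sigma)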
Image processing is a technique to enhance raw images received from cameras/sensors placed on satellites, space probes and aircraft, or pictures taken in normal day-to-day life, for various applications. Various techniques have been developed in image processing during the last four to five decades. Most of the techniques were developed for enhancing images obtained from unmanned spacecraft, space probes and military reconnaissance flights. Image processing systems are becoming popular due to the easy availability of powerful personal computers, large memory devices, graphics software, etc. The common steps in image processing are image scanning, storing, enhancing and interpretation.
1.1.1. Image processing: Image processing is a methodology for performing operations on an image in order to get an enhanced image or to extract some helpful data from it. It is treated as an area of signal processing where both the input and output signals are images. Images are represented as two-dimensional matrices, and existing signal processing strategies are applied to the input matrix. Image processing finds applications in several fields such as photography, satellite imaging, medical imaging and image compression, just to name a few. Basically, image processing includes the following steps: reading the image via image acquisition tools like cameras, scanners, etc.; analysing and manipulating the acquired image to enhance its quality and locate the data of interest; and producing the output, which can be an altered image or a report based on the image analysis. Originally image processing was proposed for space exploration and the biomedical field, but later, with the increased use of digital images in everyday life, it came to be considered a powerful tool for arbitrarily manipulating images to gain useful information. It can be defined as the means of conversion between the human visual system and digital imaging devices. The main purposes of image processing are listed below:
1. Visualization – observe objects that are not visible.
2. Image sharpening and restoration – increase the quality of an image.
3. Image retrieval – search for images of interest.