127, 2005).

An Eigenface representation is created from the principal "components" of the covariance matrix of a training set of facial images (Carts-Power, pg. 127, 2005). This method converts the facial data into eigenvectors projected into Eigenspace (a subspace), allowing considerable "data compression because surprisingly few Eigenvector terms are needed to give a fair likeness of most faces. The method catches the imagination because the vectors form images that look like strange, bland human faces. The projections into Eigenspace are compared and the nearest neighbors are assumed to be matches." (Carts-Power, pg. 127, 2005)
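The project-and-compare pipeline described above can be sketched in a few lines of NumPy. The toy data, image size, and number of retained components below are illustrative assumptions, not values from the cited work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a training set: 20 "face images" of 8x8 pixels,
# flattened to 64-dimensional row vectors.
train = rng.normal(size=(20, 64))

# Center the data and take eigenvectors of the covariance matrix.
mean_face = train.mean(axis=0)
centered = train - mean_face
cov = centered.T @ centered / len(train)
eigvals, eigvecs = np.linalg.eigh(cov)

# Keep the k leading "eigenfaces" (eigh sorts eigenvalues ascending,
# so the largest components come last).
k = 8
eigenfaces = eigvecs[:, -k:]

# Project every training image into Eigenspace (the k-dim subspace).
train_proj = centered @ eigenfaces

def match(probe):
    """Project a probe image and return the index of its nearest neighbor."""
    p = (probe - mean_face) @ eigenfaces
    dists = np.linalg.norm(train_proj - p, axis=1)
    return int(np.argmin(dists))

# A slightly noisy copy of training image 3 should match image 3.
probe = train[3] + rng.normal(scale=0.05, size=64)
print(match(probe))
```

The compression claim in the quote corresponds to `k` being much smaller than the pixel count: each face is summarized by 8 coefficients rather than 64 pixels.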

The differences among the algorithms are reflected in the output of the resulting match or non-match of real facial features against the biometric database or the representation generated via the algorithm. The variances produced by the Eigenface approach or by PCA differ according to how each is applied. Eigenface methods work on the premise of vectors, contours, and gradients, which are essentially the geophysical descriptors used in earth-science technology. The human face is in fact quite similar to a geophysical landscape, resembling an arid desert with hills, valleys, and peaks.

Many regard the principal component analysis (PCA), or eigenface, approach as highly beneficial (Liu, Chen, Lu & Chen, 2006). As such, the industry early on relied on "PCA-based face recognition systems" (Liu, Chen, Lu & Chen, 2006). The PCA approach is able to locate variances in the details and intricacies when reviewing the "scaled and aligned human face, but it will degrade dramatically for not-aligned faces." (Liu, Chen, Lu & Chen, 2006) To overcome this limitation, "a better method named independent component analysis (ICA) is presented" by Liu et al., developed to find "basis functions which are local and give good representation of face images." (Liu, Chen, Lu & Chen, 2006)

The parametric modeling of facial sub-features that are hidden by shading also poses a problem. The use of PCA to solve this issue (Zhao, Chellappa, Rosenfeld & Phillips) provides a mathematical framework for identifying the hidden parameters where shadow obscures the subspace.

Principal Component Analysis (PCA) is recommended as an enabler to render a solution to the "parametric shape-from-shading (SFS) problem." (Zhao, Chellappa, Rosenfeld & Phillips) "An eigen-head approximation of a 3D head" "was received after training on about 300 laser-scanned range images of real human heads." (Zhao, Chellappa, Rosenfeld & Phillips) The SFS quandary described by Zhao et al. becomes a "parametric problem"; however, "a constant albedo is still assumed." (Zhao et al.) "This assumption does not hold for most real face images and it is one of the reasons why most SFS algorithms fail on real face images." To overcome the constant-albedo issue, Zhao et al. suggest "the use of a varying albedo reflectance model." (Zhao, Chellappa, Rosenfeld & Phillips)
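The constant-albedo assumption that Zhao et al. relax can be illustrated with a minimal Lambertian image-formation sketch. The light direction, surface normals, and albedo values below are invented for illustration and are not taken from the cited work:

```python
import numpy as np

# Lambertian image formation: intensity = albedo * max(0, n . l).
# Classical SFS assumes a single constant albedo rho; a varying-albedo
# model lets rho differ per pixel (e.g. eyebrow vs. skin), which is one
# reason constant-albedo SFS fails on real face images.

light = np.array([0.0, 0.0, 1.0])      # frontal light direction

# Toy 2x2 patch of unit surface normals, all facing the camera.
normals = np.zeros((2, 2, 3))
normals[..., 2] = 1.0

albedo = np.array([[0.9, 0.9],
                   [0.2, 0.9]])        # one dark pixel, e.g. an eyebrow

shading = np.clip(normals @ light, 0.0, None)   # n . l at each pixel
image = albedo * shading

# Under a constant-albedo assumption, the dark pixel would be wrongly
# explained by a tilted surface normal instead of by darker albedo.
print(image)
```

Since every normal here faces the light, the rendered intensity equals the albedo map exactly; a constant-albedo SFS solver would misread the dark pixel as geometry.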

Despite the strong results delivered by PCA, the approach is now understood to possess the "disadvantage of being computationally expensive and complex with the increase in database size" (Neerja & Walia, 2008), since every pixel of the entire image, in aggregate, is required to generate the representation needed "to match the input image with all others in the database." (Neerja & Walia, 2008)

Neerja and Walia put forth a "new PCA-based face recognition approach" "using the geometry and symmetry of faces, which extract the features using fast Fuzzy edge Detection to locate the vital feature points on eyes, nose and mouth exactly and quickly." (Neerja & Walia, 2008) For each feature, a subgroup repository of database images is created. "During recognition only the images falling in same group as test image, will be loaded as image vectors in covariance matrix of PCA for comparison." (Neerja & Walia, 2008)
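The grouping idea, comparing a test image only against database images in its own feature group, can be sketched as follows. The `feature_key` function is a hypothetical stand-in for the fuzzy edge-detection step, not the authors' actual feature extractor:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)

# Ten flattened "face images" standing in for a database.
database = [rng.normal(size=16) for _ in range(10)]

def feature_key(image):
    # Illustrative coarse grouping only; in the cited approach this
    # would be derived from detected eye/nose/mouth feature points.
    return int(image.sum() > 0)

# Build one subgroup repository per feature value.
groups = defaultdict(list)
for idx, img in enumerate(database):
    groups[feature_key(img)].append(idx)

def recognize(test_image):
    # Only images in the same group as the test image are compared,
    # shrinking the set fed into the PCA covariance matrix.
    candidates = groups[feature_key(test_image)]
    return min(candidates,
               key=lambda i: np.linalg.norm(database[i] - test_image))

print(recognize(database[4]))
```

The cost saving comes from the candidate list: the covariance comparison runs over one subgroup rather than the whole database.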

The aforementioned approach is expensive; however, governmental agencies such as the FBI, CIA, and the Secret Service, and departments such as the DoE and DoD, will use these approaches to ensure that the SFS problem is mitigated. Additional algorithms are described below.

"The Fisherfaces algorithm, also known as linear discriminant analysis (LDA), was developed at the University of Maryland (College Park, MD)." (Carts-Power, pg. 127, 2005) This method is akin to the PCA application but incorporates modifications that make the differences between faces more evident (Carts-Power, pg. 127, 2005). "Instead of looking for the nearest neighbor in a subspace (like PCA and LDA), the Bayesian intrapersonal/extrapersonal classifier looks at the distance between two face images." (Carts-Power, pg. 127, 2005) Any pair of images can be classified into one of two classes: the pair is either a function of "two images of the same subject or derived from images of different subjects." (Carts-Power, pg. 127, 2005) Each of these classes unfolds as a distribution that is Gaussian in appearance, and the Gaussian distribution can have layered results without obfuscation (Carts-Power, pg. 127, 2005).

The LDA algorithm is similar to PCA but accentuates the dissimilarities between the compared faces. The key difference of LDA lies in its framework of searching for the differences within the subspace. The Gaussian approach mathematically examines the variance between the squared deviations to determine the spatial distance between landmarks. Such an approach is considered rather expensive as well. A further look at ICA reveals the following.
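A minimal sketch of the Fisher discriminant underlying LDA, using two synthetic classes in place of face features; the toy data, class means, and dimensions are assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two toy classes of 2-D feature vectors (e.g. two subjects).
class_a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(50, 2))
class_b = rng.normal(loc=[2.0, 1.0], scale=0.3, size=(50, 2))

m_a, m_b = class_a.mean(axis=0), class_b.mean(axis=0)

# Within-class scatter matrix: sum of the two per-class scatters.
S_w = ((class_a - m_a).T @ (class_a - m_a)
       + (class_b - m_b).T @ (class_b - m_b))

# Fisher direction w = S_w^{-1} (m_a - m_b): it maximizes the ratio of
# between-class to within-class scatter, accentuating the differences
# between the classes rather than overall variance (as PCA does).
w = np.linalg.solve(S_w, m_a - m_b)

# Projections of the two classes onto w are well separated.
proj_a = class_a @ w
proj_b = class_b @ w
```

This is the sense in which LDA "accentuates dissimilarities": the projection axis is chosen to pull the classes apart, whereas a PCA axis is chosen to preserve total variance regardless of class labels.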

"Recently, there has been an increasing interest in statistical models for learning data representations. A very popular method for this task is independent component analysis (ICA), the concept of which was initially proposed by Comon. The ICA algorithm was initially proposed to solve the blind source separation (BSS) problem, i.e. given only mixtures of a set of underlying sources, the task is to separate the mixed signals and retrieve the original sources. Neither the mixing process nor the distribution of sources is known in the process. A simple mathematical representation of the ICA model is as follows. Consider a simple linear model which consists of N sources of T samples, i.e. s_i = s_i(1) … s_i(t) … s_i(T). The symbol t here represents time, but it may represent some other parameter like space. M weighted mixtures of the sources are observed as X, where x_i = x_i(1) … x_i(t) … x_i(T). This can be represented as X = AS, where A is the unknown M x N mixing matrix." (Acharya & Panda, 2008)
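The BSS setting described above can be demonstrated end to end: mix two independent sources with a matrix the separator never sees, then recover them with a small FastICA-style fixed-point iteration (a standard ICA algorithm, not necessarily the one the quoted authors use). All signals and the mixing matrix below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000
t = np.linspace(0, 8, T)

# Two independent, non-Gaussian sources (N = 2, T = 2000 samples).
s1 = np.sign(np.sin(3 * t))          # square wave (sub-Gaussian)
s2 = rng.laplace(size=T)             # Laplacian noise (super-Gaussian)
S = np.vstack([s1, s2])

A = np.array([[1.0, 0.5],
              [0.7, 1.0]])           # mixing matrix, unknown to the separator
X = A @ S                            # M = 2 observed mixtures

# Preprocessing: center and whiten the mixtures.
Xc = X - X.mean(axis=1, keepdims=True)
cov = Xc @ Xc.T / T
d, E = np.linalg.eigh(cov)
Z = E @ np.diag(d ** -0.5) @ E.T @ Xc

# FastICA with tanh nonlinearity, deflation over components.
W = np.zeros((2, 2))
for i in range(2):
    w = rng.normal(size=2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        g = np.tanh(Z.T @ w)
        w_new = (Z @ g) / T - (1 - g ** 2).mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)   # decorrelate from earlier rows
        w_new /= np.linalg.norm(w_new)
        done = abs(abs(w_new @ w) - 1) < 1e-8
        w = w_new
        if done:
            break
    W[i] = w

# Recovered sources, up to sign, scale, and permutation.
S_hat = W @ Z
```

Recovery is only ever up to sign, scale, and ordering, which is inherent to the BSS problem: neither A nor the source distributions are given, so those ambiguities cannot be resolved.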

"ICA is a new signal processing technique for extracting independent variables from a mixture of signals and its basic idea is to represent a set of random variables using basis functions, where the components are statistically independent or as independent as possible. It has become one recent powerful technique in the field of image processing and pattern recognition. The concept of ICA can be seen as a generalization of principal component analysis (PCA). PCA tries to obtain a representation of the input signals based on uncorrelated variables, whereas ICA provides a representation based on statistically independent variables." (Liu, Chen, Lu & Chen, 2006)

"Generally, ICA is performed on multidimensional data. This data may be corrupted by noise, and several original dimensions of data may contain only noise. So if ICA is performed on high-dimensional data, it may lead to poor results due to the fact that such data contain very few latent components. Hence, reduction of the dimensionality of the data is a preprocessing technique that is carried out prior to ICA. Thus, finding a principal subspace where the data exist reduces the noise. Besides, when the number of parameters is large compared to the number of data points, the estimation of those parameters becomes very difficult and often leads to over-learning. Over-learning in ICA typically produces estimates of the independent components that have a single spike or bump and are practically zero everywhere else. This is because in the space of source signals of unit variance, nongaussianity is more or less maximized by such spike/bump signals." (Acharya & Panda, 2008)

ICA is thus a rather comprehensive approach that incorporates a learning-based method and also searches against a database. Through the use of statistically independent variables, a facial representation is created via the ICA approach, whereas PCA relies on uncorrelated variables to make distinctions regarding facial dissimilarities.
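The dimensionality-reduction preprocessing described in the quote can be sketched with PCA computed via the SVD. The data sizes and the number of latent components below are assumptions chosen to mimic the scenario of few latent components buried in noisy high-dimensional data:

```python
import numpy as np

rng = np.random.default_rng(3)

# 100 observations in 10 dimensions, but only 3 latent directions carry
# signal; the remaining dimensions contain only low-amplitude noise.
latent = rng.normal(size=(100, 3))
mixing = rng.normal(size=(3, 10))
data = latent @ mixing + 0.01 * rng.normal(size=(100, 10))

# PCA preprocessing: project onto the principal subspace before ICA.
centered = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

k = 3                           # estimated number of latent components
reduced = centered @ Vt[:k].T   # 100 x 3 matrix handed on to ICA

# The top-k singular values dominate; the discarded dimensions are the
# noise-only directions that would otherwise invite over-learning.
print(s[:4])
```

Discarding the trailing directions both denoises the input and shrinks the parameter count ICA must estimate, which is exactly the over-learning safeguard the quote describes.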

The use of differing algorithms can…