Observation Vector


Handbook of Statistics

Vishal M. Patel , Rama Chellappa , in Handbook of Statistics, 2013

2.1 Sparse representation-based classification

In object recognition, given a set of labeled training samples, the task is to identify the class to which a test sample belongs. Following Wright et al. (2009), we briefly describe the use of sparse representations for biometric recognition; the framework, however, can be applied to general object recognition problems.

Suppose that we are given $L$ distinct classes and a set of $n$ training images per class. One can extract an $N$-dimensional vector of features from each of these images. Let $B_k = [x_{k1}, \ldots, x_{kj}, \ldots, x_{kn}]$ be an $N \times n$ matrix of features from the $k$th class, where $x_{kj}$ denotes the feature vector extracted from the $j$th training image of the $k$th class. Define a new matrix, or dictionary, $B$, as the concatenation of the training samples from all the classes:

$$B = [B_1, \ldots, B_L] \in \mathbb{R}^{N \times (n \cdot L)} = [x_{11}, \ldots, x_{1n} \mid x_{21}, \ldots, x_{2n} \mid \cdots \mid x_{L1}, \ldots, x_{Ln}].$$

We consider an observation vector $y \in \mathbb{R}^N$ of unknown class as a linear combination of the training vectors:

$$y = \sum_{i=1}^{L} \sum_{j=1}^{n} \alpha_{ij}\, x_{ij} \tag{5}$$

with coefficients $\alpha_{ij} \in \mathbb{R}$. The above equation can be written more compactly as

$$y = B\alpha, \tag{6}$$

where

$$\alpha = [\alpha_{11}, \ldots, \alpha_{1n} \mid \alpha_{21}, \ldots, \alpha_{2n} \mid \cdots \mid \alpha_{L1}, \ldots, \alpha_{Ln}]^T \tag{7}$$

and $(\cdot)^T$ denotes transposition. We assume that, given sufficient training samples of the $k$th class, $B_k$, any new test image $y \in \mathbb{R}^N$ that belongs to the same class will lie approximately in the linear span of the training samples from class $k$. This implies that most of the coefficients not associated with class $k$ in (7) will be close to zero. Hence, $\alpha$ is a sparse vector.

In order to represent an observed vector $y \in \mathbb{R}^N$ as a sparse vector $\alpha$, one needs to solve the system of linear equations (6). Typically $L \cdot n > N$, so the system (6) is underdetermined and has no unique solution. As mentioned earlier, if $\alpha$ is sparse enough and $B$ satisfies certain properties, then the sparsest $\alpha$ can be recovered by solving the following optimization problem:

$$\hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_1 \quad \text{subject to} \quad y = B\alpha. \tag{8}$$

When the observations are noisy, Basis Pursuit DeNoising (BPDN) can be used to approximate $\alpha$:

$$\hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_1 \quad \text{subject to} \quad \|y - B\alpha\|_2 \le \epsilon, \tag{9}$$

where we have assumed that the observations are of the following form:

$$y = B\alpha + \eta \tag{10}$$

with $\|\eta\|_2 \le \epsilon$.

Given an observation vector $y$ from one of the $L$ classes in the training set, one can compute its coefficients $\hat{\alpha}$ by solving either (8) or (9). Classification exploits the fact that the large coefficients in $\hat{\alpha}$ will be associated with the columns of $B$ from a single class. This is done by comparing how well the different parts of the estimated coefficients $\hat{\alpha}$ represent $y$; the minimum of the representation (residual) errors then identifies the correct class. The residual error of class $k$ is calculated by keeping the coefficients associated with that class and setting the coefficients not associated with class $k$ to zero. This can be done by introducing a characteristic function, $\Pi_k : \mathbb{R}^{L \cdot n} \rightarrow \mathbb{R}^{L \cdot n}$, that selects the coefficients associated with the $k$th class:

$$r_k(y) = \|y - B\,\Pi_k(\hat{\alpha})\|_2. \tag{11}$$

Here $\Pi_k$ acts as a mask with value one at the locations corresponding to class $k$ and zero at all other entries. The class $d$ assigned to the observed vector is then declared as the one that produces the smallest approximation error:

$$d = \arg\min_k r_k(y). \tag{12}$$

The sparse representation-based classification method is summarized in Algorithm 1.

Algorithm 1 Sparse representation-based classification (SRC) algorithm

Input: $B \in \mathbb{R}^{N \times (n \cdot L)}$, $y \in \mathbb{R}^N$.

1. Solve the BP (8) or BPDN (9) problem.

2. Compute the residuals using (11).

3. Identify the class of $y$ using (12).

Output: Class label of $y$.
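To make the steps of Algorithm 1 concrete, here is a minimal Python sketch. It is not the authors' implementation; the $\ell_1$ solver is stood in for by scikit-learn's Lasso (an assumption), and the variable names (`B`, `y`, `labels`) are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso  # l1-regularized solver used as a BPDN surrogate


def src_classify(B, y, labels, lam=0.01):
    """Sparse representation-based classification (sketch of Algorithm 1).

    B      : (N, n*L) dictionary whose columns are training feature vectors
    y      : (N,) test feature vector
    labels : (n*L,) class label of each column of B
    """
    # Step 1: approximate the BPDN problem (9) with an l1-penalized least-squares fit.
    solver = Lasso(alpha=lam, fit_intercept=False, max_iter=10000)
    solver.fit(B, y)
    alpha_hat = solver.coef_

    # Steps 2-3: class-wise residuals (11) and the minimum-residual decision (12).
    residuals = {}
    for k in np.unique(labels):
        alpha_k = np.where(labels == k, alpha_hat, 0.0)  # Pi_k(alpha_hat)
        residuals[k] = np.linalg.norm(y - B @ alpha_k)
    return min(residuals, key=residuals.get), alpha_hat
```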

For classification, it is important to be able to detect and reject test samples of poor quality. To decide whether a given test sample has good quality, one can use the notion of the Sparsity Concentration Index (SCI) proposed in Wright et al. (2009). The SCI of a coefficient vector $\alpha \in \mathbb{R}^{L \cdot n}$ is defined as

$$\mathrm{SCI}(\alpha) = \frac{L \cdot \max_i \|\Pi_i(\alpha)\|_1 / \|\alpha\|_1 - 1}{L - 1}. \tag{13}$$

SCI takes values between 0 and 1. Values close to 1 correspond to the case where the test image can be approximately represented using images from a single class only; such a test vector has enough discriminating features of its class and hence has high quality. If SCI = 0, the coefficients are spread evenly across all classes, so the test vector is not similar to any of the classes and is of poor quality. A threshold can be chosen to reject images of poor quality: for instance, a test image can be rejected if $\mathrm{SCI}(\hat{\alpha}) < \lambda$ and otherwise accepted as valid, where $\lambda$ is some chosen threshold between 0 and 1.
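A minimal sketch of this rejection rule, reusing the hypothetical `alpha_hat` and `labels` from the classification sketch above:

```python
import numpy as np


def sci(alpha_hat, labels, L):
    """Sparsity Concentration Index (13); values near 1 indicate one dominant class."""
    total = np.sum(np.abs(alpha_hat)) + 1e-12     # ||alpha||_1, guarded against zero
    per_class = [np.sum(np.abs(alpha_hat[labels == k])) for k in np.unique(labels)]
    return (L * max(per_class) / total - 1.0) / (L - 1.0)


# Reject a test sample as low quality if its SCI falls below a chosen threshold lam:
# accept = sci(alpha_hat, labels, L) >= lam
```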

URL: https://www.sciencedirect.com/science/article/pii/B9780444538598000084

ASYMPTOTICALLY UNIMPROVABLE SOLUTION OF MULTIVARIATE PROBLEMS

VADIM I. SERDOBOLSKII , in Multiparametric Statistics, 2008

Problem Setting

Let x be an observation vector from an n-dimensional population $\mathfrak{G}$ with expectation $\mathbf{E}x = 0$, with fourth moments of all components, and with a nondegenerate covariance matrix $\Sigma = \mathrm{cov}(x, x)$. A sample $\mathfrak{X} = \{x_m\}$ of size N is used to calculate the mean vector $\bar{x}$ and the sample covariance matrix

$$C = N^{-1} \sum_{m=1}^{N} (x_m - \bar{x})(x_m - \bar{x})^T.$$

We use the following asymptotic setting. Consider a hypothetical sequence of estimation problems

$$\mathfrak{B} = \big\{(\mathfrak{G}, \Sigma, N, \mathfrak{X}, C, \hat{\Sigma}^{-1})_n\big\}, \quad n = 1, 2, \ldots,$$

where $\mathfrak{G}$ is a population with covariance matrix $\Sigma = \mathrm{cov}(x, x)$, $\mathfrak{X}$ is a sample of size N from $\mathfrak{G}$, and $\hat{\Sigma}^{-1}$ is an estimator of $\Sigma^{-1}$ calculated as a function of the matrix C (we do not write the index n for the arguments of $\mathfrak{B}$). Our problem is to construct the best statistic $\hat{\Sigma}^{-1}$.

We begin by considering the simpler problem of improving estimators of $\Sigma^{-1}$ by introducing a scalar multiple of $C^{-1}$ (shrinkage estimation) for normal populations. Then we consider a wide class of estimators for a wide class of populations.
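As a quick numerical illustration, a minimal sketch (with a hypothetical data array `X`) of the sample covariance matrix C defined above:

```python
import numpy as np


def sample_covariance(X):
    """Sample covariance C = N^{-1} sum_m (x_m - x_bar)(x_m - x_bar)^T.

    X : (N, n) array whose rows are the observation vectors x_m.
    """
    x_bar = X.mean(axis=0)        # mean vector
    D = X - x_bar                 # centered observations
    return D.T @ D / X.shape[0]   # note the 1/N normalization used in the text


# Example: X = np.random.randn(1000, 20); C = sample_covariance(X)
```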

URL: https://www.sciencedirect.com/science/article/pii/B9780444530493500072

Miscellaneous topics in regression rank tests

Jaroslav Hájek , ... Pranab K. Sen , in Theory of Rank Tests (Second Edition), 1999

PROBLEMS AND COMPLEMENTS TO CHAPTER 10

Section 10.1.

1.

Define the aligned observation vectors $\hat{X}_i$, $i = 1, \ldots, n$, as in Subsection 10.1.1, and denote their joint distributions under $H_0$ and $K_n$ by $P_n$ and $Q_n$, respectively. Use LeCam's lemmas to verify that $\{Q_n\}$ is contiguous to $\{P_n\}$.

2.

Use the contiguity result in the preceding problem along with LeCam's third lemma, and extend the joint asymptotic normality of the aligned rank statistics to contiguous alternatives in Kn.

3.

Derive the convergence in formula (10.1.1.14).

4.

Show that $\mathcal{L}_N$, defined by (10.1.1.9), has asymptotically, under $K_n$, a non-central $\chi^2$ distribution with $p-1$ degrees of freedom and non-centrality parameter (10.1.1.17).

5.

Prove that the $(p-1)$-multiple of the classical ANOVA (variance-ratio) test statistic has asymptotically, under $K_n$, a non-central $\chi^2$ distribution with $p-1$ degrees of freedom and non-centrality parameter (10.1.1.18).

6.

Prove the inequality (10.1.1.20), and show that equality holds in it only when $\phi(F(x)) \equiv x$, except on a set of null measure.

7.

Prove the asymptotic linearity result (10.1.2.9).

8.

Show that $\hat{\beta}_{2n}$ is asymptotically normal, as stated in (10.1.2.11).

9.

Show that the representation (10.1.2.14) holds, and that it implies the asymptotic normality in (10.1.2.11).

10.

Prove the asymptotic result (10.1.2.15).

11.

Prove the asymptotic result in relation (10.1.2.16).

12.

Using the projection method, show (10.1.2.17).

13.

Consider the setup of Subsection 10.1.2, and test $H_0: \beta_1 = 0$ against $H_1: \beta_1 \ne 0$ by means of the statistic $\hat{\mathcal{L}}_{n1}$ given by (10.1.2.21). If $H_0$ does not hold, i.e., $\beta_1 \ne 0$, show that $n^{-1}\hat{\mathcal{L}}_{n1}$ converges in probability to a positive constant, so that $\hat{\mathcal{L}}_{n1}$ is $O_p(n)$.

14.

Verify that $\{q_n\}$, the joint density of $\{Y_i;\ 1 \le i \le n\}$ under the contiguous alternatives $\{H_n\}$ given by (10.1.2.22), is contiguous to $\{p_n\}$, the joint density under $H_0$.

15.

Show that, under $\{H_n\}$, $n^{-1/2}\hat{\mathcal{L}}_{n1}$ is asymptotically normal with the parameters given before and in (10.1.2.24).

16.

Prove that, under $\{H_n\}$, the statistic $\hat{\mathcal{L}}_{n1}$ has asymptotically the non-central $\chi^2$ distribution given in (10.1.2.25) and (10.1.2.20).

17.

Still in the setup of Subsection 10.1.2, prove that the asymptotically optimal aligned rank test uses the score function $\phi_0$ given by (10.1.2.27).

18.

(Continuation) Verify the equality (10.1.2.29), i.e., $\phi_0(u) \equiv \phi(u, f)$, $0 \le u \le 1$.

URL: https://www.sciencedirect.com/science/article/pii/B978012642350150028X

SPECTRAL THEORY OF SAMPLE COVARIANCE MATRICES

VADIM I. SERDOBOLSKII , in Multiparametric Statistics, 2008

Limit Spectra

We investigate here the limiting behavior of spectral functions of the matrices S and C under the increasing-dimension asymptotics. Consider a sequence $\mathfrak{B} = \{\mathfrak{B}_n\}$ of problems

$$\mathfrak{B}_n = (\mathfrak{S}, \Sigma, N, \mathfrak{X}, S, C)_n, \quad n = 1, 2, \ldots, \tag{11}$$

in which spectral functions of the matrices C and S are investigated over samples $\mathfrak{X}$ of size N from populations $\mathfrak{S}$ with $\mathrm{cov}(x, x) = \Sigma$ (we do not write out the subscripts n in the arguments of $\mathfrak{B}_n$). For each problem $\mathfrak{B}_n$, we consider the functions

$$h_{0n}(z) = n^{-1} \operatorname{tr}\,(I - zS)^{-1}, \qquad h_n(z) = n^{-1} \operatorname{tr}\,(I - zC)^{-1},$$
$$F_{nS}(u) = \frac{1}{n} \sum_{i=1}^{n} \operatorname{ind}(\lambda_i^S \le u), \qquad F_{nC}(u) = \frac{1}{n} \sum_{i=1}^{n} \operatorname{ind}(\lambda_i^C \le u),$$

where $\lambda_i^S$ and $\lambda_i^C$ are the eigenvalues of S and C, respectively, $i = 1, 2, \ldots, n$.

We restrict (11) by the following conditions.

A. For each n, the observation vectors in $\mathfrak{S}$ are such that $\mathbf{E}x = 0$ and the fourth moments of all components of x exist.

B. The parameter M does not exceed a constant $c_0$ that does not depend on n. The parameter $\gamma$ vanishes as $n \to \infty$ in $\mathfrak{B}$.

C. In $\mathfrak{B}$, $n/N \to \lambda > 0$.

D. In $\mathfrak{B}$, for each n the eigenvalues of the matrices $\Sigma$ are located on a segment $[c_1, c_2]$, where $c_1 > 0$ and $c_2$ does not depend on n, and $F_{n\Sigma}(u) \to F_\Sigma(u)$ as $n \to \infty$ for almost all $u \ge 0$.

Corollary (of Theorem 2.1). Under Assumptions A–D, for any z outside the half-axis $z > 0$ the limit $\lim_{n\to\infty} h_n(z) = h(z)$ exists and satisfies

$$h(z) = \int (1 - z s(z) u)^{-1}\, dF_\Sigma(u), \qquad s(z) = 1 - \lambda + \lambda h(z), \tag{12}$$

and for each z we have

$$\lim_{n\to\infty} \big\| \mathbf{E}(I - zC)^{-1} - (I - z s(z)\Sigma)^{-1} \big\| = 0.$$

Let us investigate the analytical properties of solutions to (12).

Theorem 3.3. If $h(z)$ satisfies (12), $c_1 > 0$, $\lambda > 0$, and $\lambda \ne 1$, then

1. $|h(z)| \le \alpha(z)$ and $h(z)$ is regular near any point z outside the half-axis $z > 0$;

2. for any $v = \operatorname{Re} z > 0$ such that $v < v_1 = c_2^{-1}(1 + \sqrt{\lambda})^{-2}$ or $v > v_2 = c_1^{-1}(1 - \sqrt{\lambda})^{-2}$, we have

$$\lim_{\varepsilon \to +0} \operatorname{Im} h(v + i\varepsilon) = 0;$$

3. if $v_1 \le v \le v_2$, then $0 \le \operatorname{Im} h(v + i\varepsilon) \le (c_1 \lambda v)^{-1/2} + \omega$, where $\omega \to 0$ as $\varepsilon \to +0$;

4. if $v = \operatorname{Re} z > 0$, then $s(-v) \ge (1 + c_2 \lambda |v|)^{-1}$;

5. if $|z| \to \infty$ on the main sheet of the analytic function $h(z)$, then

if $0 < \lambda < 1$, then $zh(z) = -(1 - \lambda)^{-1}\Lambda_{-1} + O(|z|^{-1})$,

if $\lambda = 1$, then $zh^2(z) = -\Lambda_{-1} + O(|z|^{-1/2})$,

if $\lambda > 1$, then $zs(z) = -\beta_0 + O(|z|^{-1})$,

where $\beta_0$ is a root of the equation

$$\int (1 + \beta_0 u)^{-1}\, dF_\Sigma(u) = 1 - \lambda^{-1}.$$

Proof. The existence of the solution to (12) follows from Theorem 2.1. Suppose $\operatorname{Im} z > 0$; then $|h(z)| \le \alpha = \alpha(z)$. To be concise, denote $h = h(z)$. For all $u > 0$ and z outside the ray $z > 0$, the quantity $|1 - zsu|^{-1}$ is bounded. Differentiating h(z) in (12), we prove the regularity of h(z). Define

$$b_\nu = b_\nu(z) = \int |1 - z s(z) u|^{-2} u^\nu\, dF_\Sigma(u), \quad \nu = 1, 2.$$

Let us rewrite (12) in the form

$$(h - 1)/s = z \int u\,(1 - zsu)^{-1}\, dF_\Sigma(u). \tag{13}$$

It follows that

$$\operatorname{Im}\big[(h - 1)/s\big] = |s|^{-2} \operatorname{Im} h = b_1 \operatorname{Im} z + b_2 \lambda |z|^2 \operatorname{Im} h.$$

Dividing by $b_2$, we use the inequality $b_1/b_2 \le c_1^{-1}$. Fix some $v = \operatorname{Re} z > 0$ and let $\operatorname{Im} z = \varepsilon \to +0$. It follows that the product

$$\big(|s|^{-2} b_2^{-1} - \lambda v^2\big) \operatorname{Im} h \to 0.$$

Suppose that Im h does not tend to 0 (v is fixed). Then there exists a sequence $\{z_k\}$ such that, for $z_k = v + i\varepsilon_k$, $h = h(z_k)$, $s = s(z_k)$, we have $\operatorname{Im} h \to a$, where $a \ne 0$. For these $z_k$, we obtain $|s|^{-2} b_2^{-1} \to \lambda v^2$ as $\varepsilon_k \to +0$. We apply the Cauchy–Bunyakovskii inequality to (13). It follows that $|h - 1|^2/|z_k s|^2 \le b_2$. We obtain that $|h - 1|^2 \le \lambda^{-1} + o(1)$ as $\varepsilon_k \to +0$, and hence $|s - 1|^2 \le \lambda + o(1)$. So the values s are bounded for $\{z_k\}$. On the other hand, it follows from (12) that $\operatorname{Im} h = b_1 \operatorname{Im}(zs)$. We find that $(b_1^{-1} - \lambda v)\operatorname{Im} h \to 0$ as $\operatorname{Im} z \to 0$. But $\operatorname{Im} h \to a \ne 0$ for $\{z_k\}$; it follows that $b_1^{-1} \to \lambda v$. Combining this with the relation $|s|^{-2} b_2^{-1} \to \lambda v^2$, we find that $|s|^{-2} b_1 b_2^{-1} - v \to 0$. Note that $b_1$ is finite for $\{z_k\}$ and $c_2^{-1} \le b_1 b_2^{-1} \le c_1^{-1}$. Substituting the boundaries $(1 \pm \sqrt{\lambda}) + o(1)$ for $|s|$, we obtain that $v_1 + o(1) \le v \le v_2 + o(1)$ as $\varepsilon_k \to +0$. We can conclude that $\operatorname{Im} h \to 0$ for any positive v outside the interval $[v_1, v_2]$. This proves the second statement of our theorem.

Now suppose $v_1 \le v \le v_2$. From (12), we obtain the inequality $(\operatorname{Im} h)^2 < (c_1 v \lambda)^{-1}$. But h is bounded. It follows that the quantity $(\operatorname{Im} h)^2 \le (c_1 v \lambda)^{-1}$. The third statement of our theorem is proved.

Further, let $v > 0$ and consider the point $z = -v$. Then the functions h and s are real and non-negative. Multiplying both parts of (12) by $\lambda$, it follows that $(h - 1)/(zs) \le c_2$, and we obtain $s(-v) \ge (1 + c_2 \lambda |v|)^{-1}$.

Let us prove the fifth statement of the theorem. Let $\lambda < 1$. For real $z \to -\infty$, the real value of $1 - zsu$ in (12) tends to infinity. Consequently, $h \to 0$ and $s \to 1 - \lambda$. For sufficiently large $|\operatorname{Re} z|$, we have

$$h(z) = -\sum_{k=1}^{\infty} \Lambda_{-k}\, (zs)^{-k},$$

where $\Lambda_k = \int u^k\, dF_\Sigma(u)$. We conclude that

$$h(z) = -(1 - \lambda)^{-1} \Lambda_{-1}\, z^{-1} + O(|z|^{-2})$$

for real $z < 0$, and for any z off the half-axis $z > 0$ as $|z| \to \infty$, in view of the properties of the Laurent series. Now let $\lambda = 1$. Then $h = s$. From (12), we obtain that $h \to 0$ as $z \to -\infty$ and $h^2 = \Lambda_{-1}|z|^{-1} + O(|z|^{-2})$. Now suppose that $\lambda > 1$, $z = -t < 0$, and $t \to \infty$. Then, by Lemma 3.6, we have $s \ge 0$, $h \ge 1 - 1/\lambda$, and $s \to 0$. Equation (12) implies $ts \to \beta_0$, as stated in the theorem formulation. This completes the proof of Theorem 3.3.

Remark 7. Under Assumptions A–D, for each $u \ge 0$ the limit exists

$$F(u) = \operatorname*{plim}_{n\to\infty} F_{nC}(u) \quad\text{such that}\quad \int (1 - zu)^{-1}\, dF(u) = h(z). \tag{14}$$

Indeed, to prove the convergence it is sufficient to cite Corollary 3.2.1 from [22], which states the almost sure convergence of $\{h_{nS}(z)\}$ and $\{h_{nC}(z)\}$. By Lemma 3.5, both these sequences converge to the same limit h(z). To prove that the limits of $F_{nS}(u)$ and $F_{nC}(u)$ coincide, it suffices to prove the uniqueness of the solution to (12), which can be readily done by performing the inverse Stieltjes transformation.

Theorem 3.4. Under Assumptions A–D,

1. if $\lambda = 0$, then $F(u) = F_\Sigma(u)$ almost everywhere for $u \ge 0$;

2. if $\lambda > 0$ and $\lambda \ne 1$, then $F(0) = F(u_1 - 0)$, where $u_1 = c_1(1 - \sqrt{\lambda})^2$, $u_2 = c_2(1 + \sqrt{\lambda})^2$, and $c_1$ and $c_2$ are the bounds of the limit spectrum of $\Sigma$;

3. if $\lambda > 0$, $\lambda \ne 1$, and $u > 0$, then the derivative $F'(u)$ of the function F(u) exists and $F'(u) \le \pi^{-1}(c_1 \lambda u)^{-1/2}$.

Proof. Let $\lambda = 0$. Then $s(z) = 1$. In view of (12), we have

$$h(z) = \int (1 - zu)^{-1}\, dF_\Sigma(u) = \int (1 - zu)^{-1}\, dF(u).$$

At the continuity points of $F_\Sigma(u)$, the derivative

$$F'(u) = \frac{1}{\pi} \lim_{\varepsilon \to +0} \operatorname{Im} \frac{1}{z}\, h\!\left(\frac{1}{z}\right) = F_\Sigma'(u),$$

where $z = u - i\varepsilon$.

Let $\lambda > 0$. By Theorem 2.2, for $u < u_1$ and for $u > u_2$ (note that $u_1 > 0$ if $\lambda \ne 1$), the values $\operatorname{Im}\big[(u - i\varepsilon)^{-1} h((u - i\varepsilon)^{-1})\big] \to 0$ as $\varepsilon \to +0$. But we have

$$\operatorname{Im} \frac{h\big((u - i\varepsilon)^{-1}\big)}{u - i\varepsilon} > (2\varepsilon)^{-1}\,\big[F(u + \varepsilon) - F(u - \varepsilon)\big]. \tag{15}$$

It follows that $F'(u)$ exists and $F'(u) = 0$ for $0 < u < u_1$ and for $u > u_2$. The points of increase of F(u) can be located only at the point $u = 0$ or on the segment $[u_1, u_2]$. If $\lambda < 1$ and $|z| \to \infty$, we have $\int (1 - zu)^{-1}\, dF(u) \to 0$ and, consequently, $F(0) = 0$. If $\lambda > 1$ and $|z| \to \infty$, then $h(z) \to 1 - \lambda^{-1}$, and $F(0) = 1 - \lambda^{-1}$. The second statement of our theorem is proved.

Now, let $z = v + i\varepsilon$, where $v > 0$ is fixed and $\varepsilon \to +0$. Then, using (12), we obtain that $\operatorname{Im} h = b_1 \operatorname{Im}(zs)$. Obviously,

$$(\operatorname{Im} h)^2 \le \left(\int |1 - zsu|^{-1}\, dF_\Sigma(u)\right)^2 \le \int |1 - zsu|^{-2}\, dF_\Sigma(u) \le c_1^{-1} b_1 = \frac{\operatorname{Im} h}{c_1 \operatorname{Im}(zs)}.$$

If Im h remains finite, then $b_1 \to (\lambda v)^{-1}$. Performing the limit transition in (15), we prove the last statement of the theorem.

Theorem 3.5. If Assumptions A–D hold and $0 < \lambda < 1$, then for any complex $z, z'$ outside of the half-axis $z > 0$ we have

$$|h(z) - h(z')| < c_3 |z - z'|^{\zeta},$$

where $c_3$ and $\zeta > 0$ do not depend on z and $z'$.

Proof. From (12), we obtain

$$|h(z)| \le \lambda^{-1} \max\big(\lambda,\; |1 - \lambda| + 2 c_1^{-1} |z|^{-1}\big).$$

By definition, the function h(z) is differentiable for each z outside the segment $\mathcal{V} = [v_1, v_2]$, $v_1 > 0$. Denote a $\delta$-neighborhood of the segment $\mathcal{V}$ by $\mathcal{V}_\delta$. If z is outside of $\mathcal{V}_\delta$, then the derivative $h'(z)$ exists and is uniformly bounded. It suffices to prove our theorem for $z \in \mathcal{V}_1$, where $\mathcal{V}_1 = \mathcal{V}_\delta - \{z: \operatorname{Im} z = 0\}$. Choose $\delta = \delta_1 = v_1/2$. Then $\delta_1 < |z| < \delta_2$, where $\delta_2$ does not depend on z. We estimate the absolute value of the derivative $h'(z)$. For $\operatorname{Im} z \ne 0$, differentiating (12) we obtain

$$\left( z^{-1}\lambda^{-1} - \int X^{-2} u\, dF_\Sigma(u) \right) h'(z) = \frac{s(z)}{z\lambda} \int X^{-2} u\, dF_\Sigma(u), \tag{16}$$

where $X = 1 - z s(z) u \ne 0$. Denote

$$\varphi(z) = z^{-1}\lambda^{-1} - \int X^{-2} u\, dF_\Sigma(u), \quad b_1 = \int |X|^{-2} u\, dF_\Sigma(u), \quad h_1 = \operatorname{Im} h(z), \quad z_0 = \operatorname{Re} z, \quad z_1 = \operatorname{Im} z, \quad s_0 = \operatorname{Re} s(z),$$

and let $\alpha$ with subscripts denote constants not depending on z. The right-hand side of (16) is not greater than $\alpha_1 b_1$ for $z \in \mathcal{V}_1$, and therefore $|h'(z)| < \alpha_2 b_1 |\varphi(z)|^{-1}$.

We consider two cases. Denote $\alpha_3 = (2\delta_2 c_2)^{-1}$.

At first, let $\operatorname{Re} s(z) = s_0 < \alpha_3$. Using the relation $h_1 = b_1 \operatorname{Im}(zs)$, we obtain that the quantity $-\operatorname{Im}\varphi(z)$ equals

$$z_1 |z|^{-2} \lambda^{-1} + 2 b_1^{-1} \int |X|^{-4} u^2 \big(1 - z_0 s_0 u + z_1 \lambda h_1 u\big)\, h_1\, dF_\Sigma(u).$$

In the integrand here, we have $z_0 > 0$, $1 - z_0 s_0 u > \tfrac{1}{2}$, and $z_1 h_1 > 0$. From the Cauchy–Bunyakovskii inequality, it follows that

$$\int |X|^{-4} u^2\, dF_\Sigma(u) \ge b_1^2.$$

Hence $|\operatorname{Im}\varphi(z)| > b_1 h_1$ and $|h'(z)| < \alpha_2 h_1^{-1}$. Further,

$$\operatorname{Re}\varphi(z) = \lambda^{-1} z_0 |z|^{-2} - b_1 + 2\,\big[\operatorname{Im} z s(z)\big]^2 \int |X|^{-4} u^3\, dF_\Sigma(u).$$

Define $p = \lambda^{-1} z_0 |z|^{-2} - b_1$. We have

$$p = \lambda^{-1} |z|^{-2} z_0 z_1\, \big|\operatorname{Im} z s(z)\big|^{-1} \big(s_0 - \lambda h_1 z_1/z_0\big).$$

Here $|h_1| < \alpha_4$, $z_0 \ge \delta_1 > 0$, $s_0 \ge \alpha_3 > 0$, and we obtain that $p > 0$ if $z_1 < \alpha_6$, where $\alpha_6 = \alpha_3 \alpha_5/(\lambda \alpha_4)$. If $z \in \mathcal{V}_1$ and $z_1 > \alpha_6$, then the Hölder inequality follows from the existence of a uniformly bounded derivative of the analytic function h(z) in a closed domain.

Now let $z \in \mathcal{V}_1$, $z_1 < \alpha_6$, $p > 0$, and $s_0 > \alpha_3 > 0$. Then $|h'(z)| \le \alpha_7 b_1 |\operatorname{Re}\varphi(z)|^{-1}$, where

$$\operatorname{Re}\varphi(z) \ge 2\big(\operatorname{Im} z s(z)\big)^2 c_1 \int |X|^{-4} u^2\, dF_\Sigma(u) \ge 2\big(\operatorname{Im} z s(z)\big)^2 c_1 b_1^2 = 2 c_1 h_1^2.$$

Substituting $b_1 \operatorname{Im}(zs(z)) = h_1$ and taking into account that $s_0 > 0$, we obtain that $|h'(z)| \le \alpha_7 h_1^{-2}$. Thus, for $z \in \mathcal{V}_\delta$ and $0 < z_1 < \alpha_6$, for any $s_0$, it follows that $|h'(z)| < \alpha_8 \max(h_1^{-1}, h_1^{-2}) \le \alpha_9 h_1^{-2}$. Calculating the derivative along a vertical line, we obtain the inequality $h_1^2\, |dh_1/dz| \le \alpha_9$, whence

$$h_1^3(z) \le h_1^3(z') + 3\alpha_9 |z - z'| \le \big(h_1(z') + \alpha_{10} |z - z'|^{1/3}\big)^3$$

if $\operatorname{Im} z \cdot \operatorname{Im} z' > 0$. The Hölder inequality for $h_1 = \operatorname{Im} h(z)$ with $\zeta = 1/3$ follows. This completes the proof of Theorem 3.5.

Example. Consider the limit spectra of matrices $\Sigma$ of the special "ρ-model" form considered first in [63]. It is of special interest since it admits an analytical solution of the dispersion equation (12). For this model, the limit spectrum of $\Sigma$ is located on a segment $[c_1, c_2]$, where $c_1 = \sigma^2(1 - \sqrt{\rho})^2$ and $c_2 = \sigma^2(1 + \sqrt{\rho})^2$. Its limit spectral density is

$$\frac{dF_\Sigma(u)}{du} = \begin{cases} (2\pi\rho)^{-1}(1 - \rho)\, u^{-2} \sqrt{(c_2 - u)(u - c_1)}, & c_1 \le u \le c_2, \\ 0, & u < c_1 \text{ or } u > c_2. \end{cases}$$

The moments $\Lambda_k = \int u^k\, dF_\Sigma(u)$ for $k = 0, 1, 2, 3, 4$ are

$$\Lambda_0 = 1, \quad \Lambda_1 = \sigma^2(1 - \rho), \quad \Lambda_2 = \sigma^4(1 - \rho), \quad \Lambda_3 = \sigma^6(1 - \rho^2), \quad \Lambda_4 = \sigma^8(1 - \rho)(1 + 3\rho + \rho^2).$$

If $\rho > 0$, the integral

$$\eta(z) = \int (1 - zu)^{-1}\, dF_\Sigma(u) = \frac{1 + \rho - kz - \sqrt{(1 + \rho - kz)^2 - 4\rho}}{2\rho},$$

where $k = \sigma^2(1 - \rho)$. The function $\eta = \eta(z)$ satisfies the equation $\rho\eta^2 + (kz - \rho - 1)\eta + 1 = 0$. The equation $h(z) = \eta(zs(z))$ can be transformed into the equation $(h - 1)(1 - \rho h) = kzhs$, which is quadratic with respect to $h = h(z)$, with $s = 1 - \lambda + \lambda h$. If $\lambda > 0$, its solution is

$$h = \frac{1 + \rho - k(1 - \lambda)z - \sqrt{\big(1 + \rho - k(1 - \lambda)z\big)^2 - 4(\rho + kz\lambda)}}{2(\rho + k\lambda z)}.$$

The moments $M_k = (k!)^{-1} h^{(k)}(0)$ for $k = 0, 1, 2, 3$ are

$$M_0 = 1, \quad M_1 = \sigma^2(1 - \rho), \quad M_2 = \sigma^4(1 - \rho)\big(1 + \lambda(1 - \rho)\big), \quad M_3 = \sigma^6(1 - \rho)\big(1 + \rho + 3\lambda(1 - \rho) + \lambda^2(1 - \rho)^2\big).$$

Differentiating the functions of the inverse argument, we find, in particular, that $\Lambda_{-1} = k^{-1}$, $\Lambda_{-2} = k^{-2}(1 + \rho)$, $M_{-1} = k^{-1}(1 - \lambda)^{-1}$, and $M_{-2} = k^{-2}\big(\rho + \lambda(1 - \rho)\big)(1 - \lambda)^{-3}$. The continuous limit spectrum of the matrix C is located on the segment $[u_1, u_2]$, where

$$u_1 = \sigma^2\Big(1 - \sqrt{\lambda + \rho(1 - \lambda)}\Big)^2, \qquad u_2 = \sigma^2\Big(1 + \sqrt{\lambda + \rho(1 - \lambda)}\Big)^2,$$

and it has the density

$$f(u) = \begin{cases} \dfrac{(1 - \rho)\sqrt{(u_2 - u)(u - u_1)}}{2\pi u\,\big(\rho u + \sigma^2(1 - \rho)^2 \lambda\big)}, & u \in [u_1, u_2], \\[2mm] 0, & \text{otherwise}. \end{cases}$$

If $\lambda > 1$, then the function F(u) has a jump of $1 - \lambda^{-1}$ at the point $u = 0$. If $\lambda = 0$, then $F(u) = F_\Sigma(u)$ has the form of a unit step at the point $u = \sigma^2$. The density f(u) satisfies the Hölder condition with $\zeta = 1/2$.

In the special case where $\Sigma = I$ and $\rho = 0$, we obtain the limit spectral density $F'(u) = (2\pi\lambda u)^{-1}\sqrt{(u_2 - u)(u - u_1)}$ for $u_1 < u < u_2$, where $u_{2,1} = (1 \pm \sqrt{\lambda})^2$. This "semicircle" law of spectral density was first found by Marchenko and Pastur [43].
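As a quick numerical illustration of the $\rho = 0$ case just described, the following sketch (hypothetical parameter choices, not from the text) compares the eigenvalues of a simulated sample covariance matrix C with the limit support $[u_1, u_2]$:

```python
import numpy as np

# Illustrative check of the rho = 0 case (Sigma = sigma^2 I): the eigenvalues of
# C = N^{-1} X^T X should asymptotically fall inside [u1, u2] with
# u_{2,1} = sigma^2 (1 +/- sqrt(lambda))^2, lambda = n/N.
sigma2, n, N = 1.0, 400, 1000
lam = n / N
X = np.sqrt(sigma2) * np.random.randn(N, n)   # rows are zero-mean observation vectors
C = X.T @ X / N                               # sample covariance (known zero mean)
eigs = np.linalg.eigvalsh(C)

u1 = sigma2 * (1 - np.sqrt(lam)) ** 2
u2 = sigma2 * (1 + np.sqrt(lam)) ** 2
print(f"empirical range: [{eigs.min():.3f}, {eigs.max():.3f}], "
      f"limit support: [{u1:.3f}, {u2:.3f}]")
```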

URL: https://www.sciencedirect.com/science/article/pii/B9780444530493500060

Volume 4

T. Kourti , in Comprehensive Chemometrics, 2009

4.02.4 Handling Future Observations with Missing Data

Missing measurements are a frequent occurrence in process industries. Therefore, the new observation vector $x_{\text{new}}$ (Figure 4) may have a few elements missing. Latent variable methods that model the process space (PCA, PCR, and PLS) make it possible to infer the corresponding score values of $x_{\text{new}}$ by using the available elements in the vector together with the model built from the training data set.

The fact that process variables are highly correlated and that there is redundancy in process data (i.e., many variables are affected by the same event) makes this possible. Redundancy is beneficial for handling missing data. More details on methods for treatment of missing data in regression can be found in Chapter 3.06.

A variety of algorithms have been suggested 32,33 to handle missing data, with different degrees of complexity: trimmed score method (TRI), single-component projection (SCP), projection to the model plane (PMP) using PLS or ordinary least squares (PMP-PLS, PMP-OLS), iterative imputation of missing data (II), a method based on the minimization of the squared prediction error (SPE), conditional mean replacement (CMR), trimmed score regression (TSR), and regression on known data (KDR).

Suppose that $x_{\text{new}}^T = [x^{*T}\; x^{\#T}]$, where without loss of generality we assume that $x^{\#}$ is the vector of missing observations. (Following this convention, $p^*$ and $P^*$ are the loadings corresponding to the known $x^*$.) The methods can be seen as different ways to impute values for the missing-variables vector $x^{\#}$. By setting the missing values equal to their expected mean value (i.e., $x^{\#} = 0$ for mean-centered data), we obtain the TRI method. 33

SCP is the simplest but also the poorest performing approach: it calculates each of the scores independently and sequentially as $\hat{t}_i = z^{*T} p_i^{*} / (p_i^{*T} p_i^{*})$, where $z^*$ is $x^*$ deflated by the first $i - 1$ components.

Nelson et al. 32 showed that superior results can be obtained by calculating all of the scores at once by projecting onto the hyperplane formed by the $P^*$ vectors. In the PMP method, the known $x^*$ vector is regressed onto the matrix $P^*$. Sometimes, depending on which measurements are missing, some of the columns of $P^*$ may become highly correlated and $P^{*T} P^*$ becomes ill-conditioned. It was therefore suggested 32 to use PLS, PCR, or regularized least squares regression for the projection.
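A rough sketch of the PMP idea in its ordinary least squares form (hypothetical variable names; the regularized variants mentioned above would replace the least squares step):

```python
import numpy as np


def pmp_scores(x_star, P_star):
    """Estimate the scores of an observation with missing entries (PMP, OLS variant).

    x_star : (k,) known (measured, mean-centered) variables of the new observation
    P_star : (k, A) rows of the loading matrix P corresponding to the known variables
    """
    # Regress x* onto P*: t_hat solves min ||x* - P* t||; lstsq avoids forming
    # (P*^T P*)^{-1} explicitly, which helps when P*^T P* is nearly ill-conditioned.
    t_hat, *_ = np.linalg.lstsq(P_star, x_star, rcond=None)
    return t_hat


# The missing part can then be reconstructed as P_missing @ t_hat, where P_missing
# holds the loading rows of the unmeasured variables.
```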

CMR 32 and TSR 33 use the known score matrix T from the training data together with the loadings ($P^*$) and the available measurements ($x^*$) to estimate the score vector. A singularity problem that may arise in CMR can be solved by a procedure suggested by Nelson et al., 32 where the estimated score vector is calculated in two steps: first, a parameter $\beta$ is computed using PLS from $T = X^*\beta$, where T and $X^*$, respectively, represent the score matrix and those columns of the training data set corresponding to known values; then $\beta$, along with the currently available data vector $x^*$, is used to compute an estimate of the score vector.

In iterative imputation, one may use an initial estimate of the final scores (say, those given by the SCP method) to forecast the missing values $\hat{x}^{\#}$ (using their corresponding loadings), then create the new vector, recalculate a score estimate, and iterate until convergence.

Arteaga and Ferrer 33 presented an extensive study of the various methods. Iterative imputation and the SPE method are equivalent to PMP; KDR is equivalent to CMR. They concluded that, based on the best prediction of the missing values, KDR is statistically superior to the other methods. TSR is practically equivalent to KDR and has the advantage that a much smaller matrix needs to be inverted. Additionally, TSR is statistically superior to the PMP method.

Before the system is implemented online, there should be a plan for the operators as to how to respond if the values of several variables stop being recorded. For example, if there are three thermocouples in a reactor and one fails, common sense dictates that we can afford to continue the monitoring scheme. On the contrary, if there is only one sensor for a variable uncorrelated with any other, the value of this variable cannot be assessed from the rest of the variables in the system; therefore, depending on the importance of this variable, one may not be able to rely on the monitoring scheme until the failed sensor is replaced. This idea was treated quantitatively by Nelson 52 and Nelson et al., 53 who analyzed the uncertainty that missing measurements introduce into the predicted values of Hotelling's T 2 and the SPE. Rather than representing an object with missing measurements by a single point, an estimate of the uncertainty regions in the score, Hotelling's T 2, and SPE spaces arising from the missing measurements is provided. They suggested measures to distinguish between situations where model performance will continue to be acceptable and situations where it will be unacceptable and therefore, if the missing measurements cannot be recovered, the application must be shut down.

Missing data methods did find their way into industrial applications. In their industrial perspective on implementing online applications of multivariate statistics, Miletic et al. 2 emphasize that missing data handling is a necessary feature for both the offline modeling and the online systems, and report that they are using the methods proposed by Nelson et al. 32

URL: https://www.sciencedirect.com/science/article/pii/B9780444527011000132

Parameter Estimation of Chaotic Systems Using Density Estimation of Strange Attractors in the State Space

Yasser Shekofteh , ... Sajad Jafari , in Recent Advances in Chaotic Systems and Synchronization, 2019

5.1 The GMM of the Chaotic System

The chaotic system (1) has three variables in the state space, so the observation vector of its attractor is formed as v = [x, y, z], and we must select D = 3 as the state space dimension in Eq. (2). To generate the attractor points of the chaotic system (1) as real data, its model has been simulated with parameters a = 1.0 and b = 1.0 by a fourth-order Runge-Kutta method with a step size of 10 ms [27,28]. For the training data of the first phase, a set of 100,000 sequential samples of the system (1) (equal to a 1000 s time length) has been recorded. The initial conditions of the system (1) were set to (−0.10, −5.05, −6.00). Here, we assume that this recorded training data will allow us to estimate the unknown parameters a and b of the chaotic system (1) by minimization of the GMM-based cost function.

Using obtained training data from the chaotic system, we can learn a GMM in order to model the geometry of the attractor in the state space. In other words, the GMM computation fits a parametric model to the distribution of the attractor in the state space. Fig. 5 shows the attractor of the chaotic system (1) in a three-dimensional state space along with its GMM modeling using M  =   64 Gaussian components. In this figure, every three-dimensional ellipsoid corresponds to one of the Gaussian components.
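A minimal sketch of such a fit, assuming scikit-learn's GaussianMixture and a hypothetical file holding the recorded state-space samples:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# V is assumed to be the (100000, 3) array of recorded state-space samples [x, y, z]
# produced by the Runge-Kutta simulation described above.
V = np.load("attractor_samples.npy")   # hypothetical file name

# Fit a 64-component, full-covariance GMM to the attractor points.
gmm = GaussianMixture(n_components=64, covariance_type="full", random_state=0)
gmm.fit(V)

# The average log-likelihood of a trajectory under this model can serve as the
# GMM-based cost function mentioned in the text (higher means a better match).
print("average log-likelihood of the training data:", gmm.score(V))
```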


Fig. 5. Plot of the chaotic attractor of the system (1) and its GMM modeling with M  =   64 components in the 3D state space. Here, the parameters of the system (1) are set to a  =   1.0 and b  =   1.0.

As can be seen from Fig. 5, the Gaussian components attempt to cover the attractor in the state space. To show the effect of the number of Gaussian mixtures, in Fig. 6, the attractor of the chaotic system (1) and its GMM models are shown for different values of M  =   16, 32, 48, and 64.


Fig. 6. Plot of the chaotic attractor of the system (1) and its GMM modeling with M  =   16, 32, 48, and 64 components in the 3D state space.

As can be seen from Fig. 6, when we increase the number of Gaussian components, more details of the trajectory of the chaotic attractor can be covered by the added components. In these experiments, the best GMM modeling of the attractor is obtained with M = 64, which gives a precise model of the chaotic attractor. Thus, by increasing the number of Gaussian components, the GMM can capture more of the complexity of the given time series. A higher value of M can improve the performance of the cost function, but it also increases the computational cost and may lead to overfitting problems.

In Fig. 7, information criteria such as AIC, BIC, and the negative log-likelihood are considered for the GMM selection problem. They show that M = 64 is a good choice for the number of GMM components, because it minimizes the criteria.
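A hypothetical sketch of such a model-selection sweep, reusing the training array `V` from the fitting example and scikit-learn's built-in criteria:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

V = np.load("attractor_samples.npy")   # hypothetical file with the (100000, 3) samples
for M in (16, 32, 48, 64):
    g = GaussianMixture(n_components=M, covariance_type="full", random_state=0).fit(V)
    # Lower BIC/AIC and lower negative log-likelihood indicate a better trade-off.
    print(M, "BIC:", g.bic(V), "AIC:", g.aic(V), "-logL:", -g.score(V) * len(V))
```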


Fig. 7. Plot of the information criteria values to select the best GMM.

URL: https://www.sciencedirect.com/science/article/pii/B9780128158388000078

Volume 2

G.J. McLachlan , in Comprehensive Chemometrics, 2009

2.30.18 Mixed Feature Data

We consider now the case where some of the feature variables are discrete. That is, the observation vector $y_j$ on the jth entity to be clustered consists of $p_1$ discrete variables, represented by the subvector $y_{1j}$, in addition to $p_2$ continuous variables, represented by the subvector $y_{2j}$ ($j = 1, \ldots, n$). The ith component density of the jth observation

$$y_j = (y_{1j}^T,\, y_{2j}^T)^T$$

can then be written as

$$f_i(y_j) = f_i(y_{1j})\, f_i(y_{2j} \mid y_{1j}) \tag{84}$$

The symbol f i is being used generically here to denote a density where, for discrete random variables, the density is really a probability function.

In discriminant and cluster analyses, it has been found that it is reasonable to proceed by treating the discrete variables as if they are independently distributed within a class or cluster. This is known as the NAIVE assumption. 49,50 Under this assumption, the ith component-conditional density of the vector y 1j of discrete features is given by

$$f_i(y_{1j}) = \prod_{v=1}^{p_1} f_{iv}(y_{1vj}) \tag{85}$$

where $f_{iv}(y_{1vj})$ denotes the ith component-conditional density of the vth discrete feature variable $y_{1vj}$ in $y_{1j}$.

If $y_{1v}$ denotes one of the distinct values taken on by the discrete variable $y_{1vj}$, then under Equation (85) the $(k+1)$th update of $f_{iv}(y_{1v})$ is

$$f_{iv}^{(k+1)}(y_{1v}) = \frac{\sum_{j=1}^{n} \tau_i(y_j; \Psi^{(k)})\, \delta[y_{1vj}, y_{1v}] + c_1}{\sum_{j=1}^{n} \tau_i(y_j; \Psi^{(k)}) + c_2} \tag{86}$$

where $\delta[y_{1vj}, y_{1v}] = 1$ if $y_{1vj} = y_{1v}$ and is zero otherwise, and $\Psi^{(k)}$ is the current estimate of the vector of all the unknown parameters, which now include the probabilities for the discrete variables. In Equation (86), the constants $c_1$ and $c_2$, which are both equal to zero for the maximum likelihood estimate, can be chosen to limit the effect of zero estimates of $f_{iv}(y_{1v})$ for rare values $y_{1v}$. One choice is $c_2 = 1$ and $c_1 = 1/d_v$, where $d_v$ is the number of distinct values in the support of $y_{1vj}$. 49
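A minimal sketch of the update (86) for a single component and a single discrete feature (hypothetical variable names; the posterior probabilities $\tau_i$ are assumed to come from the surrounding EM iteration):

```python
import numpy as np


def update_discrete_density(tau_i, y_v, support, c1=0.0, c2=0.0):
    """Update (86) of the i-th component density of one discrete feature.

    tau_i   : (n,) posterior probabilities tau_i(y_j; Psi^(k)) for component i
    y_v     : (n,) observed values y_{1vj} of the v-th discrete feature
    support : iterable of the d_v distinct values the feature can take
    c1, c2  : smoothing constants (0, 0 gives the maximum likelihood update;
              c2 = 1 and c1 = 1/d_v limits the effect of zero estimates)
    """
    denom = tau_i.sum() + c2
    return {val: (tau_i[y_v == val].sum() + c1) / denom for val in support}
```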

We can allow for some dependence between the vector $y_{2j}$ of continuous variables and the discrete-data vector $y_{1j}$ by adopting the location model as, for example, in Hunt and Jorgensen. 51 With the location model, $f_i(y_{2j} \mid y_{1j})$ is taken to be multivariate normal with a mean that is allowed to be different for some or all of the different levels of $y_{1j}$.

As an alternative to the use of the full mixture model, we may proceed conditionally on the realized values of the discrete feature vector y 1j , as in McLachlan and Chang. 52 This leads to the use of the conditional mixture model for the continuous feature vector y 2j ,

$$f(y_{2j} \mid y_{1j}) = \sum_{i=1}^{g} \pi_i(y_{1j})\, f_i(y_{2j} \mid y_{1j}) \tag{87}$$

where $\pi_i(y_{1j})$ denotes the conditional probability of ith component membership of the mixture given the discrete data in $y_{1j}$. A common model for $\pi_i(y_{1j})$ is the logistic model, under which

$$\pi_i(y_{1j}) = \frac{\exp(\beta_{i0} + \beta_i^T y_{1j})}{1 + \sum_{h=1}^{g-1} \exp(\beta_{h0} + \beta_h^T y_{1j})} \tag{88}$$

where $\beta_i = (\beta_{i1}, \ldots, \beta_{i p_1})^T$ for $i = 1, \ldots, g - 1$, and

$$\pi_g(y_{1j}) = 1 - \sum_{h=1}^{g-1} \pi_h(y_{1j})$$

URL: https://www.sciencedirect.com/science/article/pii/B9780444527011000685

Statistical Control of Measures and Processes

A.J. Ferrer-Riquelme , in Comprehensive Chemometrics, 2009

1.04.9.2.2 PCA-based MSPC: online process monitoring (Phase II)

Once the reference PCA model and the control limits for the multivariate control charts are obtained, new process observations can be monitored online. When a new observation vector $z_i$ is available, after preprocessing it is projected onto the PCA model, yielding the scores and the residuals, from which the values of the Hotelling $T_A^2$ and the SPE are calculated. In this way, the information contained in the original K variables is summarized in these two indices, which are plotted in the corresponding multivariate $T_A^2$ and SPE control charts. No matter what the number of original variables K is, only two points have to be plotted on the charts and checked against the control limits. The SPE chart should be checked first. If the points remain below the control limits in both charts, the process is considered to be in control. If a point is detected beyond the limits of one of the charts, then a diagnostic approach is needed to isolate the original variables responsible for the out-of-control signal. In PCA-based MSPC, contribution plots 37 are commonly used for this purpose.

Contribution plots can be derived for abnormal points in both charts. If the SPE chart signals a new out-of-control observation, the contribution of each original kth variable to the SPE at this new abnormal observation is given by its corresponding squared residual:

$$\operatorname{Cont}(\mathrm{SPE};\, x_{\text{new},k}) = e_{\text{new},k}^2 = \big(x_{\text{new},k} - x_{\text{new},k}^*\big)^2 \tag{37}$$

where $e_{\text{new},k}$ is the residual corresponding to the kth variable in the new observation and $x_{\text{new},k}^*$ is the prediction of the kth variable $x_{\text{new},k}$ from the PCA model.

When the DModX statistic is used, the contribution of each original kth variable to the DModX is given by 44

$$\operatorname{Cont}(\mathrm{DModX};\, x_{\text{new},k}) = w_k\, e_{\text{new},k} \tag{38}$$

where w k is the square root of the explained sum of squares for the kth variable. Variables with high contributions in this plot should be investigated.

If the abnormal observation is detected by the $T_A^2$ chart, the diagnosis procedure is carried out in two steps: (i) a bar plot of the normalized scores $(t_{\text{new},a}/s_a)^2$ for that observation is plotted, and the ath score with the highest normalized value is selected; (ii) the contribution of each original kth variable to this ath score at the new abnormal observation is given by

$$\operatorname{Cont}(t_{\text{new},a};\, x_{\text{new},k}) = p_{ak}\, x_{\text{new},k} \tag{39}$$

where $p_{ak}$ is the loading of the kth variable on the ath component. A plot of these contributions is created. Variables on this plot with high contributions and the same sign as the score should be investigated (contributions of the opposite sign only make the score smaller). When several scores have high normalized values, an overall average contribution per variable can be calculated over all the selected scores. 39
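A brief sketch of these contribution calculations, under the assumption of an orthonormal loading matrix `P` and mean-centered, scaled data (hypothetical names, not the chapter's code):

```python
import numpy as np


def spe_and_contributions(x_new, P):
    """SPE and the per-variable contributions (37) for a preprocessed observation.

    x_new : (K,) mean-centered/scaled observation vector
    P     : (K, A) loading matrix of the reference PCA model (orthonormal columns)
    """
    t = P.T @ x_new            # scores of the new observation
    x_pred = P @ t             # reconstruction from the A retained components
    e = x_new - x_pred         # residuals e_new,k
    return e @ e, e ** 2       # SPE and Cont(SPE; x_new,k) = e_new,k^2


def score_contributions(x_new, P, a):
    """Per-variable contributions (39) to the a-th score: p_ak * x_new,k."""
    return P[:, a] * x_new
```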

Contribution plots are a powerful tool for fault diagnosis. They provide a list of process variables that contribute numerically to the out-of-control condition (i.e., they are no longer consistent with NOCs), but they do not reveal the actual cause of the fault. Those variables and any variables highly correlated with them should be investigated. Incorporation of technical process knowledge is crucial to diagnose the problem and discover the root causes of the fault.

Apart from the T A 2 and SPE control charts, other charts such as the univariate time-series plots of the scores or scatter score plots can be useful (both in Phase I and II) for detecting and diagnosing out-of-control situations and also for improving process understanding.

URL: https://www.sciencedirect.com/science/article/pii/B978044452701100096X

Multivariate density estimation

Dag Tjøstheim , ... Bård Støve , in Statistical Modeling Using Local Gaussian Approximation, 2022

9.2.2 Estimation of the joint dependence function

Let $\psi(\cdot, \theta)$ be a parametric family of p-variate density functions. Below, $\psi$ is taken to be the multinormal. We recall from Chapter 4 that Hjort and Jones (1996) estimate the unknown density f using the sample $X_1, \ldots, X_n$ by fitting $\psi$ locally. The local parameter estimate $\hat{\theta} = \hat{\theta}(x)$ maximizes the local likelihood function

$$L(X_1, \ldots, X_n, \theta) = L_n(\theta, x) = n^{-1} \sum_{i=1}^{n} K_B(X_i - x) \log\psi(X_i, \theta) - \int K_B(y - x)\, \psi(y, \theta)\, dy, \tag{9.3}$$

where K is a kernel function that integrates to one and is symmetric about the origin, B is a positive definite matrix of bandwidths, and $K_B(x) = |B|^{-1} K(B^{-1} x)$, $|\cdot|$ being the determinant. For small bandwidths, the local estimate $\hat{f}(x) = \psi(x, \hat{\theta}_n(x))$ is close to $f(x)$ in the limit, because if the bandwidth matrix B is held fixed and $u_j(\cdot, \theta) = \partial/\partial\theta_j \log\psi(\cdot, \theta)$, we have

$$0 = \frac{\partial L_n(\hat{\theta}_n, x)}{\partial\theta_j} \xrightarrow{\;P\;} \int K_B(y - x)\, u_j\big(y, \theta_{B,K}(x)\big)\,\big\{f(y) - \psi\big(y, \theta_{B,K}(x)\big)\big\}\, dy \tag{9.4}$$

for some value of the parameter $\theta_{B,K}(x)$ toward which $\hat{\theta}_n(x)$ converges in probability. However, for finite sample sizes, the curse of dimensionality comes into play. The number of coordinates in $\theta = \theta(x)$ typically grows with the dimension of x, making the local estimates difficult to obtain at every point in the sample space. One solution might be to increase the bandwidths so that the estimation becomes almost parametric. However, here we propose a different path around the curse, directly exploiting decomposition (9.2). The first step might be to choose a standardized multivariate normal distribution as the parametric family in (9.3) for modeling $f_Z$ in (9.2) locally:

$$\psi(z, \theta) = \psi(z, R) = (2\pi)^{-p/2} |R|^{-1/2} \exp\Big\{-\tfrac{1}{2} z^T R^{-1} z\Big\}, \tag{9.5}$$

where R denotes the local correlation matrix. Using a univariate local fit, the local Gaussian expectations and variances in (9.5) are constant and equal to zero and one, respectively, reflecting our knowledge that the margins of the unknown density function $f_Z$ are standard normal. However, as $B \to 0$ in the p-variate case, as briefly described in Chapter 4.9, the local mean $\mu$ and variance $\sigma^2$ in general depend on z. In this chapter, we make the additional assumption in our p-dimensional local Gaussian approximation that $\mu(z) \equiv 0$ and $\sigma^2(z) \equiv 1$. This is more restrictive than in Chapters 7 and 8, where it was assumed that $\mu = \mu(z) = \mu(z_1, z_2)$ and $\sigma = \sigma(z) = \sigma(z_1, z_2)$ in the bivariate case, and this more general assumption was crucial in obtaining the local spectral results in Chapter 8.

With this more restrictive assumption that $\mu(z) \equiv 0$ and $\sigma^2(z) \equiv 1$, we are left with the problem of estimating the pairwise correlations $\rho_{ij}$, $1 \le i < j \le p$, in (9.5). Fitting the Gaussian distribution according to the scheme described above results in a local correlation matrix at each point. Specifically, the estimated local correlations are written as $\hat{\rho}_{ij} = \hat{\rho}_{ij}(z_1, \ldots, z_p)$, $i, j = 1, \ldots, p$, indicating that each parameter depends on all variables. The dependence between variables is captured in the variation of the parameter estimates in the p-dimensional Euclidean space, and the estimate maximizes the local likelihood function (9.3). However, as mentioned, the quality of the estimate deteriorates quickly with the dimension.

If the data were jointly normally distributed, there would be no dimensionality problem, since the entire distribution would be characterized by the global correlation coefficients between pairs of variables, and their empirical counterparts are easily computed from the data. A local Gaussian fit would then coincide with a global fit and result in estimates of the form $\hat{\rho}_{ij} = \hat{\rho}_{ij}(Z_i, Z_j)$, where the arguments indicate which of the transformed observation variables were used to obtain the estimate. This points to a natural simplification, which we may use to estimate the density $f_Z$, analogous to the additive regression model in Chapter 2.7.1. We allow the local correlations to depend on their own variables only:

$$\hat{\rho}_{ij}(z_1, \ldots, z_p) = \hat{\rho}_{ij}(z_i, z_j). \tag{9.6}$$

We could also simplify the estimation problem by estimating the local means and variances as functions of "their own" coordinate only, $\mu_i(z) = \mu_i(z_i)$ and $\sigma_i^2(z) = \sigma_i^2(z_i)$, but, as mentioned before, here we have chosen the stricter approximation

$$\mu(z) = 0 \quad\text{and}\quad \sigma^2(z) = 1. \tag{9.7}$$

We refer to Section 9.7 for a further discussion of this point.

The resulting estimation is carried out in four steps:

1.

Estimate the marginal distributions using the logspline method (or the empirical distribution function) and transform each observation vector to pseudo-standard normality as described in the previous subsection (a minimal sketch of this transform is given after the list).

2.

Estimate the joint density of the transformed data using the Hjort and Jones (1996) local likelihood function (9.3), the standardized normal parametric family (9.5), and the simplifications (9.6) and (9.7). In practice, this means fitting the bivariate version of (9.5) to each pair of the transformed variables $(Z_i, Z_j)$. Put the estimated local correlations into the estimated local correlation matrix $\hat{R}(z) = \{\hat{\rho}_{ij}(z_i, z_j)\}_{i,j = 1, \ldots, p}$.

3.

Let $\hat{f}_Z(z) = \psi(z, \hat{R}(z))$ and obtain the final estimate of $f(x)$ by replacing $f_Z$ with $\hat{f}_Z$, and the marginal distribution and density functions with their logspline estimates, in (9.2):

$$\hat{f}(x) = \hat{f}_Z\Big(\Phi^{-1}\big(\hat{F}_1(x_1)\big), \ldots, \Phi^{-1}\big(\hat{F}_p(x_p)\big)\Big)\, \prod_{i=1}^{p} \frac{\hat{f}_i(x_i)}{\phi\big(\Phi^{-1}(\hat{F}_i(x_i))\big)}. \tag{9.8}$$

4.

Normalize the density estimate so that it integrates to one.
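As referenced in step 1, here is a minimal sketch of the marginal transform, using the rescaled empirical distribution function in place of the logspline fit (an assumption) and SciPy's normal quantile function:

```python
import numpy as np
from scipy.stats import norm


def to_pseudo_normal(X):
    """Transform each margin of X to pseudo-standard normality (step 1 above).

    X : (n, p) array of observations; returns Z with approximately N(0, 1) margins.
    """
    n, p = X.shape
    Z = np.empty_like(X, dtype=float)
    for i in range(p):
        ranks = np.argsort(np.argsort(X[:, i])) + 1   # ranks 1..n of the i-th margin
        u = ranks / (n + 1.0)                         # empirical CDF kept inside (0, 1)
        Z[:, i] = norm.ppf(u)                         # Phi^{-1}(F_i_hat(x_i))
    return Z
```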

The existence of population values corresponding to the estimated local correlations is discussed in the following section. It is clear that assumptions (9.6) and (9.7) represent an approximation to most multivariate distributions. The authors are aware of no other distributions than those possessing the Gaussian copula or step functions thereof as in Tjøstheim and Hufthammer (2013) or Chapter 4.3, for which (9.6) and (9.7) are exact properties of the true local correlations. In that case the local correlations are constant or stepwise constant in all its variables. The quality of the LGDE thus depends to a large degree on the severity of assumptions (9.6) and (9.7) on the underlying density. The pairwise assumption is hard to interpret except in general statements about "pairwise dependence structures", and so we proceed in this chapter to explore the impact of (9.6) and (9.7) in practice in Section 9.6 and the subsequent discussion in Section 9.7. Before we do that, we take a closer look at the theoretical foundations of the LGDE.

URL: https://www.sciencedirect.com/science/article/pii/B978012815861600016X

INTRODUCTION

VADIM I. SERDOBOLSKII , in Multiparametric Statistics, 2008

The Kolmogorov Asymptotics

In 1967, Andrei Nikolaevich Kolmogorov was interested in the dependence of discrimination errors on sample size. He solved the following problem. Let x be a normal observation vector, and let $\bar{x}_\nu$ be the sample averages calculated over samples from population number $\nu = 1, 2$. Suppose that the covariance matrix is the identity matrix. Consider the simplified discriminant function

$$g(x) = (\bar{x}_1 - \bar{x}_2)^T \big(x - (\bar{x}_1 + \bar{x}_2)/2\big)$$

and the classification rule $g(x) > 0$ against $g(x) \le 0$. This function leads to the probability of error $\alpha_n = \Phi(-G/\sqrt{D})$, where G and D are quadratic functions of sample averages having noncentral $\chi^2$ distributions. To isolate the principal parts of G and D, Kolmogorov proposed to consider not one statistical problem but a sequence of n-dimensional discriminant problems in which the dimension n increases along with the sample sizes $N_\nu$, so that $N_\nu \to \infty$ and $n/N_\nu \to \lambda_\nu > 0$, $\nu = 1, 2$. Under these assumptions, he proved that the probability of error $\alpha_n$ converges in probability:

$$\operatorname*{plim}_{n\to\infty} \alpha_n = \Phi\!\left( -\frac{J - \lambda_1 + \lambda_2}{2\sqrt{J + \lambda_1 + \lambda_2}} \right), \tag{7}$$

where J is the square of the limiting Euclidean "Mahalanobis distance" between the centers of the populations. This expression is remarkable in that it explicitly shows the dependence of the error probability on the dimension and the sample sizes. This new asymptotic approach was called the "Kolmogorov asymptotics."

Later, L. D. Meshalkin and the author of this book deduced formula (7) for a wide class of populations under the assumption that the variables are independent and populations approach each other in the parameter space (are contiguous) [45], [46].

In 1970, Yu. N. Blagoveshchenskii and A. D. Deev studied the probability of errors for the standard sample Fisher-Andersen-Wald discriminant function for two populations with an unknown common covariance matrix. A. D. Deev used the fact that the probability of error coincides with the distribution function of g(x). He obtained an exact asymptotic expansion for the limit of the error probability $\alpha$. The leading term of this expansion proved to be especially interesting: the limit probability of an error (of the first kind) proved to be

$$\alpha = \Phi\!\left( -\frac{\Theta J - \lambda_1 + \lambda_2}{2\sqrt{J + \lambda_1 + \lambda_2}} \right),$$

where the factor $\Theta = 1 - \lambda$, with $\lambda = \lambda_1\lambda_2/(\lambda_1 + \lambda_2)$, accounts for the accumulation of estimation inaccuracies in the inversion of the covariance matrix. It was called "the Deev formula." This formula was thoroughly investigated numerically, and good agreement was demonstrated even for moderate n and N.

Note that, starting from Deev's formulas, the discrimination errors can be reduced if the rule $g(x) > \theta$ against $g(x) \le \theta$ with $\theta = (\lambda_1 - \lambda_2)/2 \ne 0$ is used. A. D. Deev also noticed [18] that the half-sum of the discrimination errors can be further decreased by weighting the summands in the discriminant function.

After these investigations, it became obvious that, by keeping terms of the order of n/N, one gains the possibility of using specifically multidimensional effects for the construction of improved discriminant procedures and other procedures of multivariate analysis. The most important conclusion was that traditional consistent methods of multivariate statistical analysis should be improvable and that new progress in theoretical statistics is possible, aiming at nearly optimal solutions for fixed samples.

The Kolmogorov asymptotics (increasing-dimension asymptotics [3]) may be considered a calculation tool for isolating leading terms in the case of large dimension. But the principal role of the Kolmogorov asymptotics is that it reveals specific regularities produced by the estimation of a large number of parameters. In a series of further publications, this asymptotics was used as the main tool for the investigation of essentially many-dimensional phenomena characteristic of high-dimensional statistical analysis. The ratio n/N became an acknowledged characteristic in many-dimensional statistics.

In Section 5.1, the Kolmogorov asymptotics is applied to the development of a theory that improves the discriminant analysis of vectors of large dimension with independent components. The improvement is achieved by introducing appropriate weights for the contributions of the independent variables in the discriminant function. These weights are used to construct an asymptotically unimprovable discriminant procedure. Then the problem of selecting variables for discrimination is solved, and the optimum selection threshold is found.

But the main success in the development of multiparametric solutions was achieved by combining the Kolmogorov asymptotics with the spectral theory of random matrices, developed independently at the end of the 20th century in another field.

URL: https://www.sciencedirect.com/science/article/pii/B9780444530493500047