
==Eigendecomposition of the autocorrelation matrix==

To understand why these methods are used, consider the following. A signal

x(n) = A_1 e^{j n \omega_1} + w(n)

consists of a single complex exponential in white noise. The amplitude of the complex exponential is A_1 = |A_1| e^{j\phi_1}, where \phi_1 is a uniformly distributed random variable and w(n) is white noise that has a variance of \sigma_w^2. The autocorrelation sequence of x(n) is

r_x(k) = P_1 e^{j k \omega_1} + \sigma_w^2 \delta(k),

where P_1 = |A_1|^2 is the power in the complex exponential. Therefore, the M \times M autocorrelation matrix for x(n) is the sum of an autocorrelation matrix due to the signal, R_s, and an autocorrelation matrix due to the noise, R_n:

R_x = R_s + R_n

where the signal autocorrelation matrix is

R_s = P_1 \begin{bmatrix}
1 & e^{-j\omega_1} & e^{-j2\omega_1} & \cdots & e^{-j(M-1)\omega_1} \\
e^{j\omega_1} & 1 & e^{-j\omega_1} & \cdots & e^{-j(M-2)\omega_1} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
e^{j(M-1)\omega_1} & e^{j(M-2)\omega_1} & e^{j(M-3)\omega_1} & \cdots & 1
\end{bmatrix}

and has a rank of one, and the noise autocorrelation matrix is diagonal,

R_n = \sigma_w^2 I

and has full rank. Note that if we define

e_1 = [1, e^{j\omega_1}, e^{j2\omega_1}, ..., e^{j(M-1)\omega_1}]^T

then R_s may be written in terms of e_1 as R_s = P_1 e_1 e_1^H.
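As a quick numerical illustration of this rank-one structure, here is a minimal Python/NumPy sketch that builds e_1 and R_s; the values M = 8, \omega_1 = 0.4\pi, and P_1 = 2 are arbitrary illustrative choices, not values from the text.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative parameters (arbitrary choices, not from the text)
M = 8                   # matrix dimension
omega1 = 0.4 * np.pi    # frequency of the complex exponential
P1 = 2.0                # power of the complex exponential

# e_1 = [1, e^{j w1}, e^{j 2 w1}, ..., e^{j (M-1) w1}]^T
e1 = np.exp(1j * np.arange(M) * omega1)

# Rank-one signal autocorrelation matrix R_s = P_1 e_1 e_1^H
Rs = P1 * np.outer(e1, e1.conj())

print(np.linalg.matrix_rank(Rs))   # prints 1
</syntaxhighlight>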

Since the rank of R_s is equal to one, R_s has only one nonzero eigenvalue. With

R_s e_1 = P_1(e_1 e_1^H)e_1 = P_1 e_1(e_1^H e_1) = M P_1 e_1

it follows that the nonzero eigenvalue is equal to M P_1 and that e_1 is the corresponding eigenvector. In addition, since R_s is Hermitian, the remaining eigenvectors, v_2, v_3, ..., v_M, will be orthogonal to e_1 (for a Hermitian matrix the eigenvectors corresponding to distinct eigenvalues are orthogonal), v_i^H e_1 = 0 ; i = 2, 3, ..., M.
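This eigenstructure is easy to check numerically; the sketch below (reusing the same illustrative values as above) verifies that the one nonzero eigenvalue of R_s is M P_1 and that the remaining eigenvectors returned by a Hermitian eigensolver are orthogonal to e_1.

<syntaxhighlight lang="python">
import numpy as np

# Same illustrative parameters as above (not from the text)
M, omega1, P1 = 8, 0.4 * np.pi, 2.0
e1 = np.exp(1j * np.arange(M) * omega1)
Rs = P1 * np.outer(e1, e1.conj())

# Hermitian eigendecomposition; eigh returns eigenvalues in ascending order
lam, V = np.linalg.eigh(Rs)

print(np.isclose(lam[-1], M * P1))                # True: the nonzero eigenvalue is M P_1
print(np.allclose(lam[:-1], 0.0))                 # True: the other M-1 eigenvalues are 0
print(np.allclose(V[:, :-1].conj().T @ e1, 0.0))  # True: v_i^H e_1 = 0, i = 2, ..., M
</syntaxhighlight>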

Finally, note that if we let \lambda_i^s be the eigenvalues of R_s, then

R_x v_i = (R_s + \sigma_w^2 I)v_i = \lambda_i^s v_i + \sigma_w^2 v_i = (\lambda_i^s + \sigma_w^2)v_i.

Therefore, the eigenvectors of R_x are the same as those of R_s, and the eigenvalues of R_x are

\lambda_i = \lambda_i^s + \sigma_w^2.

As a result, the largest eigenvalue of R_x is

\lambda_{max} = M P_1 + \sigma_w^2

and the remaining M-1 eigenvalues are equal to \sigma_w^2. Thus, it is possible to extract all of the parameters of interest about x(n) from the eigenvalues and eigenvectors of R_x as follows:

1. Perform an eigendecomposition of the autocorrelation matrix, R_x. The largest eigenvalue will be equal to M P_1 + \sigma_w^2 and the remaining eigenvalues will be equal to \sigma_w^2.

2. Use the eigenvalues of R_x to solve for the power P_1 and the noise variance as follows:

\sigma_w^2 = \lambda_{min}

P_1 = \frac{1}{M}(\lambda_{max} - \lambda_{min})

3. Determine the frequency \omega_1 from the eigenvector v_{max} that is associated with the largest eigenvalue, using, for example, the second coefficient of v_{max} (the components of the eigenvectors are numbered from v_i(0) to v_i(M-1), and v_{max} is scaled so that v_{max}(0) = 1): \omega_1 = \arg\{v_{max}(1)\}.
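The three steps can be put together in a short end-to-end sketch; here R_x is formed exactly from its definition (rather than estimated from data), using the same illustrative values as above plus an assumed noise variance \sigma_w^2 = 0.5.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative ground truth (arbitrary choices, not from the text)
M, omega1, P1, sigma2 = 8, 0.4 * np.pi, 2.0, 0.5

# Exact autocorrelation matrix R_x = P_1 e_1 e_1^H + sigma_w^2 I
e1 = np.exp(1j * np.arange(M) * omega1)
Rx = P1 * np.outer(e1, e1.conj()) + sigma2 * np.eye(M)

# Step 1: eigendecomposition of R_x (eigenvalues in ascending order)
lam, V = np.linalg.eigh(Rx)

# Step 2: noise variance and signal power from the eigenvalues
sigma2_hat = lam[0]              # sigma_w^2 = lambda_min
P1_hat = (lam[-1] - lam[0]) / M  # P_1 = (lambda_max - lambda_min) / M

# Step 3: frequency from the eigenvector of the largest eigenvalue.
# eigh returns v_max only up to an arbitrary phase, so scale it so that
# v_max(0) = 1 before reading off omega_1 = arg{v_max(1)}.
v_max = V[:, -1] / V[0, -1]
omega1_hat = np.angle(v_max[1])

print(sigma2_hat, P1_hat, omega1_hat)  # ~0.5, ~2.0, ~1.2566 (= 0.4*pi)
</syntaxhighlight>

With an exact R_x the three parameters are recovered essentially to machine precision; with an autocorrelation matrix estimated from data the recovered values would only be approximate.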