# The Expected Energy of a Gaussian Random Vector

Module by: Justin Romberg

This module is concerned with the expected energy in a Gaussian random vector. We consider both the case where the entries of the vector are uncorrelated and the case where they are correlated.

## Independent and identically distributed (iid) entries

Throughout this module, $z$ is a vector in $\mathbb{R}^N$ whose components $z[1], \ldots, z[N]$ are independent and identically distributed Gaussian random variables with mean zero and variance $\sigma^2$:

$$z[n] \sim \mathrm{Normal}(0, \sigma^2),$$
(1)

or more compactly,

$$z \sim \mathrm{Normal}(0, \sigma^2 I).$$
(2)

To begin, we are interested in the expected energy of $z$; that is, $\mathrm{E}\big[\|z\|_2^2\big]$, where $\|\cdot\|_2$ is the standard Euclidean norm. We have

$$\mathrm{E}\big[\|z\|_2^2\big] = \mathrm{E}\left[\sum_{k=1}^{N} |z[k]|^2\right] = \sum_{k=1}^{N} \mathrm{E}\big[|z[k]|^2\big] = N\sigma^2 .$$
(3)
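Equation (3) is easy to verify numerically. Below is a minimal Monte Carlo sketch (assuming NumPy is available; the particular values of $N$, $\sigma$, and the number of trials are arbitrary choices for illustration):

```python
import numpy as np

# Monte Carlo check of Equation (3): E||z||_2^2 = N * sigma^2
# for z with iid Normal(0, sigma^2) entries.
rng = np.random.default_rng(0)
N, sigma, trials = 50, 1.5, 200_000

z = rng.normal(0.0, sigma, size=(trials, N))   # each row is one draw of z
avg_energy = np.mean(np.sum(z**2, axis=1))     # sample average of ||z||_2^2

print(avg_energy)   # close to N * sigma^2 = 112.5
```

The sample average concentrates around $N\sigma^2$ as the number of trials grows.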

## Projecting an iid Gaussian random vector

Now suppose that $V$ is an $N \times R$ matrix, with $R \leq N$, whose columns are orthonormal ($V^T V = I$, but $V V^T \neq I$ unless $R = N$). We are interested in the expected energy of

$$y = V^T z .$$
(4)

We can interpret $y$ as the projection of $z$ onto the subspace spanned by the columns of $V$. The expected energy is easy to compute once we realize that the entries of $y$ will also be independent Gaussian random variables with mean zero and variance $\sigma^2$. To see this, note that the $k$th entry of $y$ can be written as

$$y[k] = \langle v_k, z \rangle ,$$
(5)

where $v_k$ is the $k$th column of $V$, and so

$$\mathrm{E}\big[y[k]\big] = \mathrm{E}\big[\langle v_k, z\rangle\big] = \mathrm{E}\left[\sum_{n=1}^{N} v_k[n]\, z[n]\right] = \sum_{n=1}^{N} v_k[n]\, \mathrm{E}\big[z[n]\big] = 0 .$$
(6)

Likewise,

$$\begin{aligned}
\mathrm{E}\big[y[k]\, y[\ell]\big] &= \mathrm{E}\left[\sum_{n=1}^{N} v_k[n]\, z[n] \cdot \sum_{m=1}^{N} v_\ell[m]\, z[m]\right] \\
&= \sum_{n=1}^{N} \sum_{m=1}^{N} v_k[n]\, v_\ell[m]\, \mathrm{E}\big[z[n]\, z[m]\big] \\
&= \sigma^2 \sum_{n=1}^{N} v_k[n]\, v_\ell[n] \quad \text{(since } \mathrm{E}[z[n]z[m]] = 0 \text{ unless } m = n \text{, in which case it is } \sigma^2\text{)} \\
&= \sigma^2 \langle v_k, v_\ell \rangle
= \begin{cases} \sigma^2 & k = \ell \\ 0 & k \neq \ell . \end{cases}
\end{aligned}$$
(7)

Finally, we have

$$\mathrm{E}\big[\|y\|_2^2\big] = \mathrm{E}\big[\|V^T z\|_2^2\big] = \sum_{k=1}^{R} \mathrm{E}\big[\langle v_k, z\rangle^2\big] = R\sigma^2 = \frac{R}{N}\, \mathrm{E}\big[\|z\|_2^2\big] .$$
(8)

Thus projecting a Gaussian random vector in $\mathbb{R}^N$ into $\mathbb{R}^R$ decreases the expected energy by the ratio of the dimensions, $R/N$.
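This dimension-ratio effect can also be checked numerically. The sketch below (assuming NumPy; the matrix $V$ is an arbitrary example with orthonormal columns, built via a QR factorization) estimates the expected energy of $V^T z$:

```python
import numpy as np

# Monte Carlo check that E||V^T z||_2^2 = (R/N) * E||z||_2^2 for an
# N x R matrix V with orthonormal columns; V here is an arbitrary example.
rng = np.random.default_rng(1)
N, R, sigma, trials = 40, 10, 2.0, 100_000

# Orthonormal columns via the (reduced) QR factorization of a random matrix.
V, _ = np.linalg.qr(rng.normal(size=(N, R)))   # V^T V = I

z = rng.normal(0.0, sigma, size=(N, trials))   # columns are draws of z
y = V.T @ z                                    # y = V^T z for every draw
avg_energy = np.mean(np.sum(y**2, axis=0))

print(avg_energy)   # close to R * sigma^2 = (R/N) * N * sigma^2 = 40.0
```

Any choice of $V$ with orthonormal columns gives the same answer, since the derivation above used only $V^T V = I$.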

It is also worth noting here that when $R = N$, the matrix $V$ is orthonormal, so $V V^T = I$; not only is $\mathrm{E}\big[\|y\|_2^2\big] = \mathrm{E}\big[\|z\|_2^2\big]$, but the two vectors have identical distributions. Since $y$ is a linear function of a zero-mean Gaussian random vector, it is itself a zero-mean Gaussian random vector with correlation matrix

$$\mathrm{E}\big[y y^T\big] = \mathrm{E}\big[V^T z z^T V\big] = V^T\, \mathrm{E}\big[z z^T\big]\, V = \sigma^2 V^T V = \sigma^2 I .$$
(9)

## Multiplying an iid Gaussian random vector by a diagonal matrix

Now suppose we pass $z$ through an $N \times N$ diagonal matrix $\Lambda$,

$$\Lambda = \begin{bmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_N \end{bmatrix} .$$
(10)

What is the expected energy of $\Lambda z$? Proceeding as above, we have

$$\mathrm{E}\big[\|\Lambda z\|_2^2\big] = \mathrm{E}\left[\sum_{k=1}^{N} \lambda_k^2\, z[k]^2\right] = \sigma^2 \sum_{k=1}^{N} \lambda_k^2 = \sigma^2 \cdot \mathrm{Trace}\big(\Lambda^T \Lambda\big) .$$
(11)

(Recall that the trace of a matrix is simply the sum of the terms along its diagonal.) Thus the expected energy of $\Lambda z$ is simply the expected energy of $z$, which is $N\sigma^2$, times the average of the squared entries of $\Lambda$, which is $\frac{1}{N}(\lambda_1^2 + \lambda_2^2 + \cdots + \lambda_N^2)$.
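The diagonal-matrix result in Equation (11) can be sketched numerically as follows (assuming NumPy; the diagonal entries $\lambda_k$ are drawn arbitrarily for illustration):

```python
import numpy as np

# Monte Carlo check of Equation (11):
# E||Lambda z||_2^2 = sigma^2 * Trace(Lambda^T Lambda) = sigma^2 * sum_k lambda_k^2.
rng = np.random.default_rng(2)
N, sigma, trials = 30, 1.0, 200_000

lam = rng.uniform(0.5, 2.0, size=N)            # arbitrary diagonal entries
Lam = np.diag(lam)

z = rng.normal(0.0, sigma, size=(N, trials))
avg_energy = np.mean(np.sum((Lam @ z)**2, axis=0))
predicted = sigma**2 * np.trace(Lam.T @ Lam)   # = sigma^2 * np.sum(lam**2)

print(avg_energy, predicted)   # the two agree closely
```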

## Multiplying an iid Gaussian random vector by an arbitrary matrix

We have seen what happens when we apply an orthogonal matrix or a diagonal matrix to a vector $z$ with iid Gaussian entries. We now combine these results to analyze the application of an arbitrary matrix.

Let $A$ be an $M \times N$ matrix with singular value decomposition (SVD)

$$A = U \Sigma V^T .$$
(12)

With $R$ the rank of $A$, we know that

• $U$ is an $M \times R$ matrix whose columns are orthonormal: $U^T U = I$,
• $\Sigma$ is an $R \times R$ diagonal matrix with entries $\sigma_1, \ldots, \sigma_R > 0$,
• $V$ is an $N \times R$ matrix whose columns are orthonormal: $V^T V = I$.

The expected energy of $Az$ is

$$\mathrm{E}\big[\|A z\|_2^2\big] = \mathrm{E}\big[\langle U \Sigma V^T z,\; U \Sigma V^T z\rangle\big] = \mathrm{E}\big[\langle \Sigma V^T z,\; \Sigma V^T z\rangle\big] \quad \text{(since } U^T U = I\text{)} \quad = \mathrm{E}\big[\|\Sigma y\|_2^2\big] ,$$
(13)

where $y = V^T z$. As we saw above, the entries of $y \in \mathbb{R}^R$ will also be independent Gaussian random variables with zero mean and variance $\sigma^2$. Applying our previous result on diagonal matrices applied to random vectors with iid entries, we have

$$\mathrm{E}\big[\|A z\|_2^2\big] = \sigma^2\, \mathrm{Trace}\big(\Sigma^T \Sigma\big) = \sigma^2 \sum_{k=1}^{R} \sigma_k^2 .$$
(14)

In general, the expected energy of $Az$ is the expected energy of $z$ multiplied by the average of the squared singular values of $A$ (if we include the zero singular values $\sigma_{R+1}^2, \ldots, \sigma_N^2$).
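The general result in Equation (14) can also be checked directly, without forming the SVD factors explicitly. The sketch below (assuming NumPy; $A$ is an arbitrary example matrix) compares the sample average of $\|Az\|_2^2$ with $\sigma^2 \sum_k \sigma_k^2$:

```python
import numpy as np

# Monte Carlo check of Equation (14): for an arbitrary M x N matrix A,
# E||A z||_2^2 = sigma^2 * (sum of the squared singular values of A).
rng = np.random.default_rng(3)
M, N, sigma, trials = 20, 35, 1.0, 200_000

A = rng.normal(size=(M, N))                     # an arbitrary example matrix
svals = np.linalg.svd(A, compute_uv=False)      # singular values of A

z = rng.normal(0.0, sigma, size=(N, trials))
avg_energy = np.mean(np.sum((A @ z)**2, axis=0))
predicted = sigma**2 * np.sum(svals**2)         # = sigma^2 * Trace(A^T A)

print(avg_energy, predicted)
```

Note that $\sum_k \sigma_k^2 = \mathrm{Trace}(A^T A)$, so the prediction can also be computed without an SVD at all.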

## The energy of colored noise

Suppose that $x \in \mathbb{R}^N$ is a Gaussian random vector with zero mean and correlation matrix

$$\mathrm{E}\big[x x^T\big] = R .$$
(15)

What is the expected energy $\mathrm{E}\big[\|x\|_2^2\big]$?

This question is easily answered given the work above. Since $R$ is symmetric and nonnegative definite, the matrix $R^{1/2}$ is well-defined by

$$R^{1/2} = V \Lambda^{1/2} V^T ,$$
(16)

where $R = V \Lambda V^T$ is the eigenvalue decomposition of $R$. Notice that $R^{1/2}$ is also symmetric and nonnegative definite. If $z \sim \mathrm{Normal}(0, I)$, then $R^{1/2} z$ has the same distribution as $x$, since it is Gaussian, zero mean, and has correlation matrix

$$\mathrm{E}\big[R^{1/2} z z^T R^{1/2}\big] = R^{1/2}\, \mathrm{E}\big[z z^T\big]\, R^{1/2} = R .$$
(17)

As a result,

$$\mathrm{E}\big[\|x\|_2^2\big] = \mathrm{E}\big[\|R^{1/2} z\|_2^2\big] = \mathrm{Trace}\big((R^{1/2})^T R^{1/2}\big) = \mathrm{Trace}(R) .$$
(18)
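Equation (18) can be sketched numerically by sampling $x = R^{1/2} z$ directly (assuming NumPy; the correlation matrix $R$ below is an arbitrary example built as $B B^T$ to guarantee it is symmetric and nonnegative definite):

```python
import numpy as np

# Monte Carlo check of Equation (18): E||x||_2^2 = Trace(R) when
# x = R^{1/2} z and z ~ Normal(0, I); R here is an arbitrary example.
rng = np.random.default_rng(4)
N, trials = 25, 200_000

# Build a symmetric nonnegative definite correlation matrix R = B B^T.
B = rng.normal(size=(N, N))
R = B @ B.T

# R^{1/2} = V Lambda^{1/2} V^T from the eigenvalue decomposition of R.
evals, V = np.linalg.eigh(R)
R_half = V @ np.diag(np.sqrt(np.clip(evals, 0.0, None))) @ V.T

z = rng.normal(size=(N, trials))
x = R_half @ z                                  # x has correlation matrix R
avg_energy = np.mean(np.sum(x**2, axis=0))

print(avg_energy, np.trace(R))
```

The clipping of the eigenvalues guards against tiny negative values introduced by floating-point roundoff before the square root is taken.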
