We take a look back at our previous examples in light of the results of two previous sections, The Spectral Representation and The Partial Fraction Expansion of the Transfer Function. With respect to the rotation matrix
$$B = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$$
we recall, see Cauchy's Theorem, eqn. 6, that
$$R(s) = \frac{1}{s^2+1}\begin{pmatrix} s & 1 \\ -1 & s \end{pmatrix}$$
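As a quick sanity check (a NumPy sketch, not part of the original text), one can confirm numerically that this formula agrees with the definition $R(s) = (sI - B)^{-1}$ at a sample point away from the eigenvalues:

```python
import numpy as np

# Rotation matrix from the text.
B = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# Check R(s) = (sI - B)^{-1} = [[s, 1], [-1, s]] / (s^2 + 1)
# at an arbitrary sample point s = 2 (any s with s^2 + 1 != 0 works).
s = 2.0
R_direct = np.linalg.inv(s * np.eye(2) - B)
R_formula = np.array([[s, 1.0], [-1.0, s]]) / (s**2 + 1.0)
print(np.allclose(R_direct, R_formula))  # True
```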
$$R(s) = \frac{1}{s-i}\begin{pmatrix} 1/2 & -i/2 \\ i/2 & 1/2 \end{pmatrix} + \frac{1}{s+i}\begin{pmatrix} 1/2 & i/2 \\ -i/2 & 1/2 \end{pmatrix}$$
$$R(s) = \frac{1}{s-\lambda_1}\,P_1 + \frac{1}{s-\lambda_2}\,P_2 \tag{1}$$
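This partial fraction expansion can also be checked numerically; the following sketch (mine, not the text's) compares both sides at a sample point, with $\lambda_1 = i$, $\lambda_2 = -i$:

```python
import numpy as np

B = np.array([[0, 1], [-1, 0]], dtype=complex)

# The two matrices appearing in the partial fraction expansion.
P1 = np.array([[0.5, -0.5j], [0.5j, 0.5]])
P2 = np.array([[0.5, 0.5j], [-0.5j, 0.5]])

# Compare (sI - B)^{-1} with P1/(s - i) + P2/(s + i) at s = 2.
s = 2.0
R = np.linalg.inv(s * np.eye(2) - B)
print(np.allclose(R, P1 / (s - 1j) + P2 / (s + 1j)))  # True
```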
and so
$$B = \lambda_1 P_1 + \lambda_2 P_2 = i\begin{pmatrix} 1/2 & -i/2 \\ i/2 & 1/2 \end{pmatrix} - i\begin{pmatrix} 1/2 & i/2 \\ -i/2 & 1/2 \end{pmatrix}.$$
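One can verify in a few lines (a NumPy sketch, standing in for the Matlab the text uses elsewhere) that $P_1$ and $P_2$ behave as complementary projections and reassemble $B$:

```python
import numpy as np

B = np.array([[0, 1], [-1, 0]], dtype=complex)

# Eigenprojections read off from the partial fraction expansion.
P1 = np.array([[0.5, -0.5j], [0.5j, 0.5]])
P2 = np.array([[0.5, 0.5j], [-0.5j, 0.5]])

# Complementary projections: P1 + P2 = I, P1^2 = P1, P1 P2 = 0 ...
print(np.allclose(P1 + P2, np.eye(2)))          # True
print(np.allclose(P1 @ P1, P1))                 # True
print(np.allclose(P1 @ P2, np.zeros((2, 2))))   # True
# ... and they reassemble B via B = lam1*P1 + lam2*P2 with lam1 = i, lam2 = -i.
print(np.allclose(1j * P1 - 1j * P2, B))        # True
```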
From $m_1 = m_2 = 1$ it follows that $\mathcal{R}(P_1)$ and $\mathcal{R}(P_2)$ are actual (as opposed to generalized) eigenspaces. These column spaces are easily determined. In particular, $\mathcal{R}(P_1)$ is the span of
$$e_1 = \begin{pmatrix} 1 \\ i \end{pmatrix}$$
while $\mathcal{R}(P_2)$ is the span of
$$e_2 = \begin{pmatrix} 1 \\ -i \end{pmatrix}.$$
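That these are honest eigenvectors, with $Be_1 = ie_1$ and $Be_2 = -ie_2$, is easy to confirm numerically (again a NumPy sketch, not part of the original):

```python
import numpy as np

B = np.array([[0, 1], [-1, 0]], dtype=complex)
e1 = np.array([1, 1j])
e2 = np.array([1, -1j])

# B e1 = i e1 and B e2 = -i e2, so e1, e2 are eigenvectors for i and -i.
print(np.allclose(B @ e1, 1j * e1))   # True
print(np.allclose(B @ e2, -1j * e2))  # True
```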
To recapitulate, from the partial fraction expansion one can read off the projections, and from the projections one can read off the eigenvectors. The reverse direction, producing projections from eigenvectors, is equally worthwhile. We laid the groundwork for this step in the discussion of Least Squares. In particular, the Least Squares projection equation stipulates that
$$P_1 = e_1 (e_1^T e_1)^{-1} e_1^T \quad\text{and}\quad P_2 = e_2 (e_2^T e_2)^{-1} e_2^T.$$
As
$$e_1^T e_1 = e_2^T e_2 = 0,$$
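The failure is concrete: the unconjugated inner product annihilates both eigenvectors, so the inverse in the projection formula does not exist. A quick check (my sketch):

```python
import numpy as np

e1 = np.array([1, 1j])
e2 = np.array([1, -1j])

# The plain (unconjugated) transpose gives e1^T e1 = 1^2 + i^2 = 0,
# so (e1^T e1)^{-1} does not exist; likewise for e2.
print(e1 @ e1)  # 1 + i^2 = 0
print(e2 @ e2)  # 1 + (-i)^2 = 0
```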
these formulas cannot possibly be correct. Returning to the Least Squares discussion we realize that it was, perhaps implicitly, assumed that all quantities were real. At root is the notion of the length of a complex vector. It is not the square root of the sum of the squares of its components but rather the square root of the sum of the squares of the *magnitudes* of its components. That is, recalling that the magnitude of a complex quantity $z$ is $\sqrt{z\bar{z}}$,

$$\|e_1\|^2 \ne e_1^T e_1; \quad\text{rather,}\quad \|e_1\|^2 = \bar{e}_1^T e_1.$$
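In NumPy terms (a sketch of mine, not the text's), the distinction looks like this:

```python
import numpy as np

e1 = np.array([1, 1j])

# Wrong: sum of squares of the components.
print(e1 @ e1)            # 1 + i^2 = 0, useless as a squared length
# Right: sum of squares of the *magnitudes* -- conjugate before transposing.
print(np.conj(e1) @ e1)   # |1|^2 + |i|^2 = 2
print(np.vdot(e1, e1))    # same thing; vdot conjugates its first argument
```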
Yes, we have had this discussion before; recall complex numbers, vectors, and matrices. The upshot of all of this is that, when
dealing with complex vectors and matrices, one should conjugate
before every transpose. Matlab (of course) does this
automatically,

i.e., the ' symbol conjugates
and transposes simultaneously. We use $x^H$ to denote `conjugate transpose', i.e.,
$$x^H \equiv \bar{x}^T.$$
All this suggests that the desired projections are more likely
$$P_1 = e_1 (e_1^H e_1)^{-1} e_1^H \quad\text{and}\quad P_2 = e_2 (e_2^H e_2)^{-1} e_2^H. \tag{2}$$
Please check that Equation (2) indeed jibes with Equation (1).
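The requested check can be carried out numerically; in the NumPy sketch below, `e.conj().T` plays the role of Matlab's `'` operator, and the projections built from Equation (2) are compared against the matrices read off from the partial fraction expansion in Equation (1):

```python
import numpy as np

e1 = np.array([[1], [1j]])    # eigenvectors as column vectors
e2 = np.array([[1], [-1j]])

def proj(e):
    # P = e (e^H e)^{-1} e^H, with e.conj().T as the conjugate transpose.
    return e @ np.linalg.inv(e.conj().T @ e) @ e.conj().T

# Projections read off from the partial fraction expansion, Equation (1).
P1 = np.array([[0.5, -0.5j], [0.5j, 0.5]])
P2 = np.array([[0.5, 0.5j], [-0.5j, 0.5]])

print(np.allclose(proj(e1), P1))  # True
print(np.allclose(proj(e2), P2))  # True
```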