The magnitude approximation problem can be formulated as follows:
$$\min_h \; \left\| \, D(\omega) - \left| H(\omega; h) \right| \, \right\|_p^p \qquad (1)$$

Unfortunately, the second term inside the norm (namely the absolute value function) is not differentiable when its argument is zero. Although one could propose ways to work around this problem, I propose the use of a different design criterion, namely the approximation of a desired magnitude squared. The resulting problem is
$$\min_h \; \left\| \, D(\omega)^2 - \left| H(\omega; h) \right|^2 \, \right\|_p^p \qquad (2)$$

The autocorrelation r(n) of a causal length-N FIR filter h(n) is given by
$$r(n) = h(n) * h(-n) = \sum_{k=-(N-1)}^{N-1} h(k)\, h(n+k) \qquad (3)$$

The Fourier transform of the autocorrelation r(n) is known as the Power Spectral Density function [2] R(ω) (or simply the PSD), and is defined as follows,
$$R(\omega) = \sum_{n=-(N-1)}^{N-1} r(n)\, e^{-j\omega n} = \sum_{n=-(N-1)}^{N-1} \sum_{k=-(N-1)}^{N-1} h(k)\, h(n+k)\, e^{-j\omega n} \qquad (4)$$

From the properties of the Fourier transform [1] one can show that there exists a frequency-domain relationship between h(n) and r(n) given by
$$R(\omega) = H(\omega) \cdot H^{*}(\omega) = \left| H(\omega) \right|^2 \qquad (5)$$

This relationship suggests a way to design magnitude-squared filters, namely by using the filter's autocorrelation coefficients instead of the filter coefficients themselves. In this way, one can avoid the use of the non-differentiable magnitude response.
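The identity in Equation 5 is easy to check numerically. The sketch below, assuming NumPy and an arbitrary illustrative filter h, computes the autocorrelation of Equation 3 by convolution and compares its Fourier transform against the magnitude-squared response:

```python
import numpy as np

# Example length-N FIR filter (arbitrary coefficients, for illustration only).
h = np.array([0.2, 0.5, 0.3, -0.1])
N = len(h)

# Autocorrelation r(n) = h(n) * h(-n) (Equation 3); lags n = -(N-1), ..., N-1.
r = np.convolve(h, h[::-1])

# Evaluate R(w) and |H(w)|^2 on a frequency grid.
w = np.linspace(0, np.pi, 256)
n = np.arange(-(N - 1), N)                      # lags indexing r(n)
R = (r[None, :] * np.exp(-1j * w[:, None] * n[None, :])).sum(axis=1).real
H = (h[None, :] * np.exp(-1j * w[:, None] * np.arange(N)[None, :])).sum(axis=1)

# Equation 5: the PSD equals the magnitude-squared frequency response.
assert np.allclose(R, np.abs(H) ** 2)
```

Note that r is symmetric about lag 0, so R(ω) comes out purely real, as the `.real` projection assumes.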
An important property to note at this point is that, since the filter coefficients are real, one can see from Equation 3 that the autocorrelation function r(n) is symmetric; thus it is sufficient to consider its last N values. As a result, the PSD can be written as
$$R(\omega) = \sum_{n} r(n)\, e^{-j\omega n} = r(0) + \sum_{n=1}^{N-1} 2\, r(n) \cos(\omega n) \qquad (6)$$

in a similar way to the linear-phase problem.
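A quick numerical sanity check of the one-sided cosine form in Equation 6, again assuming NumPy and an illustrative filter:

```python
import numpy as np

h = np.array([0.2, 0.5, 0.3, -0.1])     # example filter (illustrative only)
N = len(h)
r = np.convolve(h, h[::-1])[N - 1:]     # one-sided autocorrelation r(0), ..., r(N-1)

w = np.linspace(0, np.pi, 128)
# Equation 6: R(w) = r(0) + sum_{n=1}^{N-1} 2 r(n) cos(w n)
R_cos = r[0] + sum(2 * r[n] * np.cos(w * n) for n in range(1, N))

# Cross-check against |H(w)|^2 (Equation 5).
H = (h[None, :] * np.exp(-1j * w[:, None] * np.arange(N)[None, :])).sum(axis=1)
assert np.allclose(R_cos, np.abs(H) ** 2)
```

The symmetry of r(n) is what folds the two-sided exponential sum into the cosine sum, exactly as in the linear-phase case.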
The symmetry property introduced above allows for the use of the l_p linear-phase algorithm of (Reference) to obtain the autocorrelation coefficients of h(n). However, there is an important step missing in this discussion: how to obtain the filter coefficients from the autocorrelation. To achieve this goal, one can follow a procedure known as spectral factorization. The objective is to use the autocorrelation coefficients r ∈ R^N instead of the filter coefficients h ∈ R^N as the optimization variables. This change of variables (Equation 3) is not one-to-one, so a vector r ∈ R^N must satisfy a condition in order to be a valid autocorrelation vector of some filter. This condition is summarized [3] in the spectral factorization theorem, which states that r ∈ R^N is the autocorrelation function of a filter h(n) if and only if R(ω) ≥ 0 for all ω ∈ [0, π]; this is both a necessary and sufficient condition [3] for the existence of h(n). Once the autocorrelation vector r is found using existing robust interior-point algorithms, the filter coefficients can be calculated via spectral factorization techniques.
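One common spectral factorization technique is root-based: regard z^{N-1} R(z) as a polynomial, whose roots come in reciprocal pairs, and keep the roots inside the unit circle to obtain a minimum-phase factor. The sketch below assumes NumPy, a strictly positive R(ω) (no roots on the unit circle), and an illustrative round trip from a filter h0 through its autocorrelation and back; it is a didactic sketch, not a numerically robust implementation (for large N, cepstral or Bauer-type methods are preferred).

```python
import numpy as np

def spectral_factor(r):
    """Root-based spectral factorization (a sketch, not production code).

    Given a one-sided autocorrelation r(0), ..., r(N-1) with R(w) > 0,
    recover a minimum-phase h whose autocorrelation is r.
    """
    # Coefficients of z^{N-1} R(z), highest power first:
    # r(N-1), ..., r(1), r(0), r(1), ..., r(N-1).
    c = np.concatenate([r[::-1], r[1:]])
    roots = np.roots(c)
    inside = roots[np.abs(roots) < 1]           # keep minimum-phase roots
    h = np.real(np.poly(inside))                # monic polynomial from roots
    h *= np.sqrt(r[0] / np.sum(h ** 2))         # rescale so r(0) matches
    return h

# Round trip: h0 -> r0 -> spectral factor -> identical autocorrelation.
h0 = np.array([1.0, 0.4, -0.2])
r0 = np.convolve(h0, h0[::-1])[len(h0) - 1:]
h1 = spectral_factor(r0)
r1 = np.convolve(h1, h1[::-1])[len(h1) - 1:]
assert np.allclose(r0, r1)
```

The recovered h1 is determined only up to sign and choice of roots; the minimum-phase convention above picks one representative of the many filters sharing the autocorrelation r0.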
Assuming a valid vector r ∈ R^N can be found for a particular filter h, the problem presented in Equation 1 can be rewritten as
$$L(\omega)^2 \leq R(\omega) \leq U(\omega)^2 \quad \forall\, \omega \in [0, \pi] \qquad (7)$$

In Equation 7 the existence condition R(ω) ≥ 0 is redundant, since 0 ≤ L(ω)^2, and thus it is not included in the problem definition. For each ω, the constraints of Equation 7 constitute a pair of linear inequalities in the vector r; therefore the constraint is convex in r. Thus the change of variables transforms a nonconvex optimization problem in h into a convex problem in r.
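To make the linearity concrete: by Equation 6, R(ω) = r(0) + Σ 2 r(n) cos(ωn) is a linear function of r, so discretizing Equation 7 on a frequency grid yields a pair of linear inequalities per grid point. The sketch below, assuming SciPy's `linprog` and hypothetical lowpass bounds L and U chosen for illustration, checks feasibility of the resulting constraint set (discretization is an approximation: the bounds are only enforced at the grid points):

```python
import numpy as np
from scipy.optimize import linprog

N = 10                                   # filter length (illustrative)
w = np.linspace(0, np.pi, 200)           # frequency grid

# R(w) = r(0) + sum_{n=1}^{N-1} 2 r(n) cos(w n) is linear in r: R = C @ r.
C = np.hstack([np.ones((len(w), 1)),
               2 * np.cos(np.outer(w, np.arange(1, N)))])

# Hypothetical lowpass bounds: passband [0, 0.2 pi], stopband [0.6 pi, pi].
L2 = np.where(w <= 0.2 * np.pi, 0.8 ** 2, 0.0)        # L(w)^2; 0 elsewhere
U2 = np.where(w >= 0.6 * np.pi, 0.05 ** 2, 1.2 ** 2)  # U(w)^2

# Feasibility LP in r: L(w)^2 <= C r <= U(w)^2 at every grid point,
# stacked as A_ub r <= b_ub.  Note r is unbounded in sign.
A_ub = np.vstack([C, -C])
b_ub = np.concatenate([U2, -L2])
res = linprog(c=np.zeros(N), A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * N)
print("feasible:", res.success)
```

Because L(ω)^2 is set to 0 outside the passband, the grid version of the existence condition R(ω) ≥ 0 is enforced automatically, matching the remark above about its redundancy. A feasible r from this program would then be passed to spectral factorization to recover h.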