General Solutions of Simultaneous Equations

Module by: C. Sidney Burrus

The second problem posed in the introduction is basically the solution of simultaneous linear equations [19], [1], [6], which is fundamental to linear algebra [17], [33], [23] and very important in diverse areas of application in mathematics, numerical analysis, the physical and social sciences, engineering, and business. Since a system of linear equations may be over- or under-determined in a variety of ways, or may be consistent but ill conditioned, a comprehensive theory turns out to be more complicated than it first appears. Indeed, there is a considerable literature on the subject of generalized inverses or pseudo-inverses. The careful statement and formulation of the general problem seems to have started with Moore [24] and Penrose [27], [28] and was developed by many others. Because the generalized solution of simultaneous equations is often defined in terms of minimization of an equation error, the techniques are useful in a wide variety of approximation and optimization problems [7], [21] as well as in signal processing.

The ideas are presented here in terms of finite dimensions using matrices. Many of the ideas extend to infinite dimensions using Banach and Hilbert spaces [30], [26], [35] in functional analysis.

The Problem

Given an $M$ by $N$ real matrix $A$ and an $M$ by 1 vector $b$, find the $N$ by 1 vector $x$ when

$$
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1N} \\
a_{21} & a_{22} & a_{23} & & \\
a_{31} & a_{32} & a_{33} & & \\
\vdots & & & \ddots & \\
a_{M1} & & & & a_{MN}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_N \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ \vdots \\ b_M \end{bmatrix}
$$
(1)

or, using matrix notation,

$$ A x = b $$
(2)

If $b$ does not lie in the range space of $A$ (the space spanned by the columns of $A$), there is no exact solution to Equation 2; therefore, an approximation problem can be posed by minimizing an equation error defined by

$$ \varepsilon = A x - b . $$
(3)

A generalized solution (or an optimal approximate solution) to Equation 2 is usually considered to be an $x$ that minimizes some norm of $\varepsilon$. If that problem does not have a unique solution, further conditions, such as also minimizing the norm of $x$, are imposed. The $l_2$ or root-mean-squared error or Euclidean norm, whose square is $\varepsilon^{T*}\varepsilon$, sometimes allows an analytical minimization. Minimization of other norms such as $l_\infty$ (Chebyshev) or $l_1$ requires iterative solutions. The general $l_p$ norm is defined as $q$ where

$$ q = ||x||_p = \left( \sum_n |x(n)|^p \right)^{1/p} $$
(4)

for $1 < p < \infty$, and a “pseudonorm” (not convex) for $0 < p < 1$. These can sometimes be evaluated using IRLS (iterative reweighted least squares) algorithms [3], [5], [34], [16], [11].
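
As a small numerical illustration of Equation 4 (a sketch only; the function name lp_norm and the test vector are ours), the $l_p$ norm can be evaluated directly in Python/NumPy, with the Chebyshev ($l_\infty$) norm obtained as the maximum absolute value:

```python
import numpy as np

def lp_norm(x, p):
    """General l_p measure of Equation 4: (sum_n |x(n)|^p)^(1/p).
    A true norm for p >= 1; only a non-convex "pseudonorm" for 0 < p < 1."""
    x = np.asarray(x, dtype=float)
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

x = np.array([3.0, -4.0, 0.5])
print(lp_norm(x, 1))        # l1 norm: 7.5
print(lp_norm(x, 2))        # l2 (Euclidean) norm: about 5.025
print(np.max(np.abs(x)))    # l_infinity (Chebyshev) norm: 4.0
```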

If there is a non-zero solution of the homogeneous equation

$$ A x = 0 , $$
(5)

then Equation 2 has infinitely many generalized solutions in the sense that any particular solution of Equation 2 plus an arbitrary scalar times any non-zero solution of Equation 5 will have the same error in Equation 3 and, therefore, is also a generalized solution. The number of families of solutions is the dimension of the null space of $A$.

This is analogous to the classical solution of linear, constant coefficient differential equations where the total solution consists of a particular solution plus arbitrary constants times the solutions to the homogeneous equation. The constants are determined from the initial (or other) conditions of the solution to the differential equation.

Ten Cases to Consider

Examination of the basic problem shows there are ten cases [19] listed in Figure 1 to be considered. These depend on the shape of the $M$ by $N$ real matrix $A$, the rank $r$ of $A$, and whether $b$ is in the span of the columns of $A$.

  • 1a. $M=N=r$: One solution with no error, $\varepsilon = 0$.
  • 1b. $M=N>r$: $b \in \mathrm{span}\{A\}$: Many solutions with $\varepsilon=0$.
  • 1c. $M=N>r$: $b \notin \mathrm{span}\{A\}$: Many solutions with the same minimum error.
  • 2a. $M>N=r$: $b \in \mathrm{span}\{A\}$: One solution with $\varepsilon=0$.
  • 2b. $M>N=r$: $b \notin \mathrm{span}\{A\}$: One solution with minimum error.
  • 2c. $M>N>r$: $b \in \mathrm{span}\{A\}$: Many solutions with $\varepsilon=0$.
  • 2d. $M>N>r$: $b \notin \mathrm{span}\{A\}$: Many solutions with the same minimum error.
  • 3a. $N>M=r$: Many solutions with $\varepsilon=0$.
  • 3b. $N>M>r$: $b \in \mathrm{span}\{A\}$: Many solutions with $\varepsilon=0$.
  • 3c. $N>M>r$: $b \notin \mathrm{span}\{A\}$: Many solutions with the same minimum error.

Figure 1. Ten Cases for the Pseudoinverse.

Here we have:

  • case 1 has the same number of equations as unknowns ($A$ is square, $M=N$),
  • case 2 has more equations than unknowns and, therefore, is over-specified ($A$ is taller than wide, $M>N$),
  • case 3 has fewer equations than unknowns and, therefore, is under-specified ($A$ is wider than tall, $N>M$).

This is a setting for frames and sparse representations.

In cases 1a and 3a, $b$ is necessarily in the span of $A$. In addition to these classifications, the possible orthogonality of the columns or rows of the matrices gives special characteristics.

Examples

Case 1: Here we see a 3 x 3 square matrix, which is an example of case 1 in Figures 1 and 2.

$$
\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}
$$
(6)

If the matrix has rank 3, then the $b$ vector will necessarily be in the space spanned by the columns of $A$, which puts it in case 1a. This can be solved for $x$ by inverting $A$ or using some more robust method. If the matrix has rank 1 or 2, then $b$ may or may not lie in the spanned subspace, so the classification will be 1b or 1c, and minimization of $||x||_2^2$ yields a unique solution.

Case 2: If $A$ is 4 x 3, then we have more equations than unknowns, or the over-specified or overdetermined case.

$$
\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \\ a_{41} & a_{42} & a_{43} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{bmatrix}
$$
(7)

If this matrix has the maximum rank of 3, then we have case 2a or 2b depending on whether $b$ is in the span of $A$ or not. In either case, a unique solution $x$ exists which can be found by Equation 15 or Equation 21. For case 2a, we have a single exact solution with no equation error, $\varepsilon=0$, just as in case 1a. For case 2b, we have a single optimal approximate solution with the least possible equation error. If the matrix has rank 1 or 2, the classification will be 2c or 2d, and minimization of $||x||_2^2$ yields a unique solution.

Case 3: If $A$ is 3 x 4, then we have more unknowns than equations, or the under-specified case.

$$
\begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}
$$
(8)

If this matrix has the maximum rank of 3, then we have case 3a, and $b$ must be in the span of $A$. For this case, many exact solutions $x$ exist, all having zero equation error, and a single one can be found with minimum solution norm $||x||$ using Equation 17 or Equation 22. If the matrix has rank 1 or 2, the classification will be 3b or 3c.
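
The three example shapes above can be checked numerically. The following sketch (assuming NumPy and generic random full-rank matrices, which are hypothetical) shows the exact solution of the square case 1a, the unique least squares solution of the tall case 2, and the minimum norm solution of the wide case 3a:

```python
import numpy as np

rng = np.random.default_rng(0)

# Case 1a: 3 x 3, full rank -- one exact solution
A1 = rng.standard_normal((3, 3))
b1 = rng.standard_normal(3)
x1 = np.linalg.solve(A1, b1)
print(np.allclose(A1 @ x1, b1))        # True: zero equation error

# Case 2 (generically 2b): 4 x 3, full column rank -- unique least squares solution
A2 = rng.standard_normal((4, 3))
b2 = rng.standard_normal(4)
x2 = np.linalg.pinv(A2) @ b2
print(np.linalg.norm(A2 @ x2 - b2))    # minimum (generally non-zero) equation error

# Case 3a: 3 x 4, full row rank -- many exact solutions; this one has minimum ||x||
A3 = rng.standard_normal((3, 4))
b3 = rng.standard_normal(3)
x3 = np.linalg.pinv(A3) @ b3
print(np.allclose(A3 @ x3, b3))        # True: b is necessarily in the span of A
```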

Solutions

There are several assumptions or side conditions that could be used in order to define a useful unique solution of Equation 2. The side conditions used to define the Moore-Penrose pseudo-inverse are that the $l_2$ norm squared of the equation error $\varepsilon$ be minimized and, if there is ambiguity (several solutions with the same minimum error), the $l_2$ norm squared of $x$ also be minimized. A useful alternative to minimizing the norm of $x$ is to require certain entries in $x$ to be zero (sparse) or fixed to some non-zero value (equality constraints).

In using sparsity in posing a signal processing problem (e.g. compressive sensing), an $l_1$ norm can be used (or even an $l_0$ “pseudo norm”) to obtain solutions with zero components if possible [12], [31].

In addition to using side conditions to achieve a unique solution, side conditions are sometimes part of the original problem. One interesting case requires that certain of the equations be satisfied with no error and the approximation be achieved with the remaining equations.

Moore-Penrose Pseudo-Inverse

If the $l_2$ norm is used, a unique generalized solution to Equation 2 always exists such that the norm squared of the equation error $\varepsilon^{T*}\varepsilon$ and the norm squared of the solution $x^{T*}x$ are both minimized. This solution is denoted by

$$ x = A^+ b $$
(9)

where $A^+$ is called the Moore-Penrose inverse [1] of $A$ (it is also called the generalized inverse [6] and the pseudoinverse [1]).

Roger Penrose [28] showed that for all $A$, there exists a unique $A^+$ satisfying the four conditions:

$$ A A^+ A = A $$
(10)
$$ A^+ A A^+ = A^+ $$
(11)
$$ [A A^+]^* = A A^+ $$
(12)
$$ [A^+ A]^* = A^+ A $$
(13)

There is a large literature on this problem. Five useful books are [19], [1], [6], [9], [29]. The Moore-Penrose pseudo-inverse can be calculated in Matlab [22] by the pinv(A,tol) function, which uses a singular value decomposition (SVD) to calculate it. There are a variety of other numerical methods given in the above references, each with some advantages and some disadvantages.
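
As a quick numerical check (a sketch, not part of the original module), the SVD-based pseudoinverse returned by NumPy's pinv, which plays the same role as Matlab's pinv, can be tested against the four Penrose conditions for a rank-deficient rectangular matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))  # 5 x 4, rank 3
Ap = np.linalg.pinv(A)                                          # SVD-based pseudoinverse

# The four Penrose conditions of Equations 10-13 (A is real, so * is transpose)
print(np.allclose(A @ Ap @ A, A))        # Equation 10
print(np.allclose(Ap @ A @ Ap, Ap))      # Equation 11
print(np.allclose((A @ Ap).T, A @ Ap))   # Equation 12
print(np.allclose((Ap @ A).T, Ap @ A))   # Equation 13
```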

Properties

For cases 2a and 2b in Figure 1, the following $N$ by $N$ system of equations, called the normal equations [1], [19], has a unique minimum squared equation error solution (minimum $\epsilon^T\epsilon$). Here we have the over-specified case with more equations than unknowns. A derivation is outlined in "Derivations", Equation 28 below.

$$ A^{T*} A x = A^{T*} b $$
(14)

The solution to this equation is often used in least squares approximation problems. For these two cases $A^T A$ is non-singular, and the $N$ by $M$ pseudo-inverse is simply,

$$ A^+ = [A^{T*} A]^{-1} A^{T*} . $$
(15)

A more general problem can be solved by minimizing the weighted equation error, $\epsilon^T W^T W \epsilon$, where $W$ is a positive semi-definite diagonal matrix of the error weights. The solution to that problem [6] is

$$ A^+ = [A^{T*} W^{T*} W A]^{-1} A^{T*} W^{T*} W . $$
(16)

For case 3a in Figure 1, with more unknowns than equations, $A A^T$ is non-singular and there is a unique minimum norm solution (minimum $||x||$). The $N$ by $M$ pseudoinverse is simply,

$$ A^+ = A^{T*} [A A^{T*}]^{-1} . $$
(17)

The corresponding formula for the pseudoinverse giving the minimum weighted solution norm $||Wx||$ is

$$ A^+ = [W^T W]^{-1} A^T \left[ A [W^T W]^{-1} A^T \right]^{-1} . $$
(18)

For these three cases (2a, 2b, and 3a), either Equation 15 or Equation 17 can be directly calculated, but not both. However, they are equal, so one simply uses the one with the non-singular matrix to be inverted. The equality can be shown from an equivalent definition [1] of the pseudo-inverse given in terms of a limit by

$$ A^+ = \lim_{\delta \to 0} [A^{T*} A + \delta^2 I]^{-1} A^{T*} = \lim_{\delta \to 0} A^{T*} [A A^{T*} + \delta^2 I]^{-1} . $$
(19)

For the other 6 cases, SVD or other approaches must be used. Some properties [1], [9] are:

  • $[A^+]^+ = A$
  • $[A^+]^* = [A^*]^+$
  • $[A^* A]^+ = A^+ A^{*+}$
  • $\lambda^+ = 1/\lambda$ for $\lambda \neq 0$, else $\lambda^+ = 0$
  • $A^+ = [A^* A]^+ A^* = A^* [A A^*]^+$
  • $A^* = A^* A A^+ = A^+ A A^*$

It is informative to consider the range and null spaces [9] of $A$ and $A^+$:

  • $R(A) = R(A A^+) = R(A A^*)$
  • $R(A^+) = R(A^*) = R(A^+ A) = R(A^* A)$
  • $R(I - A A^+) = N(A A^+) = N(A^*) = N(A^+) = R(A)^\perp$
  • $R(I - A^+ A) = N(A^+ A) = N(A) = R(A^*)^\perp$

The Cases with Analytical Solutions

The four Penrose equations in Equations 10 through 13 are remarkable in defining a unique pseudoinverse for any $A$ with any shape, any rank, for any of the ten cases listed in Figure 1. However, only four cases of the ten have analytical solutions (actually, all do if you use the SVD).

  • If $A$ is case 1a (square and nonsingular), then
    $$ A^+ = A^{-1} $$
    (20)
  • If $A$ is case 2a or 2b (over-specified), then
    $$ A^+ = [A^T A]^{-1} A^T $$
    (21)
  • If $A$ is case 3a (under-specified), then
    $$ A^+ = A^T [A A^T]^{-1} $$
    (22)

Figure 2. Four Cases with Analytical Solutions

Fortunately, most practical cases are one of these four, but even then it is generally faster and less error-prone to use special techniques on the normal equations rather than directly calculating the inverse matrix. Note that the matrices to be inverted above are all $r$ by $r$ ($r$ is the rank) and nonsingular. In the other six cases from the ten in Figure 1, these would be singular, so alternate methods such as the SVD must be used [19], [1], [6].
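
To make the point about avoiding explicit inverses concrete, here is a small sketch (the matrices are hypothetical) that solves the normal equations with a linear system solver and compares the result with a library least squares routine:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((100, 10))     # case 2: over-specified, full column rank
b = rng.standard_normal(100)

# Solve the r x r normal equations directly; no explicit inverse is formed
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# Or let a library least squares routine (SVD-based) do the work
x_lstsq, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(x_ne, x_lstsq))
```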

In addition to these four cases with “analytical” solutions, we can pose a more general problem by asking for an optimal approximation with a weighted norm [6] to emphasize or de-emphasize certain components or range of equations.

  • If $A$ is case 2a or 2b (over-specified), then the weighted error pseudoinverse is
    $$ A^+ = [A^{T*} W^{T*} W A]^{-1} A^{T*} W^{T*} W $$
    (23)
  • If $A$ is case 3a (under-specified), then the weighted norm pseudoinverse is
    $$ A^+ = [W^T W]^{-1} A^T \left[ A [W^T W]^{-1} A^T \right]^{-1} $$
    (24)

Figure 3. Three Cases with Analytical Solutions and Weights

These solutions to the weighted approximation problem are useful in their own right but also serve as the foundation of the Iterative Reweighted Least Squares (IRLS) algorithm developed in the next chapter.

Geometric Interpretation and Least Squares Approximation

A particularly useful application of the pseudo-inverse of a matrix is to various least squared error approximations [19], [7]. A geometric view of the derivation of the normal equations can be helpful. If $b$ does not lie in the range space of $A$, an error vector is defined as the difference between $Ax$ and $b$. A geometric picture of this vector makes it clear that for the length of $\varepsilon$ to be minimum, it must be orthogonal to the space spanned by the columns of $A$. This means that $A^*\varepsilon = 0$. If both sides of Equation 2 are multiplied by $A^*$, it is easy to see that the normal equations of Equation 14 result; the error is then orthogonal to the columns of $A$ and, therefore, of minimal length. If $b$ does lie in the range space of $A$, the solution of the normal equations gives the exact solution of Equation 2 with no error.

For cases 1b, 1c, 2c, 2d, 3a, 3b, and 3c, the homogeneous equation (Equation 5) has non-zero solutions. Any vector in the space spanned by these solutions (the null space of $A$) does not contribute to the equation error $\varepsilon$ defined in Equation 3 and, therefore, can be added to any particular generalized solution of Equation 2 to give a family of solutions with the same approximation error. If the dimension of the null space of $A$ is $d$, it is possible to find a unique generalized solution of Equation 2 with $d$ zero elements. The non-unique solution for these seven cases can be written in the form [6]

$$ x = A^+ b + [I - A^+ A] y $$
(25)

where $y$ is an arbitrary vector. The first term is the minimum norm solution given by the Moore-Penrose pseudo-inverse $A^+$, and the second is a contribution in the null space of $A$. For the minimum $||x||$, the vector $y = 0$.
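
The family of solutions in Equation 25 is easy to verify numerically. In this sketch (with a hypothetical wide matrix, so the null space has dimension d = 2), any choice of y leaves the equation error unchanged, while y = 0 gives the smallest solution norm:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 5))           # null space of A has dimension d = 2
b = rng.standard_normal(3)
Ap = np.linalg.pinv(A)

x0 = Ap @ b                               # minimum norm particular solution (y = 0)
y = rng.standard_normal(5)                # arbitrary vector
x1 = x0 + (np.eye(5) - Ap @ A) @ y        # Equation 25: another generalized solution

print(np.allclose(A @ x0, A @ x1))                # same equation error (here zero)
print(np.linalg.norm(x0) <= np.linalg.norm(x1))   # x0 has the minimum norm
```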

Derivations

To derive the necessary conditions for minimizing $q$ in the over-specified case, we differentiate $q = \epsilon^T\epsilon$ with respect to $x$ and set that to zero. Starting with the error

$$ q = \epsilon^T \epsilon = [Ax - b]^T [Ax - b] = x^T A^T A x - x^T A^T b - b^T A x + b^T b $$
(26)
$$ q = x^T A^T A x - 2 x^T A^T b + b^T b $$
(27)

and taking the gradient or derivative gives

$$ \nabla_x q = 2 A^T A x - 2 A^T b = 0 $$
(28)

which gives the normal equations of Equation 14 and the pseudoinverse of Equation 15 and Equation 21.

If we start with the weighted error problem

$$ q = \epsilon^T W^T W \epsilon = [Ax - b]^T W^T W [Ax - b] $$
(29)

using the same steps as before gives the normal equations for the minimum weighted squared error as

$$ A^T W^T W A x = A^T W^T W b $$
(30)

and the pseudoinverse as

$$ x = [A^T W^T W A]^{-1} A^T W^T W b $$
(31)

To derive the necessary conditions for minimizing the Euclidean norm $||x||_2$ when there are fewer equations than unknowns and many solutions to Equation 1, we define a Lagrangian

$$ L(x, \mu) = ||Wx||_2^2 + \mu^T (Ax - b) $$
(32)

take the derivatives with respect to both $x$ and $\mu$, and set them to zero.

$$ \nabla_x L = 2 W^T W x + A^T \mu = 0 $$
(33)

and

$$ \nabla_\mu L = A x - b = 0 $$
(34)

Solving these two equations simultaneously for $x$, eliminating $\mu$, gives the result corresponding to the pseudoinverse of Equation 17 and Equation 22:

$$ x = [W^T W]^{-1} A^T \left[ A [W^T W]^{-1} A^T \right]^{-1} b $$
(35)

Because the weighting matrices $W$ are diagonal and real, multiplication and inversion are simple. These equations are used in the Iteratively Reweighted Least Squares (IRLS) algorithm described in the next chapter.
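
As a preview (only a generic sketch; the algorithm developed in the next chapter differs in its weight update and convergence safeguards), a basic IRLS iteration for approximately minimizing the $l_p$ equation error reuses the weighted least squares solution of Equation 31, updating hypothetical diagonal weights from the current residual:

```python
import numpy as np

def irls_lp(A, b, p=1.5, iters=30, eps=1e-8):
    """Generic IRLS sketch: approximately minimize ||Ax - b||_p (over-specified case).
    The diagonal of W^T W is set to |r|^(p-2) so the weighted l2 error matches the lp error."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]                # start from the l2 solution
    for _ in range(iters):
        r = A @ x - b
        W = np.diag((np.abs(r) + eps) ** ((p - 2) / 2))     # eps guards tiny residuals when p < 2
        x = np.linalg.solve(A.T @ W.T @ W @ A, A.T @ W.T @ W @ b)   # Equation 31
    return x

rng = np.random.default_rng(6)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
x12 = irls_lp(A, b, p=1.2)     # closer to an l1-type solution than plain least squares
```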

Regularization

To deal with measurement error and data noise, a process called “regularization” is sometimes used [15], [7], [25].
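
The module does not single out a particular regularization method; one common choice, shown here only as a sketch, is Tikhonov (ridge) regularization, which adds a small multiple of the identity to $A^TA$ in the normal equations (compare the limit definition of Equation 19) to stabilize an ill-conditioned problem:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((50, 10))
A[:, 9] = A[:, 8] + 1e-6 * rng.standard_normal(50)   # nearly dependent columns: ill conditioned
b = A @ rng.standard_normal(10) + 0.01 * rng.standard_normal(50)

lam = 1e-3                                           # regularization parameter (hypothetical value)
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ b)
x_ls  = np.linalg.lstsq(A, b, rcond=None)[0]

print(np.linalg.norm(x_reg), np.linalg.norm(x_ls))   # the regularized solution is not blown up
```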

Least Squares Approximation with Constraints

The solution of the overdetermined simultaneous equations is generally a least squared error approximation problem. A particularly interesting and useful variation on this problem adds inequality and/or equality constraints. This formulation has proven very powerful in solving the constrained least squares approximation part of FIR filter design [32]. The equality constraints can be taken into account by using Lagrange multipliers and the inequality constraints can use the Kuhn-Tucker conditions [14], [33], [20]. The iterative reweighted least squares (IRLS) algorithm described in the next chapter can be modified to give results which are an optimal constrained least p-power solution [4], [8], [5].
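
To make the Lagrange multiplier idea concrete (this is a generic sketch of equality-constrained least squares, not the FIR filter design formulation of [32]), setting the gradients of the Lagrangian to zero leads to a block system in $x$ and the multipliers $\mu$:

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((30, 6))      # least squares part:   minimize ||A x - b||^2
b = rng.standard_normal(30)
C = rng.standard_normal((2, 6))       # equality constraints: C x = d
d = rng.standard_normal(2)

# L(x, mu) = ||Ax - b||^2 + mu^T (Cx - d); zero gradients give the block (KKT) system.
n, m = A.shape[1], C.shape[0]
KKT = np.block([[2 * A.T @ A, C.T],
                [C, np.zeros((m, m))]])
rhs = np.concatenate([2 * A.T @ b, d])
sol = np.linalg.solve(KKT, rhs)
x, mu = sol[:n], sol[n:]

print(np.allclose(C @ x, d))          # constraints are satisfied exactly
```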

Conclusions

There is remarkable structure and subtlety in the apparently simple problem of solving simultaneous equations, and considerable insight can be gained from these finite dimensional problems. These notes have emphasized the $l_2$ norm, but some others such as $l_\infty$ and $l_1$ are also interesting. The use of sparsity [31] is particularly interesting as applied in Compressive Sensing [2], [13] and in the sparse FFT [18]. There are also interesting and important applications in infinite dimensions. One of particular interest is in signal analysis using wavelet basis functions [10]. The use of weighted error and weighted norm pseudoinverses provides a base for iterative reweighted least squares (IRLS) algorithms.

References

  1. Albert, Arthur. (1972). Regression and the Moore-Penrose Pseudoinverse. New York: Academic Press.
  2. Baraniuk, Richard G. (2007, July). Compressive Sensing. [also: http://dsp.rice.edu/cs]. IEEE Signal Processing Magazine, 24(4), 118–124.
  3. Burrus, C. S. and Barreto, J. A. (1992, May). Least p-Power Error Design of FIR Filters. In Proceedings of the IEEE International Symposium on Circuits and Systems. (Vol. 2, p. 545–548). ISCAS-92, San Diego, CA
  4. Burrus, C. S. and Barreto, J. A. and Selesnick, I. W. (1992, September 13–16). Reweighted Least Squares Design of FIR Filters. In Paper Summaries for the IEEE Signal Processing Society's Fifth DSP Workshop. (p. 3.1.1). Starved Rock Lodge, Utica, IL
  5. Burrus, C. S. and Barreto, J. A. and Selesnick, I. W. (1994, November). Iterative Reweighted Least Squares Design of FIR Filters. IEEE Transactions on Signal Processing, 42(11), 2926–2936.
  6. Ben-Israel, Adi and Greville, T. N. E. (1974). Generalized Inverses: Theory and Applications. [Second edition, Springer, 2003]. New York: Wiley and Sons.
  7. Björck, Åke. (1996). Numerical Methods for Least Squares Problems. Philadelphia: Blaisdell, Dover, SIAM.
  8. Burrus, C. Sidney. (1998, September 8-11). Constrained Least Squares Design of FIR Filters using Iterative Reweighted Least Squares. In Proceedings of EUSIPCO-98. (p. 281–282). Rhodes, Greece
  9. Campbell, S. L. and Meyer, Jr, C. D. (1979). Generalized Inverses of Linear Transformations. [Reprint by Dover in 1991]. London: Pitman.
  10. Daubechies, Ingrid. (1992). Ten Lectures on Wavelets. [Notes from the 1990 CBMS-NSF Conference on Wavelets and Applications at Lowell, MA]. Philadelphia, PA: SIAM.
  11. Daubechies, Ingrid and DeVore, Ronald and Fornasier, Massimo and Gunturk, C. Sinan. (2010, January). Iteratively Reweighted Least Squares Minimization for Sparse Recovery. Communications on Pure and Applied Mathematics, 63(1), 1–38.
  12. Donoho, David L. and Elad, Michael. (2002). Optimally Sparse Representation in General (non-Orthogonal) Dictionaries via Minimization. Technical report. Statistics Department, Stanford University.
  13. Donoho, David L. (2004, September). Compressed Sensing. [http://www-stat.stanford.edu/ donoho/ Reports/2004/CompressedSensing091604.pdf]. Technical report. Statistics Department, Stanford University.
  14. Fletcher, R. (1987). Practical Methods of Optimization. (Second). New York: John Wiley & Sons.
  15. Golub, Gene H. and Loan, Charles F. Van. (1996). Matrix Computations. [3rd edition, 4th edition is forthcoming]. Baltimore, MD: The Johns Hopkins University Press.
  16. Gorodnitsky, Irina F. and Rao, Bhaskar D. (1997, March). Sparse Signal Reconstruction from Limited Data using FOCUSS: a Re-weighted Minimum Norm Algorithm. IEEE Transactions on Signal Processing, 45(3),
  17. Hefferon, Jim. (2011). Linear Algebra. [Copyright: cc-by-sa, URL:joshua.smcvt.edu]. Virginia Commonwealth University Mathematics Textbook Series.
  18. Hassanieh, Haitham and Indyk, Piotr and Katabi, Dina and Price, Eric. (2012). Nearly Optimal Sparse Fourier Transform. arXiv:1201.2501v1 [cs.DS] 12 Jan 2012.
  19. Lawson, C. L. and Hanson, R. J. (1974). Solving Least Squares Problems. [Second edition by SIAM in 1987]. Englewood Cliffs, NJ: Prentice-Hall.
  20. Luenberger, D. G. (2008). Introduction to Linear and Nonlinear Programming. (Third). Springer.
  21. Luenberger, D. G. (1969, 1997). Optimization by Vector Space Methods. New York: John Wiley & Sons.
  22. Moler, Cleve and Little, John and Bangert, Steve. (1989). Matlab User's Guide. South Natick, MA: The MathWorks, Inc.
  23. Moler, Cleve. (2008). Numerical Computing with MATLAB. [available: http://www.mathworks.com/moler/]. South Natick, MA: The MathWorks, Inc.
  24. Moore, E. H. (1920). On the Reciprocal of the General Algebraic Matrix. Bulletin of the AMS, 26, 394–395.
  25. Neumaier, A. (1998). Solving ill-conditioned and singular linear systems: A Tutorial on Regularization. [available: http://www.mat.univie.ac.at/ neum/]. SIAM Review, 40, 636–666.
  26. Oden, J. Tinsley and Demkowicz, Leszek F. (1996). Applied Functional Analysis. Boca Raton: CRC Press.
  27. Penrose, R. (1955). A Generalized Inverse for Matrices. Proc. Cambridge Phil. Soc., 51, 406–413.
  28. Penrose, R. (1955). On best Approximate Solutions of Linear Matrix Equations. Proc. Cambridge Phil. Soc., 52, 17–19.
  29. Rao, C. R. and Mitra, S. K. (1971). Generalized Inverse of Matrices and its Applications. New York: John Wiley & Sons.
  30. Riesz, Frigyes and Sz.–Nagy, Béla. (1955). Functional Analysis. New York: Dover.
  31. Selesnick, Ivan. (2012, May). Introduction to Sparsity in Signal Processing. [Available: http://cnx.org/content/m43545/latest/]. Connexions Web Site.
  32. Selesnick, Ivan W. and Lang, Markus and Burrus, C. Sidney. (1996, August). Constrained Least Square Design of FIR Filters without Explicitly Specified Transition Bands. IEEE Transactions on Signal Processing, 44(8), 1879–1892.
  33. Strang, Gilbert. (1986). Introduction to Linear Algebra. [4th Edition, 2009]. New York: Wellesley Cambridge.
  34. Vargas, Ricardo and Burrus, C. Sidney. (2012). Iterative Design of Digital Filters. [arXiv:1207.4526v1 [cs.IT] July 19, 2012]. arXiv.
  35. Young, R. M. (1980). An Introduction to Nonharmonic Fourier Series. New York: Academic Press.
