From the OpenStax CNX collection *Digital Signal Processing and Digital Filter Design* (Draft), also included in the collection "Brief Notes on Signals and Systems", by C. Sidney Burrus.

# Continuous-Time Signals

Module by: C. Sidney Burrus

Signals occur in a wide range of physical phenomena. They might be human speech, blood pressure variations with time, seismic waves, radar and sonar signals, pictures or images, stress and strain in a building structure, stock market prices, a city's population, or temperature across a plate. These signals are often modeled or represented by a real- or complex-valued mathematical function of one or more variables. For example, speech is modeled by a function representing air pressure varying with time. The function acts as a mathematical analogy to the speech signal and, therefore, is called an analog signal. For these signals the independent variable is time, and it changes continuously, so the term continuous-time signal is also used. In our discussion, we talk of the mathematical function as the signal even though it is really a model or representation of the physical signal.

The description of signals in terms of their sinusoidal frequency content has proven to be one of the most powerful tools of continuous- and discrete-time signal description, analysis, and processing. For that reason, we will start the discussion of signals with a development of Fourier transform methods. We will first review the continuous-time methods of the Fourier series (FS), the Fourier transform or integral (FT), and the Laplace transform (LT). Next, the discrete-time methods will be developed in more detail, with the discrete Fourier transform (DFT) applied to finite-length signals, followed by the discrete-time Fourier transform (DTFT) for infinitely long signals, and ending with the z-transform, which allows the powerful tools of complex variable theory to be applied.

More recently, a new tool has been developed for the analysis of signals. Wavelets and wavelet transforms [9], [1], [5], [16], [15] are a more flexible expansion system that can also describe continuous- and discrete-time, finite- or infinite-duration signals. We will very briefly introduce the ideas behind wavelet-based signal analysis.

## The Fourier Series

The problem of expanding a finite-length signal in a trigonometric series was posed and studied in the late 1700s by renowned mathematicians such as Bernoulli, d'Alembert, Euler, Lagrange, and Gauss. Indeed, what we now call the Fourier series and the formulas for the coefficients were used by Euler in 1780. However, it was the presentation in 1807 and the paper in 1822 by Fourier stating that an arbitrary function could be represented by a series of sines and cosines that brought the problem to everyone's attention and started serious theoretical investigations and practical applications that continue to this day [8], [3], [11], [10], [7], [12]. The theoretical work has been at the center of analysis, and the practical applications have been of major significance in virtually every field of quantitative science and technology. For these reasons and others, the Fourier series is worth our serious attention in a study of signal processing.

### Definition of the Fourier Series

We assume that the signal $x(t)$ to be analyzed is well described by a real- or complex-valued function of a real variable $t$ defined over a finite interval $\{0 \le t \le T\}$. The trigonometric series expansion of $x(t)$ is given by

$$x(t) = \frac{a(0)}{2} + \sum_{k=1}^{\infty} a(k) \cos\left(\frac{2\pi}{T} k t\right) + b(k) \sin\left(\frac{2\pi}{T} k t\right).$$
(1)

where $x_k(t) = \cos(2\pi k t / T)$ and $y_k(t) = \sin(2\pi k t / T)$ are the basis functions for the expansion. The energy or power in an electrical, mechanical, etc. system is a function of the square of voltage, current, velocity, pressure, etc. For this reason, the natural setting for a representation of signals is the Hilbert space $L^2[0,T]$. This modern formulation of the problem is developed in [6], [11]. The sinusoidal basis functions in the trigonometric expansion form a complete orthogonal set in $L^2[0,T]$. The orthogonality is easily seen from inner products

$$\left( \cos\left(\tfrac{2\pi}{T} k t\right),\ \cos\left(\tfrac{2\pi}{T} \ell t\right) \right) = \int_0^T \cos\left(\tfrac{2\pi}{T} k t\right) \cos\left(\tfrac{2\pi}{T} \ell t\right) dt = \frac{T}{2}\,\delta(k - \ell)$$
(2)

and

$$\left( \cos\left(\tfrac{2\pi}{T} k t\right),\ \sin\left(\tfrac{2\pi}{T} \ell t\right) \right) = \int_0^T \cos\left(\tfrac{2\pi}{T} k t\right) \sin\left(\tfrac{2\pi}{T} \ell t\right) dt = 0$$
(3)

where $\delta(k)$ is the Kronecker delta with $\delta(0) = 1$ and $\delta(k) = 0$ for $k \neq 0$. Because of this, the $k$th coefficients in the series can be found by taking the inner product of $x(t)$ with the $k$th basis functions. This gives for the coefficients

$$a(k) = \frac{2}{T} \int_0^T x(t) \cos\left(\frac{2\pi}{T} k t\right) dt$$
(4)

and

$$b(k) = \frac{2}{T} \int_0^T x(t) \sin\left(\frac{2\pi}{T} k t\right) dt$$
(5)

where $T$ is the time interval of interest or the period of a periodic signal. Because of the orthogonality of the basis functions, a finite Fourier series formed by truncating the infinite series is an optimal least-squared-error approximation to $x(t)$. If the finite series is defined by

$$\hat{x}(t) = \frac{a(0)}{2} + \sum_{k=1}^{N} a(k) \cos\left(\frac{2\pi}{T} k t\right) + b(k) \sin\left(\frac{2\pi}{T} k t\right),$$
(6)

the squared error is

$$\varepsilon = \frac{1}{T} \int_0^T \left| x(t) - \hat{x}(t) \right|^2 dt$$
(7)

which is minimized over all $a(k)$ and $b(k)$ by Equation 4 and Equation 5. This is an extraordinarily important property.
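These formulas can be checked numerically. The sketch below computes the coefficients of Equations 4-5 and the truncation error of Equation 7 for a sawtooth test signal (the signal, grid sizes, and variable names are our illustrative choices, not from the text); the error shrinks as the number of retained terms $N$ grows.

```python
import numpy as np

# Illustrative numerical check of Equations 4-7.  The sawtooth test
# signal x(t) = t on [0, T] is our choice, not from the text.
T = 2 * np.pi
t = np.linspace(0.0, T, 20001)
x = t.copy()

def integrate(f):
    # simple trapezoidal rule on the uniform grid
    dt = t[1] - t[0]
    return float(np.sum((f[:-1] + f[1:]) / 2) * dt)

def a(k):  # Equation 4
    return (2 / T) * integrate(x * np.cos(2 * np.pi * k * t / T))

def b(k):  # Equation 5
    return (2 / T) * integrate(x * np.sin(2 * np.pi * k * t / T))

def x_hat(N):  # Equation 6, the truncated series
    s = a(0) / 2 * np.ones_like(t)
    for k in range(1, N + 1):
        s += a(k) * np.cos(2 * np.pi * k * t / T) \
           + b(k) * np.sin(2 * np.pi * k * t / T)
    return s

def eps(N):  # Equation 7, the mean squared error over one period
    return integrate(np.abs(x - x_hat(N)) ** 2) / T

err5, err50 = eps(5), eps(50)   # the error decreases as N grows
```

No other choice of $a(k)$, $b(k)$ for the same $N$ gives a smaller `eps(N)`; that is the least-squares optimality stated above.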

It follows that if $x(t) \in L^2[0,T]$, then the series converges to $x(t)$ in the sense that $\varepsilon \to 0$ as $N \to \infty$ [6], [11]. The question of point-wise convergence is more difficult. A sufficient condition that is adequate for most applications states: if $f(x)$ is bounded, is piece-wise continuous, and has no more than a finite number of maxima over an interval, the Fourier series converges point-wise to $f(x)$ at all points of continuity and to the arithmetic mean at points of discontinuity. If $f(x)$ is continuous, the series converges uniformly at all points [11], [8], [3].

A useful condition [6], [11] states that if $x(t)$ and its derivatives through the $q$th derivative are defined and have bounded variation, the Fourier coefficients $a(k)$ and $b(k)$ asymptotically drop off at least as fast as $1/k^{q+1}$ as $k \to \infty$. This ties global rates of convergence of the coefficients to local smoothness conditions of the function.

The form of the Fourier series using both sines and cosines makes determination of the peak value or of the location of a particular frequency term difficult. A different form that explicitly gives the peak value of the sinusoid of that frequency and the location or phase shift of that sinusoid is given by

$$x(t) = \frac{d(0)}{2} + \sum_{k=1}^{\infty} d(k) \cos\left(\frac{2\pi}{T} k t + \theta(k)\right)$$
(8)

and, using Euler's relation and the usual electrical engineering notation of $j = \sqrt{-1}$,

$$e^{jx} = \cos(x) + j \sin(x),$$
(9)

the complex exponential form is obtained as

$$x(t) = \sum_{k=-\infty}^{\infty} c(k)\, e^{j \frac{2\pi}{T} k t}$$
(10)

where

$$c(k) = a(k) + j\, b(k).$$
(11)

The coefficient equation is

$$c(k) = \frac{1}{T} \int_0^T x(t)\, e^{-j \frac{2\pi}{T} k t}\, dt$$
(12)

The coefficients in these three forms are related by

$$|d|^2 = |c|^2 = a^2 + b^2$$
(13)

and

$$\theta = \arg\{c\} = \tan^{-1}\left(\frac{b}{a}\right)$$
(14)

It is easier to evaluate a signal in terms of $c(k)$ or of $d(k)$ and $\theta(k)$ than in terms of $a(k)$ and $b(k)$. The first two are polar representations of a complex value and the last is rectangular. The exponential form is easier to work with mathematically.
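The equivalence of the rectangular and polar forms for a single harmonic can be seen numerically. In this sketch the values $a = 3$, $b = 4$ are our illustrative choices, and the sign convention for $\theta$ is fixed so that the two forms agree when the phase enters as $\cos(\omega t + \theta)$:

```python
import numpy as np

# Rectangular form a*cos + b*sin versus the single-sinusoid polar
# form d*cos(wt + theta).  Values a = 3, b = 4 are illustrative.
a, b = 3.0, 4.0
w = 1.0                                    # any fixed frequency 2*pi*k/T
t = np.linspace(0.0, 2 * np.pi, 1001)

rect = a * np.cos(w * t) + b * np.sin(w * t)

d = np.hypot(a, b)                         # d^2 = a^2 + b^2 (Equation 13)
theta = -np.arctan2(b, a)                  # phase; sign fixed by convention
polar = d * np.cos(w * t + theta)

gap = float(np.max(np.abs(rect - polar)))  # the two waveforms coincide
```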

Although the function to be expanded is defined only over a specific finite region, the series converges to a function that is defined over the real line and is periodic. It is equal to the original function over the region of definition and is a periodic extension outside of the region. Indeed, one could artificially extend the given function at the outset and then the expansion would converge everywhere.

### A Geometric View

It can be very helpful to develop a geometric view of the Fourier series where $x(t)$ is considered to be a vector and the basis functions are the coordinate or basis vectors. The coefficients become the projections of $x(t)$ on the coordinates. The ideas of a measure of distance, size, and orthogonality are important, and the definition of error is easy to picture. This is done in [6], [11], [17] using Hilbert space methods.
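The projection picture can be made concrete by sampling: with sampled basis vectors, the inner-product (projection) coefficients coincide with the least-squares solution over the truncated basis, because the basis vectors are orthogonal. A sketch (the test signal and sizes are our illustrative choices):

```python
import numpy as np

# Sampled basis vectors 1/2, cos(kt), sin(kt) for k = 1..N on a
# uniform grid.  Projection coefficients match the least-squares fit.
T, M, N = 2 * np.pi, 4096, 5
t = np.arange(M) * T / M
x = np.exp(np.cos(t))                     # an arbitrary smooth example

cols = [0.5 * np.ones(M)]
for k in range(1, N + 1):
    cols += [np.cos(k * t), np.sin(k * t)]
B = np.stack(cols, axis=1)                # columns are the basis vectors

proj = (2 / M) * (B.T @ x)                # sampled form of Equations 4-5
proj[0] *= 2                              # the DC column is 1/2, not 1

lsq, *_ = np.linalg.lstsq(B, x, rcond=None)
gap = float(np.max(np.abs(proj - lsq)))   # projections = least squares
```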

### Properties of the Fourier Series

The properties of the Fourier series are important in applying it to signal analysis and in interpreting it. The main properties are given here using the notation that the Fourier series of a real-valued function $x(t)$ over $\{0 \le t \le T\}$ is given by $\mathcal{F}\{x(t)\} = c(k)$, and $\tilde{x}(t)$ denotes the periodic extension of $x(t)$.

1. Linear: $\mathcal{F}\{x + y\} = \mathcal{F}\{x\} + \mathcal{F}\{y\}$
   Idea of superposition. Also scalability: $\mathcal{F}\{a x\} = a\, \mathcal{F}\{x\}$
2. Extensions of $x(t)$: $\tilde{x}(t) = \tilde{x}(t + T)$
   $\tilde{x}(t)$ is periodic.
3. Even and Odd Parts: if $x(t) = u(t) + j v(t)$ and $C(k) = A(k) + j B(k) = |C(k)|\, e^{j\theta(k)}$, then

   | $u$ | $v$ | $A$ | $B$ | $\|C\|$ | $\theta$ |
   |-----|-----|-----|-----|---------|----------|
   | even | 0 | even | 0 | even | 0 |
   | odd | 0 | 0 | odd | even | $\pi/2$ |
   | 0 | even | 0 | even | even | $\pi/2$ |
   | 0 | odd | odd | 0 | even | 0 |
4. Convolution: If continuous cyclic convolution is defined by
   $$y(t) = h(t) \circledast x(t) = \int_0^T \tilde{h}(t - \tau)\, \tilde{x}(\tau)\, d\tau$$
   (15)

   then $\mathcal{F}\{h(t) \circledast x(t)\} = \mathcal{F}\{h(t)\}\, \mathcal{F}\{x(t)\}$
5. Multiplication: If discrete convolution is defined by
   $$e(n) = d(n) * c(n) = \sum_{m=-\infty}^{\infty} d(m)\, c(n - m)$$
   (16)

   then $\mathcal{F}\{h(t)\, x(t)\} = \mathcal{F}\{h(t)\} * \mathcal{F}\{x(t)\}$
   This property is the dual of property 4 and vice versa.
6. Parseval: $\frac{1}{T} \int_0^T |x(t)|^2\, dt = \sum_{k=-\infty}^{\infty} |C(k)|^2$
   This property says the energy calculated in the time domain is the same as that calculated in the frequency (or Fourier) domain.
7. Shift: $\mathcal{F}\{\tilde{x}(t - t_0)\} = C(k)\, e^{-j 2\pi t_0 k / T}$
   A shift in the time domain results in a linear phase shift in the frequency domain.
8. Modulate: $\mathcal{F}\{x(t)\, e^{j 2\pi K t / T}\} = C(k - K)$
   Modulation in the time domain results in a shift in the frequency domain. This property is the dual of property 7.
9. Orthogonality of basis functions:
   $$\int_0^T e^{-j 2\pi m t / T}\, e^{j 2\pi n t / T}\, dt = T\, \delta(n - m) = \begin{cases} T & \text{if } n = m \\ 0 & \text{if } n \neq m. \end{cases}$$
   (17)
   Orthogonality allows the calculation of coefficients using the inner products in Equation 4 and Equation 5. It also allows Parseval's theorem in property 6. A relaxed version of orthogonality is called "tight frames" and is important in over-specified systems, especially in wavelets.
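The Parseval and shift properties can be spot-checked numerically through the coefficients $c(k)$ of Equation 12. In this sketch the two-harmonic test signal, the shift $t_0$, and the coefficient range $K$ are our illustrative choices:

```python
import numpy as np

# Numerical check of Parseval (property 6) and shift (property 7)
# via the coefficients c(k) of Equation 12.
T, M = 1.0, 4096
t = np.arange(M) * T / M
x = np.cos(2 * np.pi * t) + 0.5 * np.sin(6 * np.pi * t)

def c(sig, k):
    # c(k) = (1/T) * integral of sig(t) exp(-j 2 pi k t / T) dt,
    # approximated by a mean over the uniform sample grid
    return np.mean(sig * np.exp(-2j * np.pi * k * t / T))

K = 10
ks = np.arange(-K, K + 1)
cs = np.array([c(x, k) for k in ks])

# Parseval: time-domain energy equals the sum of |c(k)|^2
parseval_gap = abs(np.mean(np.abs(x) ** 2) - np.sum(np.abs(cs) ** 2))

# Shift: coefficients of x(t - t0) pick up a linear phase
t0 = 0.2
xs = np.cos(2 * np.pi * (t - t0)) + 0.5 * np.sin(6 * np.pi * (t - t0))
cs_shift = np.array([c(xs, k) for k in ks])
shift_gap = float(np.max(np.abs(
    cs_shift - cs * np.exp(-2j * np.pi * t0 * ks / T))))
```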

### Examples

• An example of the Fourier series is the expansion of a square wave signal with period $2\pi$. The expansion is
  $$x(t) = \frac{4}{\pi} \left[ \sin(t) + \frac{1}{3} \sin(3t) + \frac{1}{5} \sin(5t) + \cdots \right].$$
  (18)
  Because $x(t)$ is odd, there are no cosine terms (all $a(k) = 0$) and, because of its symmetries, there are no even harmonics (even-$k$ terms are zero). The function is well defined and bounded but its derivative is not; therefore, the coefficients drop off as $1/k$.
• A second example is a triangle wave of period $2\pi$. This is a continuous function where the square wave was not. The expansion of the triangle wave is
  $$x(t) = \frac{4}{\pi} \left[ \sin(t) - \frac{1}{3^2} \sin(3t) + \frac{1}{5^2} \sin(5t) - \cdots \right].$$
  (19)
  Here the coefficients drop off as $1/k^2$ since the function and its first derivative exist and are bounded.

Note that the derivative of a triangle wave is a square wave. Examine the series coefficients to see this. There are many books and web sites on the Fourier series that give insight through examples and demos.
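The two decay rates can be observed by computing $b(k)$ from Equation 5 numerically. This is a sketch; the unit-amplitude triangle below differs from the one in Equation 19 by a constant factor, which does not affect the decay rate:

```python
import numpy as np

# Coefficient decay: ~1/k for the square wave, ~1/k^2 for the
# triangle wave.  b(k) computed from Equation 5 on a dense grid.
T = 2 * np.pi
t = np.linspace(0.0, T, 200001)
dt = t[1] - t[0]
square = np.sign(np.sin(t))
triangle = (2 / np.pi) * np.arcsin(np.sin(t))   # unit-amplitude triangle

def b(sig, k):
    # trapezoidal approximation of (2/T) * integral sig * sin(kt)
    f = sig * np.sin(k * t)
    return (2 / T) * float(np.sum((f[:-1] + f[1:]) / 2) * dt)

ratio_sq = b(square, 1) / b(square, 9)      # ~ 9  : 1/k decay
ratio_tr = b(triangle, 1) / b(triangle, 9)  # ~ 81 : 1/k^2 decay
b1_sq = b(square, 1)                        # ~ 4/pi, matching Equation 18
```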

### Theorems on the Fourier Series

Four of the most important theorems in the theory of Fourier analysis are the inversion theorem, the convolution theorem, the differentiation theorem, and Parseval's theorem [4].

• The inversion theorem is the truth of the transform pair given in Equation 1, Equation 4, and Equation 5.
• The convolution theorem is property 4.
• The differentiation theorem says that the transform of the derivative of a function is $j\omega$ times the transform of the function.
• Parseval's theorem is given in property 6.

All of these are based on the orthogonality of the basis functions of the Fourier series and integral, and all require knowledge of the convergence of the sums and integrals. The practical and theoretical use of Fourier analysis is greatly expanded if use is made of distributions or generalized functions (e.g., Dirac delta functions, $\delta(t)$) [14], [2]. Because energy is an important measure of a function in signal processing applications, the Hilbert space of $L^2$ functions is a proper setting for the basic theory, and a geometric view can be especially useful [6], [4].

The following theorems and results concern the existence and convergence of the Fourier series and the discrete-time Fourier transform [13]. Details, discussions and proofs can be found in the cited references.

• If $f(x)$ has bounded variation in the interval $(-\pi, \pi)$, the Fourier series corresponding to $f(x)$ converges to the value $f(x)$ at any point within the interval at which the function is continuous; it converges to the value $\frac{1}{2}[f(x+0) + f(x-0)]$ at any such point at which the function is discontinuous. At the points $\pi, -\pi$ it converges to the value $\frac{1}{2}[f(-\pi+0) + f(\pi-0)]$. [8]
• If $f(x)$ is of bounded variation in $(-\pi, \pi)$, the Fourier series converges to $f(x)$ uniformly in any interval $(a, b)$ in which $f(x)$ is continuous, the continuity at $a$ and $b$ being on both sides. [8]
• If $f(x)$ is of bounded variation in $(-\pi, \pi)$, the Fourier series converges to $\frac{1}{2}[f(x+0) + f(x-0)]$, bounded throughout the interval $(-\pi, \pi)$. [8]
• If $f(x)$ is bounded and if it is continuous in its domain at every point, with the exception of a finite number of points at which it may have ordinary discontinuities, and if the domain may be divided into a finite number of parts, such that in any one of them the function is monotone; or, in other words, the function has only a finite number of maxima and minima in its domain, the Fourier series of $f(x)$ converges to $f(x)$ at points of continuity and to $\frac{1}{2}[f(x+0) + f(x-0)]$ at points of discontinuity. [8], [3]
• If $f(x)$ is such that, when the arbitrarily small neighborhoods of a finite number of points in whose neighborhood $|f(x)|$ has no upper bound have been excluded, $f(x)$ becomes a function with bounded variation, then the Fourier series converges to the value $\frac{1}{2}[f(x+0) + f(x-0)]$ at every point in $(-\pi, \pi)$, except the points of infinite discontinuity of the function, provided the improper integral $\int_{-\pi}^{\pi} f(x)\, dx$ exists and is absolutely convergent. [8]
• If $f$ is of bounded variation, the Fourier series of $f$ converges at every point $x$ to the value $[f(x+0) + f(x-0)]/2$. If $f$ is, in addition, continuous at every point of an interval $I = (a, b)$, its Fourier series is uniformly convergent in $I$. [18]
• If $a(k)$ and $b(k)$ are absolutely summable, the Fourier series converges uniformly to $f(x)$, which is continuous. [13]
• If $a(k)$ and $b(k)$ are square summable, the Fourier series converges to $f(x)$ where it is continuous, but not necessarily uniformly. [13]
• Suppose that $f(x)$ is periodic, of period $X$, is defined and bounded on $[0, X]$, and that at least one of the following four conditions is satisfied: (i) $f$ is piecewise monotonic on $[0, X]$; (ii) $f$ has a finite number of maxima and minima on $[0, X]$ and a finite number of discontinuities on $[0, X]$; (iii) $f$ is of bounded variation on $[0, X]$; (iv) $f$ is piecewise smooth on $[0, X]$. Then it will follow that the Fourier series coefficients may be defined through the defining integral, using proper Riemann integrals, and that the Fourier series converges to $f(x)$ at almost all $x$, to $f(x)$ at each point of continuity of $f$, and to the value $\frac{1}{2}[f(x-) + f(x+)]$ at all $x$. [4]
• For any $1 \le p < \infty$ and any $f \in C^p(S^1)$, the partial sums
  $$S_n = S_n(f) = \sum_{|k| \le n} \hat{f}(k)\, e_k$$
  (20)
  converge to $f$, uniformly as $n \to \infty$; in fact, $\|S_n - f\|$ is bounded by a constant multiple of $n^{-p + 1/2}$. [6]
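The midpoint behavior at a jump can be checked directly. The sketch below uses a periodic unit step, $f(t) = 0$ on $(-\pi, 0)$ and $1$ on $(0, \pi)$, whose coefficients $a(0)/2 = 1/2$ and $b(k) = 2/(\pi k)$ for odd $k$ are a standard computation, not taken from the text; the partial sums at the discontinuity sit at $1/2$, the arithmetic mean of the one-sided limits, while at interior points of continuity they approach the function value:

```python
import numpy as np

# Partial sums of the Fourier series of a periodic unit step:
# f(t) = 0 on (-pi, 0), 1 on (0, pi).
# Known coefficients: a(0)/2 = 1/2 and b(k) = 2/(pi k) for odd k.
def S(tval, N):
    s = 0.5
    for k in range(1, N + 1, 2):          # only odd k contribute
        s += (2 / (np.pi * k)) * np.sin(k * tval)
    return s

at_jump = S(0.0, 99)    # equals 1/2 for every N: the mean of 0 and 1
inside = S(1.0, 199)    # point of continuity; approaches f(1) = 1
```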

The Fourier series expansion results in transforming a periodic, continuous-time function, $\tilde{x}(t)$, into two discrete indexed frequency functions, $a(k)$ and $b(k)$, that are not periodic.

## The Fourier Transform

Many practical problems in signal analysis involve either infinitely long or very long signals where the Fourier series is not appropriate. For these cases, the Fourier transform (FT) and its inverse (IFT) have been developed. This transform has been used with great success in virtually all quantitative areas of science and technology where the concept of frequency is important. While the Fourier series was used before Fourier worked on it, the Fourier transform seems to be his original idea. It can be derived as an extension of the Fourier series by letting the length or period $T$ increase to infinity, or the Fourier transform can be independently defined and the Fourier series then shown to be a special case of it. The latter approach is the more general of the two, but the former is more intuitive [14], [2].

### Definition of the Fourier Transform

The Fourier transform (FT) of a real-valued (or complex) function of the real variable $t$ is defined by

$$X(\omega) = \int_{-\infty}^{\infty} x(t)\, e^{-j\omega t}\, dt$$
(21)

giving a complex-valued function of the real variable $\omega$ representing frequency. The inverse Fourier transform (IFT) is given by

$$x(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} X(\omega)\, e^{j\omega t}\, d\omega.$$
(22)

Because of the infinite limits on both integrals, the question of convergence is important. There are useful practical signals that do not have Fourier transforms if only classical functions are allowed because of problems with convergence. The use of delta functions (distributions) in both the time and frequency domains allows a much larger class of signals to be represented [14].

### Properties of the Fourier Transform

The properties of the Fourier transform are somewhat parallel to those of the Fourier series and are important in applying it to signal analysis and interpreting it. The main properties are given here using the notation that the FT of a real-valued function $x(t)$ over all time $t$ is given by $\mathcal{F}\{x\} = X(\omega)$.

1. Linear: $\mathcal{F}\{x + y\} = \mathcal{F}\{x\} + \mathcal{F}\{y\}$
2. Even and Oddness: if $x(t) = u(t) + j v(t)$ and $X(\omega) = A(\omega) + j B(\omega)$, then

   | $u$ | $v$ | $A$ | $B$ | $\|X\|$ | $\theta$ |
   |-----|-----|-----|-----|---------|----------|
   | even | 0 | even | 0 | even | 0 |
   | odd | 0 | 0 | odd | even | $\pi/2$ |
   | 0 | even | 0 | even | even | $\pi/2$ |
   | 0 | odd | odd | 0 | even | 0 |

3. Convolution: If continuous convolution is defined by
   $$y(t) = h(t) * x(t) = \int_{-\infty}^{\infty} h(t - \tau)\, x(\tau)\, d\tau = \int_{-\infty}^{\infty} h(\lambda)\, x(t - \lambda)\, d\lambda$$
   (23)
   then $\mathcal{F}\{h(t) * x(t)\} = \mathcal{F}\{h(t)\}\, \mathcal{F}\{x(t)\}$
4. Multiplication: $\mathcal{F}\{h(t)\, x(t)\} = \frac{1}{2\pi}\, \mathcal{F}\{h(t)\} * \mathcal{F}\{x(t)\}$
5. Parseval: $\int_{-\infty}^{\infty} |x(t)|^2\, dt = \frac{1}{2\pi} \int_{-\infty}^{\infty} |X(\omega)|^2\, d\omega$
6. Shift: $\mathcal{F}\{x(t - T)\} = X(\omega)\, e^{-j\omega T}$
7. Modulate: $\mathcal{F}\{x(t)\, e^{j 2\pi K t}\} = X(\omega - 2\pi K)$
8. Derivative: $\mathcal{F}\{\frac{dx}{dt}\} = j\omega\, X(\omega)$
9. Stretch: $\mathcal{F}\{x(at)\} = \frac{1}{|a|}\, X(\omega / a)$
10. Orthogonality: $\int_{-\infty}^{\infty} e^{-j\omega_1 t}\, e^{j\omega_2 t}\, dt = 2\pi\, \delta(\omega_1 - \omega_2)$

### Examples of the Fourier Transform

Deriving a few basic transforms and using the properties allows a large class of signals to be easily studied. Examples of modulation, sampling, and others will be given.

• If $x(t) = \delta(t)$, then $X(\omega) = 1$.
• If $x(t) = 1$, then $X(\omega) = 2\pi\, \delta(\omega)$.
• If $x(t)$ is an infinite sequence of delta functions spaced $T$ apart, $x(t) = \sum_{n=-\infty}^{\infty} \delta(t - nT)$, its transform is also an infinite sequence of delta functions of weight $2\pi/T$ spaced $2\pi/T$ apart, $X(\omega) = \frac{2\pi}{T} \sum_{k=-\infty}^{\infty} \delta(\omega - 2\pi k / T)$.
• Other interesting and illustrative examples can be found in [14], [2].

Note that the Fourier transform takes a function of continuous time into a function of continuous frequency, neither function being periodic. If "distributions" or "delta functions" are allowed, the Fourier transform of a periodic function will be an infinitely long string of delta functions with weights that are the Fourier series coefficients.
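As a worked check of the definition, the Gaussian $x(t) = e^{-t^2}$ has the known closed-form transform $X(\omega) = \sqrt{\pi}\, e^{-\omega^2/4}$, and Parseval's relation (property 5) can be verified on the same pair. A numerical sketch (the evaluation point $\omega_0$ is arbitrary):

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of Equation 21 and Parseval's relation for the
# Gaussian x(t) = exp(-t^2), with X(w) = sqrt(pi) * exp(-w^2 / 4).
def X_num(w):
    # real and imaginary parts of the transform integral
    re, _ = quad(lambda t: np.exp(-t**2) * np.cos(w * t), -np.inf, np.inf)
    im, _ = quad(lambda t: -np.exp(-t**2) * np.sin(w * t), -np.inf, np.inf)
    return re + 1j * im

w0 = 1.3
pair_gap = abs(X_num(w0) - np.sqrt(np.pi) * np.exp(-w0**2 / 4))

# Parseval: integral |x|^2 dt = (1/2pi) * integral |X|^2 dw,
# where |X(w)|^2 = pi * exp(-w^2 / 2)
lhs, _ = quad(lambda t: np.exp(-2 * t**2), -np.inf, np.inf)
rhs, _ = quad(lambda w: np.pi * np.exp(-w**2 / 2), -np.inf, np.inf)
parseval_gap = abs(lhs - rhs / (2 * np.pi))
```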

## The Laplace Transform

The Laplace transform can be thought of as a generalization of the Fourier transform that includes a larger class of functions, allows the use of complex variable theory, solves initial-value differential equations, and gives a tool for the input-output description of linear systems. Its use in system and signal analysis became popular in the 1950s, and it remains the central tool for much of continuous-time system theory. The question of convergence becomes still more complicated and depends on the complex values of $s$ used in the inverse transform, which must lie in a "region of convergence" (ROC).

### Definition of the Laplace Transform

The definition of the Laplace transform (LT) of a real-valued function defined over all time $t$ is

$$F(s) = \int_{-\infty}^{\infty} f(t)\, e^{-st}\, dt$$
(24)

and the inverse transform (ILT) is given by the complex contour integral

$$f(t) = \frac{1}{2\pi j} \int_{c - j\infty}^{c + j\infty} F(s)\, e^{st}\, ds$$
(25)

where $s = \sigma + j\omega$ is a complex variable and the path of integration for the ILT must be in the region of the $s$ plane where the Laplace transform integral converges. This definition is often called the bilateral Laplace transform, to distinguish it from the unilateral transform (ULT), which is defined with zero as the lower limit of the forward transform integral, Equation 24. Unless stated otherwise, we will be using the bilateral transform.

Notice that the Laplace transform becomes the Fourier transform on the imaginary axis, for $s = j\omega$. If the ROC includes the $j\omega$ axis, the Fourier transform exists, but if it does not, only the Laplace transform of the function exists.

There is a considerable literature on the Laplace transform and its use in continuous-time system theory. We will develop most of these ideas for discrete-time systems in terms of the z-transform later in this chapter and will only briefly consider the more important properties here.

The unilateral Laplace transform cannot be used if useful parts of the signal exist for negative time. It does not reduce to the Fourier transform for signals that exist for negative time, but if the negative-time part of a signal can be neglected, the unilateral transform will converge for a much larger class of functions than the bilateral transform will. It also makes the solution of linear, constant-coefficient differential equations with initial conditions much easier.

### Properties of the Laplace Transform

Many of the properties of the Laplace transform are similar to those of the Fourier transform [2], [14]; however, the basis functions for the Laplace transform are not orthogonal. Some of the more important properties are:

1. Linear: $\mathcal{L}\{x + y\} = \mathcal{L}\{x\} + \mathcal{L}\{y\}$
2. Convolution: If $y(t) = h(t) * x(t) = \int h(t - \tau)\, x(\tau)\, d\tau$,
   then $\mathcal{L}\{h(t) * x(t)\} = \mathcal{L}\{h(t)\}\, \mathcal{L}\{x(t)\}$
3. Derivative: $\mathcal{L}\{\frac{dx}{dt}\} = s\, \mathcal{L}\{x(t)\}$
4. Derivative (ULT): $\mathcal{L}\{\frac{dx}{dt}\} = s\, \mathcal{L}\{x(t)\} - x(0)$
5. Integral: $\mathcal{L}\{\int x(t)\, dt\} = \frac{1}{s}\, \mathcal{L}\{x(t)\}$
6. Shift: $\mathcal{L}\{x(t - T)\} = X(s)\, e^{-Ts}$
7. Modulate: $\mathcal{L}\{x(t)\, e^{j\omega_0 t}\} = X(s - j\omega_0)$

Examples can be found in [14], [2] and are similar to those of the z-transform presented later in these notes. Indeed, note the parallels and differences among the Fourier series, the Fourier transform, and the z-transform.
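A simple numerical check of Equation 24 and the derivative properties uses $f(t) = e^{-at}$ for $t \ge 0$ and zero before (a standard example, not from the text), whose transform is $1/(s + a)$ with ROC $\mathrm{Re}(s) > -a$:

```python
import numpy as np
from scipy.integrate import quad

# f(t) = exp(-a t) for t >= 0, zero for t < 0; F(s) = 1/(s + a),
# ROC Re(s) > -a.  Checked on the real axis s = sigma for simplicity.
a = 2.0
def F_num(sigma):
    # f vanishes for t < 0, so the integral runs over [0, inf)
    val, _ = quad(lambda t: np.exp(-(a + sigma) * t), 0, np.inf)
    return val

s = 1.5
lt_gap = abs(F_num(s) - 1 / (s + a))

# Properties 3 and 4: for t > 0, f'(t) = -a f(t), so the unilateral
# rule s*F(s) - f(0) with f(0) = 1 must give -a/(s + a).
deriv_gap = abs((s / (s + a) - 1.0) - (-a / (s + a)))
```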

## References

1. Burrus, C. Sidney and Gopinath, Ramesh A. and Guo, Haitao. (1998). Introduction to Wavelets and the Wavelet Transform. Upper Saddle River, NJ: Prentice Hall.
2. Bracewell, R. N. (1985). The Fourier Transform and Its Applications. (Third). New York: McGraw-Hill.
3. Carslaw, H. S. (1906, 1930). Theory of Fourier's Series and Integrals. (third). New York: Dover.
4. Champeney, D. C. (1987). A Handbook of Fourier Theorems. Cambridge: Cambridge University Press.
5. Daubechies, Ingrid. (1992). Ten Lectures on Wavelets. [Notes from the 1990 CBMS-NSF Conference on Wavelets and Applications at Lowell, MA]. Philadelphia, PA: SIAM.
6. Dym, H. and McKean, H. P. (1972). Fourier Series and Integrals. New York: Academic Press.
7. Folland, Gerald B. (1992). Fourier Analysis and its Applications. Pacific Grove: Wadsworth & Brooks/Cole.
8. Hobson, E. W. (1926). The Theory of Functions of a Real Variable and the Theory of Fourier's Series. (Second, Vol. 2). New York: Dover.
9. Hubbard, Barbara Burke. (1996). The World According to Wavelets. [Second Edition 1998]. Wellesley, MA: A K Peters.
10. Körner, T. W. (1988). Fourier Analysis. Cambridge: Cambridge University Press.
11. Lanczos, C. (1956). Applied Analysis. Englewood Cliffs, NJ: Prentice Hall.
12. Transnational College of LEX. (1995). Who is Fourier? Boston: Language Research Foundation.
13. Oppenheim, A. V. and Schafer, R. W. (1989). Discrete-Time Signal Processing. Englewood Cliffs, NJ: Prentice-Hall.
14. Papoulis, A. (1962). The Fourier Integral and Its Applications. McGraw-Hill.
15. Strang, Gilbert and Nguyen, T. (1996). Wavelets and Filter Banks. Wellesley, MA: Wellesley–Cambridge Press.
16. Vetterli, Martin and Kovačević, Jelena. (1995). Wavelets and Subband Coding. Upper Saddle River, NJ: Prentice–Hall.
17. Young, R. M. (1980). An Introduction to Nonharmonic Fourier Series. New York: Academic Press.
18. Zygmund, A. (1935, 1955). Trigonometrical Series. New York: Dover.
