Performance of DPCM

Module by: Phil Schniter

Summary: Here we characterize the performance of DPCM via the simpler surrogate known as "quantized predictive encoding," which is known to have very similar performance in practice. To do this, we derive the optimum prediction coefficients, the resulting prediction error variance, and the gain over PCM.

  • As we noted earlier, the DPCM performance gain is a consequence of the variance reduction obtained through prediction. Here we derive the optimal predictor coefficients, the prediction error variance, and the bit rate for the system in figure 4 from Differential Pulse Code Modulation. This system is easier to analyze than DPCM systems with the quantizer in the loop (e.g., figure 5 from Differential Pulse Code Modulation), and the difference in prediction-error behavior is said to be negligible when R > 2 (see page 267 of Jayant & Noll).
  • Optimal Prediction Coefficients: First we find the coefficients h that minimize the prediction error variance (the first sketch after this list solves the resulting design equations numerically):
    \min_h \; E\{ e^2(n) \} .
    (1)
    Throughout, we assume that x(n) is a zero-mean stationary random process with autocorrelation
    r_x(k) := E\{ x(n)\, x(n-k) \} = r_x(-k) .
    (2)
    A necessary condition for optimality is the following:
    \forall j \in \{1, \dots, N\}: \quad
    \begin{aligned}
    0 = -\frac{1}{2} \frac{\partial}{\partial h_j} E\{ e^2(n) \}
      &= -E\Big\{ e(n)\, \frac{\partial e(n)}{\partial h_j} \Big\}
       \;=\; E\{ e(n)\, x(n-j) \} \qquad \text{(the ``orthogonality principle'')} \\
      &= E\Big\{ \Big( x(n) - \sum_{i=1}^{N} h_i\, x(n-i) \Big)\, x(n-j) \Big\} \\
      &= E\{ x(n)\, x(n-j) \} - \sum_{i=1}^{N} h_i\, E\{ x(n-i)\, x(n-j) \} \\
      &= r_x(j) - \sum_{i=1}^{N} h_i\, r_x(j-i)
    \end{aligned}
    (3)
    where we have used equation 1 from Differential Pulse Code Modulation. We can rewrite this as a system of linear equations:
    \underbrace{\begin{pmatrix} r_x(1) \\ r_x(2) \\ \vdots \\ r_x(N) \end{pmatrix}}_{r_x}
    =
    \underbrace{\begin{pmatrix}
    r_x(0) & r_x(1) & \cdots & r_x(N-1) \\
    r_x(1) & r_x(0) & \cdots & r_x(N-2) \\
    \vdots & \vdots & \ddots & \vdots \\
    r_x(N-1) & r_x(N-2) & \cdots & r_x(0)
    \end{pmatrix}}_{R_N}
    \underbrace{\begin{pmatrix} h_1 \\ h_2 \\ \vdots \\ h_N \end{pmatrix}}_{h}
    (4)
    which yields an expression for the optimal prediction coefficients:
    h = R_N^{-1}\, r_x .
(5)
  • Error for Length-N Predictor: The definition x(n) := ( x(n), x(n-1), …, x(n-N) )^t and Equation 5 can be used to show that the minimum prediction error variance is
    \begin{aligned}
    \sigma_e^2 \big|_{\min,N} = E\{ e^2(n) \}
      &= E\Big\{ \Big( x^t(n) \begin{pmatrix} 1 \\ -h \end{pmatrix} \Big)^2 \Big\}
       = \begin{pmatrix} 1 & -h^t \end{pmatrix} E\{ x(n)\, x^t(n) \} \begin{pmatrix} 1 \\ -h \end{pmatrix} \\
      &= \begin{pmatrix} 1 & -h^t \end{pmatrix} \begin{pmatrix} r_x(0) & r_x^t \\ r_x & R_N \end{pmatrix} \begin{pmatrix} 1 \\ -h \end{pmatrix} \\
      &= r_x(0) - 2\, h^t r_x + h^t R_N h \\
      &= r_x(0) - r_x^t R_N^{-1} r_x .
    \end{aligned}
    (6)
  • Error for Infinite-Length Predictor: We now characterize σ_e²|min,N as N → ∞. Note that
    \underbrace{ \begin{pmatrix} r_x(0) & r_x^t \\ r_x & R_N \end{pmatrix} }_{R_{N+1}}
    \begin{pmatrix} 1 \\ -h \end{pmatrix}
    = \begin{pmatrix} \sigma_e^2 \big|_{\min,N} \\ \mathbf{0} \end{pmatrix}
    (7)
    Using Cramer's rule,
    1 \;=\; \frac{ \left| \begin{matrix} \sigma_e^2 \big|_{\min,N} & r_x^t \\ \mathbf{0} & R_N \end{matrix} \right| }{ \left| R_{N+1} \right| }
      \;=\; \frac{ \sigma_e^2 \big|_{\min,N}\, \left| R_N \right| }{ \left| R_{N+1} \right| }
      \quad \Longrightarrow \quad
      \sigma_e^2 \big|_{\min,N} = \frac{ \left| R_{N+1} \right| }{ \left| R_N \right| } .
    (8)

    Aside: Cramer's Rule:

    Given the matrix equation A y = b, where A = (a_1, a_2, …, a_N) ∈ ℝ^{N×N},

    y_k = \frac{ \big| ( a_1, \dots, a_{k-1},\; b,\; a_{k+1}, \dots, a_N ) \big| }{ |A| }
    (9)
    where |·| denotes determinant.

    A result from the theory of Toeplitz determinants (see Jayant & Noll) gives the final answer, which the second sketch after this list verifies numerically:
    \sigma_e^2 \big|_{\min} \;=\; \lim_{N \to \infty} \frac{ \left| R_{N+1} \right| }{ \left| R_N \right| }
    \;=\; \exp\left( \frac{1}{2\pi} \int_{-\pi}^{\pi} \ln S_x(e^{j\omega}) \, d\omega \right)
    (10)
    where S_x(e^{jω}) is the power spectral density of the WSS random process x(n):
    S_x(e^{j\omega}) := \sum_{n=-\infty}^{\infty} r_x(n)\, e^{-j\omega n} .
    (11)
    (Note that, because r_x(n) is conjugate symmetric for stationary x(n), S_x(e^{jω}) will always be non-negative and real.)
  • ARMA Source Model: If the random process x(n) can be modelled as a general linear process, i.e., white noise v(n) driving a causal LTI system B(z):
    x(n) = v(n) + \sum_{k=1}^{\infty} b_k\, v(n-k), \qquad \text{with} \quad \sum_k |b_k|^2 < \infty ,
    (12)
    then it can be shown that
    \sigma_v^2 = \exp\left( \frac{1}{2\pi} \int_{-\pi}^{\pi} \ln S_x(e^{j\omega}) \, d\omega \right) .
    (13)
    Thus the MSE-optimal prediction error variance equals that of the driving noise v(n) when N = ∞.
  • Prediction Error Whiteness: We can also demonstrate that the MSE-optimal prediction error is white when N = ∞. This is a direct consequence of the orthogonality principle seen earlier (and is checked numerically in the final sketch after this list):
    0 = E\{ e(n)\, x(n-k) \}, \qquad k = 1, 2, \dots .
    (14)
    The prediction error has autocorrelation
    \begin{aligned}
    E\{ e(n)\, e(n-k) \}
      &= E\Big\{ e(n) \Big( x(n-k) - \sum_{i=1}^{\infty} h_i\, x(n-k-i) \Big) \Big\} \\
      &= \underbrace{ E\{ e(n)\, x(n-k) \} }_{0 \text{ for } k>0}
       \;-\; \sum_{i=1}^{\infty} h_i \underbrace{ E\{ e(n)\, x(n-k-i) \} }_{0} \\
      &= \sigma_e^2 \big|_{\min}\, \delta(k) .
    \end{aligned}
    (15)
  • AR Source Model: When the input can be modelled as an autoregressive (AR) process of order N:
    X(z) = \frac{1}{1 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_N z^{-N}}\; V(z) ,
    (16)
    then MSE-optimal results (i.e., σ_e² = σ_e²|min and whitening) may be obtained with a forward predictor of order N. Specifically, choosing the prediction coefficients h_i = -a_i (so that 1 - H(z) = 1 + a_1 z^{-1} + ⋯ + a_N z^{-N}) gives the prediction error
    E(z) = \big( 1 - H(z) \big) X(z)
         = \frac{1 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_N z^{-N}}{1 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_N z^{-N}}\; V(z)
         = V(z) .
    (17)
  • Efficiency Gain over PCM: Prediction reduces the variance at the quantizer input without changing the variance of the reconstructed signal.
    • By keeping the number of quantization levels fixed, one can reduce the quantization step width and obtain a lower quantization error than PCM at the same bit rate.
    • By keeping the decision levels fixed, one can reduce the number of quantization levels and obtain a lower bit rate than PCM at the same quantization error level.
    Assuming that x(n) and e(n) are similarly distributed, using the same style of quantizer in the DPCM and PCM systems yields (see the final sketch after this list)
    \mathrm{SNR}_{\mathrm{DPCM}} = \mathrm{SNR}_{\mathrm{PCM}} + 10 \log_{10} \frac{\sigma_x^2}{\sigma_e^2} .
    (18)
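
The derivations above lend themselves to quick numerical checks. The first sketch below is not from the original module: it simulates a hypothetical AR(2) source with made-up coefficients, estimates the sample autocorrelation, solves the normal equations of Equation (4) for the optimal predictor h of Equation (5), and evaluates the minimum error variance via both Equation (6) and the determinant ratio of Equation (8).

```python
import numpy as np

# Minimal sketch (assumed parameters): simulate an AR(2) source
#   x(n) = -a1*x(n-1) - a2*x(n-2) + v(n),  v(n) white with variance 1,
# then solve the normal equations (Equation (4)) for the optimal
# length-N predictor h = R_N^{-1} r_x (Equation (5)).
rng = np.random.default_rng(0)
a = np.array([-0.9, 0.2])                # hypothetical AR coefficients (a1, a2)
n_samp, N = 200_000, 2                   # sample size and predictor order
v = rng.standard_normal(n_samp)          # unit-variance white driving noise
x = np.zeros(n_samp)
for n in range(2, n_samp):
    x[n] = -a[0]*x[n-1] - a[1]*x[n-2] + v[n]

# Sample autocorrelation r_x(k) for k = 0..N
r = np.array([x[k:] @ x[:n_samp - k] for k in range(N + 1)]) / n_samp

# Toeplitz autocorrelation matrices R_N and R_{N+1}
R_N  = np.array([[r[abs(i - j)] for j in range(N)] for i in range(N)])
R_N1 = np.array([[r[abs(i - j)] for j in range(N + 1)] for i in range(N + 1)])

h = np.linalg.solve(R_N, r[1:])          # Equation (5)
print("h        =", h)                   # expect approx. -a = [0.9, -0.2]

# Minimum prediction error variance, two ways:
print("Eq. (6): ", r[0] - r[1:] @ h)                            # ~ 1.0
print("Eq. (8): ", np.linalg.det(R_N1) / np.linalg.det(R_N))    # ~ 1.0
```

Since the source here is AR(2), the order-2 predictor already achieves σ_e²|min = σ_v² = 1, consistent with the AR discussion above.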
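The Toeplitz-determinant limit in Equation (10) and its general-linear-process counterpart, Equation (13), can likewise be checked numerically. This second sketch assumes an AR(1) power spectrum S_x(e^{jω}) = σ_v²/|1 - ρe^{-jω}|² with made-up values of ρ and σ_v², and approximates the integral by averaging ln S_x over a uniform frequency grid spanning one period.

```python
import numpy as np

# Sketch of Equation (13) for an assumed AR(1) spectrum
#   S_x(e^{jw}) = sigma_v^2 / |1 - rho*e^{-jw}|^2.
rho, sigma_v2 = 0.95, 1.7                       # hypothetical parameters

# Uniform grid over one period; the mean of ln S_x over this grid
# approximates (1/2pi) * integral_{-pi}^{pi} ln S_x(e^{jw}) dw.
w = np.linspace(-np.pi, np.pi, 200_000, endpoint=False)
S_x = sigma_v2 / np.abs(1 - rho * np.exp(-1j * w))**2

print(np.exp(np.log(S_x).mean()))               # ~ 1.7 = sigma_v^2
```

The output matches σ_v², illustrating the claim that the infinite-order prediction error variance equals the driving noise variance.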
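Finally, the prediction gain of Equation (18) and the whiteness property of Equation (15) can be illustrated together. This last sketch uses an assumed AR(1) source, for which the optimal one-tap predictor is h_1 = ρ (i.e., h_1 = -a_1) and the theoretical gain is -10·log10(1 - ρ²).

```python
import numpy as np

# Sketch (assumed AR(1) source): prediction gain of Equation (18)
# and whiteness of the optimal prediction error (Equation (15)).
rng = np.random.default_rng(1)
rho, n_samp = 0.95, 200_000
v = rng.standard_normal(n_samp)
x = np.zeros(n_samp)
for n in range(1, n_samp):
    x[n] = rho * x[n - 1] + v[n]                # AR(1) synthesis

e = x[1:] - rho * x[:-1]                        # optimal predictor: h_1 = rho

# Prediction gain 10*log10(sigma_x^2 / sigma_e^2);
# theory: -10*log10(1 - 0.95^2) ~ 10.1 dB
print("gain (dB):", 10 * np.log10(x.var() / e.var()))

# Normalized error autocorrelation: ~0 for k > 0 (whiteness)
for k in (1, 2, 3):
    print(k, (e[k:] @ e[:-k]) / (e.size * e.var()))
```

With ρ = 0.95 the variance reduction is roughly a factor of 10, i.e., about 10 dB of SNR improvement over PCM at the same bit rate.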
