MSE-Optimal Memoryless Scalar Quantization

Module by: Phil Schniter.

Summary: The mean-squared error minimizing scalar quantizer (the Lloyd-Max quantizer) is derived here using Lagrange optimization. Background on Lagrange optimization is also provided. Finally, error variance is derived for the asymptotic case of many quantization levels.

  • Though uniform quantization is convenient for implementation and analysis, non-uniform quantization yields a lower $\sigma_q^2$ when the input distribution $p_x(\cdot)$ is non-uniform. By decreasing $|q(x)|$ for frequently occurring $x$ (at the expense of increasing $|q(x)|$ for infrequently occurring $x$), the average error power can be reduced.
  • Lloyd-Max Quantizer: MSE-optimal thresholds $\{x_k\}$ and outputs $\{y_k\}$ can be determined given an input distribution $p_x(\cdot)$, and the result is the Lloyd-Max quantizer. Necessary conditions on $\{x_k\}$ and $\{y_k\}$ are
    $$\frac{\partial \sigma_q^2}{\partial x_k} = 0 \;\;\text{for } k \in \{2,\dots,L\} \qquad\text{and}\qquad \frac{\partial \sigma_q^2}{\partial y_k} = 0 \;\;\text{for } k \in \{1,\dots,L\}.$$
    (1)
    Using equation 2 from Memoryless Scalar Quantization (third equation), $\frac{\partial}{\partial b} \int_a^b f(x)\, dx = f(b)$, $\frac{\partial}{\partial a} \int_a^b f(x)\, dx = -f(a)$, and the conditions above,
    $$\begin{aligned}
    \frac{\partial \sigma_q^2}{\partial x_k} &= (x_k - y_{k-1})^2\, p_x(x_k) - (x_k - y_k)^2\, p_x(x_k) = 0
    &&\Rightarrow\; x_k = \frac{y_k + y_{k-1}}{2}, \quad k \in \{2,\dots,L\},\; x_1 = -\infty,\; x_{L+1} = \infty, \\
    \frac{\partial \sigma_q^2}{\partial y_k} &= -2 \int_{x_k}^{x_{k+1}} (x - y_k)\, p_x(x)\, dx = 0
    &&\Rightarrow\; y_k = \frac{\int_{x_k}^{x_{k+1}} x\, p_x(x)\, dx}{\int_{x_k}^{x_{k+1}} p_x(x)\, dx}, \quad k \in \{1,\dots,L\}.
    \end{aligned}$$
    (2)
    It can be shown that the conditions above are sufficient for a global MMSE solution when $\partial^2 \log p_x(x) / \partial x^2 \le 0$, which holds for the uniform, Gaussian, and Laplacian pdfs, but not for the Gamma pdf. Note:
    • optimum decision thresholds are halfway between neighboring output values,
    • optimum output values are centroids of the pdf within the appropriate interval, i.e., are given by the conditional means
      $$y_k = \mathrm{E}\{x \mid x \in \mathcal{X}_k\} = \int x\, p_x(x \mid x \in \mathcal{X}_k)\, dx = \int x\, \frac{p_x(x,\, x \in \mathcal{X}_k)}{\Pr(x \in \mathcal{X}_k)}\, dx = \frac{\int_{x_k}^{x_{k+1}} x\, p_x(x)\, dx}{\int_{x_k}^{x_{k+1}} p_x(x)\, dx}.$$
      (3)
    Iterative Procedure to Find $\{x_k\}$ and $\{y_k\}$ (a numerical sketch follows this list):
    1. Choose $\hat{y}_1$.
    2. For $k = 1, \dots, L-1$ (with $\hat{x}_1 = -\infty$):
      given $\hat{y}_k$ and $\hat{x}_k$, solve Equation 2 (lower equation) for $\hat{x}_{k+1}$;
      given $\hat{y}_k$ and $\hat{x}_{k+1}$, solve Equation 2 (upper equation) for $\hat{y}_{k+1}$.
    3. Compare $\hat{y}_L$ to the $y_L$ calculated from Equation 2 (lower equation) using $\hat{x}_L$ and $x_{L+1} = \infty$. Adjust $\hat{y}_1$ accordingly, and return to step 2.
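    The sketch below (added for illustration; not part of the original module) designs a Lloyd-Max quantizer numerically for a standard Gaussian pdf with $L = 4$ levels. Rather than the outer adjustment of $\hat{y}_1$ in step 3, it simply alternates the two conditions of Equation 2 (thresholds at midpoints of neighboring outputs, outputs at centroids) until they stop changing, which reaches the same fixed point for log-concave pdfs. The function name and the finite cutoff x_max standing in for $\pm\infty$ are illustrative choices.

```python
# Alternate the two necessary conditions of Equation 2 until convergence.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def lloyd_max(L, pdf=norm.pdf, x_max=8.0, iters=500, tol=1e-10):
    y = np.linspace(-x_max, x_max, 2 * L + 1)[1::2]   # initial outputs: uniformly spaced
    for _ in range(iters):
        # Equation 2 (upper): x_k = (y_{k-1} + y_k)/2, with x_1 = -inf, x_{L+1} = +inf
        x = np.concatenate(([-x_max], (y[:-1] + y[1:]) / 2, [x_max]))
        # Equation 2 (lower) / Equation 3: y_k = centroid of the pdf over [x_k, x_{k+1}]
        y_new = np.array([quad(lambda t: t * pdf(t), a, b)[0] / quad(pdf, a, b)[0]
                          for a, b in zip(x[:-1], x[1:])])
        converged = np.max(np.abs(y_new - y)) < tol
        y = y_new
        if converged:
            break
    return x, y

x, y = lloyd_max(L=4)
print("thresholds:", np.round(x[1:-1], 4))   # classic L=4 Gaussian values: 0, +/-0.9816
print("outputs:   ", np.round(y, 4))         # classic L=4 Gaussian values: +/-0.4528, +/-1.5104
```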
  • Lloyd-Max Performance for large L: As with the uniform quantizer, we can analyze the quantization error performance for large $L$. Here, we assume that
    • the pdf $p_x(x)$ is constant over $x \in \mathcal{X}_k$ for $k \in \{1, \dots, L\}$,
    • the pdf $p_x(x)$ is symmetric about $x = 0$,
    • the input is bounded, i.e., $x \in (-x_{\max}, x_{\max})$ for some (potentially large) $x_{\max}$.
    So with assumption
    $$p_x(x) = p_x(y_k) \quad \text{for } x, y_k \in \mathcal{X}_k$$
    (4)
    and definition
    $$\Delta_k := x_{k+1} - x_k,$$
    (5)
    we can write
    $$P_k := \Pr\{x \in \mathcal{X}_k\} = p_x(y_k)\, \Delta_k, \quad\text{where we require } \sum_k P_k = 1,$$
    (6)
    and thus, from equation 2 from Memoryless Scalar Quantization (lower equation), $\sigma_q^2$ becomes
    $$\sigma_q^2 = \sum_{k=1}^{L} \frac{P_k}{\Delta_k} \int_{x_k}^{x_{k+1}} (x - y_k)^2\, dx.$$
    (7)
    For MSE-optimal $\{y_k\}$, we know that
    $$0 = \frac{\partial \sigma_q^2}{\partial y_k} = -\frac{2 P_k}{\Delta_k} \int_{x_k}^{x_{k+1}} (x - y_k)\, dx \quad\Rightarrow\quad y_k = \frac{x_k + x_{k+1}}{2},$$
    (8)
    which is expected, since the centroid of a flat pdf over $\mathcal{X}_k$ is simply the midpoint of $\mathcal{X}_k$. Plugging this $y_k$ into Equation 7,
    $$\sigma_q^2 = \sum_{k=1}^{L} \frac{P_k}{3\Delta_k} \Big( x - \frac{x_k}{2} - \frac{x_{k+1}}{2} \Big)^3 \bigg|_{x_k}^{x_{k+1}}
    = \sum_{k=1}^{L} \frac{P_k}{3\Delta_k} \bigg[ \Big(\frac{x_{k+1}}{2} - \frac{x_k}{2}\Big)^3 - \Big(\frac{x_k}{2} - \frac{x_{k+1}}{2}\Big)^3 \bigg]
    = \sum_{k=1}^{L} \frac{P_k}{3\Delta_k}\, 2 \Big(\frac{\Delta_k}{2}\Big)^3
    = \frac{1}{12} \sum_{k=1}^{L} P_k\, \Delta_k^2.$$
    (9)
    Note that for uniform quantization ($\Delta_k = \Delta$), the expression above reduces to the one derived earlier. Now we minimize $\sigma_q^2$ with respect to $\{\Delta_k\}$. The trick here is to define
    $$\alpha_k := \sqrt[3]{p_x(y_k)}\, \Delta_k \quad\text{so that}\quad \sigma_q^2 = \frac{1}{12} \sum_{k=1}^{L} p_x(y_k)\, \Delta_k^3 = \frac{1}{12} \sum_{k=1}^{L} \alpha_k^3.$$
    (10)
    For $p_x(x)$ constant over $\mathcal{X}_k$ and $y_k \in \mathcal{X}_k$,
    $$\sum_{k=1}^{L} \alpha_k = \sum_{k=1}^{L} \sqrt[3]{p_x(y_k)}\, \Delta_k \bigg|_{y_k = \frac{x_k + x_{k+1}}{2}} = \int_{-x_{\max}}^{x_{\max}} \sqrt[3]{p_x(x)}\, dx = C_x \quad \text{(a known constant)},$$
    (11)
    we have the following constrained optimization problem:
    $$\min_{\{\alpha_k\}} \sum_k \alpha_k^3 \quad\text{s.t.}\quad \sum_k \alpha_k = C_x.$$
    (12)
    This may be solved using Lagrange multipliers.

    Aside: Optimization via Lagrange Multipliers:

    Consider the problem of minimizing an $N$-dimensional real-valued cost function $J(\mathbf{x})$, where $\mathbf{x} = (x_1, x_2, \dots, x_N)^t$, subject to $M < N$ real-valued equality constraints $f_m(\mathbf{x}) = a_m$, $m = 1, \dots, M$. This may be converted into an unconstrained optimization of dimension $N + M$ by introducing additional variables $\boldsymbol{\lambda} = (\lambda_1, \dots, \lambda_M)^t$ known as Lagrange multipliers. The unconstrained cost function is
    $$J_u(\mathbf{x}, \boldsymbol{\lambda}) = J(\mathbf{x}) + \sum_m \lambda_m \big( f_m(\mathbf{x}) - a_m \big),$$
    (13)
    and necessary conditions for its minimization are
    $$\begin{aligned}
    \nabla_{\mathbf{x}}\, J_u(\mathbf{x}, \boldsymbol{\lambda}) = \mathbf{0} \quad&\Rightarrow\quad \nabla_{\mathbf{x}}\, J(\mathbf{x}) + \sum_m \lambda_m \nabla_{\mathbf{x}}\, f_m(\mathbf{x}) = \mathbf{0}, \\
    \nabla_{\boldsymbol{\lambda}}\, J_u(\mathbf{x}, \boldsymbol{\lambda}) = \mathbf{0} \quad&\Rightarrow\quad f_m(\mathbf{x}) = a_m \;\text{ for } m = 1, \dots, M.
    \end{aligned}$$
    (14)
    The typical procedure used to solve for the optimal $\mathbf{x}$ is the following (a small worked example appears after this list):
    1. Equations for $x_n$, $n = 1, \dots, N$, in terms of $\{\lambda_m\}$ are obtained from Equation 14 (upper equation).
    2. These $N$ equations are used in Equation 14 (lower equation) to solve for the $M$ optimal $\lambda_m$.
    3. The optimal $\{\lambda_m\}$ are plugged back into the $N$ equations for $x_n$, yielding the optimal $\{x_n\}$.
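    As a quick illustration of these three steps (an example added here, not part of the original module), consider minimizing $J(\mathbf{x}) = x_1^2 + x_2^2$ subject to the single constraint $x_1 + x_2 = a$. The unconstrained cost is
    $$J_u(\mathbf{x}, \lambda) = x_1^2 + x_2^2 + \lambda\,(x_1 + x_2 - a).$$
    Step 1: $\partial J_u/\partial x_n = 2 x_n + \lambda = 0$ gives $x_n = -\lambda/2$. Step 2: substituting into the constraint, $-\lambda/2 - \lambda/2 = a$, so $\lambda = -a$. Step 3: plugging $\lambda$ back in, $x_1 = x_2 = a/2$, i.e., the point on the constraint line closest to the origin.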
    Returning to our problem, the necessary conditions are
    $$\begin{aligned}
    \forall \ell:\;\; \frac{\partial}{\partial \alpha_\ell} \Big[ \sum_k \alpha_k^3 + \lambda \Big( \sum_k \alpha_k - C_x \Big) \Big] = 0
    \quad&\Rightarrow\quad \lambda = -3 \alpha_\ell^2
    \quad\Rightarrow\quad \alpha_\ell = \sqrt{-\lambda/3}, \\
    \frac{\partial}{\partial \lambda} \Big[ \sum_k \alpha_k^3 + \lambda \Big( \sum_k \alpha_k - C_x \Big) \Big] = 0
    \quad&\Rightarrow\quad \sum_k \alpha_k = C_x,
    \end{aligned}$$
    (15)
    which can be combined to solve for λ:
    $$\sum_{k=1}^{L} \sqrt{-\frac{\lambda}{3}} = C_x \quad\Rightarrow\quad \lambda = -3\, \frac{C_x^2}{L^2}.$$
    (16)
    Plugging this $\lambda$ back into the expression for $\alpha_\ell$, we find
    $$\alpha_\ell = \frac{C_x}{L} \quad \forall \ell.$$
    (17)
    Using the definition of $\alpha_k$, the optimal decision spacing is
    $$\Delta_k = \frac{C_x}{L\, \sqrt[3]{p_x(y_k)}} = \frac{\int_{-x_{\max}}^{x_{\max}} \sqrt[3]{p_x(x)}\, dx}{L\, \sqrt[3]{p_x(y_k)}},$$
    (18)
    and the minimum quantization error variance is
    $$\sigma_q^2 \Big|_{\min} = \frac{1}{12} \sum_k p_x(y_k)\, \Delta_k^3 = \frac{1}{12} \sum_k p_x(y_k)\, \frac{\Big( \int_{-x_{\max}}^{x_{\max}} \sqrt[3]{p_x(x)}\, dx \Big)^3}{L^3\, p_x(y_k)} = \frac{1}{12 L^2} \Big( \int_{-x_{\max}}^{x_{\max}} \sqrt[3]{p_x(x)}\, dx \Big)^3.$$
    (19)
    An interesting observation is that $\alpha_\ell^3$, the $\ell$th interval's optimal contribution to $\sigma_q^2$ in Equation 10, is constant over $\ell$. A numerical check of Equation 18 and Equation 19 follows.
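    Below is a small numerical check (added for illustration; not part of the original module) of Equation 18 and Equation 19, assuming a standard Gaussian pdf truncated to $(-x_{\max}, x_{\max})$ with $x_{\max} = 4$ and $L = 64$ levels. Following Equation 18, each cell is given an equal share $C_x/L$ of the measure $\sqrt[3]{p_x}$; outputs are then placed at the cell centroids, and the resulting quantizer's error variance is compared against the prediction of Equation 19.

```python
# Numerical check of Equations 18-19 for a truncated Gaussian pdf.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

x_max, L = 4.0, 64
Z = norm.cdf(x_max) - norm.cdf(-x_max)             # truncation normalizer
pdf = lambda x: norm.pdf(x) / Z                    # pdf truncated to (-x_max, x_max)

# C_x = integral of cbrt(p_x) over (-x_max, x_max)   (Equation 11)
Cx, _ = quad(lambda x: np.cbrt(pdf(x)), -x_max, x_max)

# Thresholds x_k chosen so that each cell holds C_x/L of the cbrt(p_x) measure,
# which realizes Delta_k = C_x / (L * cbrt(p_x(y_k)))   (Equation 18)
grid = np.linspace(-x_max, x_max, 20001)
F = np.concatenate(([0.0], np.cumsum(np.cbrt(pdf(grid[:-1])) * np.diff(grid))))
x = np.interp(np.arange(L + 1) * Cx / L, F, grid)

# Outputs at cell centroids (Equation 3), then the actual error variance
y = np.array([quad(lambda t: t * pdf(t), a, b)[0] / quad(pdf, a, b)[0]
              for a, b in zip(x[:-1], x[1:])])
sigma_q2 = sum(quad(lambda t: (t - yk) ** 2 * pdf(t), a, b)[0]
               for a, b, yk in zip(x[:-1], x[1:], y))

print("measured sigma_q^2  :", sigma_q2)
print("Equation 19 predicts:", Cx ** 3 / (12 * L ** 2))
# The two values should agree closely, since L = 64 is well into the high-rate regime.
```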
