Inside Collection (Course): Course by Nick Kingsbury.

# Practical Entropy Coding Techniques

Module by: Nick Kingsbury.

Summary: This module introduces practical entropy coding techniques, such as Huffman Coding, Run-length Coding (RLC) and Arithmetic Coding.

In the module Use of Laplacian PDFs in Image Compression we assumed that ideal entropy coding was used in order to calculate the bit rates for the coded data. In practice we must use real codes, and we shall now see how this affects the compression performance.

There are three main techniques for achieving entropy coding:

• Huffman Coding - one of the simplest variable length coding schemes.
• Run-length Coding (RLC) - very useful for binary data containing long runs of ones or zeros.
• Arithmetic Coding - a relatively new variable length coding scheme that can combine the best features of Huffman and run-length coding, and also adapt to data with non-stationary statistics.
We shall concentrate on the Huffman and RLC methods for simplicity. Interested readers may find out more about Arithmetic Coding in chapters 12 and 13 of the JPEG Book.

First we consider the change in compression performance if simple Huffman Coding is used to code the subimages of the 4-level Haar transform.

The calculation of entropy in our earlier discussion assumed that each message with probability $p_i$ could be represented by a word of length $l_i = -\log_2 p_i$ bits. Huffman codes require the $l_i$ to be integers and assume that the $p_i$ are adjusted to become:

$$\hat{p}_i = 2^{-l_i}$$
(1)
where the $l_i$ are integers, chosen subject to the constraint that $\sum_i \hat{p}_i \le 1$ (to guarantee that sufficient uniquely decodable code words are available) and such that the mean Huffman word length (Huffman entropy), $\hat{H} = \sum_i p_i l_i$, is minimised.
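As an illustration (not part of the original module), the following Python sketch builds a Huffman code for a small, hypothetical probability set and compares the mean word length $\hat{H}$ with the ideal entropy. For dyadic probabilities the $l_i = -\log_2 p_i$ are already integers, so the two agree exactly; the function name `huffman_code_lengths` is our own invention.

```python
import heapq
import itertools
import math

def huffman_code_lengths(p):
    """Return Huffman code lengths l_i for probabilities p (illustrative helper)."""
    tie = itertools.count()
    # Heap items: (probability, tiebreak, list of symbol indices in this subtree).
    heap = [(pi, next(tie), [i]) for i, pi in enumerate(p)]
    heapq.heapify(heap)
    lengths = [0] * len(p)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)   # two least-probable subtrees
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:                 # every symbol in the merged subtree
            lengths[i] += 1               # gains one bit of code length
        heapq.heappush(heap, (p1 + p2, next(tie), s1 + s2))
    return lengths

p = [0.5, 0.25, 0.125, 0.125]                    # dyadic example probabilities
l = huffman_code_lengths(p)                      # -> [1, 2, 3, 3]
H = -sum(pi * math.log2(pi) for pi in p)         # ideal entropy: 1.75 bits
H_huff = sum(pi * li for pi, li in zip(p, l))    # mean word length: 1.75 bits
```

With non-dyadic probabilities the rounding of $l_i$ to integers makes $\hat{H}$ exceed the ideal entropy, which is exactly the loss measured in the columns of Table 1 below.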

We can use the probability histograms which generated the entropy plots in the figures of level 1 energies, level 2 energies, level 3 energies and level 4 energies to calculate the Huffman entropies $\hat{H}$ for each subimage and compare these with the true entropies to see the loss in performance caused by using real Huffman codes.

An algorithm for finding the optimum code sizes $l_i$ is recommended in the JPEG specification [the JPEG Book, Appendix A, Annex K.2, fig. K.1]; a Matlab M-file to implement it is given at the end of this module.

Table 1: Numerical results used in the figure - Entropies and bit rates of subimages for Qstep = 15. Columns 1-2 apply to the whole untransformed image (entropy and Huffman bit rate); columns 3-4 give the entropy and Huffman bit rate of each transform subimage; columns 5-6 give the same when RLC is used.

| Level | Col 1 | Col 2 | Col 3 | Col 4 | Col 5 | Col 6 |
|---|---|---|---|---|---|---|
| Level 4 | | | 0.0264 | 0.0265 | 0.0264 | 0.0266 |
| | | | 0.0220 | 0.0222 | 0.0221 | 0.0221 |
| | | | 0.0186 | 0.0187 | 0.0185 | 0.0186 |
| | | | 0.0171 | 0.0172 | 0.0171 | 0.0173 |
| Level 3 | | | 0.0706 | 0.0713 | 0.0701 | 0.0705 |
| | | | 0.0556 | 0.0561 | 0.0557 | 0.0560 |
| | | | 0.0476 | 0.0482 | 0.0466 | 0.0471 |
| Level 2 | | | 0.1872 | 0.1897 | 0.1785 | 0.1796 |
| | | | 0.1389 | 0.1413 | 0.1340 | 0.1353 |
| | | | 0.1096 | 0.1170 | 0.1038 | 0.1048 |
| Level 1 | | | 0.4269 | 0.4566 | 0.3739 | 0.3762 |
| | | | 0.2886 | 0.3634 | 0.2691 | 0.2702 |
| | | | 0.2012 | 0.3143 | 0.1819 | 0.1828 |
| Totals | 3.7106 | 3.7676 | 1.6103 | 1.8425 | 1.4977 | 1.5071 |

Figure 1 shows the results of applying this algorithm to the probability histograms, and Table 1 lists the same results numerically for ease of analysis. Columns 1 and 2 compare the ideal entropy with the mean word length or bit rate from using a Huffman code (the Huffman entropy) for the case of the untransformed image, where the original pels are quantised with $Q_{step} = 15$. We see that the increase in bit rate from using the real code is $3.7676 / 3.7106 - 1 = 1.5\%$.

But when we do the same for the 4-level transformed subimages, we get columns 3 and 4. Here we see that real Huffman codes require an increase in bit rate of $1.8425 / 1.6103 - 1 = 14.4\%$.

Comparing the results for each subimage in columns 3 and 4, we see that most of the increase in bit rate arises in the three level-1 subimages at the bottom of the columns. This is because each of the probability histograms for these subimages (see figure) contains one probability that is greater than 0.5. Huffman codes cannot allocate a word length of less than 1 bit to a given event, and so they start to lose efficiency rapidly when $-\log_2 p_i$ becomes less than 1, i.e. when $p_i > 0.5$.
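A small numerical check (our own illustration, with a hypothetical binary source) makes this loss concrete: when one symbol has probability 0.8, its ideal cost is only about 0.32 bits, but a Huffman code must still spend a whole bit on it.

```python
import math

# Hypothetical binary source with p(zero) = 0.8. The ideal cost of the common
# symbol is -log2(0.8) ~ 0.32 bits, but Huffman must assign it a 1-bit codeword.
p0 = 0.8
H_ideal = -(p0 * math.log2(p0) + (1 - p0) * math.log2(1 - p0))  # ~0.722 bits/symbol
H_huff = 1.0                       # both symbols get 1-bit codewords
overhead = H_huff / H_ideal - 1    # ~0.385, i.e. about 38% extra bits
```

This is the regime ($p_i > 0.5$) where the level-1 subimages sit, which is what motivates run-length coding below.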

Run-length codes (RLCs) are a simple and effective way of improving the efficiency of Huffman coding when one event is much more probable than all of the others combined. They operate as follows:

• The pels of the subimage are scanned sequentially (usually in columns or rows) to form a long 1-dimensional vector.
• Each run of consecutive zero samples (the most probable events) in the vector is coded as a single event.
• Each non-zero sample is coded as a single event in the normal way.
• The two types of event (runs-of-zeros and non-zero samples) are allocated separate sets of codewords in the same Huffman code, which may be designed from a histogram showing the frequencies of all events.
• To limit the number of run events, the maximum run length may be limited to a certain value (we have used 128) and runs longer than this may be represented by two or more run codes in sequence, with negligible loss of efficiency.
Hence RLC may be added before Huffman coding as an extra processing step, which converts the most probable event into many separate events, each of which has $p_i < 0.5$ and may therefore be coded efficiently. Figure 2 shows the new probability histograms and entropies for level 1 of the Haar transform when RLC is applied to the zero event of the three bandpass subimages. Comparing this with a previous figure, note the absence of the high-probability zero events and the new states to the right of the original histograms corresponding to the run lengths.
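The run-length step described above can be sketched as follows (an illustrative Python version, not the module's own code; the event representation and function name `rlc_events` are our choices). It scans a sample vector, emits one event per run of zeros (capped at a maximum run length, 128 in the text), and one event per non-zero sample:

```python
def rlc_events(samples, max_run=128):
    """Turn a scanned sample vector into RLC events (illustrative sketch)."""
    events = []
    run = 0
    for x in samples:
        if x == 0:
            run += 1
            if run == max_run:            # cap run length; long runs become
                events.append(('run', run))  # two or more run events in sequence
                run = 0
        else:
            if run:                        # flush any pending run of zeros
                events.append(('run', run))
                run = 0
            events.append(('val', x))      # non-zero sample coded normally
    if run:
        events.append(('run', run))        # trailing run of zeros
    return events

v = [0, 0, 0, 5, 0, 0, -3, 0]
events = rlc_events(v)
# -> [('run', 3), ('val', 5), ('run', 2), ('val', -3), ('run', 1)]
```

Both event types can then share one Huffman code, designed from a histogram over all run-length and amplitude events.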

The total entropy per event for an RLC subimage is calculated as before from the entropy histogram. However, to get the entropy per pel, we scale the entropy by the ratio of the number of events (runs and non-zero samples) in the subimage to the number of pels in the subimage (note that with RLC this ratio will no longer equal one - it will hopefully be much less).
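The per-pel scaling is a one-line calculation; here it is with small, hypothetical numbers (the counts and bits-per-event value are invented purely to show the formula):

```python
# Hypothetical numbers: an 8-pel vector coded as 5 RLC events
# (e.g. 3 runs of zeros + 2 non-zero samples), mean 2.0 bits per event.
n_pels = 8
n_events = 5
H_per_event = 2.0
H_per_pel = H_per_event * n_events / n_pels   # = 1.25 bits/pel
```

The fewer events per pel the clustering of zeros produces, the lower the bit rate per pel.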

Figure 2 gives the entropies per pel after RLC for each subimage, which are now less than the corresponding entropies in the earlier figure. This is because RLC takes advantage of spatial clustering of the zero samples in a subimage, rather than just depending on the histogram of amplitudes.

Clearly if all the zeros were clustered into a single run, this could be coded much more efficiently than if they are distributed into many runs. The entropy of the zero event tells us the mean number of bits to code each zero pel if the zero pels are distributed randomly, ie if the probability of a given pel being zero does not depend on the amplitudes of any nearby pels.

In typical bandpass subimages, non-zero samples tend to be clustered around key features such as object boundaries and areas of high texture. Hence RLC usually reduces the entropy of the data to be coded. There are many other ways to take advantage of clustering (correlation) of the data - RLC is just one of the simplest.

In Figure 1, comparing column 5 with column 3, we see the modest (7%) reduction in entropy per pel achieved by RLC, due to clustering in the Lenna image. The main advantage of RLC is apparent in column 6, which shows the mean bit rate per pel when we use a real Huffman code on the RLC histograms of Figure 2. The increase in bit rate over the RLC entropy is only $1.5071 / 1.4977 - 1 = 0.63\%$, compared with 14.4% when RLC is not used (columns 3 and 4).

Finally, comparing column 6 with column 3, we see that, relative to the simple entropy measure, combined RLC and Huffman coding can reduce the bit rate by $1 - 1.5071 / 1.6103 = 6.4\%$. The closeness of this ratio to unity justifies our use of simple entropy as a tool for assessing the information compression properties of the Haar transform - and of other energy compression techniques as we meet them.

The following is the listing of the M-file to calculate the Huffman entropy from a given histogram.



```matlab
% Find Huffman code sizes: JPEG fig K.1, procedure Code_size.
% huffhist contains the histogram of event counts (frequencies).
freq = huffhist(:);
codesize = zeros(size(freq));
others = -ones(size(freq));  % Pointers to next symbols in code tree.

% Find non-zero entries in freq, and loop until only 1 entry left.
nz = find(freq > 0);
while length(nz) > 1,
  % Find v1 for least value of freq(v1) > 0.
  [y,i] = min(freq(nz));
  v1 = nz(i);
  % Find v2 for next least value of freq(v2) > 0.
  nz = nz([1:(i-1) (i+1):length(nz)]);  % Remove v1 from nz.
  [y,i] = min(freq(nz));
  v2 = nz(i);
  % Combine frequency values.
  freq(v1) = freq(v1) + freq(v2);
  freq(v2) = 0;
  codesize(v1) = codesize(v1) + 1;
  % Increment code sizes for all codewords in this tree branch.
  while others(v1) > -1,
    v1 = others(v1);
    codesize(v1) = codesize(v1) + 1;
  end
  others(v1) = v2;
  codesize(v2) = codesize(v2) + 1;
  while others(v2) > -1,
    v2 = others(v2);
    codesize(v2) = codesize(v2) + 1;
  end
  nz = find(freq > 0);
end

% Generate Huffman entropies by multiplying probabilities by code sizes.
huffent = (huffhist(:)/sum(huffhist(:))) .* codesize;
```


