Quantization

Quantization is a lossy compression technique, widely used in image processing, that maps a range of input values to a single quantum value. When the number of discrete symbols in a given stream is reduced, the stream becomes more compressible. For example, reducing the number of colors required to represent a digital image makes it possible to reduce its file size. Specific applications include DCT data quantization in JPEG and DWT data quantization in JPEG2000.

Typical rate-distortion function

Uniform Scalar Quantization

A uniform scalar quantizer partitions the domain of input values into equally spaced intervals, except possibly at the two outer intervals. The endpoints of the partition intervals are called the quantizer's decision boundaries. The output or reconstruction value corresponding to each interval is taken to be the midpoint of the interval. The length of each interval is referred to as the step size, denoted by the symbol Δ. Uniform scalar quantizers are of two types: midrise and midtread. A midtread quantizer has zero as one of its output values, whereas a midrise quantizer has a decision boundary at zero. A midrise quantizer is used with an even number of output levels, and a midtread quantizer with an odd number.

A midtread quantizer is important when the source data represents the zero value by fluctuating between small positive and negative numbers. Applying the midtread quantizer in this case would produce an accurate and steady representation of the value zero. For the special case Δ = 1, we can simply compute the output values for these quantizers as

Qmidrise(x) = ⌈x⌉ − 1/2

Qmidtread(x) = ⌊x + 0.5⌋
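These two rules can be sketched in a few lines of Python; the function names are illustrative, and the delta parameter generalizes the Δ = 1 formulas above to an arbitrary step size.

```python
import math

def midrise(x, delta=1.0):
    # Reconstruction levels at ±Δ/2, ±3Δ/2, ...; zero is a decision boundary.
    # For delta = 1 this reduces to ceil(x) - 1/2.
    return delta * (math.ceil(x / delta) - 0.5)

def midtread(x, delta=1.0):
    # Zero is a reconstruction level; for delta = 1 this is floor(x + 0.5).
    return delta * math.floor(x / delta + 0.5)
```

Note that inputs fluctuating slightly around zero, such as ±0.1, all map to 0.0 under the midtread rule, while the midrise rule flips between ±Δ/2.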


Uniform scalar quantizers: (a) midrise; (b) midtread

The goal for the design of a successful uniform quantizer is to minimize the distortion for a given source input with a desired number of output values. This can be done by adjusting the step size Δ to match the input statistics.

Let's examine the performance of an M-level quantizer. Let B = {b0, b1, …, bM} be the set of decision boundaries and Y = {y1, y2, …, yM} be the set of reconstruction or output values. Suppose the input is uniformly distributed in the interval [−Xmax, Xmax]. The rate of the quantizer is

R = ⌈log2 M⌉

That is, R is the number of bits required to code M things — in this case, the M output levels.

The step size Δ is given by

Δ = 2Xmax / M

since the entire range of input values is from −Xmax to Xmax. For bounded input, the quantization error caused by the quantizer is referred to as granular distortion. If the quantizer maps a whole range of values, from some maximum value up to ∞, to a single output value (and similarly for large negative values), that part of the distortion is called the overload distortion.

To get an overall figure for granular distortion, notice that the decision boundaries for a midrise quantizer are [(i − 1)Δ, iΔ], i = 1, …, M/2, covering positive data x (with a mirrored set of intervals for negative x values). The output values yi are the interval midpoints iΔ − Δ/2, i = 1, …, M/2,

Quantization error of a uniformly distributed source

again just considering positive data. The total distortion is twice the sum over the positive data, or

D = 2 Σ (i = 1 to M/2) ∫ from (i−1)Δ to iΔ of (x − (iΔ − Δ/2))² · (1 / (2Xmax)) dx

where we divide by the range of X, namely 2Xmax, to normalize to a value of at most 1.

Since the reconstruction values yi are the midpoints of each interval, the quantization error must lie within the range [−Δ/2, Δ/2]. The figure above shows the quantization error for a uniformly distributed source. The quantization error in this case is also uniformly distributed. Therefore, the average squared error is the same as the variance σd² of the quantization error, calculated from just the interval [0, Δ] with error values in [−Δ/2, Δ/2]. The error value at x is e(x) = x − Δ/2, so the variance of errors is given by

σd² = (1/Δ) ∫ from 0 to Δ of (x − Δ/2)² dx = Δ² / 12

Similarly, the signal variance is σx² = (2Xmax)² / 12, so if the quantizer is n bits, M = 2^n, then we have

SQNR = 10 log10 (σx² / σd²) = 10 log10 ((2Xmax)² / Δ²) = 10 log10 M² = 20 n log10 2 ≈ 6.02 n (dB)

Hence, we have the important result that adding one bit to the quantizer increases the signal-to-quantization-noise ratio by 6.02 dB. More sophisticated estimates of D result from more sophisticated models of the probability distribution of errors.
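The 6.02 dB-per-bit rule can be checked numerically. The sketch below (the function name and sample count are illustrative choices, not from the original text) quantizes a uniform source on [−Xmax, Xmax] with an n-bit midrise quantizer and measures the resulting SQNR:

```python
import math
import random

def sqnr_db(n_bits, x_max=1.0, n_samples=200_000, seed=0):
    # Measure the signal-to-quantization-noise ratio (in dB) of an
    # n-bit uniform midrise quantizer applied to a uniform source.
    rng = random.Random(seed)
    m = 2 ** n_bits
    delta = 2 * x_max / m
    signal = noise = 0.0
    for _ in range(n_samples):
        x = rng.uniform(-x_max, x_max)
        # Midrise reconstruction: midpoint of the cell containing x.
        y = delta * (math.floor(x / delta) + 0.5)
        signal += x * x
        noise += (x - y) ** 2
    return 10 * math.log10(signal / noise)
```

Running sqnr_db(9) - sqnr_db(8) yields approximately 6.0 dB, matching 20 log10 2 per added bit.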

Nonuniform Scalar Quantization

If the input source is not uniformly distributed, a uniform quantizer may be inefficient. Increasing the number of decision levels within the region where the source is densely distributed can effectively lower granular distortion. In addition, without having to increase the total number of decision levels, we can enlarge the intervals in the region where the source is sparsely distributed. Such nonuniform quantizers thus have nonuniformly defined decision boundaries.

There are two common approaches for nonuniform quantization: the Lloyd-Max quantizer and the companded quantizer.

Lloyd-Max Quantizer. For a uniform quantizer, the total distortion is equal to the granular distortion. If the source distribution is not uniform, we must explicitly consider its probability distribution (probability density function) fX(x). Now we need to find the correct decision boundaries bi and reconstruction values yi by solving for both simultaneously. To do so, we plug the variables bi, yi into a total distortion measure

D = Σ (j = 1 to M) ∫ from b(j−1) to bj of (x − yj)² fX(x) dx

Then we can minimize the total distortion by setting the derivative to zero. Differentiating with respect to yj yields the set of reconstruction values

yj = [∫ from b(j−1) to bj of x fX(x) dx] / [∫ from b(j−1) to bj of fX(x) dx]

This says that the optimal reconstruction value is the weighted centroid of the x interval. Differentiating with respect to bj and setting the result to zero yields

bj = (yj + y(j+1)) / 2

This gives a decision boundary bj at the midpoint of two adjacent reconstruction values. Solving these two equations simultaneously is carried out by iteration. The result is termed the Lloyd-Max quantizer.



Starting with an initial guess of the optimal reconstruction levels, the algorithm above iteratively estimates the optimal boundaries, based on the current estimate of the reconstruction levels. It then updates the current estimate of the reconstruction levels, using the newly computed boundary information. The process is repeated until the reconstruction levels converge.
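The iteration can be sketched on an empirical sample set (where it is equivalent to one-dimensional k-means); the function below is an illustrative implementation under that assumption, not the textbook's code:

```python
import random

def lloyd_max(samples, m, n_iter=50):
    # Alternate between the two optimality conditions:
    #   boundaries at midpoints of adjacent reconstruction levels,
    #   reconstruction levels at the centroid of each cell.
    samples = sorted(samples)
    lo, hi = samples[0], samples[-1]
    # Initial guess: uniformly spaced reconstruction levels.
    y = [lo + (hi - lo) * (i + 0.5) / m for i in range(m)]
    b = []
    for _ in range(n_iter):
        b = [(y[i] + y[i + 1]) / 2 for i in range(m - 1)]
        cells = [[] for _ in range(m)]
        for x in samples:
            j = sum(x > t for t in b)  # index of the cell containing x
            cells[j].append(x)
        # Keep the old level if a cell happens to be empty.
        y = [sum(c) / len(c) if c else y[j] for j, c in enumerate(cells)]
    return b, y
```

For a uniformly distributed source the iteration converges to the uniform quantizer, as expected.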

Companded Quantizer. In companded quantization, the input is first mapped by a compressor function G and then quantized using a uniform quantizer. After transmission, the quantized values are mapped back using an expander function G⁻¹. The figure below shows the block diagram for the companding process, where X̂ is the quantized version of X. If the input source is bounded by xmax, then any nonuniform quantizer can be represented as a companded quantizer. The two commonly used companders are the µ-law and A-law companders.

Companded quantization
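As an illustration of the compressor/expander pair, here is a sketch of µ-law companding (with µ = 255, the value used in North American telephony) wrapped around an 8-bit uniform quantizer; the function names are my own:

```python
import math

MU = 255.0  # mu = 255, as used in G.711-style telephony

def mu_law_compress(x, x_max=1.0):
    # Compressor G: expands small amplitudes, compresses large ones.
    s = math.copysign(1.0, x)
    return s * x_max * math.log(1 + MU * abs(x) / x_max) / math.log(1 + MU)

def mu_law_expand(y, x_max=1.0):
    # Expander G^-1: exact inverse of the compressor.
    s = math.copysign(1.0, y)
    return s * (x_max / MU) * ((1 + MU) ** (abs(y) / x_max) - 1)

def companded_quantize(x, n_bits=8, x_max=1.0):
    # Compress, quantize uniformly (midrise), then expand.
    m = 2 ** n_bits
    delta = 2 * x_max / m
    g = mu_law_compress(x, x_max)
    q = delta * (math.floor(g / delta) + 0.5)
    return mu_law_expand(q, x_max)
```

Because G stretches small amplitudes before the uniform quantizer, small input values receive a finer effective step size than large ones.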

Basic vector quantization procedure

Vector Quantization

One of the fundamental ideas in Shannon's original work on information theory is that any compression system performs better if it operates on vectors or groups of samples rather than on individual symbols or samples. We can form vectors of input samples by concatenating a number of consecutive samples into a single vector. For example, an input vector might be a segment of a speech sample, a group of consecutive pixels in an image, or a chunk of data in any other format.

The idea behind vector quantization (VQ) is similar to that of scalar quantization but extended into multiple dimensions. Instead of representing values within an interval in one-dimensional space by a reconstruction value, as in scalar quantization, in VQ an n-component code vector represents vectors that lie within a region in n-dimensional space. A collection of these code vectors forms the codebook for the vector quantizer.

Since there is no implicit ordering of code vectors, as there is in the one-dimensional case, an index set is also needed to index into the codebook. The figure above shows the basic vector quantization procedure. In the diagram, the encoder finds the closest code vector to the input vector and outputs the associated index. On the decoder side, exactly the same codebook is used. When the coded index of the input vector is received, a simple table lookup is performed to determine the reconstruction vector.

Finding the appropriate codebook and searching for the closest code vector at the encoder end may require considerable computational resources. However, the decoder can execute quickly, since only a constant-time operation is needed to obtain the reconstruction. Because of this property, VQ is attractive for systems that have plentiful resources at the encoder but only limited resources at the decoder, where quick execution is needed. Most multimedia applications fall into this category.
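This encoder/decoder asymmetry can be sketched directly: an exhaustive nearest-neighbor search at the encoder versus a table lookup at the decoder (the codebook in the example is a toy one chosen for illustration):

```python
def vq_encode(vector, codebook):
    # Exhaustive search for the nearest code vector (squared Euclidean).
    best, best_d = 0, float("inf")
    for i, c in enumerate(codebook):
        d = sum((a - b) ** 2 for a, b in zip(vector, c))
        if d < best_d:
            best, best_d = i, d
    return best

def vq_decode(index, codebook):
    # Constant-time table lookup.
    return codebook[index]
```

For example, with the 2-D codebook [(0, 0), (1, 0), (0, 1), (1, 1)], the input vector (0.9, 0.2) encodes to index 1 and decodes back to the code vector (1, 0).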
