# The JPEG-LS Standard

Generally, we would likely apply a lossless compression scheme to images that are critical in some sense, say medical images of a brain, or perhaps images that are difficult or costly to acquire. A scheme in competition with the lossless mode provided in JPEG2000 is the JPEG-LS standard, specifically aimed at lossless encoding. The main advantage of JPEG-LS over JPEG2000 is that JPEG-LS is based on a low-complexity algorithm. JPEG-LS is part of a larger ISO effort aimed at better compression of medical images.

JPEG-LS is in fact the current ISO/ITU standard for lossless or "near-lossless" compression of continuous-tone images. The core algorithm in JPEG-LS is called Low Complexity Lossless Compression for Images (LOCO-I), proposed by Hewlett-Packard. The design of this algorithm is motivated by the observation that complexity reduction is often more important overall than any small increase in compression offered by more complex algorithms.

LOCO-I exploits a concept called context modeling. The idea of context modeling is to take advantage of the structure in the input source: the conditional probabilities relating pixel values to the values that precede them in the image. This extra knowledge is called the context. If the input source contains substantial structure, as is usually the case, we could potentially compress it using fewer bits than the 0th-order entropy.

Figure: Performance comparison of JPEG and JPEG2000 on different image types: (a) natural images; (b) computer-generated images; (c) medical images.

Figure: Comparison of JPEG and JPEG2000: (a) original image; (b) JPEG (left) and JPEG2000 (right) images compressed at 0.75 bpp; (c) JPEG (left) and JPEG2000 (right) images compressed at 0.25 bpp.

Figure: The JPEG-LS context model.

As a simple example, suppose we have a binary source with P(0) = 0.4 and P(1) = 0.6. Then the 0th-order entropy is H(S) = −0.4 log2(0.4) − 0.6 log2(0.6) ≈ 0.97. Now suppose we also know that this source has the property that if the previous symbol is 0, the probability of the current symbol being 0 is 0.8, and if the previous symbol is 1, the probability of the current symbol being 0 is 0.1.

If we use the previous symbol as our context, we can divide the input symbols into two sets, corresponding to context 0 and context 1, respectively. Then the entropy of each of the two sets is

H(S1) = −0.8 log2(0.8) − 0.2 log2(0.2) ≈ 0.72

H(S2) = −0.1 log2(0.1) − 0.9 log2(0.9) ≈ 0.47

The average bit-rate for the entire source would be 0.4 × 0.72 + 0.6 × 0.47 ≈ 0.57, which is substantially less than the 0th-order entropy of the entire source in this case.
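The arithmetic above is easy to verify. The sketch below (plain Python, standard library only) computes the marginal entropy and the context-conditioned average, using exactly the probabilities assumed in the example.

```python
import math

def entropy(probs):
    """Shannon entropy, in bits, of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Marginal (0th-order) statistics of the binary source.
p0, p1 = 0.4, 0.6
h_marginal = entropy([p0, p1])

# Conditional statistics: P(0 | prev = 0) = 0.8, P(0 | prev = 1) = 0.1.
h_context0 = entropy([0.8, 0.2])   # entropy of symbols seen in context 0
h_context1 = entropy([0.1, 0.9])   # entropy of symbols seen in context 1

# Weight each context by how often it occurs (the marginal probabilities).
h_conditional = p0 * h_context0 + p1 * h_context1

print(round(h_marginal, 2), round(h_conditional, 2))  # → 0.97 0.57
```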

LOCO-I uses a context model. In raster scan order, the context pixels a (left), b (above), c (above-left), and d (above-right) all appear before the current pixel x. Thus, this is called a causal context.

LOCO-I can be broken down into three components:

1. Prediction. Predicting the value of the next sample x' using a causal template
2. Context determination. Determining the context in which x' occurs
3. Residual coding. Entropy coding of the prediction residual conditioned by the context of x'

Prediction

A better version of prediction can use an adaptive model based on a calculation of the local edge direction. However, because JPEG-LS is aimed at low complexity, the LOCO-I algorithm instead uses a fixed predictor that performs primitive tests to detect vertical and horizontal edges. The fixed predictor used by the algorithm is given as follows:

x̂ = min(a, b)   if c ≥ max(a, b)
x̂ = max(a, b)   if c ≤ min(a, b)
x̂ = a + b − c   otherwise
It is easy to see that this predictor switches between three simple predictors. It outputs b when there is a vertical edge to the left of the current location; it outputs a when there is a horizontal edge above the current location; and finally it outputs a + b − c when the neighboring samples are relatively smooth.
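As a sketch, this fixed predictor (often called the median edge detector, MED) can be written directly from the three cases, with `a`, `b`, `c` being the left, above, and above-left neighbors of the current sample.

```python
def med_predict(a, b, c):
    """LOCO-I fixed predictor (median edge detector).

    a = left neighbor, b = above, c = above-left of the current sample.
    Equivalent to median(a, b, a + b - c)."""
    if c >= max(a, b):
        return min(a, b)   # edge detected: predict the smaller neighbor
    if c <= min(a, b):
        return max(a, b)   # edge detected: predict the larger neighbor
    return a + b - c       # smooth region: planar prediction
```

For a vertical edge just left of the current pixel (a and c dark, b bright), the prediction follows b; for a horizontal edge above (b and c dark, a bright), it follows a.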

Context Determination

The context model that conditions the current prediction error (the residual) is indexed using a three-component context vector Q = (q1, q2, q3), whose components are the local gradients

q1 = d − b,  q2 = b − c,  q3 = c − a
These differences capture the local smoothness or edge content surrounding the current sample. Because they can take on a wide range of values, the underlying context model would be huge, making the context-modeling approach impractical. To solve this problem, parameter-reduction methods are needed.

An effective method is to quantize these differences so that they can be represented by a limited number of values. The components of Q are quantized using a quantizer with decision boundaries −T, …, −1, 0, 1, …, T. In JPEG-LS, T = 4. The context size is further reduced by replacing any context vector Q whose first nonzero element is negative by −Q.

Therefore, the number of different context states is ((2T + 1)^3 + 1)/2 = 365 in total. The vector Q is then mapped one-to-one into an integer in [0, 364].
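The two reduction steps (gradient quantization, then Q/−Q folding) can be sketched as follows. The thresholds 3, 7, 21 are the JPEG-LS defaults for 8-bit samples; the final packing into a single integer here is a simple positional encoding for illustration, not the specific bijection onto [0, 364] used by real codecs.

```python
def quantize_gradient(g, t1=3, t2=7, t3=21):
    """Quantize a local gradient into one of 2T + 1 = 9 levels (T = 4).
    t1..t3 are the JPEG-LS default thresholds for 8-bit samples."""
    sign = -1 if g < 0 else 1
    g = abs(g)
    if g == 0:
        q = 0
    elif g < t1:
        q = 1
    elif g < t2:
        q = 2
    elif g < t3:
        q = 3
    else:
        q = 4
    return sign * q

def context_index(a, b, c, d):
    """Build Q = (q1, q2, q3) and fold each Q/-Q pair into one context."""
    q = [quantize_gradient(d - b),
         quantize_gradient(b - c),
         quantize_gradient(c - a)]
    # If the first nonzero component is negative, use -Q instead
    # (the codec compensates by flipping the sign of the residual).
    for comp in q:
        if comp < 0:
            q = [-x for x in q]
            break
        if comp > 0:
            break
    q1, q2, q3 = q
    # Illustrative packing: unique per merged context, lands in [0, 404].
    return q1 * 81 + (q2 + 4) * 9 + (q3 + 4)
```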

Residual Coding

For any image, the prediction residual is confined to a finite range determined by the alphabet size α. It can be shown that the prediction residuals follow a two-sided geometric distribution (TSGD). As a result, they are coded using adaptively selected codes based on Golomb codes, which are optimal for sequences with geometric distributions.
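As an illustration, a Golomb code with power-of-two divisor m = 2^k (a Rice code, the sub-family JPEG-LS draws on) is simple to emit: the two-sided residual is first mapped to a non-negative integer, then coded as a unary quotient plus k remainder bits. In the real codec the parameter k is selected adaptively per context; here it is just an argument.

```python
def rice_encode(e, k):
    """Golomb code with divisor m = 2**k for a prediction residual e.

    The two-sided residual is interleaved into a non-negative integer
    (0, -1, 1, -2, ... -> 0, 1, 2, 3, ...), turning the two-sided
    geometric distribution into a one-sided one."""
    m_e = 2 * e if e >= 0 else -2 * e - 1
    q, r = divmod(m_e, 1 << k)
    bits = "1" * q + "0"                      # quotient, unary-coded
    if k > 0:
        bits += format(r, "0{}b".format(k))   # remainder, k fixed bits
    return bits
```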

Near-Lossless Mode

The JPEG-LS standard also offers a near-lossless mode, in which the reconstructed samples deviate from the original by no more than a preset amount δ. The main lossless JPEG-LS mode can be considered a special case of the near-lossless mode with δ = 0. Near-lossless compression is achieved using quantization: residuals are quantized using a uniform quantizer having intervals of length 2δ + 1. The quantized value of a residual e is given by

ê = sign(e) · ⌊(|e| + δ) / (2δ + 1)⌋
Since δ can take on only a small number of integer values, the division operation can be implemented efficiently using lookup tables. In near-lossless mode, the prediction and context-determination steps described previously are based on the quantized values only.
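The quantization and reconstruction pair can be sketched as below, with `delta` playing the role of δ above; the reconstruction error is guaranteed to stay within ±delta.

```python
def quantize_residual(e, delta):
    """Uniform quantization of a residual with step 2*delta + 1."""
    if e >= 0:
        return (e + delta) // (2 * delta + 1)
    return -((-e + delta) // (2 * delta + 1))

def reconstruct(q, delta):
    """Inverse mapping; |reconstruct(quantize_residual(e)) - e| <= delta."""
    return q * (2 * delta + 1)
```

With delta = 0 the pair reduces to the identity, recovering the lossless mode.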