# Distortion Measures

A distortion measure is a mathematical quantity that specifies how close an approximation is to its original, according to some distortion criterion. When looking at compressed data, it is natural to think of the distortion in terms of the numerical difference between the original data and the reconstructed data. However, when the data to be compressed is an image, such a measure may not yield the intended result.

For example, if the reconstructed image is the same as the original image except that it is shifted to the right by one vertical scan line, an average human observer would have a hard time distinguishing it from the original and would therefore conclude that the distortion is small. However, when the calculation is carried out numerically, we find a large distortion, because of the large changes in individual pixels of the reconstructed image. The problem is that we need a measure of perceptual distortion, rather than a naive numerical one. However, the study of perceptual distortions is beyond the scope of this book.

Of the many numerical distortion measures that have been defined, we present the three most commonly used in image compression. If we are interested in the average pixel difference, the mean square error (MSE) $\sigma^2$ is often used. It is defined as

$$\sigma^2 = \frac{1}{N}\sum_{n=1}^{N}(x_n - y_n)^2$$

where $x_n$, $y_n$, and $N$ are the input data sequence, reconstructed data sequence, and length of the data sequence, respectively.
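As a concrete illustration, the MSE of a short data sequence can be computed directly. The following is a minimal sketch in plain Python; the function name and the sample data are illustrative, not from the text:

```python
def mse(x, y):
    """Mean square error between original sequence x and reconstruction y."""
    n = len(x)
    return sum((xn - yn) ** 2 for xn, yn in zip(x, y)) / n

# Each reconstructed value differs from the original by exactly 1,
# so every squared error is 1 and the average is 1.0.
print(mse([100, 110, 120, 130], [101, 109, 121, 129]))  # 1.0
```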

If we are interested in the size of the error relative to the signal, we can measure the signal-to-noise ratio (SNR) by taking the ratio of the average square of the original data sequence to the mean square error (MSE), as discussed in Chapter. In decibel units (dB), it is defined as

$$\mathrm{SNR} = 10 \log_{10} \frac{\sigma_x^2}{\sigma_d^2}$$

where $\sigma_x^2$ is the average square value of the original data sequence and $\sigma_d^2$ is the MSE. Another commonly used measure of distortion is the peak signal-to-noise ratio (PSNR), which measures the size of the error relative to the peak value of the signal $x_{\mathrm{peak}}$. It is given by

$$\mathrm{PSNR} = 10 \log_{10} \frac{x_{\mathrm{peak}}^2}{\sigma_d^2}$$
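The two ratio measures can be sketched the same way. This is a minimal illustration in plain Python; the function names and the 8-bit peak default of 255 are assumptions for the example, not part of the text:

```python
import math

def mse(x, y):
    """Mean square error between original x and reconstruction y."""
    return sum((xn - yn) ** 2 for xn, yn in zip(x, y)) / len(x)

def snr_db(x, y):
    """SNR in dB: average squared value of the original over the MSE."""
    sigma_x2 = sum(xn ** 2 for xn in x) / len(x)  # average signal power
    return 10 * math.log10(sigma_x2 / mse(x, y))

def psnr_db(x, y, peak=255.0):
    """PSNR in dB, relative to an assumed peak value (255 for 8-bit pixels)."""
    return 10 * math.log10(peak ** 2 / mse(x, y))

# With an MSE of 1.0 and an 8-bit peak, PSNR = 10 log10(255^2) ≈ 48.13 dB.
print(psnr_db([100, 110, 120, 130], [101, 109, 121, 129]))
```

Note that PSNR depends only on the peak value and the MSE, so for a fixed bit depth it ranks reconstructions exactly as MSE does, just on a logarithmic scale.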