Lossless compression techniques

Most lossless compression programs do two things in sequence: the first step generates a statistical model for the input data, and the second step uses this model to map input data to bit sequences in such a way that "probable" (e.g. frequently encountered) data will produce shorter output than "improbable" data.

The primary encoding algorithms used to produce bit sequences are Huffman coding and arithmetic coding. Arithmetic coding achieves compression rates close to the best possible for a particular statistical model, which is given by the information entropy, whereas Huffman compression is simpler and faster but produces poor results for models that deal with symbol probabilities close to 1.
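As a rough illustration of that entropy bound (a minimal sketch; the sample string and function name are arbitrary), the empirical entropy of a symbol distribution can be computed as follows:

    import math
    from collections import Counter

    def entropy_bits_per_symbol(data):
        """Shannon entropy of the empirical symbol distribution, in bits per symbol."""
        counts = Counter(data)
        total = len(data)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    sample = "this is an example of a huffman tree"   # arbitrary sample text
    h = entropy_bits_per_symbol(sample)
    print(f"entropy: {h:.3f} bits/symbol, lower bound: {h * len(sample):.1f} bits total")

No real coder quite reaches this bound for finite inputs, but arithmetic coding can come very close to it for a given model.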

There are two primary ways of constructing statistical models: in a static model, the data is analyzed and a model is constructed, then this model is stored with the compressed data. This approach is simple and modular, but has the disadvantage that the model itself can be expensive to store, and also that it forces a single model to be used for all data being compressed, and so performs poorly on files containing heterogeneous data. Adaptive models dynamically update the model as the data is compressed. Both the encoder and decoder begin with a trivial model, yielding poor compression of initial data, but as they learn more about the data performance improves. Most popular types of compression used in practice now use adaptive coders.
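As an illustration of the adaptive idea (a simplified order-0 sketch, not any particular coder's implementation), both encoder and decoder can maintain the same symbol-frequency model and update it identically after every symbol, so no model ever needs to be transmitted:

    class AdaptiveOrder0Model:
        """Order-0 adaptive model: starts trivial (all symbols equally likely)
        and updates counts as symbols are seen. Encoder and decoder apply the
        same updates, so they stay in sync without transmitting the model."""

        def __init__(self, alphabet):
            # Start every count at 1 so no symbol ever has zero probability.
            self.counts = {s: 1 for s in alphabet}
            self.total = len(alphabet)

        def probability(self, symbol):
            return self.counts[symbol] / self.total

        def update(self, symbol):
            self.counts[symbol] += 1
            self.total += 1

    model = AdaptiveOrder0Model(alphabet="abc")
    for s in "aababc":
        p = model.probability(s)   # an entropy coder would spend about -log2(p) bits here
        model.update(s)            # the decoder performs the identical update after decoding s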

Lossless compression methods may be categorized according to the type of data they are designed to compress. While, in principle, any general-purpose lossless compression algorithm (general-purpose meaning that it can compress any bitstring) can be used on any type of data, many are unable to achieve significant compression on data that is not of the form they were designed to compress. Many of the lossless compression techniques used for text also work reasonably well for indexed images.

Text

Statistical modeling algorithms for text (or text-like binary data such as executables) include:

· Context Tree Weighting method (CTW)

· Burrows-Wheeler transform (block sorting preprocessing that makes compression more efficient)

· LZ77 (used by DEFLATE)

· LZW

Multimedia

These techniques take advantage of specific characteristics of images, such as the common phenomenon of contiguous 2-D areas of similar tones. Every pixel but the first is replaced by the difference to its left neighbor, which gives small values a much higher probability than large values. The same idea is often applied to sound files and works well on files that contain mostly low frequencies and low volumes. For images, this step can be repeated by taking the difference to the top pixel, and in videos the difference to the pixel in the next frame can be taken.
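As a rough sketch of this left-neighbor differencing (assuming 8-bit samples and a plain Python list for a scan line), the transform and its inverse might look like this:

    def delta_encode(row):
        """Replace every pixel but the first with the difference to its left neighbor."""
        out = [row[0]]
        for prev, cur in zip(row, row[1:]):
            out.append((cur - prev) % 256)   # wrap to stay in 8 bits; small deltas dominate
        return out

    def delta_decode(deltas):
        """Inverse transform: a running sum restores the original row."""
        row = [deltas[0]]
        for d in deltas[1:]:
            row.append((row[-1] + d) % 256)
        return row

    scanline = [100, 101, 101, 103, 104, 104, 200]
    assert delta_decode(delta_encode(scanline)) == scanline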

A hierarchical version of this technique takes neighboring pairs of data points, stores their difference and sum, and continues on a higher level, with lower resolution, using the sums. This is called a discrete wavelet transform. JPEG 2000 additionally uses data points from other pairs and multiplication factors to mix them into the difference. These factors must be integers, so that the result is an integer under all circumstances. The values therefore grow, increasing the raw size, but the distribution of values should become more peaked and thus more compressible.
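The basic pair-wise step can be sketched as a one-level Haar-style decomposition (a simplified illustration; JPEG 2000 uses integer lifting filters with more taps, which is where the multiplication factors above come in):

    def haar_level(samples):
        """One level: store (sum, difference) per neighboring pair.
        The sums form the lower-resolution signal for the next level."""
        assert len(samples) % 2 == 0
        sums = [a + b for a, b in zip(samples[0::2], samples[1::2])]
        diffs = [a - b for a, b in zip(samples[0::2], samples[1::2])]
        return sums, diffs

    def haar_inverse(sums, diffs):
        out = []
        for s, d in zip(sums, diffs):
            a = (s + d) // 2        # exact: s and d always have the same parity
            out.extend([a, s - a])
        return out

    signal = [10, 12, 11, 13, 50, 52, 51, 49]
    low, high = haar_level(signal)   # recurse on `low` for further levels
    assert haar_inverse(low, high) == signal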

The adaptive encoding uses the probabilities from the previous sample in sound encoding, from the left and upper pixel in image encoding, and additionally from the previous frame in video encoding. In the wavelet transformation the probabilities are also passed through the hierarchy.

Huffman coding

Huffman tree generated from the exact frequencies of the text "this is an example of a huffman tree". The frequencies of each character are listed below; the codes themselves depend on how the tree is built. Encoding the sentence with this code requires 135 bits, not counting space for the tree.

Char   Freq
space  7
a      4
e      4
f      3
h      2
i      2
m      2
n      2
s      2
t      2
l      1
o      1
p      1
r      1
u      1
x      1

In computer science and information theory, Huffman coding is an entropy encoding algorithm used for lossless data compression. The term refers to the use of a variable-length code table for encoding a source symbol (such as a character in a file) where the variable-length code table has been derived in a particular way based on the estimated probability of occurrence for each possible value of the source symbol. It was developed by David A. Huffman while he was a Ph.D. student at MIT, and published in the 1952 paper "A Method for the Construction of Minimum-Redundancy Codes".

Huffman coding uses a specific method for choosing the representation for each symbol, resulting in a prefix code (sometimes called a "prefix-free code"): the bit string representing one symbol is never a prefix of the bit string representing any other symbol. The most common characters are expressed using shorter strings of bits than are used for less common source symbols. Huffman was able to design the most efficient compression method of this type: no other mapping of individual source symbols to unique strings of bits produces a smaller average output size when the actual symbol frequencies agree with those used to create the code. A method was later found to do this in linear time if the input probabilities (also known as weights) are sorted.
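As a sketch of the classic greedy construction (repeatedly merging the two lowest-weight nodes), the following Python snippet computes the code lengths for the example sentence above; summing length times frequency reproduces the 135-bit total quoted in the caption:

    import heapq
    from collections import Counter

    def huffman_code_lengths(text):
        """Build a Huffman tree over the characters of `text` and return
        the code length assigned to each character."""
        freqs = Counter(text)
        # Heap entries: (weight, tie-break index, {char: code length so far}).
        heap = [(f, i, {ch: 0}) for i, (ch, f) in enumerate(freqs.items())]
        heapq.heapify(heap)
        tiebreak = len(heap)
        while len(heap) > 1:
            w1, _, a = heapq.heappop(heap)   # two least-frequent subtrees...
            w2, _, b = heapq.heappop(heap)
            merged = {ch: depth + 1 for ch, depth in {**a, **b}.items()}
            heapq.heappush(heap, (w1 + w2, tiebreak, merged))   # ...become one subtree
            tiebreak += 1
        return heap[0][2], freqs

    lengths, freqs = huffman_code_lengths("this is an example of a huffman tree")
    print(sum(lengths[ch] * freqs[ch] for ch in freqs))   # prints 135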

For a set of symbols with a uniform probability distribution and a number of members which is a power of two, Huffman coding is equivalent to simple binary block encoding, e.g., ASCII coding. Huffman coding is such a widespread method for creating prefix codes that the term "Huffman code" is widely used as a synonym for "prefix code" even when such a code is not produced by Huffman's algorithm.

Although Huffman coding is optimal for symbol-by-symbol coding (i.e. a stream of unrelated symbols) with a known input probability distribution, its optimality is sometimes overstated. For example, arithmetic coding and LZW coding often achieve better compression. Both of these methods can combine an arbitrary number of symbols for more efficient coding, and generally adapt to the actual input statistics; the latter is useful when input probabilities are not precisely known or vary significantly within the stream. In general, improvements arise from input symbols being related (e.g., "cat" is more common than "cta").

Lempel–Ziv–Welch (LZW)

Lempel–Ziv–Welch (LZW) is a universal lossless data compression algorithm created by Abraham Lempel, Jacob Ziv, and Terry Welch. It was published by Welch in 1984 as an improved implementation of the LZ78 algorithm published by Lempel and Ziv in 1978. The algorithm is designed to be fast to implement but is not usually optimal because it performs only limited analysis of the data.

Encoding

A dictionary is initialized to contain the single-character strings corresponding to all the possible input characters (and nothing else). The algorithm works by scanning through the input string for successively longer substrings until it finds one that is not in the dictionary. When such a string is found, it is added to the dictionary, and the index for the string less the last character (i.e., the longest substring that is in the dictionary) is retrieved from the dictionary and sent to output. The last input character is then used as the next starting point to scan for substrings.

In this way successively longer strings are registered in the dictionary and made available for subsequent encoding as single output values. The algorithm works best on data with repeated patterns, so the initial parts of a message will see little compression. As the message grows, however, the compression ratio tends asymptotically to the maximum.[1]
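A minimal sketch of this encoder (assuming byte-oriented input, an initial dictionary of the 256 single-byte strings, and no limit on dictionary size; real implementations cap the code width and may reset the table):

    def lzw_encode(data):
        """LZW encoding: emit dictionary indices for successively longer strings."""
        dictionary = {bytes([i]): i for i in range(256)}   # single-byte strings only
        next_code = 256
        current = b""
        output = []
        for byte in data:
            candidate = current + bytes([byte])
            if candidate in dictionary:
                current = candidate                 # keep extending the match
            else:
                output.append(dictionary[current])  # longest string already in the dictionary
                dictionary[candidate] = next_code   # register the new, longer string
                next_code += 1
                current = bytes([byte])             # restart the scan from the last character
        if current:
            output.append(dictionary[current])
        return output

    codes = lzw_encode(b"TOBEORNOTTOBEORTOBEORNOT")
    print(codes)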

Decoding

The decoding algorithm works by reading a value from the encoded input and outputting the corresponding string from the initialized dictionary. At the same time it obtains the next value from the input, and registers to the dictionary the concatenation of that string and the first character of the string obtained by decoding the next input value. The decoder then proceeds to the next input value (which was already read in as the "next value" in the previous pass) and repeats the process until there is no more input, at which point the final input value is decoded without any more additions to the dictionary.

In this way the decoder builds up a dictionary which is identical to that used by the encoder, and uses it to decode subsequent input values. Thus the full dictionary does not need be sent with the encoded data; just the initial dictionary containing the single-character strings is sufficient (and is typically defined beforehand within the encoder and decoder rather than being explicitly sent with the encoded data.)
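A matching decoder sketch (same assumptions as the encoder above); it rebuilds the dictionary on the fly, including the one special case where a code refers to the entry currently being defined:

    def lzw_decode(codes):
        """Rebuild the encoder's dictionary while decoding, starting from single bytes."""
        dictionary = {i: bytes([i]) for i in range(256)}
        next_code = 256
        previous = dictionary[codes[0]]
        output = bytearray(previous)
        for code in codes[1:]:
            if code in dictionary:
                entry = dictionary[code]
            elif code == next_code:
                entry = previous + previous[:1]      # code not yet defined: the cScSc pattern
            else:
                raise ValueError("invalid LZW code")
            output += entry
            dictionary[next_code] = previous + entry[:1]   # same entry the encoder added
            next_code += 1
            previous = entry
        return bytes(output)

    # Round trip using lzw_encode from the encoding sketch above:
    data = b"TOBEORNOTTOBEORTOBEORNOT"
    assert lzw_decode(lzw_encode(data)) == data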

The method became widely used in the program compress, which became a more or less standard utility in Unix systems circa 1986. (It has since disappeared from many distributions, for both legal and technical reasons, but as of 2008 at least FreeBSD still includes the compress and uncompress utilities as part of the distribution.) Several other popular compression utilities also used the method, or closely related ones.

It became very widely used after it became part of the GIF image format in 1987. It may also (optionally) be used in TIFF files.

LZW compression provided a better compression ratio, in most applications, than any well-known method available up to that time. It became the first widely used universal data compression method on computers. It would typically compress large English texts to about half of their original sizes.

Today, an implementation of the algorithm is contained within the popular Adobe Acrobat software program.

Run-length encoding

Run-length encoding (RLE) is a very simple form of data compression in which runs of data (that is, sequences in which the same data value occurs in many consecutive data elements) are stored as a single data value and count, rather than as the original run. This is most useful on data that contains many such runs: for example, relatively simple graphic images such as icons, line drawings, and animations. It is not recommended for files that do not have many runs, as it could potentially double the file size.
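A minimal (value, count) sketch of the idea; the worst case mentioned above corresponds to input in which every run has length 1, so each element is stored together with a count:

    def rle_encode(data):
        """Collapse each run of identical values into a (value, run_length) pair."""
        runs = []
        for value in data:
            if runs and runs[-1][0] == value:
                runs[-1][1] += 1
            else:
                runs.append([value, 1])
        return [(v, n) for v, n in runs]

    def rle_decode(runs):
        out = []
        for value, count in runs:
            out.extend([value] * count)
        return out

    pixels = ["W"] * 12 + ["B"] * 3 + ["W"] * 9
    encoded = rle_encode(pixels)         # [('W', 12), ('B', 3), ('W', 9)]
    assert rle_decode(encoded) == pixels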

JBIG

JBIG2 is an image compression standard for bi-level images, developed by the Joint Bi-level Image Experts Group. It is suitable for both lossless and lossy compression. According to a press release from the Group, in its lossless mode JBIG2 typically generates files one third to one fifth the size of Fax Group 4 and one half to one quarter the size of JBIG, the previous bi-level compression standard released by the Group. JBIG2 was published in 2000 as the international standard ITU T.88, and in 2001 as ISO/IEC 14492.

JBIG is a method for compressing bi-level (two-color) image data. The acronym JBIG stands for Joint Bi-level Image Experts Group, a standards committee that had its origins within the International Organization for Standardization (ISO). The compression standard they developed bears the name of this committee.

In 1988, the ISO and CCITT formed JBIG by joining the ISO/IEC JTC1/SC29/WG9 group and the CCITT SGVIII subgroup for the joint purpose of developing a standard, lossless method of compressing bi-level data. In 1993, the standard defining the JBIG method of bi-level data encoding was finalized and released.

The main features of JBIG are:

  • Lossless compression of one-bit-per-pixel image data
  • Ability to encode individual bitplanes of multiple-bit pixels
  • Progressive or sequential encoding of image data

JBIG is intended to completely replace the less efficient MR (Modified READ) and MMR (Modified Modified READ) compression algorithms used by the CCITT Group 3 (G3) and Group 4 (G4) data transmission protocols, respectively. In 1995, the International Telecommunication Union (ITU) proposed an extension of the G3 and G4 standards to allow the use of JBIG-compressed image data in conjunction with these protocols. (Refer to the section on CCITT compression for more details about the G3 and G4 protocols.)

On scanned images of line art and printed text, JBIG achieves compression ratios 10 to 50 percent better than those of G4, and up to 500 percent better on computer-generated images of printed text. Bi-level images processed with half-toning or dithering compress to files 2 to 30 times smaller than they do with G4.

JBIG achieves these impressive compression ratios by adapting to the information content of the image data being encoded. An adaptive arithmetic coder is used to predict and code future data symbols based on the characteristics of the data currently being encoded. G3 and G4, however, are non-adaptive and use the same fixed patterns and algorithms to encode all image data, regardless of content.

JBIG also supports both sequential and progressive encoding methods. Sequential encoding reads data from the top to bottom and from left to right of an image and encodes it as a single image. Progressive encoding allows a series of multiple-resolution versions of the same image data to be stored within a single JBIG data stream. In contrast, G3 and G4 only support sequential coding at a fixed resolution.

JBIG is platform-independent and is easily implemented across a wide variety of distributed environments. It achieves excellent compression ratios on bi-level images, and it is capable of efficiently encoding some types of color and gray-scale images as well. JBIG's progressive encoding capabilities appear to make it the obvious choice for transmitting and storing bi-level information in networked environments, such as the World Wide Web. So why isn't JBIG more popular and widespread?

It is questionable how much success JBIG will have in replacing either G3 or G4. G3 and MR are the most widely used protocol and compression method, respectively, for facsimile transmission, and they are already supported by most telecommunications equipment that uses bi-level image data. G4, while typically requiring too much bandwidth for conventional facsimile purposes, is a primary method of data compression in most document imaging systems, achieving effective compression ratios of up to 20-to-1 on both scanned and computer-generated bi-level document image data.

Perhaps the greatest advantage the CCITT protocols offer over JBIG is that they are free and unencumbered by patents and legal disputes. Anyone may freely implement and distribute G3 and G4 codecs without licensing agreements or royalty payments. JBIG, on the other hand, contains many patented processes (24 are listed in the JBIG Recommendation); the most prominent is IBM's arithmetic Q-coder, which is optional in JPEG but mandatory in JBIG.

JBIG Basics

Bi-level images contain only two colors and are stored using a single bit per pixel. Black-and-white images, such as the pages of a book, are the most common type of bi-level images. However, any two colors may be represented by the 1 (foreground color) or 0 (background color) state of a single bit.

Typical bi-level compression algorithms encode only a single scan line at a time using a run-length encoding technique. Such algorithms are referred to as 1D encoding methods. 2D encoding methods encode runs of pixels by describing the differences between the pixel values in the current and the previous scan lines.

JBIG encodes redundant image data by comparing a pixel in a scan line with a set of pixels already scanned by the encoder. These additional pixels are called a template, and they form a simple map of the pattern of pixels that surround the pixel that is being encoded. The values of these pixels are used to identify redundant patterns in the image data. These patterns are then compressed using an adaptive arithmetic compression coder.
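The template idea can be illustrated with a simplified sketch (a hypothetical 4-pixel template and a plain counts table, not the standard's actual 10-pixel templates or its arithmetic coder): the already-scanned neighbors of the current pixel are packed into a context number, and a separate adaptive probability estimate is kept for each context.

    def context_of(image, x, y):
        """Pack already-scanned neighbors of pixel (x, y) into a context number.
        Simplified 4-pixel template for illustration; JBIG uses larger templates
        and drives an adaptive arithmetic coder with the per-context statistics."""
        def pixel(px, py):
            if 0 <= py < len(image) and 0 <= px < len(image[0]):
                return image[py][px]
            return 0   # pixels outside the image count as background
        neighbors = [pixel(x - 1, y), pixel(x - 1, y - 1), pixel(x, y - 1), pixel(x + 1, y - 1)]
        ctx = 0
        for bit in neighbors:
            ctx = (ctx << 1) | bit
        return ctx   # 0..15, an index into a table of per-context 0/1 counts

    # One adaptive (zeros, ones) count pair per context, updated as pixels are coded.
    counts = [[1, 1] for _ in range(16)]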

The adaptive nature of the templates allows the value of the pixel being encoded to be predicted with a high degree of success. For gray-scale images with halftoning, compression ratios are increased by as much as 80 percent over non-adaptive methods.

Although designed primarily as a method for compressing bi-level image data, JBIG is capable of compressing color or gray-scale images with a depth of up to 255 bits per pixel. Such multi-bit pixel images are compressed by bitplane rather than by pixel. For example, an 8-bit image compressed using JBIG would be encoded into eight separate bitplanes.

This type of encoding may be used as an alternative to lossless JPEG. JBIG has been found to produce better compression results than lossless JPEG (using the Q-coder) on images with two to five bits per pixel and to produce identical results on image data with pixels six to eight bits in depth.

It is recommended that each bitplane be preprocessed with a gray-coding algorithm to normalize the changes between adjacent byte values in the image data. This process increases the efficiency of the JBIG encoder.
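A sketch of the bitplane split with the recommended Gray-coding step applied first (8-bit pixels assumed), so that a gray-level change of one flips a bit in only one plane:

    def gray_code(value):
        """Binary-reflected Gray code: adjacent values differ in exactly one bit."""
        return value ^ (value >> 1)

    def to_bitplanes(pixels, depth=8):
        """Split Gray-coded pixel values into `depth` bitplanes (most significant
        first); each plane is a bi-level image that can be handed to a JBIG coder."""
        coded = [gray_code(p) for p in pixels]
        return [[(p >> bit) & 1 for p in coded] for bit in range(depth - 1, -1, -1)]

    row = [127, 128, 128, 129]      # a step of one around mid-gray
    planes = to_bitplanes(row)
    # Without Gray coding, the 127 -> 128 step would change all eight bitplanes;
    # with it, only one plane changes, which a bi-level coder compresses far better.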

JBIG images may be encoded sequentially or progressively. Sequentially encoded images are stored in a single layer at full resolution, with no lower-resolution versions stored in the same data stream. Such a sequential JBIG image is equivalent in function and application to a G4 image: it is decoded in a single pass and has at least as good a compression ratio as G4.

Progressively encoded images start with the highest resolution image and end with the lowest. The high-resolution image is stored in a separate layer and is then used to produce a lower resolution image, also stored in its own layer. Each layer after the first layer is called a resolution doubling. An image with three layers is said to have two doublings.

There is no imposed limit to the number of doublings that may be encoded. For example, a 1200-dpi image may be encoded as one layer (1200 dpi), three layers (1200, 600, and 300 dpi), or five layers (1200, 600, 300, 150, and 75 dpi). The lowest resolution is determined by whatever is considered useful. Even a 10-dpi image, though not legible, is still useful as an icon.

Progressive decoding is the opposite process, with the lowest resolution image being decoded first, followed by increased resolutions of the image until the full resolution is achieved. This technique has the advantage of allowing data to appear immediately on the output device. Only data up to the appropriate resolution of the output device need be decoded and sent.

Sequential and progressive JBIG encoding are fully compatible: images compressed using sequential encoding are readable by progressive JBIG decoders, while sequential JBIG decoders are only capable of reading the first, lowest-resolution layer within a progressively encoded JBIG image.

Many applications that utilize JBIG may only have use for sequential encoding and decoding, especially those used for facsimile transmission. It is therefore possible to implement a simplified JBIG algorithm that encodes only the first layer in a JBIG data stream. Such encoders produce a valid JBIG-encoded data stream that is readable by all JBIG decoders.

Progressive encoding does not add much more data to a JBIG data stream than does sequential encoding, but it does have greater memory requirements. Because a lower resolution image is encoded from data of the next higher resolution image (and vice versa when decoding), a frame buffer must be used to store image data that is being used as a reference.

Lossy compression

A lossy compression method is one where compressing data and then decompressing it retrieves data that is different from the original, but is close enough to be useful in some way. Lossy compression is most commonly used to compress multimedia data (audio, video, still images), especially in applications such as streaming media and Internet telephony. By contrast, lossless compression is required for text and data files, such as bank records and text articles. In many cases it is advantageous to make a master lossless file which can then be used to produce compressed files for different purposes; for example, a multi-megabyte file can be used at full size to produce a full-page advertisement in a glossy magazine, while a 10-kilobyte lossy copy can be made for a small image on a web page.


