The March issue of Wired magazine has an interesting article about a relatively new method of generating high-quality images from relatively low-quality samples. The technique, known as compressed sensing or compressed sampling, is in some sense an inversion of the more familiar process of data compression.
Data compression is used in a variety of applications to reduce the size of data, in order to achieve lower storage requirements or lower bandwidth requirements. All compression techniques rely on the fact that almost all “interesting” sets of data are non-random, and have some significant degree of redundancy (in information-theoretic terms, less than maximal entropy). One type of compression, typically used with text, is lossless, meaning that the original data can be reconstructed with complete fidelity from the compressed version. The “DEFLATE” method, used in compression software like PKZIP and gzip(1), is an example of a lossless compression method. Lossy compression methods, on the other hand, accept some inaccuracies in reconstructing the data in exchange for more effective compression. Often, the type of “loss” that is tolerable depends on considerations of human perception. For example, the compression of images using the JPEG standard is done using knowledge of which image features are most important to human vision. Similarly, compression of audio files into MP3 format relies on knowledge of human hearing. Lossy compression techniques can be quite effective in reducing data size: the JPEG file delivered by a digital camera may be only 10% of the size of the raw image.
The compressed sampling technique essentially arises from asking the following question: “Since in the compressed image we have thrown out most of the original data, is there a way that we can skip recording all of the original, and just grab the important parts that we need to save?” It turns out that the answer to this question is yes. If we take a suitably chosen random sample of the pixels in an image, for example, there are mathematical techniques that we can use to generate a much higher quality image, one very similar to what we would have obtained by capturing the whole image in the first place. Essentially, this uses the knowledge that the image data has a significant degree of redundancy to drive a linear programming algorithm to construct the most plausible “original”. (A more technical description is given in a blog post by Terence Tao, one of the pioneers of the technique.)
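To make the idea concrete, here is a minimal sketch of that reconstruction step in Python, using NumPy and SciPy. It works on a synthetic sparse signal rather than a real image, and the specific sizes (a length-100 signal with 4 nonzero entries, 40 random measurements) are illustrative choices, not figures from the article. The linear program is the standard “basis pursuit” formulation: among all signals consistent with the few samples we recorded, find the one with the smallest sum of absolute values.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

n, m, k = 100, 40, 4  # signal length, number of measurements, number of nonzeros

# A sparse "original": mostly zeros, with k nonzero entries in random positions.
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

# Take m random linear measurements y = A x -- far fewer than n samples.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

# Basis pursuit: minimize ||x||_1 subject to A x = y, posed as a linear
# program by splitting x = u - v with u, v >= 0, so ||x||_1 = sum(u + v).
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]

print("max reconstruction error:", np.max(np.abs(x_hat - x)))
```

With these dimensions the recovery is typically exact up to solver precision, even though we recorded only 40 numbers instead of 100. Real compressed-sensing systems apply the same idea with a sparsifying transform (such as wavelets) standing between the measurements and the image.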
Applying all of this to your vacation snapshots may seem a bit over the top, like using a hand grenade to kill a mouse. But there are circumstances where the technique can be very valuable. One, pointed out in the Wired article, is using the method to reduce the time required to acquire medical diagnostic images, such as MRIs. In other cases, the sensors may be deployed remotely, on a spacecraft for example. Reducing the amount of image data that must be acquired can save battery power and bandwidth, both of which may be scarce resources. Admittedly, there is something about the technique that seems like getting something for nothing, but really it is just using mathematical tools to take advantage of what we know about the data we are trying to collect.