Ordered dithering

In this example, the original photograph is shown on the left. The version on the right shows the effect of quantizing it to 16 colors and dithering using the 8×8 ordered dithering pattern.
The characteristic 17 patterns of the 4×4 ordered dithering matrix can be seen clearly when used with only two colors, black and white. Each pattern is shown above the corresponding undithered shade.

Ordered dithering is an image dithering algorithm. It is commonly used to display a continuous image on a display of smaller color depth. For example, Microsoft Windows uses it in 16-color graphics modes. The algorithm is characterized by noticeable crosshatch patterns in the result.

Threshold map

The algorithm reduces the number of colors by applying a threshold map M to the pixels displayed, causing some pixels to change color, depending on the distance of the original color from the available color entries in the reduced palette.

Threshold maps come in various sizes; the side length is typically a power of two. For example, the 2×2 and 4×4 index matrices are:

    M(2) = [[0, 2],
            [3, 1]]

    M(4) = [[ 0,  8,  2, 10],
            [12,  4, 14,  6],
            [ 3, 11,  1,  9],
            [15,  7, 13,  5]]

The map may be rotated or mirrored without affecting the effectiveness of the algorithm. This threshold map (for sides whose length is a power of two) is also known as an index matrix or Bayer matrix.[1]

Arbitrary-size threshold maps can be devised with a simple rule: first fill each slot with a successive integer, then reorder them such that the average distance between two successive numbers in the map is as large as possible, ensuring that the table "wraps" around at its edges.[citation needed] For threshold maps whose dimensions are a power of two, the map can be generated recursively via

    M(1) = [[0]]

    M(2n) = [[4·M(n),     4·M(n) + 2],
             [4·M(n) + 3, 4·M(n) + 1]]

This function can also be expressed using only bit arithmetic:[2]

M(i, j) = bit_reverse(bit_interleave(bitwise_xor(i, j), i)) / n ^ 2
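The following Python sketch shows both constructions side by side (the function names, and the choice of integer rather than normalized output, are illustrative; divide the result by n^2 for the normalized threshold):

    def bayer_matrix(log2n):
        """Recursive block construction M(2n) = [[4M, 4M+2], [4M+3, 4M+1]],
        giving the n x n index matrix with entries 0 .. n^2 - 1 for n = 2**log2n."""
        m = [[0]]
        for _ in range(log2n):
            n = len(m)
            m = [[4 * m[i % n][j % n] + [[0, 2], [3, 1]][i // n][j // n]
                  for j in range(2 * n)]
                 for i in range(2 * n)]
        return m

    def bayer_value(i, j, log2n):
        """Bit-arithmetic form: interleave the bits of (i XOR j) and i,
        starting from the least-significant bit, which also performs the
        bit reversal."""
        v = 0
        for b in range(log2n):
            v = (v << 2) | ((((i ^ j) >> b) & 1) << 1) | ((i >> b) & 1)
        return v

    # bayer_matrix(2) and [[bayer_value(i, j, 2) for j in range(4)] for i in range(4)]
    # both produce the 4x4 index matrix shown above.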

Pre-calculated threshold maps

Rather than storing the threshold map as a matrix of n × n integers from 0 to n^2 − 1, it may be beneficial, depending on the exact hardware used to perform the dithering, to pre-calculate the thresholds of the map into a floating-point format instead of the traditional integer matrix format shown above.

For this, the following formula can be used:

Mpre(i,j) = Mint(i,j) / n^2

This generates a standard threshold matrix. For the 2×2 map

    Mint = [[0, 2],
            [3, 1]]

this creates the pre-calculated map

    Mpre = [[0.0,  0.5 ],
            [0.75, 0.25]]

Additionally, normalizing the values so that they average out to 0 (as done in the dithering algorithm shown below) can be performed during pre-processing as well, by subtracting 1/2 from every value:

Mpre(i,j) = Mint(i,j) / n^2 − 0.5

creating the pre-calculated map

    Mpre = [[−0.5,   0.0 ],
            [ 0.25, −0.25]]
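A minimal sketch of this pre-calculation step (assuming the integer map is stored as a nested list; the function name and the normalize flag are illustrative):

    def precalculate(m_int, normalize=True):
        """Convert an integer index matrix (values 0 .. n^2 - 1) into a
        floating-point threshold map, optionally subtracting 1/2 so that
        the offsets average to zero."""
        n = len(m_int)
        shift = 0.5 if normalize else 0.0
        return [[value / (n * n) - shift for value in row] for row in m_int]

    # precalculate([[0, 2], [3, 1]]) == [[-0.5, 0.0], [0.25, -0.25]]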

Algorithm

The ordered dithering algorithm renders the image normally, but for each pixel it offsets the pixel's color value by the corresponding value from the threshold map (selected according to the pixel's location), causing the pixel's value to be quantized to a different color if it exceeds the threshold.

For most dithering purposes, it is sufficient to simply add the threshold value to every pixel (without performing the normalization by subtracting 1/2) or, equivalently, to compare the pixel's value to the threshold: if the brightness value of a pixel is less than the number in the corresponding cell of the matrix, plot that pixel black; otherwise, plot it white. Skipping the normalization slightly increases the average brightness of the image and causes almost-white pixels not to be dithered. This is not a problem with a grayscale palette (or any palette where the relative color distances are nearly constant), and it is often even desirable, since the human eye perceives differences in darker colors more accurately than in lighter ones. However, it produces incorrect results, especially with a small or arbitrary palette, so proper normalization should be preferred.
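As a minimal sketch of this comparison form (assuming an 8-bit grayscale image stored as a nested list; names are illustrative):

    # 4x4 Bayer index matrix with entries 0..15.
    BAYER_4X4 = [
        [ 0,  8,  2, 10],
        [12,  4, 14,  6],
        [ 3, 11,  1,  9],
        [15,  7, 13,  5],
    ]

    def dither_to_black_and_white(pixels):
        """pixels: 2-D list of grayscale values in 0..255; returns 0/255 values."""
        output = []
        for y, row in enumerate(pixels):
            out_row = []
            for x, value in enumerate(row):
                # Scale the 0..15 map entry into the 0..255 input range and compare.
                # Omitting the 1/2 normalization, as described above, slightly
                # brightens the result.
                threshold = BAYER_4X4[y % 4][x % 4] * 255 / 16
                out_row.append(0 if value < threshold else 255)
            output.append(out_row)
        return output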

Two images mimicking a gradient of 140 × 140 = 19600 different colors. Both images use the same 64 colors. The image on the right has been dithered. The dithering was done using a non-normalizing dithering algorithm, causing the image to have a slight over-representation of bright pixels.

In other words, the algorithm performs the following transformation on each color c of every pixel:

c' = nearest_palette_color(c + r × (M(i, j) − 1/2))

where M(i, j) is the threshold map on the i-th row and j-th column, c' is the transformed color, and r is the amount of spread in color space. Assuming an RGB palette with 2^(3N) evenly spaced colors, where each color (a triple of red, green and blue values) is represented by an octet from 0 to 255, one would typically choose r comparable to the spacing between adjacent palette levels, 255/(2^N − 1) ≈ 255/2^N. (1/2 is again the normalizing term.)
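A sketch of this transformation for an arbitrary RGB palette (the function name is illustrative; m is assumed to hold threshold values in [0, 1), i.e. the un-normalized pre-calculated map, and the nearest palette entry is found by Euclidean distance):

    def ordered_dither_pixel(color, x, y, palette, m, r):
        """color: (R, G, B) with channels in 0..255; palette: list of such
        triples; m: n x n threshold map with values in [0, 1); r: spread."""
        n = len(m)
        offset = r * (m[y % n][x % n] - 0.5)      # r * (M(i, j) - 1/2)
        candidate = [min(255.0, max(0.0, ch + offset)) for ch in color]
        # Quantize to the nearest color of the reduced palette.
        return min(palette,
                   key=lambda p: sum((a - b) ** 2 for a, b in zip(candidate, p)))

For a palette with 2^N evenly spaced levels per channel, r would be set to roughly the level spacing discussed above.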

Because the algorithm operates on single pixels and has no conditional statements, it is very fast and suitable for real-time transformations. Additionally, because the location of the dithering patterns always stays the same relative to the display frame, it is less prone to jitter than error-diffusion methods, making it suitable for animations. Because the patterns are more repetitive than those produced by error diffusion, an image with ordered dithering compresses better. Ordered dithering is more suitable for line-art graphics, as it will result in straighter lines and fewer anomalies.

The values read from the threshold map should preferably scale into the same range as the minimal difference between distinct colors in the target palette. Equivalently, the size of the map selected should be equal to or larger than the ratio of source colors to target colors. For example, when quantizing a 24 bpp image to 15 bpp (256 colors per channel to 32 colors per channel), the smallest map one would choose would be 4×2, for the ratio of 8 (256:32). This allows expressing each distinct tone of the input with different dithering patterns.[citation needed]

A variable palette: pattern dithering

Non-Bayer approaches

The above thresholding matrix approach describes the Bayer family of ordered dithering algorithms. A number of other algorithms are also known; they generally involve changes in the threshold matrix, equivalent to the "noise" in general descriptions of dithering.

Halftone

Halftone dithering performs a form of clustered dithering, creating a look similar to halftone patterns, using a specially crafted matrix.

Void and cluster

The void-and-cluster algorithm uses a pre-generated blue-noise texture as the matrix for the dithering process.[3] The blue-noise matrix keeps the good high-frequency content of the Bayer matrix, but its more uniform coverage of all the frequencies involved shows a much lower amount of patterning.[4]

The "voids-and-cluster" method gets its name from the matrix generation procedure, where a black image with randomly initialized white pixels is gaussian-blurred to find the brightest and darkest parts, corresponding to voids and clusters. After a few swaps have evenly distributed the bright and dark parts, the pixels are numbered by importance. It takes significant computational resources to generate the blue noise matrix: on a modern computer a 64×64 matrix requires a couple seconds using the original algorithm.[5]

This algorithm can be extended to make animated dither masks which also consider the axis of time. This is done by running the algorithm in three dimensions and using a kernel that is the product of a two-dimensional Gaussian kernel on the XY plane and a one-dimensional Gaussian kernel on the Z axis.[6]

References

  1. ^ Bayer, Bryce (June 11–13, 1973). "An optimum method for two-level rendition of continuous-tone pictures" (PDF). IEEE International Conference on Communications. 1: 11–15. Archived from the original (PDF) on 2013-05-12. Retrieved 2012-07-19.
  2. ^ Joel Yliluoma. "Arbitrary-palette positional dithering algorithm".
  3. ^ Ulichney, Robert A (1993). "The void-and-cluster method for dither array generation" (PDF). Retrieved 2014-02-11.
  4. ^ Wronski, Bart (31 October 2016). "Dithering part three – real world 2D quantization dithering".
  5. ^ Peters, Christoph. "Free blue noise textures". momentsingraphics.de.
  6. ^ Wolfe, Alan; Morrical, Nathan; Akenine-Möller, Tomas; Ramamoorthi, Ravi (2022). Spatiotemporal Blue Noise Masks. The Eurographics Association. doi:10.2312/sr.20221161. ISBN 978-3-03868-187-8. S2CID 250164404.

Further reading

  • Ancin, Hakan; Bhattacharjya, Anoop K.; Shu, Joseph S. (2 January 1998). Beretta, Giordano B.; Eschbach, Reiner (eds.). "Improving void-and-cluster for better halftone uniformity". Photonics West '98 Electronic Imaging. Color Imaging: Device-Independent Color, Color Hardcopy, and Graphic Arts III. 3300: 321–329. Bibcode:1998SPIE.3300..321A. CiteSeerX 10.1.1.40.5331. doi:10.1117/12.298295. S2CID 6219511.
