Just did a bit of a deep dive into dithering myself, for my project of creating an e-paper laptop (https://peterme.net/building-an-epaper-laptop-dithering.html). It compares error diffusion algorithms as well as Bayer, blue noise, and some more novel approaches. Just in case anyone wants to read a lot more about dithering!
I had a project with those 7-colour e-paper displays and used dithering, and it looked amazing. Crazy how much you could fake with just 7 colours and dithering.
Nice writeup. I've been looking at this for a print-on-demand project and found that physical ink bleed changes the constraints quite a bit compared to e-paper. In my experience error diffusion often gets muddy due to dot gain, whereas ordered dithering seems to handle the physical expansion of the ink better.
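To make that concrete, the compensation I'm describing is basically a tone curve applied before halftoning, so midtones are requested lighter to counter the ink spreading back out. A minimal sketch; the gamma value is a made-up placeholder rather than a measured press curve:

    import numpy as np

    def compensate_dot_gain(gray, gamma=1.4):
        # gray: float array in [0, 1], with 1.0 = paper white.
        # Lighten midtones so that dot gain brings them back down
        # to the intended density. gamma=1.4 is illustrative only.
        return np.clip(gray, 0.0, 1.0) ** (1.0 / gamma)

The compensated image then goes into whichever screen you're using; the clustered patterns of an ordered dither seem to survive the spread better than isolated error-diffusion dots.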
Thanks! I would imagine printing on paper is a completely different ball game. I considered scanning the actual e-paper display to show each of the dithering techniques in their intended environment, as it does change the look quite a bit. From the little I know about typography and things like ink wells, I can definitely see how certain algorithms could change quite significantly. The original post here has a pattern which looks similar to old newspapers; maybe that's worth looking into?
> In my experience error diffusion often gets muddy due to dot gain
Absolutely - there's a reason why traditional litho printing uses a clustered dot screen (dots at a constant pitch with varying size).
I've spent some time tinkering with FPGAs and have been interested in the parallels between two-dimensional halftoning of graphics and the various approaches to doing audio output with a 1-bit IO pin: pulse width modulation (largely analogous to the traditional printer's dot screen) seems to cope better with imperfections in filters and asymmetries in output drivers than pulse density modulation (analogous to error diffusion dithers).
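A toy sketch of what I mean, for a constant input level (plain Python rather than HDL, and the lengths are arbitrary): PWM packs the 'on' samples into one block per period, like the fixed-pitch dots of a screen, while PDM is a first-order delta-sigma loop, i.e. one-dimensional error diffusion.

    def pwm_bitstream(level, period=16, n_periods=4):
        # 'on' samples clustered at the start of each fixed-length period.
        on = round(level * period)
        return ([1] * on + [0] * (period - on)) * n_periods

    def pdm_bitstream(level, n=64):
        # First-order delta-sigma: carry the quantisation error forward,
        # the 1-D cousin of error-diffusion dithering.
        acc, out = 0.0, []
        for _ in range(n):
            acc += level
            bit = 1 if acc >= 1.0 else 0
            acc -= bit
            out.append(bit)
        return out

Both streams average out to the same level, but PWM concentrates its energy at the period frequency while PDM pushes the quantisation noise up towards the sample rate, which is part of why imperfect filters and asymmetric drivers treat them so differently.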
Traditional litho actually uses either lines in curved crosshatch patterns or irregular stippling. Might be doable using an altered error-diffusion approach that rewards tracing a clearly defined line as opposed to placing individual dots or blots.
After implementing a number of dithering approaches, including blue noise and the three-line approach used in modern games, I've found that quasirandom sequences give the best results. Have you tried them out?
Ooh, I haven't actually! I'll need to implement and test this for sure. Looking at the results though it does remind me of a dither (https://pippin.gimp.org/a_dither/), which I guess makes sense since they are created in a broadly similar way.
Looks pretty good! It looks a bit like a dither, but with fewer artifacts. Definitely a "sharper" look than blue noise, but in places like the transitions between the text boxes you can definitely see a bit more artifacts (almost looks like the boxes have a staggered edge).
What is the advantage over blue noise? I've had very good results with a 64x64 blue noise texture and it's pretty fast on a modern GPU. Are quasirandom sequences faster or better quality?
(There's no TAA in my use case, so there's no advantage for interleaved gradient noise there.)
EDIT: Actually, I remember trying R2 sequences for dither. I didn't think it looked much better than interleaved gradient noise, but my bigger problem was figuring out how to add a temporal component. I tried generalizing it to 3 dimensions, but the result wasn't great. I also tried shifting it around, but I thought animated interleaved gradient noise still looked better. This was my shadertoy: https://www.shadertoy.com/view/33cXzM
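For anyone comparing, both approaches boil down to a per-pixel threshold; here's a rough CPU-side sketch. The R2 constants come from the quasirandom-sequences article, everything else is just how I'd wire it up, not production code:

    import numpy as np

    def r2_threshold(width, height):
        # R2 quasirandom threshold: t(x, y) = frac(a1*x + a2*y), with the
        # constants derived from the plastic number.
        g = 1.32471795724474602596
        a1, a2 = 1.0 / g, 1.0 / (g * g)
        x = np.arange(width)[None, :]
        y = np.arange(height)[:, None]
        return (a1 * x + a2 * y) % 1.0

    def blue_noise_threshold(width, height, noise_tile):
        # Tile a precomputed blue-noise texture (e.g. 64x64) over the frame.
        th, tw = noise_tile.shape
        reps = (height // th + 1, width // tw + 1)
        return np.tile(noise_tile, reps)[:height, :width]

    def dither(gray, threshold):
        # gray: float array in [0, 1]; returns a 0/1 image.
        return (gray > threshold).astype(np.uint8)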
Outside of being informative in a really fun way (I learned far more in a couple minutes than I thought I would), that website is stunning. I've been a web dev for over 10 years and I'm still baffled at how people make sites like this. Does anyone have any info or resources on how to go about making these sorts of transitional 3D sites beyond just "learn threejs"?
I used ordered dithering in my ZX Spectrum raytracer (https://gabrielgambetta.com/zx-raytracer.html#fourth-iterati...). In this case it's applied to a color image, but since every 8x8-pixel block can only have one of two colors (one of these fun limitations of the Spectrum), it's effectively monochrome dithering.
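For anyone who hasn't seen it spelled out, the per-block version boils down to comparing each pixel against a tiled Bayer threshold and picking between the block's two colours. A rough Python sketch of the idea, not the actual Spectrum code:

    import numpy as np

    def bayer(n):
        # Recursive Bayer matrix of size 2^n x 2^n (values 0 .. 4^n - 1).
        m = np.array([[0, 2], [3, 1]])
        for _ in range(n - 1):
            m = np.block([[4 * m + 0, 4 * m + 2],
                          [4 * m + 3, 4 * m + 1]])
        return m

    def dither_block(gray_block, ink_lum, paper_lum, bayer8=bayer(3)):
        # Dither one 8x8 attribute block between its two colours.
        # gray_block: 8x8 floats in [0, 1]; assumes ink_lum < paper_lum.
        t = (gray_block - ink_lum) / (paper_lum - ink_lum)  # 0 = ink, 1 = paper
        thresh = (bayer8 + 0.5) / 64.0
        return t > thresh  # True = paper colour, False = ink colour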
I built a blue noise generator and dithering library in Rust and TypeScript. It generates blue noise textures and applies blue noise dithering to images. There's a small web demo to try it out [1]. The code is open source [2][3].

[1] https://blue-noise.blode.co

[2] https://github.com/mblode/blue-noise-rust

[3] https://github.com/mblode/blue-noise-typescript
There is something very satisfying about viewing media at 100% of your screen's resolution. Every pixel is crisp and plays a role. It's a joy not available when watching videos or viewing scaled images.
When you look at something like Pietà by Michelangelo or Lolita by Vladimir Nabokov, you realise that some humans are given abilities that far exceed your own and that you will never reach their level.
When this happens, you need to stop and appreciate the sheer genius of the creator. This is one of those posts.
Normally I am not a fan of gimmicky page formats but this series really hits it out of the park with well-considered presentation.
I can't wait until the next installment on error diffusion. I still think Atkinson dithering looks great, so much so that I made a web component to dither images. If the author stops by, I'd be interested to hear about the tech used.
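The kernel itself is tiny if anyone wants to try it. A rough Python sketch of the algorithm (not the web component's code): Atkinson splits each pixel's quantisation error into eighths and only pushes six of them to neighbours, which is what gives it that bright, slightly washed-out classic Mac look.

    import numpy as np

    def atkinson(gray):
        # gray: float array in [0, 1]; returns a 0/1 image.
        img = gray.astype(np.float64).copy()
        h, w = img.shape
        out = np.zeros((h, w), dtype=np.uint8)
        # Neighbours receiving 1/8 of the error each (6/8 total).
        taps = [(0, 1), (0, 2), (1, -1), (1, 0), (1, 1), (2, 0)]
        for y in range(h):
            for x in range(w):
                new = 1.0 if img[y, x] >= 0.5 else 0.0
                out[y, x] = int(new)
                err = (img[y, x] - new) / 8.0
                for dy, dx in taps:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        img[ny, nx] += err
        return out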
Bayer dithering in particular is part of the signature look of Flipnote Studio animations, which you may recognize from animators like kekeflipnote (e.g. https://youtu.be/Ut-fJCc0zS4)
Bayer dithering was also employed heavily on the original PlayStation. The PS1's GPU was capable of Gouraud shading with 24-bit color precision, but the limited capacity (1 MB) and bandwidth of VRAM made it preferable to use 16-bit framebuffers and textures. In an attempt to make the resulting color bands less noticeable, Sony thus added the ability to dither pixels written to the framebuffer on-the-fly using a 4x4 Bayer matrix hardcoded in the GPU [1]. On a period-accurate CRT TV using a cheap composite video cable, the picture would get blurred enough to hide away the dithering artifacts; obviously an emulator or a modern LCD TV will quickly reveal them, resulting in a distinct grainy look that is often replicated in modern "PS1-style" indie games.
Interestingly enough, despite the GPU being completely incapable of "true" 24-bit rendering, Sony decided to ship the PS1 with a 24-bit video DAC and the ability to display 24-bit framebuffers regardless. This ended up being used mainly for title screens and video playback, as the PS1's hardware MJPEG decoder retained support for 24-bit output.

[1]: https://psx-spx.consoledev.net/graphicsprocessingunitgpu/#24...
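For anyone curious what that hardcoded matrix actually does per channel, it's roughly the following; the offsets are the ones from the psx-spx table in [1] (the Python is only my illustration of the step, so double-check it against the docs):

    # 4x4 offset table applied by screen position before truncating
    # each 8-bit channel down to the framebuffer's 5 bits.
    PS1_DITHER = [
        [-4,  0, -3,  1],
        [ 2, -2,  3, -1],
        [-3,  1, -4,  0],
        [ 3, -1,  2, -2],
    ]

    def dither_channel(value8, x, y):
        v = value8 + PS1_DITHER[y & 3][x & 3]
        v = max(0, min(255, v))   # saturate
        return v >> 3             # keep the top 5 bits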
Look at it this way, though: this site is low-key a CV portfolio piece, because he isn't just writing about dithering; he's demonstrating that he can research, analyze, and then both code and create a site at a level most vibers cannot.
https://extremelearning.com.au/unreasonable-effectiveness-of...
Thanks for bringing this to my attention!
https://github.com/ivanesmantovich/halftone-theme-vsc
It's ok for people to get excited about shared passions