The Megapixel
As part of the Manchester Science Festival 2016, Katie Steckles decided to build a monitor for the domino computer. Or rather, to demonstrate how a computer monitor builds up an image from red, green and blue pixels by making a giant one from pens. This was the Megapixel project.
The final image comprised around eight thousand pixels, each much like these ones that Angie coloured:
Dropped into @msimanchester for some therapeutic lunchtime colouring in with @mcrmegapixel #msf16 pic.twitter.com/QUtE6eSkWa
— Angie Chan (@angieokchan) October 26, 2016
This came about because Matt Parker, for Festival of the Spoken Nerd’s Full Frontal Nerdity show, wanted to convert photographs into spreadsheets. He did the first couple himself, longhand, but later asked me to build a tool to automate the process.
My PhD had a chapter or two on accurately measuring colours in digital images, and so I was given the job of applying that knowledge in reverse, to make sure the final Megapixel display was calibrated properly.
Calibration
All computer displays, and some TVs, have a range of calibration settings to make sure images displayed on them look correct. Usually this is done by displaying an image with some sensible default calibration, adjusting it, and repeating until everything is correct. That is fine at the 25, 30 or 60fps a computer can push out, but it took us about three weeks to display one frame on the Megapixel, so we had to calibrate the display before it had been built.
Gamma
The first step was an easy one: gamma.
The human eye evolved to see a black panther hiding in the shadows at night, and so is very good at seeing even very small differences between dark colours. Small differences between light colours are less important — when there is so much light about, the differences are bigger. Therefore digital images don’t waste bandwidth storing lots of shades of white. Instead, they use those bits for storing lots of shades of black and other dark colours.
In 8-bit, there are 256 shades of grey, defined by RGB values from 0 to 255. But a grey where the RGB values are 128 is only 22% as bright as one where they’re 255. RGB 64, a quarter of the way up the scale, is about 5% as bright as RGB 255. That said, the scale looks roughly linear to human eyes, because we are very good at seeing that 5%.
The maths used for this is deliberately simple. It isn’t designed to exactly mimic human vision — it’s just there to save data, and to be easy to calculate and combine — so we use a power law:
\[\text{Amount of light} \propto \left ( \frac{RGB}{255} \right )^\gamma\]and by convention, $\gamma = 2.2$. We mostly want to work in terms of light rather than RGB values, so we apply this correction to everything from here on.
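In code, the conversion each way is a one-liner. Here’s a quick Python sketch (the function names are just for illustration) that reproduces the figures above:

```python
def rgb_to_light(rgb, gamma=2.2):
    """Fraction of full brightness produced by an 8-bit channel value."""
    return (rgb / 255) ** gamma

def light_to_rgb(light, gamma=2.2):
    """8-bit channel value needed to produce a fraction of full brightness."""
    return round(255 * light ** (1 / gamma))

print(rgb_to_light(128))  # ~0.22: half the scale is only 22% of the light
print(rgb_to_light(64))   # ~0.05: a quarter of the scale is about 5%
```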
There are some notable exceptions to all this. Some systems use more complex ‘gamma’ corrections, including the new Hybrid Log-Gamma, which starts out the same as normal gamma, then at an RGB value of around 142 switches to a log scale and shoots off into really bright colours. And the old Asteroids arcade machines drew line graphics on a single-colour screen by directly controlling the beam of the electron gun in a CRT display. That meant the game was only in black and white, but also that it had no pixels at all, arguably making it higher-resolution than any modern gaming system.
Most screens have a slightly non-linear output as well: if the computer asks for two colours, one twice as bright as the other, it probably won’t get exactly that. Low-end systems use another gamma factor to correct for this; again, not because it’s accurate but because it’s easy and basically works. But the Megapixel, by its design, does not suffer from this. It stands to reason that colouring forty dots gives twice as much light as colouring twenty, since there’s twice as much space for it to pass through. So to display images on the Megapixel, we set gamma to exactly one, or don’t bother applying it at all.
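To make “gamma equals one” concrete, here’s a sketch of the whole pipeline for one channel of one pixel. The 100-dot capacity is a number I’ve made up for illustration, not the real Megapixel layout:

```python
DOTS_PER_CHANNEL = 100  # hypothetical: how many dots fit in one sub-pixel

def dots_needed(rgb, gamma=2.2):
    """Number of dots to colour in for one channel of one Megapixel pixel."""
    light = (rgb / 255) ** gamma  # decode the stored value to linear light
    return round(light * DOTS_PER_CHANNEL)  # the pens themselves are linear

print(dots_needed(255))  # 100 dots for full brightness
print(dots_needed(128))  # ~22 dots, not 50: mid-grey is 22% of the light
```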
Colour Correction
All displays use red, green and blue sub-pixels to create colour images, but not all of them use exactly the same shades of red, green and blue. This is usually corrected for by taking RGB values from one “colour space”, and converting them to another using a matrix. We had to do exactly this for the Megapixel, and since displaying even a small test image takes a day, we had to do it more-or-less blind.
It is a myth that the eye has red, green and blue light receptor cells. It does have three types of colour-receptive "cone" cells, but all of them respond at least a little to any colour of visible light, and the sensitivity peaks are really at violet, green and yellow. When building a display, you can choose any three primary colours you want, and the range of colours it will be able to display (its 'gamut') will depend on what you choose. It is not possible for the gamut to include all the colours, but choosing good shades of red, green and blue lets you get most of them.
Worse, we were starting out quite a long way from the ideal configuration. Our shades of red, green and blue were decided by the colours of Edding OHP pens, which were apparently not designed with this use case in mind.
On the other hand, it wasn’t necessary to get it perfect. Most people have never bothered to calibrate their computer display because it looks fine with the defaults. There is quite a range of broadly-similar calibrations out there, roughly centred on the actual correct one, and I figured if we could get the Megapixel into that range, it would be fine.
So I made the bold (indeed, false) assumptions that all of our cameras and monitors were perfectly calibrated to the standard RGB colourspace sRGB, and that sRGB was a totally accurate model of human colour perception.
Next, we coloured a test pixel fully red, green and blue, and measured the colours (accounting for gamma, of course). We assumed that the black pen was totally black. Again, this is demonstrably false, but didn’t seem to affect the process apart from making the maths simpler.
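If you want to do the measurement yourself, it only takes a few lines with Pillow and numpy. This is a sketch rather than the code we actually used; the file name and crop box are placeholders, and it assumes the photo is sRGB with the simple power-law gamma from above:

```python
import numpy as np
from PIL import Image

def measure_pen_colour(path, box, gamma=2.2):
    """Average linear-light RGB of a photographed test patch.

    `box` is a (left, upper, right, lower) crop covering only the inked area.
    """
    patch = np.asarray(Image.open(path).convert("RGB").crop(box)) / 255
    linear = patch ** gamma  # undo the camera's gamma encoding
    return linear.reshape(-1, 3).mean(axis=0)

# Placeholder file and crop region:
red_pen = measure_pen_colour("test_pixel_red.jpg", (100, 100, 200, 200))
```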
In order to create test images without actually doing any colouring in, I wrote some code to blow up an image, and replace each pixel with a tiny picture of a Megapixel pixel — so instead of one white pixel, it would display many red, green and blue pixels. This worked pleasingly well, so I replaced the shades of red, green and blue with those from the photograph of the test pixel, and the image immediately turned purple. This was the colour distortion we needed to correct: we have to change the RGB values so that it looks right after the enpurpling effect of the pens’ somewhat off-kilter colour-space.
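The real preview tool pasted in little pictures of a coloured-in pixel, but the idea is easy to sketch. In this simplified version of mine, each source pixel just becomes a tile of three vertical stripes, one per pen:

```python
import numpy as np

# Linear-light colours of the three pens. The identity matrix means
# 'ideal' pens; swap in the measured pen colours to see the purple cast.
PEN_COLOURS = np.eye(3)

def preview(image, tile=9):
    """Blow up `image` (H x W x 3, linear light in 0-1) so each pixel
    becomes a tile x tile block of three vertical pen-coloured stripes."""
    h, w, _ = image.shape
    out = np.zeros((h * tile, w * tile, 3))
    third = tile // 3
    for y in range(h):
        for x in range(w):
            for c in range(3):  # one stripe per pen
                out[y * tile:(y + 1) * tile,
                    x * tile + c * third:x * tile + (c + 1) * third] = \
                    image[y, x, c] * PEN_COLOURS[c]
    return out
```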
The challenge is to find a number of red, green and blue dots which will result in the same amount of “actual” red, green and blue as the original image.
The most elegant way to visualise the change is as a change of coordinate system. Mathematically this is called a “change of basis” and can be described by a matrix. Mouse over the graph below to see it in action (in a 2D version; the maths is exactly the same in any number of dimensions, if a bit pointless in 1D).
The matrix version is arguably more mathematically elegant, but far less intuitive. Here it is, but feel free to skip it. Colours are row vectors:
\[\left [ \begin{matrix} R & G & B \end{matrix} \right ]\]Define $P$ as the colours of the pens (adjusted for gamma). The top row is the $RGB$ colour of the red pen, the second row the green pen, and the bottom row the blue:
\[P = \left [ \begin{matrix} 0.91 & 0.03 & 0.08 \\ 0.00 & 0.38 & 0.08 \\ 0.00 & 0.14 & 0.82 \end{matrix} \right ]\]Multiplying a colour vector by $P$ gives the colour that much red, green and blue ink would produce. That’s the opposite of what we need, so this is the matrix we want:
\[M = P^{-1} = \left [ \begin{matrix} 1.1 & -0.04 & -0.11 \\ 0 & 2.75 & -0.28 \\ 0 & -0.49 & 1.27 \end{matrix} \right ]\]Now, multiplying a colour from the original image by $M$ gives the amount of pen we need.
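As a quick check with numpy: inverting the rounded $P$ above gives values that differ from $M$ in the second decimal place, presumably because $M$ was computed from unrounded measurements, but the idea is the same:

```python
import numpy as np

# Rows: the linear-light sRGB colour of the red, green and blue pens.
P = np.array([[0.91, 0.03, 0.08],
              [0.00, 0.38, 0.08],
              [0.00, 0.14, 0.82]])

M = np.linalg.inv(P)

# A pure red from the original image, as a row vector:
colour = np.array([1.0, 0.0, 0.0])
ink = colour @ M  # row vector times matrix, matching the convention above
print(ink)  # the first row of M: lots of red pen, minus a little green and blue
```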
Some of the numbers in $M$ are negative. This can cause some colours to contain negative amounts of ink. These colours obviously cannot be produced. These are the colours below the red arrow in the graph, or left of the blue arrow. Other colours require more ink than will fit in a pixel; these can be made if the rest of the image is darkened.
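One simple way to handle both problems in code, sketched below; clamping to zero and uniformly darkening is my choice here, and a real tool could be cleverer:

```python
import numpy as np

def fit_to_gamut(ink):
    """Force per-pen ink amounts into the displayable range.

    `ink` is an (..., 3) array of pen amounts, where 1 means a
    completely filled-in sub-pixel.
    """
    ink = np.clip(ink, 0, None)  # negative ink cannot be produced
    peak = ink.max()
    if peak > 1:                 # some pixel needs more ink than fits
        ink = ink / peak         # darken the whole image to make room
    return ink
```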
But you don’t need to understand this yourself, since I built a tool to convert sRGB images into Edding-pen-RGB (also available on GitHub, either hosted or as a repo).
The final result is available on the Megapixel website.
Here's @standupmaths and volunteer @SparroWill putting up some more pixels in the window... you missed a bit! #msf16 pic.twitter.com/JLqkUOF52A
— Manchester MegaPixel (@mcrmegapixel) October 30, 2016
Here it is! A photo of the finished image, and the original. We'll have better-lit versions shortly. pic.twitter.com/G5lhDqHtqO
— Manchester MegaPixel (@mcrmegapixel) October 30, 2016