Gamma-correct rendering, part 4

Welcome to the fourth part of the trilogy. Last time, I showed how to perform gamma-correct rendering and compositing directly in 8-bits-per-channel RGBA space. This approach affords convenience with reasonable performance for applications with modest graphical requirements. One disadvantage is that image processing operations such as rescaling or blurring require explicit modification in order to be made gamma-correct. Also, the "on-the-fly" gamma correction technique is not amenable to the use of associated (aka "premultiplied") alpha, which facilitates more efficient image processing. In this article, I'll address these issues by rendering in 16-bits-per-channel linear RGBA.

Get on with it!

OK, let's take the demo from part 3 as the starting point and work from there. I'd like to compare rendering in three pixel formats: plain sRGB, gamma-corrected sRGB, and 16-bit linear RGB. For this article, I'm not particularly interested in differences between blurring algorithms, so I'll use stack blur throughout and commandeer the existing radio buttons for the purpose of switching between pixel formats. I'll remove all the check boxes, as they're no longer needed, and move the timing code out of render() into on_draw() so that the entire rendering time is measured rather than just the blurring part. The demo app now looks like this:

The two sRGB options are pretty much the same as in part 3. So let's consider how to implement the "16-bit linear RGB" option. AGG's platform_support::create_img() function only creates buffers suitable for the primary pixel format, so we need to create our own 16-bits-per-channel buffers. I couldn't find any ready-to-use classes in AGG for this, so I defined my own pixel_map class that simply allocates a suitably-sized block of memory using agg::pod_array:

template<class PixFmt> class pixel_map
{
    agg::pod_array<agg::int8u> m_buf;
    agg::rendering_buffer m_rbuf;
    PixFmt m_pixfmt;

public:
    void create(unsigned width, unsigned height)
    {
        // Allocate one color_type-sized slot per pixel
        // (8 bytes per pixel for bgra64).
        m_buf.resize(sizeof(typename PixFmt::color_type) * width * height);
        m_rbuf.attach(m_buf.data(), width, height,
                      int(width * sizeof(typename PixFmt::color_type)));
        m_pixfmt.attach(m_rbuf);
    }

    agg::rendering_buffer& rbuf() { return m_rbuf; }
    PixFmt& pixfmt() { return m_pixfmt; }
};

In the_application, I add a two-element array of pixel_map objects, and override on_init() to initialize them:

pixel_map<agg::pixfmt_bgra64_pre> m_backbuffer[2];

...

virtual void on_init()
{
    unsigned width = rbuf_window().width();
    unsigned height = rbuf_window().height();

    // Create two back buffers: one for the background, one for everything else.
    m_backbuffer[0].create(width, height);
    m_backbuffer[1].create(width, height);
}

So far, so good... but to get render() to compile for both 8-bit and 16-bit formats, I'm going to have to resort to a little template trickery. The first problem is that regular pixel format classes have a single-parameter constructor taking a reference to a rendering buffer, whereas gamma-correcting formats take an additional parameter specifying the gamma lookup table to use. No matter - I'll define a template function make_pixfmt() to return a correctly-initialized pixel format of the specified type. This function can be specialized for gamma-corrected formats:

// Template function to initialize a pixel format.
template<class PixFmt>
PixFmt make_pixfmt(agg::rendering_buffer& rbuf)
{
    return PixFmt(rbuf);
}

// Specialization of make_pixfmt for gamma-corrected formats.
template<>
agg::pixfmt_bgra32_gamma<gamma_type> make_pixfmt(agg::rendering_buffer& rbuf)
{
    return agg::pixfmt_bgra32_gamma<gamma_type>(rbuf, m_gamma_lut);
}
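Just to make the intent concrete, here's a minimal sketch of the kind of helper this enables - render_to() and its body are illustrative assumptions on my part, not the demo's actual render() code:

// Hypothetical sketch: construct any pixel format from a rendering buffer
// without knowing whether it needs a gamma table. render_to() is an
// assumed name, not part of the demo.
template<class PixFmt>
void render_to(agg::rendering_buffer& rbuf)
{
    PixFmt pixf = make_pixfmt<PixFmt>(rbuf);
    agg::renderer_base<PixFmt> rb(pixf);
    rb.clear(typename PixFmt::color_type(agg::rgba(1, 1, 1)));
    // ... rasterize the scene into rb ...
}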

Splendid. Now we can render at 16 bits per channel, but how do we display the results on the screen? Well, we need to convert to 8-bit sRGB. But how? As it happens, AGG does include some facilities for color format conversion tucked away in the include/util directory. Specifically, the color_conv() template function will perform format conversion, given a suitable functor. There is an existing functor color_conv_bgra64_to_bgra32, which can be modified to convert from 16-bit linear RGB to sRGB:

template<class Gamma>
class color_conv_bgra64_to_bgra32_gamma
{
public:
    typedef agg::int8u int8u;
    typedef agg::int16u int16u;

    color_conv_bgra64_to_bgra32_gamma(Gamma& g) { m_gamma = &g; }

    void operator () (int8u* dst, const int8u* src, unsigned width) const
    {
        do
        {
            const int16u* p = reinterpret_cast<const int16u*>(src);
            *dst++ = int8u(m_gamma->inv(p[0])); // 16-bit linear -> 8-bit sRGB
            *dst++ = int8u(m_gamma->inv(p[1]));
            *dst++ = int8u(m_gamma->inv(p[2]));
            *dst++ = int8u(p[3] >> 8);          // alpha is linear: just truncate
            src += 8;                           // advance one 8-byte pixel
        }
        while(--width);
    }

private:
    const Gamma* m_gamma;
};

For this to work properly, we're going to need a full 16-bit reverse lookup table:

typedef agg::gamma_lut<agg::int8u, agg::int16u, 8, 16> gamma_type;
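The second and fourth template arguments give the lookup table a 16-bit high-resolution side: dir() maps 8-bit sRGB up to 16-bit linear, and inv() maps 16-bit linear back down through a 65536-entry inverse table. A quick round-trip illustration (the gamma value of 2.2 here is just an example):

// Illustrative only: round-tripping a value through the lookup table.
gamma_type lut(2.2);                  // example gamma value
agg::int16u linear = lut.dir(128);    // 8-bit sRGB -> 16-bit linear
agg::int8u  srgb   = lut.inv(linear); // 16-bit linear -> 8-bit sRGB (~128)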

Now we're in a position to have a first stab at implementing on_draw() for the 16-bit linear RGB case:

virtual void on_draw()
{
    start_timer();
    if (m_method.cur_item() == 2)
    {
        // Render at 16 bits per channel.
        render(m_backbuffer[0].pixfmt(), m_backbuffer[1].pixfmt());

        // Convert to sRGB.
        agg::color_conv(&rbuf_window(), &m_backbuffer[0].rbuf(),
            color_conv_bgra64_to_bgra32_gamma<gamma_type>(m_gamma_lut));
    }
    else ...

Let's see how this looks:

It looks absolutely terrible! What went wrong? Well, there are a couple of problems. One is that the pixel format is pixfmt_bgra64_pre, which blends premultiplied source pixels with premultiplied destination pixels; therefore, any source colours fed to the pixel format must themselves be premultiplied. In the case of rasterized polygons, this simply means that the polygon fill colour must have its RGB values scaled by its alpha value. (Note that premultiplication is a no-op for fully-opaque colours.) The second issue is that 8-bit sRGB colour values must be linearized when converting to 16-bit. We can take care of all this by adding a few helper functions to the_application:

void apply_gamma_dir(agg::rgba8& pix)
{
    // No-op: 8-bit rendering stays in sRGB space.
}

void apply_gamma_dir(agg::rgba16& pix)
{
    // Linearize 8-bit sRGB component values to 16-bit linear.
    pix.r = m_gamma_lut.dir(pix.r >> 8);
    pix.g = m_gamma_lut.dir(pix.g >> 8);
    pix.b = m_gamma_lut.dir(pix.b >> 8);
}

template<class ColorT>
void munge_color(ColorT& pix)
{
    if (m_method.cur_item() == 2)
    {
        // 16-bit linear RGB: linearize, then premultiply.
        apply_gamma_dir(pix);
        pix.premultiply();
    }
    else if (m_method.cur_item() == 1)
    {
        // Gamma-corrected sRGB: no munging required.
    }
    else
    {
        // Plain sRGB: just premultiply.
        pix.premultiply();
    }
}
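For illustration, here's roughly what a call site looks like - a sketch only, where the rasterizer, scanline and renderer variables (ras, sl, rb) are assumed names rather than the demo's actual ones:

// Hypothetical call site: munge the fill colour, then rasterize with it.
agg::rgba16 fill(agg::rgba(0.8, 0.2, 0.2, 0.5)); // 50% translucent red
munge_color(fill); // linearize and/or premultiply, depending on the mode
agg::render_scanlines_aa_solid(ras, sl, rb, fill);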

In the render() function, each colour must be passed through munge_color() before being used for rasterization. Let's see how things look now:

Things have certainly improved, but it's still not quite right. The bounding box control has turned a sickly shade of green, and the other controls look rather pale as well. What's up with that? It's because the agg::render_ctrl() function uses each control's self-defined colours directly, without adjusting them to suit the current pixel format. This can be fixed by reimplementing render_ctrl() as a member function that calls munge_color() as needed:

// Modified version of agg::render_ctrl that alters the control colors
// to suit the target pixel format.
template<class Rasterizer, class Scanline, class Renderer, class Ctrl>
void render_ctrl(Rasterizer& ras, Scanline& sl, Renderer& r, Ctrl& c)
{
    for(unsigned i = 0; i < c.num_paths(); i++)
    {
        ras.reset();
        ras.add_path(c, i);
        typename Renderer::color_type clr(c.color(i));
        munge_color(clr);
        agg::render_scanlines_aa_solid(ras, sl, r, clr);
    }
}
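The controls are then drawn by calling this member function instead of agg::render_ctrl(), along these lines (again, ras, sl and rb are assumed names):

// Hypothetical call site: the member render_ctrl munges each control's
// colours to suit the current pixel format before rendering.
render_ctrl(ras, sl, rb, m_method);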

Finally, everything looks right:

By way of comparison, here are the results for the 8-bits-per-channel renderings:

To make a "real world" comparison, the uncorrected sRGB rendering is performed using premultiplied alpha. Even then, the gamma-corrected rendering is only about 25% slower (the displayed times were averaged over 100 renderings). The 16-bit rendering is around 10% slower again - partly due to the fact that the final result must be converted to sRGB before being displayed. This is the advantage of "on-the-fly" gamma correction - the 8-bits-per-channel pixel data it produces is ready for display without further processing.

Interestingly, the blur effect is almost indistinguishable between the corrected sRGB and 16-bit renderings. It's very slightly darker in the corrected sRGB rendering, but almost imperceptibly so.

Conclusion

In this mini-series, I've tried to explain what gamma is and the detrimental effect it can have on rendering if not corrected for. I've presented two methods of implementing gamma correction: firstly by continuing to work in 8-bit space and gamma-correcting individual blending operations, and secondly by going the whole nine yards and rendering in 16-bit linear space. The choice, as they say, is yours.

Posted by Jim Barry, 2010-07-05.