So, I'm not sure how to start...
Recently I've felt like comparing available hqx implementations.
Those that I've managed to find were the original one, the slightly tweaked (and most cargo-culted; after all, I've fallen for it too) one from Google Code, ffmpeg's conversion of it, and a shader one (the GLSL version has a trivial typo, but other than that it works).
As I was comparing the ffmpeg one with the shader, I learned that they aren't equivalent: the shader does the float <-> integer conversion correctly, ffmpeg not quite.
So, I've looked at the original again...
Well, looking at it, what it calls YUV isn't really YUV.
As some of you here have experience with 16bit color, do you have any idea what the original author was trying to get via this conversion?
{
    Y = (r + g + b) >> 2;          /* average over 4, not 3 - tops out at 191 */
    u = 128 + ((r - b) >> 2);
    v = 128 + ((-r + 2*g - b) >> 3);
}
The above doesn't even account for the Interp functions, which are weighted sums that also don't do the rounding quite right...