Richard Geldreich's Blog

FasTC library


ETC1 texture format visualizations

I've been thinking about how to improve my ETC1 block encoder's quality. What little curiosities lie inside this seemingly simple format?

Hmm: Out of all possible ETC1 subblock colors in 5:5:5 differential mode, how many involve clamping R, G, and/or B to 0 or 255? Turns out, 72% (189704 out of 262144) of the possibilities involve clamping one or more components. That's much more often than I thought!
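Here's a minimal sketch of how to reproduce that count, assuming the standard ETC1 intensity modifier tables and the usual (c5 << 3) | (c5 >> 2) component expansion. It should land at the 72% figure above if the methodology matches:

#include <cstdio>

// Standard ETC1 intensity modifier tables (selector order: -a, -b, b, a).
static const int g_etc1_modifiers[8][4] = {
    {   -8,  -2,  2,   8 }, {  -17,  -5,  5,  17 }, {  -29,  -9,  9,  29 },
    {  -42, -13, 13,  42 }, {  -60, -18, 18,  60 }, {  -80, -24, 24,  80 },
    { -106, -33, 33, 106 }, { -183, -47, 47, 183 }
};

int main() {
    int clamped = 0, total = 0;
    for (int r5 = 0; r5 < 32; r5++)
    for (int g5 = 0; g5 < 32; g5++)
    for (int b5 = 0; b5 < 32; b5++)
    for (int t = 0; t < 8; t++, total++) {
        // Expand each 5-bit component to 8 bits: (c5 << 3) | (c5 >> 2).
        const int c[3] = { (r5 << 3) | (r5 >> 2), (g5 << 3) | (g5 >> 2),
                           (b5 << 3) | (b5 >> 2) };
        bool any_clamp = false;
        // Check all 4 subblock colors (one per 2-bit selector).
        for (int s = 0; s < 4; s++)
            for (int i = 0; i < 3; i++) {
                const int v = c[i] + g_etc1_modifiers[t][s];
                if ((v < 0) || (v > 255)) any_clamp = true;
            }
        clamped += any_clamp;
    }
    printf("%d of %d (%.1f%%) combinations clamp\n",
           clamped, total, 100.0f * clamped / total);
    return 0;
}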

Here's a bitmap visualizing when the clamping occurs on any of the 4 block colors encoded by each 5:5:5 base color/3-bit intensity table combination. White pixels signify that one or more color components had to be clamped, and black signifies no clamping:


The basic assumption that each ETC1 subblock's four colors lie nicely spread out along a single colorspace line isn't accurate, due to [0,255] clamping. So any optimization technique written with this assumption in mind could be missing better solutions. This also impacts converting ETC1 to other formats like DXT1, because both endpoints of each colorspace line in DXT1 are separately encoded. Is this really a big deal? I dunno, but it's good to know.

Anyhow, here's a visualization of all possible subblock colors. First, there are 4 images, one for each subblock color [0,3]. The 2-bit ETC1 selectors basically select a color from one of these images.

Within an image, there are 8 rows, one for each of the ETC1 intensity tables. Within a row, there are 32 small "tiles" for blue, and within each little 32x32 tile is red (X) and green (Y).





Visualizing ETC1 block encoding error as a 4D function

Given a particular 4x4 pixel block, what does the error of all possible ETC1 5:5:5 base color+3-bit intensity encodings look like? The resulting 4D visualization could inspire better optimization algorithms.

To compute these images, I created an ETC1 block in differential mode (5:5:5 base color with a 3:3:3 delta), set the base color to R,G,B, the diff color to (0,0,0), and set both subblock intensity table values to the same index from 0-7. I then encoded the source pixels (by finding the optimal selectors for each pixel), decoded them, and computed the overall block error (as perceptually R,G,B weighted color distance).
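For reference, the per-pixel error was accumulated with a weighted color distance along these lines - a sketch using illustrative REC 601-style weights (the actual perceptual weights I use may differ):

// Perceptually weighted squared color distance between two pixels.
// The 299/587/114 weights are illustrative (REC 601 luma coefficients).
static inline unsigned int weighted_color_dist(int r0, int g0, int b0,
                                               int r1, int g1, int b1) {
    const int dr = r0 - r1, dg = g0 - g1, db = b0 - b1;
    return 299U * (dr * dr) + 587U * (dg * dg) + 114U * (db * db);
}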

These visualizations are linear, where the brightest value (255) is max error, black is 0 error. The blocks used to compute each visualization are here too:










Finding the "best" block color+intensity table index to use in a subblock is basically a 4D search through functions like above. Hill climbing optimization seems useful, except for those pesky local minimums. For fun, I've already tried random-restart hill climbing, and it works, but there's got to be a better way.

rg_etc1 starts at the block's average color and scans outwards along the RGB axes, trying to find better colors. It always tries all 8 intensity tables every time it tries a candidate color (which in retrospect seems wildly inefficient, but hey I wrote it over a weekend years ago). It also has several refinement steps. One of them factors in the selectors of the best color found so far, in an attempt to improve the current block color. rg_etc1 ran circles around Mali's reference encoder, from what I remember, which was my goal.

ETC1 with 3D/4D random-restart hill climbing

For fun, I implemented a full ETC1 block encoder using random-restart hill climbing, to see how it behaves compared to my current custom optimizer (the one in rg_etc1). This method works surprisingly well and is quite simple. (Note I'm switching to luma PSNR, because I've been using perceptually weighted color distance. My previous posts used average RGB PSNR.)

The number of attempts per block is fixed. The first 4D hill climb always starts at the subblock's average color, with an intensity table index of 3. The second 4D hill climb starts at a random color/intensity. (In differential mode, the 2nd subblock's hill climb position is constrained to lie near the first one, otherwise we can't code it.) Eventually, it switches from 4D to 3D hill climbing, by randomly climbing only within the best found intensity plane.
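Here's a sketch of the core 4D climb, under some assumptions: block_error() is a hypothetical callback that picks optimal selectors for the candidate base color/table and returns the subblock's error, and the random restarts and the 3D refinement stage are omitted:

#include <cstdint>

struct State { int r, g, b, t; }; // 5:5:5 base color + 3-bit intensity table

template <typename ErrFunc>
static uint64_t hill_climb_4d(State &s, ErrFunc block_error) {
    uint64_t best_err = block_error(s);
    bool improved = true;
    while (improved) {
        improved = false;
        // Try +-1 steps along each of the 4 axes, keeping any improvement.
        for (int axis = 0; axis < 4; axis++)
            for (int dir = -1; dir <= 1; dir += 2) {
                State n = s;
                int *comp[4] = { &n.r, &n.g, &n.b, &n.t };
                *comp[axis] += dir;
                const int hi = (axis == 3) ? 7 : 31;
                if ((*comp[axis] < 0) || (*comp[axis] > hi))
                    continue;
                const uint64_t e = block_error(n);
                if (e < best_err) { best_err = e; s = n; improved = true; }
            }
    }
    return best_err;
}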

The nearly-best ETC1 encoding (using rg_etc1 - not hill climbing) was 38.053 dB:
Y: Error: Max:  38, Mean: 2.181, MSE: 10.181, RMSE: 3.191, PSNR: 38.053, SSIM: 0.983632

- 1 hill climb, 33.998 dB:


Y: Error: Max:  35, Mean: 3.943, MSE: 25.901, RMSE: 5.089, PSNR: 33.998, SSIM: 0.933979

- 2 hill climbs, 37.808 dB:



Y: Error: Max:  33, Mean: 2.281, MSE: 10.770, RMSE: 3.282, PSNR: 37.808, SSIM: 0.980324

- 4 hill climbs, 37.818 dB:

 Y: Error: Max:  33, Mean: 2.280, MSE: 10.748, RMSE: 3.278, PSNR: 37.818, SSIM: 0.980280

- 16 hill climbs, 37.919 dB:

Y: Error: Max:  38, Mean: 2.241, MSE: 10.499, RMSE: 3.240, PSNR: 37.919, SSIM: 0.981631

That 2nd random 4D hill climb helps a lot. Quality quickly plateaus, however, at least on this image, and subsequent climbs don't add much. Very interesting to me: even just 4 climbs nearly match the quality of my hand-tuned ETC1 optimizer.

Quick etcpak quality test

etcpak is a useful and really fast ETC1 (and partial ETC2) texture compressor. There's no such thing as a free lunch, however, and there are some tradeoffs involved here. Quick example:

Original (kodim03):


crnlib in ETC1 uber mode (8.067 seconds):


RGB: Error: Max:  88, Mean: 2.086, MSE: 9.770, RMSE: 3.126, PSNR: 38.232, SSIM: 0.982703
Y: Error: Max:  34, Mean: 1.304, MSE: 3.750, RMSE: 1.936, PSNR: 42.391, SSIM: 0.982703

etcpak, ETC1 mode only (.006 seconds):


RGB: Error: Max:  80, Mean: 2.492, MSE: 12.757, RMSE: 3.572, PSNR: 37.073, SSIM: 0.980072
Y: Error: Max:  49, Mean: 1.494, MSE: 4.996, RMSE: 2.235, PSNR: 41.144, SSIM: 0.980072

Note that I integrated etcpak directly into my project and used its BlockData class. This thing is *fast*, even without threading!

crnlib has several lower quality settings that are much faster (and still higher quality than etcpak), but nowhere near the speed of etcpak. I've not been focused on pure speed, but on quality and unique features like RDO and intermediate formats like .CRN.

I think the primary value of etcpak is its high performance and relatively compact code size (especially for an ETC2-aware compressor). On many textures/images it'll look perfectly fine. Next up is ETC2Comp, limited to ETC1 mode.

On 30Hz console games

That framerate feels incredibly low to me now. I've worked on 60Hz and 30Hz console titles, and the optimization efforts required felt very different. Keeping a smooth, hypnotic 60Hz was sometimes extremely tricky. Now, with VR, 30Hz seems downright antiquated.

Let's evaluate the current state of ETC1/2 compression libraries

For regular block encoders (not RDO or crunch-style systems), I think what I need to do is to plot this like I would a lossless Pareto Frontier, with the Y axis being some measure of quality and the X axis being encoding speed across a wide range of test textures. Perhaps I can normalize the quality metric achieved by each encoder at its various settings vs. the highest achievable quality, for each image.

As far as I can tell, nobody's beating etcpak's quality at its performance point. It's going to be fascinating to compare etcpak vs. ETC2Comp. Let's see how these two compare for pure ETC1 encoding, which is available across a huge range of devices. I'll compare against crnlib's ETC1 block encoder in multithreaded mode, which was released before either etcpak or ETC2Comp.

ETC1 compressor quality comparison on 1,572 textures

Here's a quick quality comparison of etc2comp, etcpak, and my ETC1 encoder (which is directly derived from rg_etc1 but modified to support perceptual colorspace metrics).

etc2comp was limited to ETC1 mode, with REC709 error metrics, using an "effort" value of 1.0. Looking at the code and comments, this mode favors luma accuracy over chroma, and red over blue.

I encoded 1,572 PNG textures from the corpus I used to build and test crunch. I decoded these images using rg_etc1's unpacker, then computed the PSNR error on the luma and wrote the data to a big .CSV file.

In this graph, everything is sorted by the PSNR achieved using basislib (my ETC1 encoder), because it generally has the highest PSNR of the three encoders.


My encoder in "uber" mode is the best most of the time, but there are a few exceptions which I'm going to investigate. Uber mode is much slower than the other encoders, but that's not the point of this test. All I care about is the max. achievable quality of each encoder.

As expected, etcpak's quality is lowest, and it has a lot of quality dips. On the flip side, it's extremely fast. etcpak's internal design is quite clever and well optimized, and it would be interesting if the author included a higher quality mode.

etc2comp's quality in pure ETC1 mode is a little lower than my lib at its "normal" setting, and noticeably lower than its "uber" setting. (I only show the uber setting's quality above to keep the graph simple.) etc2comp's speed is very good, a bit faster than my encoder in "normal" mode. The code looks clean, and it compiles without a hitch in VS2015, which was nice.

For etc2comp, I would really like to see its ETC1 mode's quality improved. I look forward to trying it in ETC2 mode, to see how much the extra block modes help. Perhaps a really high quality ETC1 encoder could beat an ETC1/2 encoder (that doesn't try as hard) some of the time. (Maybe - ETC2 has some really nice looking modes in there!)

I've generated several Pareto Frontier scattergraphs, showing quality vs. encoding throughput, and etc2comp's ETC1 mode is in the middle from a quality and performance perspective. (Which is good! It's a very practical, production-ready encoder.) I'll post this once I have more accurate performance data for etcpak.


Comparison of three ETC1 and ETC2 block encoders

Update 9/19: John Brooks at Blue Shift (BSI) and I investigated this on a single image (kodim24.png), and after a bunch of back and forth email debugging I noticed that the "effort" parameter the benchmark code passed into etc2comp wasn't in the same range as what BSI's tool passes in. It's not [0,1], it's [0,100]. That looks like it explains etc2comp's weak performance here. I'm re-running the benchmark tonight at various effort levels, and given what I've seen on kodim24, etc2comp should perform *much* better now. It would be awesome if etc2comp's compression function parameters were documented here, and I recommend that "effort" be changed to an enum with 10-20 values vs. a float with an odd range:

https://github.com/google/etc2comp/blob/master/EtcLib/Etc/Etc.h

This post shows that compressed texture quality ultimately depends on the library you use to encode your textures. It's possible for an ETC1-only compressor to output textures that actually look better than an ETC2-capable compressor's, if that ETC2 compressor's handling of the ETC1 block types is lacking, or if the ETC2 encoder doesn't support all the ETC2 block types, like etcpak. Good software matters here.

This first graph compares all three encoders in ETC1 mode, using RGB (average) PSNR. (See my previous post for Luma PSNR.) Note that these RGB PSNR's are computed the same way as ImageMagick's "compare" tool. etc2comp and etcpak's encoders just aren't very strong in ETC1 mode:


Now the next two graphs are really interesting: I enabled ETC2 support in etc2comp and etcpak (basislib doesn't support ETC2 block encoding yet, just decoding). Notice that, even with ETC2 support enabled, basislib in ETC1 mode is still doing pretty well much of the time. It's kinda cool to watch an ETC1 encoder beat an ETC2 encoder a lot of the time. (I can't wait to implement ETC2 support on top of my ETC1 encoder!)


Notice that at the lower PSNRs, etc2comp shines vs. basislib ETC1. I'm guessing this is where the new ETC2 block types (T, H, and planar) really help. As you go up to higher PSNRs, etc2comp gets weaker and weaker compared to basislib's ETC1 encoder; my guess is it's being held back by its relatively weak support for ETC1. (These are just guesses - I don't have the time to dive in and verify these assertions at the moment.)

Here's basislib ETC1 vs. the others in ETC2 mode, after enabling perceptual colorspace metrics in etc2comp and basislib, and graphing using luma PSNR:


I tend to relentlessly optimize my DXT/ETC/etc. block encoders for perceptual error, not RGB error, and maybe this is reflected in this graph.

Anyhow, I think etc2comp is relying too much on the ETC2 block types. So basically, etc2comp's ETC1 encoder (which usually handles the majority of blocks) isn't taking full advantage of what's possible in ETC1 yet.

How to compute PSNR (from an old Berkeley course)

This was part of Berkeley's CS294 Fall '97 courseware on "Multimedia Systems and Applications", but it got moved and disappeared. It was a useful little page so I'm duplicating it here for reference purposes:

https://web.archive.org/web/20090418023748/http://bmrc.berkeley.edu/courseware/cs294/fall97/assignment/psnr.html

https://web.archive.org/web/20090414211107/http://bmrc.berkeley.edu/courseware/cs294/fall97/index.html


Image Quality Computation


Signal-to-noise (SNR) measures are estimates of the quality of a reconstructed image compared with an original image. The basic idea is to compute a single number that reflects the quality of the reconstructed image. Reconstructed images with higher metrics are judged better. In fact, traditional SNR measures do not equate with human subjective perception. Several research groups are working on perceptual measures, but for now we will use the signal-to-noise measures because they are easier to compute. Just remember that higher measures do not always mean better quality.

The actual metric we will compute is the peak signal-to-reconstructed image measure which is called PSNR. Assume we are given a source image f(i,j) that contains N by N pixels and a reconstructed image F(i,j) where F is reconstructed by decoding the encoded version of f(i,j). Error metrics are computed on the luminance signal only so the pixel values f(i,j) range between black (0) and white (255).

First you compute the mean squared error (MSE) of the reconstructed image as follows:

MSE = (1/N^2) * Σ_i Σ_j [f(i,j) - F(i,j)]^2

The summation is over all pixels. The root mean squared error (RMSE) is the square root of MSE. Some formulations use N rather than N^2 in the denominator for MSE.

PSNR in decibels (dB) is computed by using:

PSNR = 20 * log10(255 / RMSE)
Typical PSNR values range between 20 and 40. They are usually reported to two decimal points (e.g., 25.47). The actual value is not meaningful, but the comparison between two values for different reconstructed images gives one measure of quality. The MPEG committee used an informal threshold of 0.5 dB PSNR to decide whether to incorporate a coding optimization because they believed that an improvement of that magnitude would be visible.

Some definitions of PSNR use 255^2/MSE rather than 255/RMSE. Either formulation will work because we are interested in the relative comparison, not the absolute values. For our assignments we will use the definition given above.
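Here's a minimal sketch of the whole computation for two 8-bit luminance images stored as flat arrays of n pixels:

#include <cmath>
#include <cstddef>
#include <cstdint>

static double compute_psnr(const uint8_t *f, const uint8_t *F, size_t n) {
    double sum_sq = 0.0;
    for (size_t i = 0; i < n; i++) {
        const double d = (double)f[i] - (double)F[i];
        sum_sq += d * d;
    }
    const double mse = sum_sq / (double)n; // RMSE = sqrt(MSE)
    if (mse == 0.0) return 999.0;          // identical images; PSNR is infinite
    return 20.0 * std::log10(255.0 / std::sqrt(mse));
}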

The other important technique for displaying errors is to construct an error image which shows the pixel-by-pixel errors. The simplest computation of this image is to take the difference between the reconstructed and original pixels. These images are hard to see because zero difference is black and most errors are small numbers which are shades of black. The typical construction of the error image multiplies the difference by a constant to increase the visible difference and translates the entire image to a gray level. The computation is:

E(i,j) = 2 * [f(i,j) - F(i,j)] + 128
You can adjust the constant (2) or the translation (128) to change the image. Some people use white (255) to signify no error and difference from white as an error which means that darker pixels are bigger errors.
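A quick sketch of that per-pixel computation, with the result clamped to [0,255]:

#include <cstdint>

// Scale the signed difference by 2 and center it at mid-gray (128).
// The constant and translation match the formula above.
static inline uint8_t error_pixel(uint8_t orig, uint8_t recon) {
    int e = 2 * ((int)orig - (int)recon) + 128;
    if (e < 0) e = 0; else if (e > 255) e = 255;
    return (uint8_t)e;
}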


References

A.N. Netravali and B.G. Haskell, Digital Pictures: Representation, Compression, and Standards (2nd Ed), Plenum Press, New York, NY (1995).

M. Rabbani and P.W. Jones, Digital Image Compression Techniques, Vol TT7, SPIE Optical Engineering Press, Bellevue, Washington (1991).

Important note about PSNR

Yes, I know PSNR (and RMSE, etc.) is not an ideal quality metric for image and video compression. Keep in mind there is a large diversity of data stored as textures in modern games and applications: Albedo maps, specular maps, gloss maps, normal maps, light maps, various engine-specific multichannel control maps, 2D sprites, transparency (alpha) maps, satellite photos, cubemaps, etc. And let's not even talk about how anisotropic filtering, shading, normal mapping, shadowing, etc. impacts perceived quality once these textures are mapped onto 3D meshes.

RGB and Luma PSNR are simple and, in my experience writing and tuning crunch, reliable enough for practical usage. I'm not writing an image or video compressor, I'm writing a texture compressor.

Let's try DXT1 vs. ETC1/2 benchmarks

John Brooks at Blue Shift brought up this idea earlier. I think it's a great idea! I love good old DXT1 (or "BC1" as some call it). Let's see how ETC2 in particular compares against my old favorite.

ETC1/2 vs. DXT1 texture compression benchmark

I'm using the same testing tool, dataset and methodology explained in my ETC1/2 benchmark. In this benchmark, I've added in my vanilla (non-RDO/CRN) DXT1 block encoder (really, its DXT1 endpoint optimizer class), which is derived from crunch's.

In 2009 my DXT1 encoder was as good or better than all the available DXT1 compressors I tested it against, such as squish, ATI Compressonator, NVidia's original and old NVDXT library, and D3DX's. I'm not sure how much DXT1 compression has changed since then. I can also throw in other DXT1 encoders if there's interest.

RGB error metrics:


Here's just ETC2 vs. DXT1:


This is fascinating!

Next up: BC7.

About the HW1 codebase having "too many globals"

First off, this project was a death march. What Paul Bettner (formerly Ensemble, now at Playful Corp) publicly said years ago is true: Ensemble Studios was addicted to crunching. I lived, breathed, and slept that codebase. We had demos every 4-8 weeks or something. This time in my life was beyond intense. I totally understand why Microsoft shut us down, because we really needed to be put out of our collective misery.

I was more or less addicted to crunch at Ensemble. Working on all those demo milestones was a 3 year adventure. That team was so amazing, and we all got along so well. I could never do it again like that unless lives depended on it.

Anyhow, the engine/tools team on that project built a low-level, very 360-specific "game OS" in C++ for the simulation team. Why did we build a whole new engine from the ground up? Because the Age3 engine just completely melted down after Billy Khan and I ported it to 360. (That was 4 months of the most painful, mind numbing full-time coding, porting and debugging I've ever done.) It ran at ~7 FPS, on a single thread, and took 3-5 minutes to load.

The HW1 engine consisted of many global managers, very heavy use of synchronous/asynchronous cross-thread messaging, and lightweight platform-specific wrappers built on top of the Win32 and D3D API's. The renderer, animation, sound, streaming, decompression, networking, and overlapped I/O systems were heavily multithreaded. (Overlapped I/O actually worked properly on Xbox 360's OS.) We used 360-specific D3D9 extensions that allowed us to compose command buffers from multiple threads. There are lots of other cool things we did on HW1 that I'll cover here on rainy days.

The original idea for using message passing for most of our parallelism in our next engine was from Bill Jackson, now CCO at Boss Fight Entertainment in Dallas. I implemented it and refined the idea before I really understood how useful it was. It was inspired by message passing and concurrency in Erlang. It worked well and was really fun to use, but was hard to debug. Something like 5,000 intra and inter thread messages were involved in loading a map in the background while Scaleform UI was playing back on its own core. We also had a simple job system, but most of our coherency was implemented using message passing. (See this article on a similar Message Passing system by Nicholas Vining.)
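For flavor, here's a tiny sketch of the basic shape of such a queue (illustrative only - the real system was 360-specific, supported both synchronous and asynchronous delivery, and predates C++11):

#include <condition_variable>
#include <cstdint>
#include <deque>
#include <mutex>

struct Message {
    uint32_t id;   // message type
    void *payload; // sender-owned data
};

class MessageQueue {
public:
    void send(const Message &m) {
        { std::lock_guard<std::mutex> lock(m_mutex); m_queue.push_back(m); }
        m_cond.notify_one();
    }
    Message receive() { // blocking receive; a real system would also poll
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cond.wait(lock, [this] { return !m_queue.empty(); });
        Message m = m_queue.front();
        m_queue.pop_front();
        return m;
    }
private:
    std::mutex m_mutex;
    std::condition_variable m_cond;
    std::deque<Message> m_queue;
};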

We tried to follow our expression of the Unix philosophy on this game: Lots of little objects, tools, and services interacting in an ecosystem. Entire "game OS" services were designed to only send/receive and process messages on particular 360 CPU cores.

My manager and I created this powerful, highly abstracted virtual file I/O system with streaming support. The entire game (except the 360 executable) could quickly load over the network using TCP/IP, or off the hard drive or DVD using package files. Hot reloading was supported over the network, so artists could watch their textures, models, animations, terrain, and lights change in semi real-time.

Things like singletons made no sense for these managers: each service abstracted away one specific global piece of hardware or global C API, so why bother? I've been told the C-based Halo codebases "followed not strictly the same philosophy, but of the same mind".

This codebase was very advanced for its time. It made the next series of codebases I learned and enhanced feel 5-10 years behind the times. I don't talk about it much because this entire period of my life was so intense.

SSIM

Alright, I'm implementing SSIM. There are something like 30 different implementations on the web, and most either rely on huge dependencies like OpenCV or have crappy licenses. So which one do I compare mine to? The situation with SSIM seems worse than PSNR: there are just so many variations on how to compute this thing.

I'm choosing this implementation for comparison purposes, because I already have the fundamental image processing primitives handy:

http://mehdi.rabah.free.fr/SSIM/SSIM.cpp

On Multi-Scale SSIM: I've been given conflicting information on whether or not this is actually useful to me. Let's first try regular SSIM.
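As a sanity check, here's a deliberately simplified global SSIM sketch - a single window covering the whole image. Real implementations, including the one linked above, compute SSIM over small sliding windows (often Gaussian-weighted) and average the per-window results:

#include <cstddef>
#include <cstdint>

static double simple_ssim(const uint8_t *x, const uint8_t *y, size_t n) {
    const double C1 = 6.5025, C2 = 58.5225; // (0.01*255)^2, (0.03*255)^2
    double mx = 0, my = 0;
    for (size_t i = 0; i < n; i++) { mx += x[i]; my += y[i]; }
    mx /= (double)n; my /= (double)n;
    double vx = 0, vy = 0, cov = 0;
    for (size_t i = 0; i < n; i++) {
        vx  += (x[i] - mx) * (x[i] - mx);
        vy  += (y[i] - my) * (y[i] - my);
        cov += (x[i] - mx) * (y[i] - my);
    }
    vx /= (double)(n - 1); vy /= (double)(n - 1); cov /= (double)(n - 1);
    return ((2.0 * mx * my + C1) * (2.0 * cov + C2)) /
           ((mx * mx + my * my + C1) * (vx + vy + C2));
}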

For testing, I compared my implementation (which uses my own float image processing code) against the linked SSIM.cpp above (which uses doubles and OpenCV). To generate some distorted test images, I loaded kodim18 into Paint Shop Pro X8 and saved it at various JPEG quality levels from 1-99. I then ran the two tools and graphed the results in Excel:




The X axis represents the various quality levels, from highest to lowest quality. The 12 PSP JPEG quality levels tested are 1, 5, 10, 20, 30, 40, 50, 60, 70, 80, 90, and 99. Y axis is SSIM.

Thanks to John Brooks at Blue Shift for feedback on this post.


Here's a useful PCA paper I found while writing HW1's renderer

I used this technique in a real-time GPU DXT1 encoder I wrote around 10 years ago:

"Candid Covariance-Free Incremental Principal Component Analysis"
http://www.cse.msu.edu/~weng/research/CCIPCApami.pdf

With this approach you can compute a decent-enough PCA in a few lines of shader code.
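Here's a rough CPU sketch of the paper's covariance-free update, specialized to estimating the dominant color axis of a block (the paper's amnesic averaging term is omitted for simplicity):

#include <cmath>

struct Vec3 { float x, y, z; };

// Incremental first-PC estimate over mean-centered samples:
// v <- (i/(i+1)) * v + (1/(i+1)) * u * (u . v / |v|)
static Vec3 ccipca_first_axis(const Vec3 *pixels, int n, const Vec3 &mean) {
    Vec3 v = { 0.577f, 0.577f, 0.577f }; // initial guess: the gray axis
    for (int i = 0; i < n; i++) {
        const Vec3 u = { pixels[i].x - mean.x, pixels[i].y - mean.y,
                         pixels[i].z - mean.z };
        const float len = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
        if (len == 0.0f) continue;
        const float proj = (u.x * v.x + u.y * v.y + u.z * v.z) / len;
        const float a = (float)i / (float)(i + 1), b = 1.0f / (float)(i + 1);
        v.x = a * v.x + b * u.x * proj;
        v.y = a * v.y + b * u.y * proj;
        v.z = a * v.z + b * u.z * proj;
    }
    return v; // dominant color axis (not normalized)
}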

HW1 used this encoder to compress all of the GPU splatted terrain textures into a GPU texture cache. One of my coworkers, Colt McAnlis, designed and wrote the game's amazing terrain texture caching system.

Image error metrics

While developing and refining crunch I used a matrix of statistics like this:

RGB Total   Error: Max:  73, Mean: 17.404, MSE: 176.834, RMSE: 13.298, PSNR: 25.655, SSIM: 0.000000
RGB Average Error: Max:  73, Mean: 5.801, MSE: 58.945, RMSE: 7.678, PSNR: 30.426, SSIM: 0.907993
Luma        Error: Max:  64, Mean: 4.640, MSE: 37.593, RMSE: 6.131, PSNR: 32.380, SSIM: 0.945000
Red         Error: Max:  69, Mean: 5.387, MSE: 52.239, RMSE: 7.228, PSNR: 30.951, SSIM: 0.921643
Green       Error: Max:  70, Mean: 5.052, MSE: 48.298, RMSE: 6.950, PSNR: 31.291, SSIM: 0.934051
Blue        Error: Max:  73, Mean: 6.966, MSE: 76.296, RMSE: 8.735, PSNR: 29.306, SSIM: 0.868285

I computed these stats from a PNG image uploaded by @dougallj showing the progress he's been making on his experimental ETC1 encoder with kodim18, originally from here:


The code that computes this stuff is actually used by the DXT1 front-end to determine how the 8x8 "macroblocks" should be tiled.

The per-channel stuff is useful for debugging, and for tuning the encoder's perceptual RGB weights (which are only used when the compressor is in perceptual mode). Per-channel stats are also useful when trying to get a rough idea of what weights a closed-source block encoder uses.

More on SSIM

This paper is referenced in the SSIM article on Wikipedia:

"A comprehensive assessment of the structural similarity index"
http://link.springer.com/article/10.1007/s11760-009-0144-1
"In this paper, it is shown, both empirically and analytically, that the index is directly related to the conventional, and often unreliable, mean squared error. In the first evaluation, the two metrics are statistically compared with one another. Then, in the second, a pair of functions that algebraically connects the two is derived. These results suggest a much closer relationship between the structural similarity index and mean squared error."
"This research, however, appears to be the first to directly consider the statistical relationships between the two methods. As well, this work develops a pair of mathematical functions that directly link the two. Given these findings, one is left to question whether the structural similarity index is ready for widespread adoption."
Interesting! I get the feeling there's more to SSIM than meets the eye. Unfortunately, this paper is behind a paywall. Another quote from the paper:
"These findings suggest a reasonably significant level of correlation between the SSIM and MSE. Values range from r = 0.6364 to r = 1.0000, with an average of r = 0.9116 and a variance of 0.007. An average this large, along with a small variance, suggests that most of the correlations are decidedly significant. Clearly, when ordering coded images, the SSIM and MSE often choose similar arrangements. Results such as this are likely a sign of a deeper relationship between the two methods."
Hmm, okay. So MSE and SSIM are highly correlated. The paper even has simple algorithms to convert between MSE<->SSIM. Perhaps I could use these algorithms to help optimize my SSIM code. (Just joking.) From the conclusion:
"Collectively, these findings suggest that the performance of the SSIM is perhaps much closer to that of the MSE than some might claim. Consequently, one is left to question the legitimacy of many of the applications of the SSIM."
Got it. Here's another interesting paper, this one not behind a paywall:

"Mystery behind similarity measures MSE and SSIM"
https://pdfs.semanticscholar.org/8a92/541e46fc4b8237c4e611401d601c8ecc6893.pdf

Some quotes:
"We see that it is based on the same sample moments and correlation coefficient as MSE. So this is the first observation/property or mystery revealed about MSE and SSIM: both measures are composed of the same parameters which are only combined in a different way."
"So the third observation for SSIM is its instability around zero point (0,0) and the fourth one – it can be used only for data of the same sign. The authors of SSIM solve these problems by introducing small constants and restricting the usage to non-negative data only, respectively."
"The fifth observation for Dice measure and thus for SSIM too is that it depends on the absolute values of input parameters. First, it is insensitive at all if one of the parameters is equal 0. Secondly, its sensitivity is decreasing by the increase of absolute parameter values."
Hmm, none of that sounds great to me. They go on to introduce their own metric they call CMSC, and claim "all proposed measures are free of drawbacks of MSE and SSIM and thus are more suitable as objective similarity/quality measures not only for the images but any signals."

John Brooks at Blue Shift experimented with using SSIM in his new ETC1/2 encoder, etc2comp. In a conversation about SSIM, he said that:
"It [SSIM] becomes insensitive in high-contrast areas. SSIM is all about matching contrast & structure. But Block Truncation Coding by its nature is increasing contrast because it posterizes color transitions to 4 selector values. This made the encoder freak out and try to reduce contrast to compensate, making the encoding look crappy. I think it might be the right tool for high-level jobs, but was a poor tool for driving low-level encoder behavior."
"BTC trades 16 shades for 4 which means sharper transitions and more contrast when measured against the original. It also usually means less structure than the original due to posterizing 16-to-4. But neither artifact can be controlled by the encoder as they are a result of the encoding, so it's very hard to navigate the encoding search space when SSIM is so outside its design parameters."
Sounds pretty reasonable to me. I'm going to be doing some testing using an ETC1 encoder optimized for SSIM very soon. Let's see what happens.

How to use crunch's GPU block encoder test vector generator

The -corpus_gen option selects a different mode of operation from crunch's usual texture file conversion role. It causes the tool to crawl through a directory and load every .PNG file there. It then randomly selects a percentage of the 4x4 pixel blocks from each image and appends the results into one or more 4096x4096 output images. These output images can then be used as test vectors to compare different block encoders.

crunch -corpus_gen -deep .035 -width 4096 -height 4096 -in J:\dev\test_images\*.png

You can specify multiple -in arguments, and -in @file.txt loads a textual listing file of files/directories to load or scan.

The -corpus_test option can be used to compare the different DXT encoders supported by crunch, using images generated using -corpus_gen.

Here's a very zoomed in example from the test vector generator:



Notice how the blocks are sorted, using the sum of the R, G, and B standard deviations as the key.
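The key itself is cheap to compute; a sketch for one 4x4 block:

#include <cmath>
#include <cstdint>

// Sum of the per-channel standard deviations of a block's 16 pixels.
static float block_sort_key(const uint8_t pixels[16][3]) {
    float key = 0.0f;
    for (int c = 0; c < 3; c++) {
        float sum = 0.0f, sum_sq = 0.0f;
        for (int i = 0; i < 16; i++) {
            sum += pixels[i][c];
            sum_sq += (float)pixels[i][c] * pixels[i][c];
        }
        const float mean = sum / 16.0f;
        const float var = sum_sq / 16.0f - mean * mean;
        key += sqrtf((var > 0.0f) ? var : 0.0f);
    }
    return key;
}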

An interesting ETC1/2 encoding test vector

Here's the 4x4 test vector image (zoomed in 32X for ease of visibility), provided to me by John Brooks and Victor Reynolds at Blue Shift:

Red pixel: 255,0,0
Blue pixel: 0,0,255

Seems simple enough, right? Here's what happens with the various encoders (in non-perceptual mode if the encoder supports the flag), using up-to-date versions from early last week, and non-perceptual RGB avg. metrics for both PSNR and SSIM:

etcpak (PSNR: 15.612, SSIM: 0.265737):

Red pixel: 93,60,93
Blue pixel: 51,18,51

etc2comp ETC1 (PSNR: 17.471, SSIM: 0.372446):


Red pixel: 111,60,60
Blue pixel: 60,60,111

Intel ISPC (PSNR: 24.968, SSIM: 0.587142):


Red pixel: 234,47,47
Blue pixel: 47,47,234

basislib_etc1 from yesterday (PSNR: 19.987, SSIM: 0.511227):

Red pixel: 149,47,47
Blue pixel: 47,47,149

etc2comp ETC2 (PSNR: 19.779, SSIM: 0.517508):



Red pixel: 255, 0, 0
Blue pixel: 64,64,98

This is an example of a well-tuned ETC1 encoder (Intel's) holding its own vs. etc2comp in ETC2 mode.

Want a little challenge? Try to figure out how Intel's encoder produced the best output.

John Brooks, the lead on etc2comp, told me that BSI is working with that test image because it's a known low-quality encoding pattern for etc2comp. It wasn't in their test corpus, so the PSNRs of 17 and 19 should improve with future etc2comp iterations.

I've improved basislib's handling of this test vector, but the results now need an optimization pass. I've prototyped a version of squish's total ordering method in ETC1, by applying the equations in the remarks in rg_etc1.cpp's code. Amazingly, it was competitive with rg_etc1's current algorithm on quality on my first try of the new method, but it's slower.
