Here are two visualizations of the overall DXT1 encoding error introduced by using this table, assuming each selector is used equally (which is not always true). This is the lookup table referred to in my previous post.
Each small 32x32 pixel tile in this image visualizes an R,G slice of the 3D lattice. There are 32 tiles for B (left to right) and 8 rows overall; the first row of tiles is for ETC1 intensity table 0, the second for table 1, and so on.
First visualization, where the max error in each individual tile is scaled to white:
Second visualization, where the max error is scaled to white relative to the single largest error across all tiles:
Hmm - the last row (representing ETC1 intensity table 7) is approximated the worst in DXT1.
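For reference, here's a minimal sketch of how these two composite images can be laid out and normalized. The `dxt1_error` function below is a hypothetical placeholder: the real values come from the precomputed lookup table described in the previous post, which isn't reproduced here.

```python
import numpy as np

# Hypothetical stand-in for the DXT1 approximation error of an ETC1 base
# color (r, g, b each 0..31 on the 5-bit lattice) under intensity table t.
# Placeholder only -- the real errors come from the precomputed lookup table.
def dxt1_error(t, r, g, b):
    return float((t + 1) * ((r ^ g) + (g ^ b) + 1))

# Composite image layout: 8 rows (one per ETC1 intensity table), 32 tiles
# left to right (one per 5-bit B value), each tile a 32x32 R,G slice.
img = np.zeros((8 * 32, 32 * 32), dtype=np.float64)
for t in range(8):
    for b in range(32):
        for g in range(32):
            for r in range(32):
                img[t * 32 + g, b * 32 + r] = dxt1_error(t, r, g, b)

# First visualization: each tile is scaled so its own max error maps to white.
per_tile = img.copy()
for t in range(8):
    for b in range(32):
        tile = per_tile[t * 32:(t + 1) * 32, b * 32:(b + 1) * 32]
        m = tile.max()
        if m > 0:
            tile /= m

# Second visualization: one global scale, so tiles are comparable and the
# single largest error over all tiles maps to white.
global_norm = img / img.max()
```

The per-tile normalization shows the error *structure* inside each slice, while the global normalization makes the rows comparable to each other, which is what reveals that intensity table 7 fares worst.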