Understanding the sRGB RRT/ODT

Thanks for all of the great posts.

Am I correct that if I create a white value of (1, 1, 1) in ACEScg colorspace and display it through Output - sRGB, the white value will actually be displayed as (0.8, 0.8, 0.8) and appear gray?

Is this due to an additional “filmic” curve being applied in the sRGB RRT/ODT, or would white need a value of approximately 16.3 to display at 1.0? If so, are values over 1.0 considered to be pushing into HDR territory? I’m trying to piece together a bunch of different posts on this to see about jumping from a display-referred workflow in 8-bit sRGB to ACES, and I seem to be missing some key pieces of information that would make the jump easier.

So I decided to do a super basic test, and now I also need clarification on the proper use of the ACES reference images: https://github.com/aforsythe/aces-dev/tree/doc/odt-endUser/images

I assumed, I think incorrectly, that if I took the ACEScg synthetic color chart and ran it through an OCIO node (working in Fusion) with the input set to ACEScg and the output set to sRGB, then rendered to TIFF, I should get a TIFF image that matches the synthetic color chart found in the ODT directory labeled sRGB.

Is there anyone out there who could help explain the proper use of these reference images?

A 1.0 in ACES or ACEScg is not meant to map to a 1.0 in an output encoding such as sRGB.

ACEScg [1.0] -> RRT -> sRGB ODT -> sRGB [0.812]

The reason ACEScg [1,1,1] becomes sRGB [0.8,0.8,0.8] is that it is being rendered from a scene-referred encoding to an output-referred encoding. The rendering uses an S-shaped curve, so, as in most imaging systems trying to reproduce a scene in a different viewing environment, a 100% reflector (1.0) in the scene does not map to 100% output (1.0) on the display. The S-shaped curve compresses highlights to leave headroom for “over-brights”.

In SDR renderings via ACES, such as to sRGB, an sRGB output value of 1.0 corresponds to about 16.0 in scene space, which is +4 stops above 100% in the scene. (Those 4 stops get compressed into the output range 0.8-1.0.)
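
You can check those numbers yourself. Here is a minimal sketch using the PyOpenColorIO bindings, assuming you have an ACES 1.0.3 OCIO config on disk (the path below is a placeholder, not a real location):

```python
import numpy as np
import PyOpenColorIO as OCIO

# Placeholder path -- point this at your own copy of the ACES 1.0.3 config.
config = OCIO.Config.CreateFromFile("/path/to/aces_1.0.3/config.ocio")

# Scene-referred ACEScg through the RRT + sRGB ODT ("Output - sRGB").
proc = config.getProcessor("ACEScg", "Output - sRGB")
cpu = proc.getDefaultCPUProcessor()  # OCIO v2 API; older bindings apply via the processor directly

white = np.array([1.0, 1.0, 1.0], dtype=np.float32)
cpu.applyRGB(white)  # processes the buffer in place
print(white)         # expect roughly [0.81, 0.81, 0.81]
```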

If you want output-referred imagery to get into ACES, then you need to use an inverse ODT followed by the inverse RRT. If you go: sRGB [1.0] -> InvODT -> InvRRT -> ACEScg [16.292], then you can work in ACES/ACEScg and render back out to display using the forward RRT/sRGB ODT.
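
Continuing the sketch above, the inverse direction just swaps the source and destination, and OCIO inverts the baked transform for you:

```python
# Output-referred sRGB back to scene-referred ACEScg via the inverse ODT/RRT.
inv = config.getProcessor("Output - sRGB", "ACEScg").getDefaultCPUProcessor()

display_white = np.array([1.0, 1.0, 1.0], dtype=np.float32)
inv.applyRGB(display_white)
print(display_white)  # expect roughly [16.29, 16.29, 16.29]
```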


If you read the README file:
syntheticChart.01_ODT.Academy.sRGB_100nits_dim.tiff is:
ODT.Academy.sRGB_100nits_dim.ctl applied to OCES/syntheticChart.01.exr

syntheticChart.01.exr is:
RRT.ctl applied to ACES/syntheticChart.01.exr

So the image in the ODT directory is the ACES (not ACEScg!) version of the chart, rendered through the RRT and sRGB ODT.

Are you using OCIO to apply the RRT or are you just converting ACEScg primaries to sRGB primaries? I assume the latter since you’re not seeing a match.

You can’t compare ACEScg and output-referred sRGB - they’re in completely different image-states.
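
To make the distinction concrete, here is a sketch of the two paths, reusing the config object from above. “Utility - Linear - sRGB” is the linear-sRGB-primaries colorspace in the ACES 1.0.3 OCIO config; if your config names it differently, substitute accordingly:

```python
# Primaries-only conversion: stays scene-referred (linear), no RRT applied.
gamut_only = config.getProcessor("ACEScg", "Utility - Linear - sRGB").getDefaultCPUProcessor()

# Full rendering: RRT + sRGB ODT, producing output-referred code values.
rendered = config.getProcessor("ACEScg", "Output - sRGB").getDefaultCPUProcessor()

px_a = np.array([1.0, 1.0, 1.0], dtype=np.float32)
px_b = px_a.copy()
gamut_only.applyRGB(px_a)  # ~[1.0, 1.0, 1.0] -- same image-state, new primaries
rendered.applyRGB(px_b)    # ~[0.81, 0.81, 0.81] -- tone-mapped for display
```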

I believe I am using OCIO to apply the RRT. Here are the steps; I might have bungled them.

In Fusion I am loading the synth chart located here: https://www.dropbox.com/sh/9xcfbespknayuft/AABEYY1cMfzBim-ol2oEEwl3a/ACEScg?dl=0&subfolder_nav_tracking=1

Adding an OCIO node and pointing it at the ACES 1.0.3 config

Setting my input to ACEScg and my output to Output - sRGB
(similar setups done in Krita, mrViewer, the V-Ray frame buffer)
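
For what it’s worth, those steps should boil down to roughly this (a sketch, not the exact Fusion internals), using the same config and cpu processor from the earlier snippets on a whole image buffer:

```python
# Stand-in for the loaded EXR pixels; in practice read the chart with an
# EXR-capable library and pass its float32 (H, W, 3) array here.
img = np.random.rand(256, 256, 3).astype(np.float32)

cpu.applyRGB(img)  # ACEScg -> Output - sRGB on every pixel, in place
```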

However, for an additional test I downloaded DaVinci Resolve and set my Color Management settings to ACES 1.0.3, ACEScg in, sRGB out. This test was much closer to the TIFF file in the ODT folder on Dropbox.

(thanks Scott)

Upon further research with others (who are much smarter than I am), it appears the problem with my results is that the EXRs have ranges exceeding what OCIO’s LUTs are capable of translating gracefully (values outside the normalized range). If those LUTs were stretched to cover such ranges, you’d probably end up with other problems (banding in your colors). The reference images in the ODT directory appear to have been generated with CTLs, which don’t suffer this limitation because they work in a formulaic/analytic manner, but they are slower and unneeded in production, where you should be mindful of what the LUTs are capable of gracefully translating. If you do have footage with values in excess of what your LUT can translate, an additional pre-process should be used to bring those values within the proper range.
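
As a rough illustration of that limitation, assuming the ~16.3 ceiling discussed above is roughly where the baked LUT’s shaper runs out (an assumption from this thread, not a value read out of the config): inputs beyond it should come out clamped to the same result, and a simple pre-process can bring pixels back into the covered domain. Reusing the cpu processor from the earlier sketches:

```python
# Values beyond the LUT's shaper range all land on (approximately) the same
# output, whereas the analytic CTL renders would keep distinguishing them.
for v in (1.0, 16.0, 100.0, 1000.0):
    px = np.array([v, v, v], dtype=np.float32)
    cpu.applyRGB(px)
    print(v, "->", px)

# Example pre-process: bring scene values into the range the LUT covers
# before applying it (16.29 is the assumed ceiling from the discussion above).
img = np.random.rand(256, 256, 3).astype(np.float32) * 100.0  # stand-in pixels
img = np.clip(img, 0.0, 16.29)
```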

This is my paraphrasing and I am prone to error.