SDR ODT nit levels

I have two questions about the SDR implementation. The first is to better understand how it works, and the second is a request for more information.

Question 1
In reviewing the SDR ODTs (sRGB, Rec.709, etc.) I noticed that cinema white and black are used as part of the tone-mapping process to get to 100 nits. We were wondering why cinema white and black were chosen rather than targeting 100 nits outright (the way the HDR output transform function directly targets 1000, 2000, or 4000 nits). Could you explain the use of cinema white and black in the segmented_spline_c9_fwd function, please?

Question 2
My group is interested in setting our SDR output to 200 nits rather than 100 nits. In reviewing the code I don’t see the functions required to create the parameters used by the SegmentedSplineParams_c9 struct. I found the functions to create the HDR SegmentedSplineParams_c5 struct but could not find similar functions for the SegmentedSplineParams_c9 struct. Are the functions to generate the SegmentedSplineParams_c9 parameters available somewhere else, perhaps? If so, could you point me towards them? If not, could they be made available, either in this thread or by updating the GitHub project?

Thank you!

This has a slightly convoluted history, but in a sense you’re right that the CINEMA_BLACK and CINEMA_WHITE values, and how they are applied, are not very extensible to other dynamic ranges, use cases, or rendering intents. Their existence is a bit confusing, and whether they’re “right” or not really depends on their application and the rendering intent. In v1 the focus was very “cinema-centric” (HDR was still in its infancy and difficult to view and assess; capable displays were not as readily available). So we provided transforms that were intended to mimic, as closely as possible, the appearance of the basic cinema projector (but displayed on a monitor that was expected to be ~100 nit peak luminance). Rather than explicitly map the tonescale from the display black to 100 nits, we merely “stretch” the tonescale across the increased dynamic range (48 nits to 100 nits is not so much a perceived difference, especially when the viewing environment is also changing [note: the preceding statement is a huge hand-wave]).

I recommend using the Output Transform code used in the v1.2 HDR transforms to create an ODT directly to 200 nits. There are annoying limitations to tuning the SDR ODT (c9) spline values in sequence with the RRT (c5) spline. The v1.2 Output Transforms combine these two separate tonescales into a much more intuitive single-stage tone scale (SSTS) and provide easy-to-modify values to control the peak luminance, etc.
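To sketch what that looks like in practice: a 200-nit Rec.709 ODT can be written by following the pattern of the v1.2 RRTODT CTL files. The snippet below is a rough sketch from memory, not a drop-in file; the outputTransform signature and parameter order should be verified against ACESlib.OutputTransforms.ctl in your checkout, and the EOTF/SURROUND enum values against the comments in that file (the full import list of the real RRTODT files is also longer than shown here).

```
// Hypothetical 200-nit Rec.709 output transform, modeled on the v1.2
// RRTODT.*.ctl files. Verify names and parameter order against
// ACESlib.OutputTransforms.ctl before use.

import "ACESlib.Utilities";
import "ACESlib.OutputTransforms";

const float Y_MIN = 0.02;    // display black luminance (cd/m^2)
const float Y_MID = 4.8;     // mid-point (18% grey) luminance (cd/m^2)
const float Y_MAX = 200.0;   // peak white luminance (cd/m^2)

const Chromaticities DISPLAY_PRI = REC709_PRI;   // encoding primaries
const Chromaticities LIMITING_PRI = REC709_PRI;  // limiting primaries

const int EOTF = 1;          // assumed enum: 1 = BT.1886
const int SURROUND = 1;      // assumed enum: 1 = dim surround

const bool STRETCH_BLACK = true;   // only relevant for PQ outputs
const bool D60_SIM = false;
const bool LEGAL_RANGE = false;

void main
(
    input varying float rIn,
    input varying float gIn,
    input varying float bIn,
    output varying float rOut,
    output varying float gOut,
    output varying float bOut
)
{
    float aces[3] = { rIn, gIn, bIn };

    float cv[3] = outputTransform( aces, Y_MIN, Y_MID, Y_MAX,
                                   DISPLAY_PRI, LIMITING_PRI,
                                   EOTF, SURROUND,
                                   STRETCH_BLACK, D60_SIM, LEGAL_RANGE );

    rOut = cv[0];
    gOut = cv[1];
    bOut = cv[2];
}
```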

If you set this SSTS to have black at 0.02 and white at 48, you get an SDR tonescale that is not exactly the same as, but quite similar to, the 1.0 SDR system (RRT+ODT) tone scale. For regular SDR, use the provided transforms. But if you want to go to 200 nits, the tonescale will be close enough, and preferable to trying to adjust the c9 spline appropriately (which is very unintuitive and, sometimes, restrictive).
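For concreteness, here is a minimal sketch of those two parameterizations, assuming the init_TsParams and ssts names and signatures from ACESlib.SSTS.ctl in v1.2 (please double-check them against your checkout):

```
import "ACESlib.SSTS";

// Sketch: SDR-emulating curve vs. a 200-nit curve.
// x is a scene-linear ACES value; return value is luminance in cd/m^2.

float tonescale_sdr_like( float x)
{
    TsParams P = init_TsParams( 0.02, 48.0);    // black 0.02, white 48 cd/m^2
    return ssts( x, P);    // close to, but not identical to, RRT+ODT
}

float tonescale_200nit( float x)
{
    TsParams P = init_TsParams( 0.02, 200.0);   // same black, 200 cd/m^2 peak
    return ssts( x, P);
}
```

(In a real transform you would build the TsParams once, outside the per-pixel function, rather than recomputing them per sample as this sketch does.)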

The major takeaway here is that the need for easy-to-create “custom” ODTs is a known issue, and removing these types of inconsistencies between different Output Transform classes or types is one of the major design goals of ACES 2.0. Expect that all transforms will be transitioned to use the same base algorithm at some point (but that will change the “look” and so require a major version bump to 2.0).

If you need help or guidance adjusting the v1.2 HDR Output Transform code for your specific display, let me know and I can guide you through that.

Thank you for the response, Scott! We are familiar with the HDR output transform function, so adapting it to 200 nits should be straightforward. The transform specifies min, mid, and max rather than black and white points.

What do you recommend for those values? A min of 0.02, a mid of 4.8, and a max of 200? Or should I use the HDR min of 0.0001, mid of 15, and max of 200?

I would say do what makes the most sense for what you’re trying to evaluate. If your display’s measured black is lower than 0.02 and you want to take advantage of that, then set the min to the actual measured black. If you’re trying to emulate the black level of cinema, I’d leave it at 0.02.

The mid-point adjustment is really just an exposure shift (i.e. sliding the tone scale right/left along the exposure axis of an exposure-vs-luminance plot). It’s higher by default in HDR because, in general, with the additional peak brightness, people using HDR seem to set their picture brighter overall. In some testing we did, 15 seemed like a round and common landing place for different types of content. Obviously, shifting the mid-point higher decreases the amount of headroom available for highlights. With only 200 nit peak luminance, I would personally leave the mid-point at 4.8 and, if I decided I wanted the picture brighter overall, brighten the image using an exposure adjustment in color grading. (This is exactly equivalent to changing the mid-point, provided it’s a true exposure adjustment knob.)
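That equivalence is visible in the v1.2 code itself: as I recall, outputTransform derives the curve by first building a default SSTS for the min/max range and then solving for the exposure shift that places scene 0.18 at Y_MID. A rough sketch of that pattern (from memory of ACESlib.OutputTransforms.ctl, so verify the exact lines in your checkout):

```
// The mid-point is applied as a pure exposure shift (sketch of the pattern
// in ACESlib.OutputTransforms.ctl; names assumed from the v1.2 code).

TsParams PARAMS_DEFAULT = init_TsParams( Y_MIN, Y_MAX);

// How many stops must the input slide so that scene 0.18 maps to Y_MID?
float expShift = log2( inv_ssts( Y_MID, PARAMS_DEFAULT)) - log2( 0.18);

// Rebuild the curve with that shift baked in
TsParams PARAMS = init_TsParams( Y_MIN, Y_MAX, expShift);
```

So raising Y_MID from 4.8 to, say, 9.6 is the same one-stop push you could instead apply upstream in grading.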

Ultimately, it’s really up to you and your needs…