Tone scale slope behavior at min and max luminance


(Scott Dyer) #1

Wanted to start a discussion about the behavior of the tone scale at maximum and minimum luminance. In ACES v1, the tone scales go to a slope of zero at the end-points. Based on this, my function currently produces system tone scales that go flat through the desired min and max luminance points. However, there is the ability to change this to a non-zero value, independently, for both the highlight and shadow regions.

What are others’ opinions on how the tone scale should behave at minimum and maximum luminance?

Should behavior be the same for highlight and shadows regions? Or should extension only happen for one?

If linear extension is favored, what would be an “appropriate” slope (in log-log space)?

Should the slope of the linear extension be the same across all dynamic ranges, or should it vary based on luminance? If so, how should these defaults be determined?
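To make the two options concrete, here is a minimal Python sketch of a hypothetical tone scale in log10-log10 space with a configurable end-point slope. The function name, the shoulder position, and the slope values are all made up for illustration; this is not the actual ACES math.

```python
def tonescale_loglog(lx, shoulder=2.0, end_slope=0.0):
    """Hypothetical tone scale sketched in log10-log10 space.

    Identity up to `shoulder` (the log10 of max luminance), then a linear
    extension with slope `end_slope`. end_slope = 0.0 reproduces the
    ACES v1 style flat end-point; a small positive value gives a linear
    extension. All names and numbers here are illustrative only.
    """
    if lx <= shoulder:
        return lx
    return shoulder + end_slope * (lx - shoulder)

# With end_slope = 0.0 the curve is non-injective above the shoulder:
flat_a = tonescale_loglog(3.0)                 # 2.0
flat_b = tonescale_loglog(4.0)                 # 2.0 as well
# A small linear extension keeps distinct inputs distinct:
ext_a = tonescale_loglog(3.0, end_slope=0.05)  # 2.05
ext_b = tonescale_loglog(4.0, end_slope=0.05)  # 2.10
```

The question in the thread is essentially what value (if any) `end_slope` should take, and whether it should differ between the highlight and shadow ends.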

ACESnext – ODT Virtual Working Group Online Meeting - 12.12.2017
(Scott Dyer) #2

Some initial thoughts on one vs. the other:

Zero-slope approach (flat at the end-points, as in ACES v1):

  • Rolls off “naturally” to the maximum luminance - depending on colorist preference, could be desirable or undesirable
  • Rolls off to the minimum luminance, providing a dynamic “toe” to the minimum luminance value
  • Could impact invertibility - precision between points as slope approaches zero could be too small; any values beyond where the curve goes flat would invert as if they were the maximum luminance

Nonzero-slope approach (linear extension):

  • Potential for clipping in highlights - requires manual roll-off from a colorist. Depending on colorist preference, could be desirable or undesirable
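The invertibility concern above can be shown with a toy example. This is a deliberately simplified rendering (the knee and luminance numbers are made up, and real tone scales roll off smoothly rather than clamping), but it captures why a flat end-point loses information:

```python
def render_flat(scene_nits, knee=16.0, max_lum=100.0):
    """Hypothetical zero-slope rendering: linear up to the knee, then
    completely flat at the display maximum. All numbers are made up."""
    if scene_nits < knee:
        return scene_nits * max_lum / knee
    return max_lum

def invert_flat(display_nits, knee=16.0, max_lum=100.0):
    """Best-possible inverse of render_flat: anything sitting at max_lum
    can only come back as the knee value, so detail above it is lost."""
    return display_nits * knee / max_lum

# Scene values of 20 and 50 nits both land on the 100 nit ceiling,
# and both can only invert back to the 16 nit knee:
round_trip = invert_flat(render_flat(50.0))    # 16.0, not 50.0
```

Any non-zero slope, however small, avoids this collapse, which is why the inversion question and the slope question are linked.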

(Kevin Wheatley) #3

I’ve certainly got colourists who don’t like the “rolls off to zero in the highlights” behaviour; for some looks they actually need the ability to not do that.

I’ve then also got CG artists and supervisors who strongly want the ability to invert cleanly.


(Daniele Siragusano) #4

Not sure if this is the right thread, but the combined “tone scale behaviour” of ACEScct + RRT + ODT is far from ideal in the shadows; many colourists have reported that it is impossible to control blacks when grading with traditional tools in ACEScct (CDL, FilmGrade, LGG).

When you grade in a space you move the data along that tone scale (a simple log exposure adjustment is a plus/minus offset in that space), so you literally slide the image up and down.

If you look carefully at the scopes in, let’s say, 1886/709 and push exposure down in ACEScct, you see that deep shadows get “sucked” down into negative space, pretty harshly.

You can visualise this behaviour by plotting an equally spaced 0…1 ramp in ACEScct space and converting it to 1886 using the RRT+ODT: instead of an S-shape, you will see a slope change near 0.0…
This should be fixed, in my opinion.
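For anyone who wants to reproduce this, a minimal Python sketch of the ramp setup. The ACEScct decode constants are taken from the ACEScct specification (S-2016-001); the RRT+ODT step is deliberately omitted (use your preferred implementation):

```python
def acescct_to_linear(cct):
    """ACEScct decode; constants are from the ACEScct spec (S-2016-001)."""
    if cct <= 0.155251141552511:
        return (cct - 0.0729055341958355) / 10.5402377416545
    return 2.0 ** (cct * 17.52 - 9.72)

# Equally spaced 0..1 ramp in ACEScct code values:
ramp = [i / 100.0 for i in range(101)]
linear = [acescct_to_linear(v) for v in ramp]
# Feed `linear` through your RRT+ODT implementation (omitted here) and
# plot the result against `ramp` in 1886/709 to see the shape near 0.0.
```

Note that ACEScct code value 0.0 already decodes to a small negative linear value (about -0.0069), which is consistent with deep shadows being pushed into negative space when exposure is slid down.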

I hope some of this makes sense.

(Scott Dyer) #5

Thanks, I’ve heard similar feedback from others, which is exactly why I brought it up.

Conceptually the roll-off to max luminance seems to make sense, but in practice it seems most colorists prefer that it not level off completely. It may also affect clean inversion.

I’m interested to hear from others as well…

(Scott Dyer) #6

@daniele, is this the same issue as the “kink in blacks” that you previously reported?

(Daniele Siragusano) #7

I think so

…more characters to reach 20…

(Chris Clark) #8

+1 for non-zero slope, based on colorist feedback I have heard over the years.

Most grading tools are not great at ‘unbending’ the flat roll-off that exists in the current ODTs, so a common complaint I hear is that it is difficult to maintain separation or contrast in the highlights.

This is especially common in HDR. I’ve even seen colorists wanting to use the 4,000 nit ODT on a 1,000 nit display precisely because the slope is non-zero at 1,000 nits!

If they prefer the highlights rolled off ‘naturally,’ desaturated, etc – they already know how to do that. They do it all the time grading display-referred (the horror!). If they want to hard-clip, it would make that easier. Doesn’t sound nice on paper, but sometimes that’s the look they want.

So, I think non-zero slopes would make for a nice clean inverse and happier colorists.


(Scott Dyer) #9

Thanks for the responses guys. I’m not attached to either method in particular, but just to play devil’s advocate here and continue the conversation, because I am curious…

I know for certain that some colorists use other non-ACES LUTs that have a highlight roll-off built in (and of course, some others prefer to start with log images and completely “roll their own rendering” - I’m not talking about those types right now). But what do colorists who use other popular renderings that also roll off to a maximum value do with their highlights in these scenarios? Do they react as negatively? Or are they also frustrated by the roll-off in other renderings? What’s different in that scenario vs. ACES?

Sometimes it seems like people hold a prejudice against ACES just because it’s ACES and is different than what they’re used to, or because they’ve heard “bad things” about ACES. Sure, ACES has some known issues, but specifically concerning highlight behavior, what’s the major difference from other workflows? Does this problem only exist in ACES, or is it a larger issue that’s just being discovered or reacted to now that an HDR grade is a much more common deliverable?


(Nick Shaw) #10

The commonly used ARRI K1S1 curve has a slight uplift at the top, which means it never goes completely flat. That also gives it the potential to be extrapolated outside the normal range, although I do not know whether any implementations do this.

(Daniele Siragusano) #11

I do not think that any scene-referred colourist ever complains about the highlight roll-off. Also, many grading tools “expect” a highlight roll-off in the viewing pipeline.

Some questions I have:
Why do you think you have a choice in modeling the curve directly?
What or which phenomena are you exactly trying to model?
What are the real underlying parameters?
Why do you think an analytical inverse is something desirable?

I hope some of this makes sense.


(Thomas Mansencal) #12

Answering this one specifically: I think the analytical inverse is important for many things. Some use cases:

  • Adding logos to images while preserving their appearance.
  • Pushing smartphone video feeds into Unreal Engine or Unity without having them affected by the tonemapper.
  • Bypassing the RRT + ODT entirely while still being able to push an in-house view transform, with an LMT, into the ACES system.



(Daniele Siragusano) #13

Hi Thomas,

ok, let me explain why I believe that all of your use cases would be better off without an analytical inverse:

In general:
An analytical inverse only produces sensible scene-referred data if the display-referred data itself was generated by the forward transformation. Let me explain:
Let’s assume we have a forward transform that maps:

5.0 -> 0.95
10.0 -> 0.96
20.0 -> 0.97

So a display-referred image (generated by another forward transform) that has some values up there would produce wildly varying scene-referred data (highly unstable for further processing).

What do we lose if we have an inverse that maps:

0.95 -> 4.0
0.96 -> 5.0
0.97 -> 6.0

We will see that the errors this inverse produces (when viewed forward again) are very small, but we gain much more robustness.

What I am trying to say is that an analytical inverse produces highly unstable image data, while a slightly colour-inaccurate inverse transform can produce very robust data.
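The difference between the two inverses can be quantified with a few lines of Python. This is a toy illustration: the dictionaries just restate the sample points from the post, and max_scene_jump is a made-up helper, not a real API.

```python
# The three forward samples from above (scene exposure -> display value):
#   5.0 -> 0.95, 10.0 -> 0.96, 20.0 -> 0.97
exact_inverse  = {0.95: 5.0, 0.96: 10.0, 0.97: 20.0}  # true inverse
robust_inverse = {0.95: 4.0, 0.96: 5.0, 0.97: 6.0}    # deliberately inexact

def max_scene_jump(inverse):
    """Largest scene-exposure ratio caused by a single 0.01 display step."""
    scene = [inverse[d] for d in sorted(inverse)]
    return max(hi / lo for lo, hi in zip(scene, scene[1:]))

# Exact inverse: a 0.01 display step becomes a 2x (one stop) exposure jump.
# Robust inverse: the same display step stays within 1.25x.
```

A 0.01 step in display space is on the order of quantisation or compression noise, so the exact inverse amplifies noise into full-stop exposure swings, while the robust inverse keeps neighbouring display values close together in scene space.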

So to your use cases:

Would it not be better to engage with the logo designer and go through the process of creating a real scene-referred logo that transforms correctly in all viewing conditions? Or to generate a scene-referred logo that is stable and then tweak the appearance slightly in post?
Proposing an inverse transformation as a “no-brainer solution” will lead to bad practice: it is assumed to solve the problem, but it won’t really, as the logo could be in a highly unstable state and break when an HDR version is made, for example. So you have a ticking time bomb in your timeline waiting to explode.

This leads to AR, I guess: what if you shine some light on the game scene or want to apply some post FX to the image? The smartphone feed could produce horrible artefacts if touched. Wouldn’t it be better to have a scene-referred image that “almost” looks like the original but can be graded, have light added to it, etc.?

This is really bad practice in my eyes. You would undermine the biggest goal of ACES, which is the long-term archive; encouraging people to enter ACES via the Rec709 backdoor is really dangerous.
Wouldn’t it be better to rework those looks to make them really scene-referred and use them as proper LMTs with no dynamic range reduction?

We have made many inverse DRTs over the years, and we have inverted most ODT+RRTs (0.1.1 and 1.0) nicely, I guess. We also have two models that are parametrised and have an analytical inverse, and guess what: we drive the inverse with slightly different parameters, to gain robustness :slight_smile:

Don’t get me wrong: the robust inverse will produce a scene-referred image that, when viewed forward, will look almost identical to the analytical inverse; differences are only visible if you measure the images…

I hope some of this makes sense.

(Thomas Mansencal) #14

Hi Daniele,

I’m totally in line with every single one of your points - I have been there - however the reality is different and we live in a far-from-perfect world.

You might not be able to alter a logo at all; some clients just don’t give a damn about technicalities, they just know what they want, e.g. 7 lines, all strictly perpendicular. It is common for people working in commercials, and there is a related thread about preserving the look of sRGB graphics here.

We might get that in a hypothetical future, and a lot of people have discussed it with Apple, but in the current state of ARKit (and ARCore) you just receive a YCbCr image and have to deal with it. Compositing the feed directly behind the already tone-mapped game content is an effective workaround, but there are many use cases where you want to sample the video feed directly and use it as a texture, and then you are back to square one.

Same here, but people have done it and will continue to do so.

I don’t advocate it, and I think the opposite: it would allow non-ACES shows to use ACES as a container and thus have access to ACES archival by just shipping a single LMT.

It does make sense, and again I’m in line with it :slight_smile: