During the first ACES ODT Virtual Working Group meeting we discussed an issue where the ODT B-splines are contributing to image artifacts in gradients.
Interactive wireframe of a parametric ODT concept
ACESnext – ODT Virtual Working Group Online Meeting - 12.12.2017
I’ve always been a bit skeptical when this gets brought up. Yes, perhaps the gradients aren’t perfectly smooth, but by what definition should they be?
To me, the gradient of the RRT / ODT transfer curves, in and of itself, seems like a somewhat arbitrary metric of quality. I would personally break this down into a number of different investigations:
- Do these non-smooth gradients result in visual artifacts?
- What definition of “smooth” will be satisfactory?
- Do smooth gradients constrain other design parameters of the RRT / ODTs?
- Do non-smooth gradients result in more difficult inversion of transforms?
- Does a smooth gradient increase or decrease computational complexity?
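On the first question, a quick numerical check is easy to sketch: flag where the sampled slope jumps between adjacent segments. The toy two-segment curve and the threshold below are made up for illustration; the same scan could be pointed at sampled RRT / ODT tone scales:

```python
# Sketch: flag first-derivative "kinks" in a sampled tone curve by looking
# for jumps in the slope between adjacent segments. The toy two-segment
# curve and the threshold below are illustrative, not actual RRT/ODT data.
def find_kinks(xs, ys, threshold=0.5):
    """Return sample indices where dy/dx jumps by more than `threshold`,
    i.e. candidate breaks in first-derivative (C1) continuity."""
    slopes = [(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i])
              for i in range(len(xs) - 1)]
    return [i + 1 for i in range(len(slopes) - 1)
            if abs(slopes[i + 1] - slopes[i]) > threshold]

# Toy curve: two straight segments meeting at x = 0 with different slopes.
xs = [(i - 100) / 100.0 for i in range(201)]
ys = [0.5 * x if x < 0.0 else 2.0 * x for x in xs]
print(find_kinks(xs, ys))  # a single kink, at the join in the middle
```

A second pass over `slopes` instead of `ys` would flag second-derivative (C2) breaks the same way.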
Is it just me, or is an image artifact something that is present in an image? This notebook mostly shows mathematical gradient changes, and while I would be suspicious that they might cause a visual artifact, many of the kinks at the bottom occur at roughly 10 stops under mid grey (which can’t be visually significant). On the high end, there is a slope change as well in the HDR ODTs, but again it is hard to see whether there is a corresponding visual defect. So a better example of a problematic image would be useful. It is a good notebook, so thanks for that Thomas. I think there was an awareness that the curves were not smooth; sometimes we could see that on scopes. It is good to have it all in one place. I think Sean’s questions are good, and can be summed up as “how much of a problem is it?” and “what part of the pipeline is affected?”
This might seem arbitrary, but it is a metric that will help push the system toward more neutrality. The fact that the derivatives are not smooth correlates almost directly with the fact that the curves were built/tweaked by a human operator.
I don’t have hardware at home that allows me to assess whether what I see is induced by the curves or by the display itself, but the simple fact that those kinks are present should make people uncomfortable. It certainly makes me uncomfortable; I like elegance and simplicity, and the current curves are almost orthogonal to elegant and simple. I would also like to point out that just because we don’t see artefacts today does not mean future generations of displays won’t reveal them. We are trying to define a future-proof system, so I think it is an important aspect to get right.
Baby Butt Smooth
The smoother the curves are, the easier they are to model analytically through simpler representations, and coincidentally to fit and optimize. For reference, the region where most fitting errors occur is, unsurprisingly, the shadow area, where the ODT(RRT) curves dive very quickly toward the floor.
The splines are individually continuous, so distortions in overall curve shape do not mean that they are individually not smooth in the first derivative (though the point at which the ODT and RRT curves overlap might have a small discontinuity). So what you are asking for is either second-derivative continuity at a certain level, or a functional expression that can replace the splines.

I cannot say whether there is a functional expression that might work, because there are many. However, we looked at some of the simpler forms such as the exponential, sigmoid, logistic function, etc., and these did not produce appropriate visual results. It was seen in experiments that we could not easily get to a good reference with simplistic functions, hence the decision to use splines. Note that one of the comments was that the tone curve used in Kodak films had 75 years of development behind it, and it was not a simplistic function. That was an early starting point, but just a jumping-off point. With Matlab or Mathematica it is possible to turn spline curves into different types of functions, but some visual changes occur as portions are smoothed.

All that being said, I think it is recognized that a functional form would be a good thing, but the amount of experimentation needed was seen as too high, and the committee was trying to get something out the door. The changes to the curve shape usually occurred in reaction to a visual appearance attribute, and I do think that more work and other functions can be proposed that might work. It is just a question of where this comes up in the priority of things to be solved, which follows from where it has an impact. A good discussion to have.
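To make the “try a functional expression” experiment concrete, here is a minimal sketch of fitting a logistic in log-exposure space to sampled tone-scale data. The target samples are synthetic and the grid search is a stand-in for a real optimizer; nothing here reproduces the committee’s actual experiments:

```python
# Sketch: least-squares fit of a simple logistic (in log10-exposure space)
# to sampled tone-scale data. The "target" samples are synthetic
# placeholders, not the RRT.
import math

def logistic(logx, y_max, steepness, center):
    """Simple sigmoid: display luminance as a function of log exposure."""
    return y_max / (1.0 + math.exp(-steepness * (logx - center)))

# Synthetic target samples (log10 relative exposure -> nits).
samples = [(-4.0 + 0.25 * i) for i in range(25)]
target = [logistic(lx, 100.0, 1.6, 0.2) for lx in samples]

# Crude grid search standing in for a proper optimizer; a real attempt
# would also weight errors per region, especially in the shadows where
# fitting errors concentrate.
best, best_err = None, float("inf")
for k10 in range(5, 31):        # steepness 0.5 .. 3.0
    for c10 in range(-10, 11):  # center   -1.0 .. 1.0
        k, c = k10 / 10.0, c10 / 10.0
        err = sum((logistic(lx, 100.0, k, c) - t) ** 2
                  for lx, t in zip(samples, target))
        if err < best_err:
            best, best_err = (k, c), err

print(best)  # recovers the parameters the target was generated with
```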
Yes, I think this is likely a contributor to the wobbliness; it is very hard to control two superposed curves.
Yes, that is essentially it. Brian Karis from Epic Games has made a nice parametric curve that he used to fit the ACES RRT. You won’t get exactly the same result because the RRT is very hard to fit, but I reckon that you can get very close with a good enough function.
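For reference, and with the caveat that this is not Karis’s exact formulation, a compact rational fit of this kind circulating in the real-time community looks like the sketch below. The coefficients are Krzysztof Narkowicz’s widely shared approximation of the combined RRT + sRGB ODT; it is a rough fit, quoted only to show how few parameters such a curve needs:

```python
def aces_fit(x):
    """Rational tone curve of the form x(ax + b) / (x(cx + d) + e).
    Coefficients are Krzysztof Narkowicz's published approximation of the
    combined ACES RRT + sRGB ODT; a rough fit, not an exact match."""
    a, b, c, d, e = 2.51, 0.03, 2.43, 0.59, 0.14
    y = (x * (a * x + b)) / (x * (c * x + d) + e)
    return max(0.0, min(1.0, y))  # clamp to display range
```

Five coefficients versus dozens of spline knots is a large part of the appeal, though as noted above the shadows are where such simple fits tend to fall apart.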
It is also worth pointing out that Dolby recommends using a simple sigmoid curve for HDR.
An interactive version taken from the Lumberyard source code is here; I haven’t investigated proper parameterisation:
Which is absolutely fair; I think that this thread is already a step toward ACES Next.
They only recommend a simple sigmoid for use in the TV set, as part of the Dolby Vision licensed IP. Note in the diagram, though, that they are prewarping the data, adding eye compensation, and bloom with a set of Tone Curves. So when combined, it wouldn’t surprise me to see something as complex as the RRT+ODT combo. This is, in my opinion, because “Tone Reproduction Operators” (simple single curves) do not give good results in HDR, whereas they were sometimes acceptable on low-dynamic-range sets. Where you dice these things is infinitely variable, but at the end of the day, a fair amount of manipulation is needed to get to a good starting point.
In ACES, I think you can make the RRT simpler, but then all of the intelligence needs to shift into the ODTs, and for HDR, I suspect you need an LMT to prewarp some things. When you look at the whole combination, it isn’t clear it would be good to change the assumption that the RRT creates an ideal color and ODTs just try to map that the best they can onto an ideal display. The issue we are struggling with is, in part, that there is no one creative intent that the RRT can capture. There is a different color timing applied for LDR, and yet another for HDR. At a minimum, you may have two different graded ‘ACES masters’ to keep. In the visual effects case, the Look of a shot would not be just from the RRT, but from some combo of the LMT and ODTs being used for the production. In other words, it is likely to be variable depending on what is in the pipeline.
So a simple curve may be easier to manipulate on a workstation, but the VFX shots won’t represent the final look. I think this is why the committee wound up taking shots back to scene-referred for VFX work: it was the closest solid ground it could find for everybody. The more HDR sets get to the desktop, the easier it will be to have a VFX preview for LDR and a solid HDR rendering, which is the easier of the two. It is a question of when HDR will be widely available for use at animators’ stations and on-set. It is hard to work on things you can’t see.
Yes, it very much is. And it gets even harder when trying to make curves for a dynamic range that approaches the RRT’s (conceptually, an “ODT” tone scale for an “OCES display”, if one existed, would be a straight line/null, because that is what the RRT tone scale is targeting). With the B-spline for the ODT tone scale it gets more and more difficult to get the points correct, and many of the coefficients in the middle of the curve are inefficiently used because they are just defining a straight line.
For all of my experimental HDR work, I’ve been using a single tone scale curve. It still uses the uniform quadratic B-spline function, essentially joining two curves with a common tangent through the mid-point (18% grey). This allows the upper and lower halves of the curve to be fully independent of each other (i.e. changing the x-location of the min/max point doesn’t muck up the spacing or definition of the other half of the curve).
The advantages I’ve found:
- much, much easier and intuitive to tune
- defined such that it will pass through three specified points (min, mid, max) - many sigmoid or logistic variations do not have this ability in a straightforward manner
- min, mid, and max points are defined with an (x,y) coordinate and a slope - these can be set to match current RRT (they actually use the same function)
- defined with a minimum of coefficients - just enough to define the three key design points and allow for an adjustment of the “bend” or “sharpness” of the toe and shoulder
- can get a reasonably decent match to the current RRT tone scale and the 48-nit system tone scale - and I’ve enhanced it so that a user only needs to input the min and max luminance of their display to get the tone scale
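To make the construction concrete, here is a minimal sketch of the idea as I understand it, using one quadratic Bézier per half (a close cousin of the uniform quadratic B-spline form) with a shared tangent at mid-grey. The specific points and slopes are illustrative stand-ins, not the values from the actual CTL:

```python
# Two-segment tone scale sketch: each half is a quadratic Bezier through
# its two end points with prescribed end slopes; the halves share the
# mid-grey point and slope, so the join is C1. Values are illustrative.
import math

def quad_segment(p0, p2, s0, s2):
    """Quadratic Bezier through p0 and p2 with end slopes s0 and s2.
    The single control point is where the two end tangents intersect."""
    (x0, y0), (x2, y2) = p0, p2
    x1 = (y2 - y0 + s0 * x0 - s2 * x2) / (s0 - s2)  # requires s0 != s2
    y1 = y0 + s0 * (x1 - x0)

    def evaluate(t):  # t in [0, 1]
        u = 1.0 - t
        return (u * u * x0 + 2.0 * u * t * x1 + t * t * x2,
                u * u * y0 + 2.0 * u * t * y1 + t * t * y2)

    return evaluate

# Illustrative 48-nit cinema numbers in log10-log10 space:
# min (0.02 nits), mid (18% grey mapping to 4.8 nits), max (48 nits).
mid = (math.log10(0.18), math.log10(4.8))
lower = quad_segment((-6.5, math.log10(0.02)), mid, 0.0, 1.5)  # toe half
upper = quad_segment(mid, (6.5, math.log10(48.0)), 1.5, 0.0)   # shoulder
```

Because each half references only the mid point and its own end point, moving the max point never disturbs the toe, which is the independence described above.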
Eye Compensation (which is really just camera auto-exposure) and Tone Curves are greyed out and no longer used in their HDR rendering path. Bloom in video games is just a prettifying camera effect, like lens flare, that causes pixels to smear into neighbouring ones, i.e. a spatial operator.
Which I’m totally fine with; I have always wanted the RRT to be simple and, more than anything else, neutral, with the more complex appearance / reproduction work being done in the ODTs (with CAM support, obviously). As we discussed many times with @KevinJW, there is probably a point where the RRT becomes a straight line.
I tend to think the opposite: the closer we get to real-world colorimetry capabilities, which we approach as displays adopt higher luminance and larger gamuts, the less work we will have to do with respect to the real world. I’m totally abstracting away any creative tweaks here.
Absolutely agreed! You can only infer and extrapolate, and extrapolation is dangerous!
Even though they are computationally heavy, I think they are great, certainly one of the best tools (if not the best) for that type of work. I would be curious to try any updates if you have some on your end.
Well, it’s sloppy code at the moment, but you can check out the CTL I’ve been using here. This is nowhere near final form, since I’m just experimenting, and pre-determined parameters and such are still up in the air. I haven’t modified the CTL in a while, but I’ll try to find some time to make a few more commits to add some comments and clean things up a bit more…
I am currently using a simple linear interpolation between the parameters for a 10000/0.0001 nit tone scale (RRT) and a 48/0.02 nit tone scale (SDR cinema) to get the “bend” and other values for anything in between.
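A sketch of that interpolation, assuming made-up parameter names and values (they are not the ones in the CTL), blending linearly in log10 of display peak luminance between the two anchors:

```python
# Sketch: interpolate tone-scale parameters between two anchor systems,
# a 10000-nit (RRT-like) and a 48-nit (SDR cinema) tone scale, weighting
# by log10 of display peak luminance. Names and values are placeholders.
import math

PARAMS_HDR = {"mid_slope": 1.55, "shoulder_sharpness": 1.2}  # 10000-nit
PARAMS_SDR = {"mid_slope": 1.55, "shoulder_sharpness": 0.8}  # 48-nit

def lerp_params(peak_nits):
    """Blend parameters linearly in log10(peak luminance) between anchors."""
    lo, hi = math.log10(48.0), math.log10(10000.0)
    t = (math.log10(peak_nits) - lo) / (hi - lo)
    t = max(0.0, min(1.0, t))
    return {k: (1.0 - t) * PARAMS_SDR[k] + t * PARAMS_HDR[k]
            for k in PARAMS_SDR}

print(lerp_params(1000.0))  # a 1000-nit display lands between the anchors
```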
Did you try to make it interactive? I remember we did some quick tests with Python back then, but I would be keen on testing that further. Maybe a Nuke implementation or similar.