<Range> Node

Since Doug is not presenting both points of view, I am sorry but I think I have to respond inline.

On a ‘lot of history’ – this conversation was never concluded and we ran out of time (which is why the CLF 2.0 spec was never finished and is listed as DRAFT).

Yes, matrix allows scale and offset, but it doesn’t allow extracting a portion of a float range and operating only on that range. (The RANGE node is for that purpose, and it still works.) It is the implicit clamping at one end that is the problem. The workaround is to always use all 4 metadata items, but the spec should not allow an implicit operation that causes the math to go off.

Again, my objection was to the implicit one-sided clamping, not to the presence of clamping.

On point #6 and this previous sentence: another solution is to make the clamping behavior explicit with ClampHi and ClampLo when needed. Adding another node with a different behavior is also an understandable approach, but it needs to get back to the original intent of handling the full float range (not assuming float = 1.0 scale). I propose deleting the objectionable text in this thread. The focus should be on that issue, not the general existence of clamping – and no, it is not a naming disagreement, it is about the fundamental math.

There should be a note about using the 3x4 Matrix node to scale and offset floats in an unclamped fashion. I agree that clamping is essential.

I am sorry to say I may not be available until late in the call, and I request that this discussion be deferred until I can join.

Thanks,

Jim

What if we enforced that when style is ‘noClamp’, no clamp is applied, even when only min or max pairs are provided? Would that work? Then nothing is implicit, because you are explicitly saying not to clamp.

If we went further and listened to @Greg_Cotten, we could require that style be set, which would require explicitly stating the intended behavior.

Side note: What might be a practical use case for specifying just min or max pairs and not wanting it to clamp?

Yes, I did review this, but I had difficulty making sense of it for the min/max-only cases when mixed with in/out bit depth differences. Perhaps if I had a practical example to help wrap my head around the intended behavior, rather than just looking at numbers, I might be able to see some meaning in the current behavior.

For those interested in taking a look, here is the spreadsheet Jim is referencing: range_node_eval_v4.xlsx (53.6 KB)

It is certainly much preferable.


Hey Jim, it looks like the website messed up the formatting of your message. It’s not clear what you’re objecting to or what your proposal is. As Scott suggested, I think it would help to have a specific example to discuss.

My proposal was to delete the text in the Range node that deals with the assumed scaling behavior if one piece is missing.

An example: for an input of 0 to 10,000 nits and an output of 0 to 1023, the scale factor that gets calculated is (1023/10000) = 0.1023 when all four parameters are present. This is the correct conversion from a 10000.0 float to 1023. With a 5000-nit input, RGBout = 5000 × 0.1023 + 0 − 0 × 0.1023 = 511.5, the right answer.

Exactly per the spec, if only the max values are specified, the scale parameter becomes (1023/1.0), i.e. a multiplier of 1023×. This is 10,000 times off. This is the damage from assuming all floats operate in the 0 to 1.0 range.

With an RGBin of 5000 nits, and following the spec exactly with the equations as specified for the max-only case:

RGBout = MIN(1023, 5000 × 1023 + 1023 − 10000 × 1023) = MIN(1023, −5113977)

which gives the wrong number in the full-float input situation. (The 511 is in there, but at completely the wrong magnitude.)
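
For anyone who wants to check the arithmetic, here is a minimal Python sketch of both cases, assuming the equations as written in the draft spec (the variable names are mine, not from the spec):

# Case 1: all four values given; scale is derived from the in/out ranges.
minIn, maxIn = 0.0, 10000.0      # float input range, in nits
minOut, maxOut = 0.0, 1023.0     # 10-bit output range

scale = (maxOut - minOut) / (maxIn - minIn)        # 0.1023
rgb_in = 5000.0
print(rgb_in * scale + minOut - minIn * scale)     # 511.5 -- correct

# Case 2: max values only; the spec falls back to the bit-depth scalars,
# rangescalar("32f") = 1.0 and rangescalar("10i") = 1023 per the draft.
scale = 1023.0 / 1.0                               # 1023 -- 10000x too large
print(min(maxOut, rgb_in * scale + maxOut - maxIn * scale))   # -5113977.0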

My proposal is to delete rangescalar(float) = 1.0 and everything after it, and to require all 4 parameters, with a specific style addition of ClampHi or ClampLo. This minimizes the changes to the node input. But if it is still seen as necessary to keep the damaged math of the RANGE node as is, create a new RANGEF node, with a note about the assumption in the existing RANGE node.

Jim

Thank you Jim. When I plug in your example values I get a different number. But in any case, I’m open to prohibiting the use of only one side (max or min) in a case that does not allow a simple clamp.

We’ll discuss at the meeting today …

Doug

It took me a little while, but I now follow your example, @jim_houston, and get the same result you do. I did not realise initially that your scale=1023 was coming not from maxOutValue=1023 but rather from outBitDepth=10 which is implicit, but not explicitly stated, in your example.

It does seem to me that the maths in the current spec is simply wrong (or at least counterintuitive) for the single-ended clamping cases. I have (I believe) a simpler, float-only example which illustrates this.

maxOutValue and maxInValue are not included at all in the calculation of scale. So if inBitDepth and outBitDepth are both 'float' then:

scale = rangescalar(float) / rangescalar(float) = 1.0

If I then set maxInValue=2.0 and maxOutValue=10.0, I might expect the operation to simply multiply input values by 5, and clamp at 10.0. Therefore 0.5 input would produce 2.5. However, testing this in Python, I get:

>>> maxInValue = 2.0
>>> maxOutValue = 10.0
>>> scale = 1.0
>>> RGBin = 0.5
>>> min(maxOutValue, RGBin * scale + maxOutValue - maxInValue * scale)
8.5

Is this really the intended behaviour?

It is effectively using implicit values of minInValue=0.0 and minOutValue=8.0, which is not what I think most people would expect. Intuitively you might expect 0.0 to be used for both.

Indeed in @jim_houston’s example for the result in the second case to match that in the first, implicit minimum values of zero would have to be used. But in fact an implicit minInValue=9999.0 is used.
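
A quick check in Python confirms this (my numbers, plugging the implicit values into the four-value formula from the draft):

# With minInValue = 9999 and minOutValue = 0, the four-value formula
# reproduces the single-sided result from Jim's example exactly.
minIn, minOut, maxIn, maxOut = 9999.0, 0.0, 10000.0, 1023.0
scale = (maxOut - minOut) / (maxIn - minIn)     # 1023.0
rgb_in = 5000.0
print(rgb_in * scale + minOut - minIn * scale)  # -5113977.0, as in the max-only case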

On a separate subject, I am not particularly keen on this sentence in the spec:
If the input and output bit depths are not the same, a conversion should take place using the range elements.

Who “should”? Does it mean that people creating CLFs should add e.g. an explicit range mapping from 0-1 to 0-1023? Or does it mean that CLF implementers should apply bit-depth based range scaling automatically? It kind of means both, which is a bit confusing.

If a CLF creator includes a Range node with inBitDepth="32f" and outBitDepth="10i", and they provide only, for example, minInValue=0.0 and minOutValue=0, a normalised float to 10-bit scaling will be automatically applied (with negatives clamped). If they provide all four range values, they need to be values which correctly apply that scaling, and inBitDepth and outBitDepth are ignored in the calculation of the mapping.
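
As a sketch of what I mean (assuming, as in the draft, rangescalar("32f") = 1.0 and rangescalar("10i") = 1023; the function name is mine):

# Min-only case, 32f -> 10i: the bit-depth scaling is applied implicitly.
minIn, minOut = 0.0, 0.0
scale = 1023.0 / 1.0            # rangescalar("10i") / rangescalar("32f")

def range_min_only(x):
    # Single-sided min formula from the draft: scale, offset, clamp low.
    return max(minOut, x * scale + minOut - minIn * scale)

print(range_min_only(0.5))      # 511.5 -- normalised float scaled to 10-bit
print(range_min_only(-0.1))     # 0.0   -- negatives clamped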

But maybe it’s only me who finds this confusing…

Hi Nick,

The only scaling in the formula for the “single-sided” case is what is necessary for the bit-depth scaling. I think there is a logic to that since you would need “both sides” in order to infer both a non-default scaling and an offset. So the “single-sided” case is basically a simple offset that is corrected for any bit-depth difference. So in your example, maxInValue does get mapped to maxOutValue, which I think is the primary expectation. Your example input value of 0.5, which is 1.5 less than maxInValue, gets mapped to 8.5, which is 1.5 less than maxOutValue. So I think it is a plausible result.

But as I wrote farther up in this thread, I agree that the “single-sided” case is probably not very useful for anything other than a simple clamp. In fact, in the interest of moving this forward and trying to bring us to consensus, I’ll go ahead and make a concrete proposal:

In Scott’s PDF on OverLeaf referenced earlier in the thread, change equation 3 from:

out = MAX(minOutValue, in × scale + minOutValue − minInValue × scale)

to simply:

out = MAX(minOutValue, in × scale)

and change equation 4 to:

out = MIN(maxOutValue, in × scale)

And add a note such as

Note: If only the minimum values are specified, the values must be set such that minOutValue = minInValue × scale. Likewise, if only the maximum values are specified, the values must be set such that maxOutValue = maxInValue × scale.

(One could argue that only the minIn or minOut is necessary, but I think it is important for readability to include both.)
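
To make the proposal concrete, here is a rough Python sketch of the amended single-sided equations (the function names and the tolerance check are mine, purely for illustration):

def range_min_only(x, minInValue, minOutValue, scale):
    # Proposed constraint: minOutValue must equal minInValue * scale,
    # so the operation reduces to a scale plus a low clamp.
    assert abs(minOutValue - minInValue * scale) < 1e-6
    return max(minOutValue, x * scale)

def range_max_only(x, maxInValue, maxOutValue, scale):
    # Likewise for the max-only case: a scale plus a high clamp.
    assert abs(maxOutValue - maxInValue * scale) < 1e-6
    return min(maxOutValue, x * scale)

# 32f -> 10i with maxInValue = 1.0, maxOutValue = 1023 (scale = 1023):
print(range_max_only(0.5, 1.0, 1023.0, 1023.0))   # 511.5
print(range_max_only(1.2, 1.0, 1023.0, 1023.0))   # 1023.0 (clamped)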

I think that would address the issue that you and Jim raised. (Yes?) It is a change from the v2 spec but it’s minor enough that I’m not aware of any real-world use-cases that would actually break due to that change.

Regarding your other post about scaling, I agree that sentence is confusing. Actually, I don’t think that sentence is even necessary so perhaps the best thing is just to delete it.

Doug

I think it would address the issue, as long as it is clearly documented. Currently it is slightly unclear whether the document is user-facing or developer-facing. From a user perspective it would be easy to make assumptions about expected behaviour, such as implicit minimum values of zero when omitted. It would be better if the behaviour was clearly described in the text. At the moment, it is necessary to run various numbers through the provided equations (as @jim_houston and I did) to find out what the behaviour is.

Plausible, yes, once you analyse what the maths does. But I don’t think that a pure offset, based only on the difference between in and out values, is what people would necessarily intuitively expect from an operator called “Range”.

out = min(in, maxInValue) * (maxOutValue / maxInValue)

is the maths I would intuitively expect, as only scaling seems more logical to me than only offsetting.

However there is no obvious equivalent for minimums only, particularly as those could often both be zero.


No, for sure! To me, without looking at the doc, I would expect it to be equivalent to those:

Cheers,

Thomas

To be clear, it is equivalent to those when all four values are given. It is the behaviour when only two are given which is perhaps not intuitive.

Argggg, I have been beaten at the explicitness game 🙂 What I was trying to imply is that all the functions I mentioned require explicit in/source and out/destination ranges.

Yes, it’s really just CLF’s use of only Min or Max values in the Range node as a special case to perform one-sided clamping, simply to avoid having a dedicated Clamp operator, which introduces the potential for confusion, I think.


Right, so hopefully my proposal helps, since it removes the ability to use the single-sided case to do an offset.

Thanks for the links to the other examples Thomas.

Doug

" 1. Fundamentally, the spec provides two ways of applying a shift and scale operation: Matrix and Range. The difference is that the Matrix does not clamp and the Range does."

I like such simplicity.

I am in the middle of something else, so I have not had time to check the proposal in the thread. However, I can comment that the spec is ‘developer-facing’ but mixed with information for a LUT designer, so I can see where that can get unclear. Doug’s CTF web pages are a much better user description of the nodes, for example.

On the in-depth/out-depth question, the sentence is part of a paragraph, and the whole paragraph is about the situation being dealt with (again, the missing-end case as a default); it is not a problem when all four elements are defined:

“If the input and output bit depths are not the same, a conversion should take place using the range elements. If the elements defining an InValue range or OutValue range are not provided, then the default behavior is to use the full range available, with the inBitDepth or outBitDepth attribute used in place of the missing input range or missing output range, respectively, as calculated with these equations:”

I agree with Nick’s statement of what the expected behavior is, and yes, this is mostly about what happens when either the minValue or maxValue is missing.

The Range node is about mapping one range to another, but especially about allowing the ability to select a floating-point range, including negatives, and map it to a different floating-point range. So a {-1000, 1000} should be able to map to a {1000, 3000}. In this case scale = 1, but the other term is {in + 2000} to offset to the right place in the new range. Doug’s modified equation would just echo the ‘in’ value and clamp it against the MIN or MAX. This is still the wrong result. Nick’s modification also only focuses on ‘scale’ when both scale and offset are needed.

The ‘special math’ at the end for missing entries does not do this.
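
For clarity, here is the full four-value math for that example in Python (the names are mine):

# Mapping {-1000, 1000} to {1000, 3000}: both scale and offset are needed.
minIn, maxIn = -1000.0, 1000.0
minOut, maxOut = 1000.0, 3000.0

scale = (maxOut - minOut) / (maxIn - minIn)   # 1.0
offset = minOut - minIn * scale               # +2000.0
for x in (-1000.0, 0.0, 1000.0):
    print(x * scale + offset)                 # 1000.0, 2000.0, 3000.0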

I still recommend

A) deleting this ‘missing elements’ section and requiring all four entries and indicating CLAMPs explicitly.
B) creating a RANGEF node that mandates only the basic equation and no missing entries for floating point.

Jim

I would note one other small bug…

The rangescalar for integers has a −1 that should not be there.
For pure ints, 12i/10i gives 4096/1024 = 4× for the scalar, which is correct.
The minus 1 gives 4095/1023 = 4.002932, which is incorrect.
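
In Python, for reference (my arithmetic):

# 12i -> 10i integer scalar, with and without the -1:
print(4096 / 1024)   # 4.0        -- without the -1, the scalar Jim expects
print(4095 / 1023)   # 4.002932...-- with the -1, as the draft currently reads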

Jim