Could a compromise be to require the Range’s style to be defined?
<Range> Node
When all of minInValue, minOutValue, maxInValue, and maxOutValue are present, if style is not specified, the default behavior is “noClamp”.
If a minInValue/minOutValue pair is present, the result shall be clamped at the low end.
If a maxInValue/maxOutValue pair is present, the result shall be clamped at the high end.
This would appear to be contradictory. Should it not read “If only a minInValue/minOutValue pair…”?
I’m sorry, but this discussion on clamping is off point. My dislike was of the use of implicit clamping at one end but not the other, based on whether a parameter was missing or not. (This is not a good method.) Doug brought up the same argument that because the original spec was implemented differently than what it stated, we should defer to the only existing implementation. I still disagree with that.
But all of this misses my main objection. The LUT format is intended to be able to create a ProcessList that works on full bit resolution files in their native float representation. But the calculation of the (if not present) scale factor ALWAYS sets floats to a 0…1.0 assumption (scale=1). This is only true if the LUT only works in a range of 0…1.0 on the input side, and this is EXPLICITLY not so. A range of -0.005 to 222 is a valid float input.
My recommendation is to delete the following text, and always include both MIN and MAX values in the Range Node. There should be no missing float range assumption.
Note: the “full range available” turns out to be a wrong statement (we are not using the full range of floats, just the 0 to 1.0 subset). [Squeezing the whole float range into a 1.0 container range can be a poor processing result.]
…is to use the full range available, with the inBitDepth or outBitDepth attribute used in place of the missing input range or missing output range, respectively, as calculated with these equations:

In the formulae below, if the bit depth is integral, then rangescalar() is defined as:

rangescalar(bitDepthInteger) = 2^bitDepth − 1

If the bit depth is specified as floating point, then:

rangescalar(float) = 1.0

If only minimum values are specified, the formula is:

scale = rangescalar(outBitDepth) / rangescalar(inBitDepth)

RGBout = MAX(minOutValue, RGBin × scale + minOutValue − minInValue × scale)

If only maximum values are specified, the formula is:

scale = rangescalar(outBitDepth) / rangescalar(inBitDepth)

RGBout = MIN(maxOutValue, RGBin × scale + maxOutValue − maxInValue × scale)
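For concreteness, the single-sided equations quoted above can be sketched directly in Python (a sketch only; the helper names `rangescalar` and `range_max_only` follow the spec text, not any real implementation):

```python
# Sketch of the Range node's single-sided math exactly as quoted above.
# rangescalar() is 2^bits - 1 for integer bit depths and 1.0 for float.

def rangescalar(bit_depth):
    """bit_depth: an integer number of bits, or the string 'float'."""
    if bit_depth == "float":
        return 1.0
    return 2 ** bit_depth - 1

def range_max_only(rgb_in, max_in, max_out, in_depth, out_depth):
    """Max-only case: RGBout = MIN(maxOutValue, RGBin*scale + maxOutValue - maxInValue*scale)."""
    scale = rangescalar(out_depth) / rangescalar(in_depth)
    return min(max_out, rgb_in * scale + max_out - max_in * scale)
```

Note that with float input and 10-bit output, this gives scale = 1023/1.0 = 1023, which is exactly the 0…1.0 float assumption being objected to here.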
Scott: please review the spreadsheets I did at that time. They were about the calculations of floats going wrong under the implicit missing-item equations.
Since Doug is not presenting both points of view, I am sorry but I think I have to respond in line.
On a ‘lot of history’ – this conversation was never concluded and we ran out of time (which is why the CLF 2.0 spec was never finished and is listed as DRAFT)
Yes, matrix allows scale and offset, but it doesn’t allow extraction of a portion of a float range and operation on only that range. (The RANGE node is for that purpose, and it still works.) It is the implicit clamping at one end that is the problem. The workaround is to always use all 4 metadata items, but the spec should not allow an implicit operation that causes the math to go off.
Again my objection was to the implicit one-side clamping, not about the presence of clamping.
On point #6 and this previous sentence: another solution is to make the clamping behavior explicit with ClampHi and ClampLo when needed. Adding another node with a different behavior is also an understandable approach, but it needs to get back to the original intent of handling the full float range (not assuming a float = 1.0 scale). I propose deleting the objectionable text in this thread. The focus should be on that issue, not the general existence of clamping. And no, it is not a naming disagreement; it is about the fundamental math.
There should be a note about use of the matrix 3x4 node to scale and offset floats in an unclamped fashion. I agree that Clamping is essential.
I am sorry to say I may not be available until late in the call and request that this discussion
be deferred until I can join.
Thanks,
Jim
What if we enforced that when style is ‘noClamp’ (even when only min or max pairs are provided) then no clamp is applied. Would that work? Then nothing is implicit because you are explicitly saying not to clamp.
If we went further and listened to @Greg_Cotten, we could require that style be set, which would require explicitly stating the intended behavior.
Side note: What might be a practical use case for specifying just min or max pairs and not wanting it to clamp?
Yes, I did review this, but I had difficulty making sense of it for the min/max-only cases when mixed with in/out bit depth differences. Perhaps if I had a practical example to help wrap my head around what the intended behavior is, rather than just looking at numbers, I might be able to see some meaning in the current behavior.
For those interested in taking a look, here is the spreadsheet Jim is referencing: range_node_eval_v4.xlsx (53.6 KB)
It is certainly much preferable.
Hey Jim, it looks like the website messed up the formatting of your message. It’s not clear what you’re objecting to or what your proposal is. As Scott suggested, I think it would help to have a specific example to discuss.
My proposal was to delete the text in the Range node that deals with the assumed scaling behavior if one piece is missing.
An example: for an input of 0 to 10,000 nits and an output of 0 to 1023, the scale factor that gets calculated is (1023/10000) = 0.1023 when all four parameters are present. This is the correct conversion from a 10000.0 float to 1023. With a 5000-nit input, RGBout = 5000 × 0.1023 + 0 − 0 × 0.1023 = 511.5, the right answer.

Exactly per the spec, if only the max values are specified, the scale parameter becomes (1023/1.0), or 1023× as a multiplier. This is 10,000 times off. This is the damage from assuming all floats operate in the 0 to 1.0 range.

With an RGBin of 5000 nits, and following the spec exactly with the equations as specified for the max-only case:

RGBout = MIN(1023, 5000 × 1023 + 1023 − 10000 × 1023), or MIN(1023, −5113977), which gives the wrong number in the full-float input situation. (The 511 is there, but at completely the wrong magnitude.)
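For anyone wanting to check the arithmetic, both numbers above reproduce directly (values taken from the example; nothing here comes from an actual implementation):

```python
# Reproducing both of Jim's numbers: all-four-parameter case vs. max-only case.
scale_all4 = 1023.0 / 10000.0       # all four parameters present: 0.1023
out_all4 = 5000.0 * scale_all4 + 0.0 - 0.0 * scale_all4
print(out_all4)                     # ~511.5, the right answer

scale_maxonly = 1023.0 / 1.0        # max-only: rangescalar(10i) / rangescalar(float)
out_maxonly = min(1023.0, 5000.0 * scale_maxonly + 1023.0 - 10000.0 * scale_maxonly)
print(out_maxonly)                  # -5113977.0, wildly off as described
```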
My proposal is to delete rangescalar(float) = 1.0 and everything after it, and to require all 4 parameters, with a specific style addition of ClampHi or ClampLo. This minimizes the changes to the node input. But if it is still seen as necessary to keep the damaged math of the RANGE node as is, create a new RANGEF node with a note about the assumption in the existing RANGE node.
Jim
Thank you Jim. When I plug in your example values I get a different number. But in any case, I’m open to prohibiting the use of only one side (max or min) in a case that does not allow a simple clamp.
We’ll discuss at the meeting today …
Doug
It took me a little while, but I now follow your example, @jim_houston, and get the same result you do. I did not realise initially that your scale = 1023 was coming not from maxOutValue = 1023 but rather from outBitDepth = 10, which is implicit, but not explicitly stated, in your example.
It does seem to me that the maths in the current spec is just simply wrong (or at least counterintuitive) for the single ended clamping cases. I have (I believe) a simpler, float only, example which illustrates this.
maxOutValue and maxInValue are not included at all in the calculation of scale. So if inBitDepth and outBitDepth are both 'float', then:

scale = rangescalar(float) / rangescalar(float) = 1.0
If I then set maxInValue = 2.0 and maxOutValue = 10.0, I might expect the operation to simply multiply input values by 5, and clamp at 10.0. Therefore 0.5 input would produce 2.5. However, testing this in Python, I get:
>>> maxInValue = 2.0
>>> maxOutValue = 10.0
>>> scale = 1.0
>>> RGBin = 0.5
>>> min(maxOutValue, RGBin * scale + maxOutValue - maxInValue * scale)
8.5
Is this really the intended behaviour?
It is effectively using implicit values of minInValue = 0.0 and minOutValue = 8.0, which is not what I think most people would expect. Intuitively you might expect 0.0 to be used for both.

Indeed, in @jim_houston’s example, for the result in the second case to match that in the first, implicit minimum values of zero would have to be used. But in fact an implicit minInValue = 9999.0 is used.
On a separate subject, I am not particularly keen on this sentence in the spec:
“If the input and output bit depths are not the same, a conversion should take place using the range elements.”
Who “should”? Does it mean that people creating CLFs should add e.g. an explicit range mapping from 0-1 to 0-1023? Or does it mean that CLF implementers should apply bit-depth based range scaling automatically? It kind of means both, which is a bit confusing.
If a CLF creator includes a Range node with inBitDepth="32f" outBitDepth="10i" and provides only, for example, minInValue=0.0 and minOutValue=0, a normalised float to 10-bit scaling will be automatically applied (with negatives clamped). If they provide all four range values, those need to be values which correctly apply that scaling, and inBitDepth and outBitDepth are ignored in the calculation of the mapping.
But maybe it’s only me who finds this confusing…
Hi Nick,
The only scaling in the formula for the “single-sided” case is what is necessary for the bit-depth scaling. I think there is a logic to that since you would need “both sides” in order to infer both a non-default scaling and an offset. So the “single-sided” case is basically a simple offset that is corrected for any bit-depth difference. So in your example, maxInValue does get mapped to maxOutValue, which I think is the primary expectation. Your example input value of 0.5, which is 1.5 less than maxInValue, gets mapped to 8.5, which is 1.5 less than maxOutValue. So I think it is a plausible result.
But as I wrote farther up in this thread, I agree that the “single-sided” case is probably not very useful for anything other than a simple clamp. In fact, in the interest of moving this forward and trying to bring us to consensus, I’ll go ahead and make a concrete proposal:
In Scott’s PDF on OverLeaf referenced earlier in the thread, change equation 3 from:
out = MAX(minOutValue, in × scale + minOutValue − minInValue × scale)
to simply:
out = MAX(minOutValue, in × scale)
and change equation 4 to:
out = MIN(maxOutValue, in × scale)
And add a note such as
Note: If only the minimum values are specified, the values must be set such that minOutValue = minInValue × scale. Likewise, if only the maximum values are specified, the values must be set such that maxOutValue = maxInValue × scale.
(One could argue that only the minIn or minOut is necessary, but I think it is important for readability to include both.)
I think that would address the issue that you and Jim raised. (Yes?) It is a change from the v2 spec but it’s minor enough that I’m not aware of any real-world use-cases that would actually break due to that change.
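As a sketch, the proposed single-sided equations reduce to a pure bit-depth-corrected clamp (function names here are mine, not from the spec):

```python
# Proposed single-sided Range behavior: clamp plus bit-depth scale only.

def range_min_only(x, min_out, scale):
    # out = MAX(minOutValue, in * scale)
    return max(min_out, x * scale)

def range_max_only(x, max_out, scale):
    # out = MIN(maxOutValue, in * scale)
    return min(max_out, x * scale)

# Float in/out (scale = 1.0) with maxInValue = maxOutValue = 2.0, which
# satisfies the proposed constraint maxOutValue == maxInValue * scale:
print(range_max_only(0.5, 2.0, 1.0))  # 0.5 -> passes through unchanged
print(range_max_only(3.0, 2.0, 1.0))  # 3.0 -> clamped to 2.0
```

With the constraint in the proposed note satisfied, in-range values pass through untouched, which is the simple-clamp behaviour being aimed for.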
Regarding your other post about scaling, I agree that sentence is confusing. Actually, I don’t think that sentence is even necessary so perhaps the best thing is just to delete it.
Doug
I think it would address the issue, as long as it is clearly documented. Currently it is slightly unclear whether the document is user-facing or developer-facing. From a user perspective it would be easy to make assumptions about expected behaviour, such as implicit minimum values of zero when omitted. It would be better if the behaviour was clearly described in the text. At the moment, it is necessary to run various numbers through the provided equations (as @jim_houston and I did) to find out what the behaviour is.
Plausible, yes, once you analyse what the maths does. But I don’t think that a pure offset, based only on the difference between in and out values is what people would necessarily intuitively expect from an operator called “Range”.
out = min(in, maxInValue) * (maxOutValue / maxInValue)
is the maths I would intuitively expect, as only scaling seems more logical to me than only offsetting.
However there is no obvious equivalent for minimums only, particularly as those could often both be zero.
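That scale-then-clamp interpretation can be sketched as follows (a hypothetical alternative, not the spec's behaviour; the function name is mine):

```python
def intuitive_max_only(x, max_in, max_out):
    # Clamp to the input maximum, then scale by the ratio of the two maxima.
    return min(x, max_in) * (max_out / max_in)

# With maxInValue = 2.0 and maxOutValue = 10.0, an input of 0.5 gives 2.5,
# the result expected earlier in this thread, rather than the spec's 8.5.
print(intuitive_max_only(0.5, 2.0, 10.0))  # 2.5
```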
No for sure! To me and without looking at the doc I would expect it to be equivalent to those:
- MapRange: https://docs.unrealengine.com/en-US/BlueprintAPI/Math/Float/MapRangeClamped/index.html
- SetRange: https://download.autodesk.com/us/maya/2011help/Nodes/setRange.html
- FitRange: https://www.sidefx.com/docs/houdini/nodes/vop/fit.html
- LinearConversion: https://github.com/colour-science/colour/blob/a992ee574f969619554735ff60487d78838b9cd4/colour/utilities/array.py#L848
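All of the linked functions share the same shape: an explicit input range and an explicit output range, with no implicit defaults. A minimal sketch of that common pattern (names are mine, not from any of the linked APIs):

```python
def map_range_clamped(x, in_min, in_max, out_min, out_max):
    # Generic fit/map-range: all four range values are explicit.
    x = min(max(x, in_min), in_max)           # clamp to the input range
    t = (x - in_min) / (in_max - in_min)      # normalise to 0..1
    return out_min + t * (out_max - out_min)  # rescale to the output range

# Jim's nits example with all four values explicit:
print(map_range_clamped(5000.0, 0.0, 10000.0, 0.0, 1023.0))  # 511.5
```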
Cheers,
Thomas
To be clear, it is equivalent to those when all four values are given. It is the behaviour when only two are given which is perhaps not intuitive.
Argggg, I have been beaten at the explicitness game! What I was trying to imply is that all the functions I mentioned require explicit in/source and out/destination ranges.
Yes, it’s really just CLF’s use of only Min or Max values in the Range node as a special case to perform one-sided clamping, simply to avoid having a dedicated Clamp operator, which introduces the potential confusion, I think.
Right, so hopefully my proposal helps, since it removes the ability to use the single-sided case to do an offset.
Thanks for the links to the other examples Thomas.
Doug