Thanks, Jim – sage words – I really appreciate your response.
When I noticed that we were calibrating our allegedly-D65 X300s and our definitely-D65 non-X300s to different CIE 1931 xy values, I nearly had a conniption, and for a hot second I thought some guy named Judd Vos had convinced the powers that be to calibrate all HDR stuff to his creative whitepoint. Fortunately, before I started bleeding from the ears, the engineer pointed me to Sony’s straightforward OLED Color Matching whitepaper.
> There is another transform sometimes needed to calibrate the physical devices, and in the ACES system, it is called the ODCT – Output Device Calibration Transform.
Since you bring up ODCTs and acronyms and such, I was hoping I could get some clarification here with respect to Target Conversion Transforms (TCTs) and Display Encoding Transforms (DETs), which were the subject of our most recent technical bulletin (TB-2016-013, I think?). I’ve never really seen anyone mention them. As I understand it… let’s see if I can get this right…
- Within the ODT:
  - The Target Conversion Transform (TCT) maps OCES RGB to R’G’B’ values that fit the output device’s luminance range and gamut.
    - I believe these are the C9 forward splines, adjusted for the output device’s luminance range and surround, followed by gamut remapping / clamping and possibly a chromatic adaptation transform.
  - Next, the Display Encoding Transform (DET) converts those remapped R’G’B’ code values to the device-specific signal encoding (ODCES – Output Display Color Encoding Space).
    - e.g., Dolby PQ / ICtCp, or Gamma 2.6 / X’Y’Z’
- After the ODT:
  - The ODCT describes a per-device adjustment that nudges incoming idealized ODCES values into place, compensating for individual differences across units.
    - For instance, a hardware ICC profile generated from probe measurements
    - Or, as discussed (but not recommended), a matrix transform to perform (or compensate for?) a Judd-Vos transformation
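To check my own understanding of that split, here’s a minimal sketch in Python: the ST 2084 (PQ) inverse EOTF standing in for a DET, and a 3×3 per-unit correction matrix standing in for an ODCT. The matrix values are pure illustration, not from any real probe session:

```python
# DET: ST 2084 (PQ) inverse EOTF, mapping absolute luminance to code values
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def det_pq_encode(nits):
    """Encode one absolute-luminance component (cd/m^2) to a PQ code value."""
    y = min(max(nits / 10000.0, 0.0), 1.0) ** M1
    return ((C1 + C2 * y) / (1.0 + C3 * y)) ** M2

# ODCT: a hypothetical per-unit 3x3 matrix derived from probe measurements,
# nudging idealized code values into place for one specific panel
ODCT_MATRIX = [[ 1.010, -0.005,  0.000],
               [ 0.000,  0.995,  0.002],
               [-0.003,  0.000,  1.004]]

def odct_apply(rgb):
    """Apply the per-unit correction to a PQ-encoded RGB triplet."""
    return [sum(m * v for m, v in zip(row, rgb)) for row in ODCT_MATRIX]

signal = odct_apply([det_pq_encode(n) for n in (100.0, 100.0, 100.0)])
```

100 cd/m² lands around PQ code value 0.51 before the per-unit nudge; whether applying a matrix in code-value space like this is even a sane place for a Judd-Vos-style correction is exactly the kind of question I’d want the documentation to answer.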
Am I understanding TCTs / DETs / ODCTs correctly?
Would this then mean that tweaking a contrast knob on the display itself constitutes an ODCT, of sorts? Or enabling / disabling Judd-Vos corrections, or calibrating to one mode or the other?
Similarly, if a grade were produced on an X300 in HLG, at some system gamma and targeting some peak luminance, it should not only be mathematically possible to convert it to a visually identical PQ signal; it should also be possible to communicate / derive the exact settings for the appropriate X300 PQ mode. But that device mode switch… it’s not so much a transform as it is metadata, or implied metadata that dictates an adjustment.
LUTs get passed between facilities without appropriate context attached all the time; and communication, already difficult, is about to get much… difficult-er. Meanwhile, the most common means of communicating device settings seems to be photos of menu screens.
I think this is what I was trying to get at with my original question, without quite knowing how to ask it – how, exactly, could / would / should ACES communicate information about display-specific settings? And to what end? It doesn’t quite feel right to include device settings in the “ODCT” category – and, yet, ACES is well-positioned to serve as both a conduit for vendor-specific support, as well as a means to communicate and validate supported workflows.
I guess where this long-assed post is taking me is towards either vendor-specific CLF extensions building on the ProcessList’s Info / CalibrationInfo elements, or a suite of vendor- / device-specific documents… white papers for calibrating a given device for an ACES (or vendor?) ODT.
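For reference, something like the following is what I have in mind. The CalibrationInfo child elements are as I recall them from the CLF spec, so treat this as a sketch, and the DeviceSetup extension is entirely hypothetical:

```xml
<ProcessList id="urn:example:x300-pq-trim" compCLFversion="2.0">
  <Info>
    <CalibrationInfo>
      <DisplayDeviceSerialNum>0012345</DisplayDeviceSerialNum>
      <DisplayDeviceHostName>grading-bay-2</DisplayDeviceHostName>
      <OperatorName>J. Doe</OperatorName>
      <CalibrationDateTime>2016-11-01T10:00:00</CalibrationDateTime>
      <MeasurementProbe>PR-740</MeasurementProbe>
      <CalibrationSoftwareName>ExampleCal</CalibrationSoftwareName>
      <CalibrationSoftwareVersion>1.2</CalibrationSoftwareVersion>
    </CalibrationInfo>
  </Info>
  <!-- hypothetical vendor extension: the device mode this transform assumes -->
  <!-- e.g. <DeviceSetup vendor="Sony" model="BVM-X300" mode="PQ D65 1000nit"/> -->
</ProcessList>
```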
Similarly / alternatively, ACESclip could / should be able to communicate sufficient information about the mastering display to validate like models against. Or, at the very least, provide enough information that we can begin to pick apart where everything started to go wrong.
So… ah… yeah, sorry, this kind of veered into the abstract end of things, and I’m sure vendor participation to such an end would be tricky, if viable at all. But I’m sure of three things:
- We do need a way to formally and unambiguously communicate and validate device settings, since we live in a world where multiple ODTs could apply to the same device, and many devices could accept the same ODT. Whether that’s within the scope of ACES, or better belongs in the BxF or IMF domains, is worth further discussion, I think.
- In the end-user documentation, we should at least note the possibility of differences from the expected measurements listed; and since there isn’t a de facto Judd-Vos implementation that vendors abide by, we wouldn’t be able to list vendor-agnostic expected Judd-Vos offset values… In any case, it’s probably worth at least a note in an appendix or something.
- I’m still not sure whether display-device settings warrant their own acronym and real estate on the ol’ block diagram, but this kind of information isn’t too far removed from the SMPTE ST 2086 mastering-display metadata that HDR deliverables already require: attaching a priori knowledge of device characteristics to the pipeline, which, in turn, dictates color-volume upper bounds and constraints when reproducing on other devices.
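As a concrete point of comparison, here’s how that ST 2086 a priori knowledge gets serialized today, sketched via the G/B/R/WP/L string that encoders like x265 accept. The helper function is mine, and the P3-D65 / 1000-nit values are just an illustrative mastering display:

```python
def st2086_string(primaries, white, max_nits, min_nits):
    """Format ST 2086 mastering-display metadata as the x265-style
    G/B/R/WP/L string (chromaticities in 0.00002 (x,y) units,
    luminance in 0.0001 cd/m^2 units)."""
    def c(xy):
        return "(%d,%d)" % (round(xy[0] / 0.00002), round(xy[1] / 0.00002))
    return "G%sB%sR%sWP%sL(%d,%d)" % (
        c(primaries["G"]), c(primaries["B"]), c(primaries["R"]), c(white),
        round(max_nits * 10000), round(min_nits * 10000),
    )

# Illustrative values: P3-D65 primaries, 1000-nit peak, 0.0001-nit black
P3 = {"R": (0.680, 0.320), "G": (0.265, 0.690), "B": (0.150, 0.060)}
D65 = (0.3127, 0.3290)
md = st2086_string(P3, D65, 1000.0, 0.0001)
# -> "G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1)"
```

That string already rides along with every HDR10 deliverable; extending the same habit of attached, machine-readable device knowledge to calibration modes doesn’t feel like much of a stretch.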