ACES with RAW footage in RESOLVE

Forgive me if I’m answering my own questions (this is also my first time playing with ACES in Resolve), but I couldn’t find any information online about how Resolve handles RAW footage with ACES. It seems that when you have a RAW file, Resolve auto-detects it as RAW footage and will not allow the IDT to be changed.

For example, I just brought in a project that was shot in CinemaDNG RAW on an Ursa Mini, and when I set the ODT (without setting the IDT in Color Management) Resolve seemed to automatically set the image based on the RAW. Changing the IDT on the individual clips doesn’t change the image. I tried with R3D files from a different project, and it’s the same thing. Then I brought in a project shot on a Sony camera in S-Log3, and changing the IDTs affects the image as normal.

I then did exports in EXR and different flavors of QuickTime (after removing the ODT), brought them all back in, and simply setting the ODT brought them back to normal without needing to set an IDT again. Is this a normal workflow when working with RAW video?

That is correct. Raw footage can be decoded directly to ACES, without the need for an IDT. In this case, most raw decode parameters are disabled, as they are irrelevant. You can still alter some, such as Kelvin, Tint and ISO.

The exception seems to be Sony Raw, where Resolve does include a Sony Raw IDT. However, I have found I get better results (matching other implementations) on F65 Raw media when I use my own DCTL S-Gamut3.Cine IDT. See my other post on this subject.

I noticed that you said you exported to QT without an ODT and brought the footage back in. Did that work? Because it’s not supposed to work (in theory).

Thank you Nick for the clarification. And yes, my workplace will need some of your scripts in the near future when we start implementing more ACES workflows!

And yes Charles, I exported a QuickTime DNxHR without the ODT, reimported the DNxHR and reapplied the ODT, and it was fine.

If you export with no ODT, you need to export to a format which supports un-clamped floating point. If you export to a QuickTime format, you will clip all your highlight detail. Check your image again and I think you will find that highlight detail is lost when re-importing the DNxHR. If it is a low contrast scene, you may not have noticed. I am not aware of any common QuickTime format supporting un-clamped float. I don’t know if the QuickTime architecture is even capable of it.
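To make this concrete, here is a small NumPy sketch (the highlight values are made up for illustration) of what happens when scene-linear ACES data, which can legitimately exceed 1.0, is pushed through an integer encode of the kind a typical QuickTime codec uses:

```python
import numpy as np

# Hypothetical scene-linear ACES2065-1 samples; values above 1.0 are
# perfectly valid highlight detail in a scene-referred encoding.
linear = np.array([0.18, 0.9, 1.0, 4.0, 16.0])

# A 10-bit integer codec can only represent codes 0..1023; normalized values
# above 1.0 are clipped before quantization, so the highlight detail is gone.
codes = np.clip(np.round(linear * 1023.0), 0, 1023).astype(np.uint16)
decoded = codes / 1023.0

print(decoded)  # 4.0 and 16.0 both come back as 1.0 -- indistinguishable
```

A half-float EXR, by contrast, stores those highlight values losslessly, which is why it is the safe choice for ODT-less exports.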

First of all: what would be the reason for processing RED footage with an F55 IDT, or BMDCC footage with any LogC IDT? And why export a QuickTime or DNxHR file in this way?

I have the impression these tests are done treating ACES Input/Output Transforms like camera/viewing LUTs, but one has to consider the whole system: although it is not a “rigid” framework, it can still be used incorrectly and produce poorer results than a non-ACES workflow. I mean: nothing prevents you from using the system that way (treating ACES transforms just like LUTs), but “you might get wet”.

When ACES-aware software processes raw footage without the possibility of explicitly setting an Input Transform (IDT), that automatic process is exactly the IDT, at least if the software does it according to the ACES specifications. I believe this is what DaVinci Resolve does, and it is fundamentally correct that you cannot change the Input Transform, as long as the software picks the right one for you (by parsing the source-file metadata).

I believe (but this is just my assumption) that the reason why Resolve allows you to change the Input Transform with raw footage from Sony cameras is that Sony distributes its IDTs in the form of CTL files (they are freely available from Sony CineAlta’s website as well as from the official ACES repositories), while other camera vendors provide IDT capabilities embedded in their own software, or in their SDK for product partners. Both RED and Sony provide the SDK route, but RED does so exclusively (i.e. it does not provide IDTs as CTL files).
So Resolve implements the Sony Input Transforms in two ways (via external IDT CTL/DCTL and via the SDK). In case a camera vendor (e.g. Sony) changes or improves their IDTs, users can replace them by switching to the new (D)CTLs in Resolve, while for SDK-only camera vendors the whole color-processing application must be patched or upgraded. I repeat: this is my personal opinion on the original topic question.

As for the DNxHR export of footage “without the ODT” to re-import into an application “without an IDT”: I should really warn against running these processes unless one really understands what a particular product/appliance is doing with color management. The ACES framework allows importing and exporting footage without Input/Output Transforms (if it’s already natively ACES2065-1 at the input/output end), but the products implementing this could do other things to the color management as well if the workflow is not set up the proper way. For example, I am a bit wary of using DNxHR or a QuickTime container format, since they don’t (yet) have proper metadata that can make a software product acknowledge the required input/output ACES colorimetry.

I think the lack of un-clamped float support is a far more significant problem than metadata when trying to store ACES2065-1 image data in a particular container.

Yes of course Nick, that is a major concern; but you had covered that already with your previous reply.

Yes, I don’t have any intention of applying IDTs to footage like LUTs; I’m just trying to understand how it all works. I assumed that Resolve was not allowing IDTs because of the RAW footage; I just wanted to confirm that was what was happening, since I hadn’t seen it mentioned anywhere before trying it myself.

Though that is good to know about the formats. Is it recommended to work only with EXR in ACES workflows until final delivery? And are certain formats best for delivery with ACES?

Hi Darren.
To the best of my knowledge there is not yet a delivery specification based on ACES: one would rather use some Output Transforms to get ACES images into given delivery formats (e.g. DCDM, XDCAM file structure, SVoD-ready video clip, 35mm interpositive film, etc.).
I imagine, though, that ACES-based delivery specs will appear soon, particularly where “delivery” means “archival”, or where the elective file formats include (or will include) ACES as a standardized color encoding. With the latter I am referring, for example, to the SMPTE IMF standard (Interoperable Master Format), and to delivery specs that include it.

As for your first question, nothing prevents you from using ACES with a file format other than SMPTE 2065-4 (OpenEXR sequences with specific/mandatory metadata tags) or SMPTE 2065-5 (MXF container with specific metadata), but in my opinion you should pay attention to a few aspects:

  • Some file formats are not completely color-managed (if color-managed at all), so you can still encode ACES2065-1 code values in their (R,G,B) color channels, but no application will interpret them as ACES-encoded images, unless in your own specifically crafted/preconfigured/scripted/instructed workflow. What’s the point of using ACES instead of your usual “long-standing, well-paved road” then?

  • Some file formats allow you to define ACES2065-1 colorimetry, as well as that of any other ACES color space (even those not meant for digital storage). Yet you cannot expect applications to interpret it correctly, or, for example, to automatically apply IDTs/ODTs for it. Not even products from ACES Product Partners should be expected to do so since, by current Academy/SMPTE standards, they rely on a specific set of formats with specific metadata.

  • Products from ACES Product Partners support ACES pipelines in the file formats drafted to do so; therefore, as long as you stay within such interoperable applications, the file formats should not be much of a problem.

When delivering a master you can always leave the ODT on and just render to whatever format you want to use in your workflow. But never use anything other than EXR when rendering with no ODT.

For example, at our facility I only prepare VFX plates without ODT and render them to EXR.

When the grade is done, we render to ProRes 4444 XQ (with a P3 or Rec.709 ODT) or anything else specified by the client’s demands (DCDM, DPX, whatever…).

But once you do that, you’re out of ACES. To have a complete ACES workflow you’d need your ONLINE to have the capacity for it, and so far RESOLVE doesn’t work well for that purpose.

@walter.arrighetti: so EXR wouldn’t be a way to archive a PROJECT?

I don’t use Resolve, but I guess you could render to QuickTime without an ODT using ACEScc. The log nature of the images does fit inside the code values of QuickTime ProRes, as long as the values are scaled to fit inside the “legal/video” range.

I don’t know for sure, but I would suppose that there are still many more code values available when the software does the linear-to-ACEScc transform. At the end of the process that is reversed when outputting to AP0/linear. I would think there is no loss between those transforms when done inside a 32-bit (or more) floating-point pipeline… but when you go out to ProRes you get a maximum of 10 bits for the whole range. So a lot of information is lost, isn’t it?
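This worry can actually be quantified. Here is a rough NumPy sketch using the ACEScc encoding from the Academy S-2014-003 spec; the full-range 10-bit quantization is my simplifying assumption, not how any actual ProRes implementation scales values. Because ACEScc is a log encoding, one 10-bit step corresponds to a roughly constant relative error in linear light, well under a percent:

```python
import numpy as np

def lin_to_acescc(lin):
    """ACES2065-1 linear -> ACEScc log encoding (per S-2014-003)."""
    lin = np.asarray(lin, dtype=np.float64)
    return np.where(
        lin >= 2.0**-15,
        (np.log2(np.maximum(lin, 2.0**-15)) + 9.72) / 17.52,
        (np.log2(2.0**-16 + np.maximum(lin, 0.0) * 0.5) + 9.72) / 17.52,
    )

def acescc_to_lin(cc):
    """Inverse ACEScc encoding (ignoring the 65504 top-end clamp)."""
    cc = np.asarray(cc, dtype=np.float64)
    return np.where(
        cc < (9.72 - 15.0) / 17.52,
        (2.0 ** (cc * 17.52 - 9.72) - 2.0**-16) * 2.0,
        2.0 ** (cc * 17.52 - 9.72),
    )

# Scene-linear test values, including a strong highlight
values = np.array([0.02, 0.18, 1.0, 16.0])

cc = lin_to_acescc(values)
q10 = np.round(np.clip(cc, 0.0, 1.0) * 1023.0) / 1023.0  # 10-bit quantize
rel_err = np.abs(acescc_to_lin(q10) - values) / values

print(rel_err)  # every value survives with well under 1% relative error
```

So a single 10-bit log-encoded generation loses surprisingly little versus float, but the error compounds with every generation, and anything outside the quantized range is simply gone, which is why half-float EXR remains the safe archival choice.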

@chuckyboilo Yes, OpenEXR is the preferred file format for ACES images. Just using any OpenEXR, however, may lead to loss of interoperability among tools and of future-proofing. First of all, the archival color space must be ACES2065-1 (i.e. AP0 primaries, linear characteristics; cf. the ACES color-spaces thread).
Furthermore, a minimal but mandatory set of metadata should be written in the EXRs (e.g. the AP0 color-primaries coordinates, and a specific “ACES” flag set to ‘True’), and only certain tiling/packing/layering and compression algorithms are allowed. This is all part of the SMPTE 2065-4 standard; it seems overcomplicated when one reads it, but logo’ed products should automatically honor all these requirements when dealing with ACES EXRs.
There’s some “analogy” with (though technically different from) requirements for TIFF sequences when used in DCDMs.
By the way: another file format for ACES storing/archival is being drafted as SMPTE 2065-5: it is basically a “clipwise” MXF container wrapping a sequence of SMPTE 2065-4 compliant frames.

@flord Also for the reasons just written above to Charles, ACEScc should not be used for rendering out to files; it is meant just for color-grading processes (like on-set and in-theater color correction). If your color-corrector tool doesn’t support ACEScc as a grading/working space (thus enabling it transparently and automatically), other ways exist. For example, two LUTs may be used: one to convert footage from the native color space or ACES2065-1 into ACEScc, then you grade on top of that, then the second LUT converts ACEScc back into ACES2065-1. Just be sure the LUTs are accurate enough.
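That closing caveat about LUT accuracy is easy to underestimate. Here is a toy NumPy sketch (the 1024-entry size and uniform linear-domain sampling are my assumptions, chosen to show the failure mode, not a recommendation): sampling the ACES2065-1-to-ACEScc curve uniformly in linear light leaves enormous interpolation errors in the shadows, which is why such LUTs normally put a log shaper in front:

```python
import numpy as np

def lin_to_acescc(lin):
    """ACES2065-1 linear -> ACEScc log encoding (per S-2014-003)."""
    lin = np.asarray(lin, dtype=np.float64)
    return np.where(
        lin >= 2.0**-15,
        (np.log2(np.maximum(lin, 2.0**-15)) + 9.72) / 17.52,
        (np.log2(2.0**-16 + np.maximum(lin, 0.0) * 0.5) + 9.72) / 17.52,
    )

# Naive 1D LUT: 1024 samples, uniform in the *linear* domain over [0, 1]
xs = np.linspace(0.0, 1.0, 1024)
lut = lin_to_acescc(xs)

def apply_lut(lin):
    # Piecewise-linear interpolation, as a LUT-applying tool would do
    return np.interp(lin, xs, lut)

mid_err = abs(float(apply_lut(0.18)) - float(lin_to_acescc(0.18)))
shadow_err = abs(float(apply_lut(0.0002)) - float(lin_to_acescc(0.0002)))
print(mid_err, shadow_err)  # negligible at mid-grey, stops of error in shadows
```

A shaper (or a LUT sampled uniformly in the log domain) concentrates the samples where the curve bends, so the deep-shadow error collapses to the mid-grey level.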

@Peter_Rixner That’s exactly why an ACES pipeline should never use less than 16 bits/channel (whatever linear/log characteristic is employed). As Charles said: you do all the post and finishing in the ACES workflow (including archival), then simply step out of the ACES pipeline for the final deliverables in more restricted color spaces and bit depths. This way you never lose information during any creative or technical process in between.

I have a question about this, using RED footage.

I have my project set to ACEScct in Resolve.
Then I go to the Color page; in the Camera Raw section I can select “Decode Using”: Camera Metadata / Clip / Project / RED Default, and the image changes depending on which one I use. “Clip” and “Camera Metadata” look the same; the difference is that with “Clip” I can modify some parameters, such as “FLUT”, “Denoise”, “OLPF”, etc.

The difference between “Clip” and “Project” is mostly about temperature; “Project” is warmer.

With “Clip” and “Camera Metadata” I can see the footage closer to what was recorded. Is it OK if I set all my footage to “Clip”? What would be the right way to set this up in order to start the grading process? If I set it to “Clip”, do I have to set an IDT in the project settings?