Online with ACES

Hi to all,

I’m looking to get some insight on how others are dealing with ACES after color. Does anyone do their online in ACES? Correct me if I’m wrong, but the point of ACES is to be able to use it all the way to the end (AKA online/packaging all the way to archiving). Please let me know if I’m mistaken about the way it should be used.

So far it’s been a great tool for color and VFX, but a lot of tools in online (including plugins and effects) just don’t work in ACES (I know that a lot of plugin developers have yet to get on board). For example, in Resolve (using ACEScct), the title generator doesn’t work properly, transfer modes don’t work as expected (they probably behave as designed on linear footage), and any transparencies, cross-fades, etc. are pretty much all broken.

Is it expected that editors will also use ACES at some point? Because as of now it’s impossible to conform properly from edit to online with all of these problems.

So… is ACES meant to be used in packaging? Has anyone done this properly? Is it a software issue?

My personal goal is to keep a project 100% ACES so that we can skip the color-grading renders and just hand our project over to the onliners (in ACES). From there we’d be able to simply offer an EXR archive (on top of all the other mastering formats).

Am I dreaming in color, or is this the goal? Should I rethink the way we offer archiving services? Should it be done before online, AKA a color-grading archive without any packaging? That seems counterproductive…

Any thoughts on this?

Thanks!


Hi Charles.
I know of a few film projects that have successfully used ACES-colorimetry footage in the editorial/offline process with Avid Media Composer. I’m not sure whether timed text or fades were involved, though.

You are correct in hoping that ACES is used for offline editorial, as that would be the perfect world.
However, not using ACES2065-1-encoded proxies during online does not necessarily break ACES ― as I will explain at the end of this post.
Not using ACES2065-1 during offline editorial is due to a couple of operational problems that stand today; if they ever cease to impede, then rushes encoded in ACES2065-1 will be a reality on 100% of ACES projects:

  • If, as is often the case, you have to transcode your rushes because, for one or several reasons, the online editorial process cannot work directly on camera-native files (so there would be “time” for a simultaneous conversion to ACES colorimetry), current editing software most likely does not support a standardized, yet practically workable, file format that also encodes ACES2065-1 code values. In fact, the de facto standard codecs for online ―DNxHD/HR and ProRes― currently do not.
  • Even if you can “afford” online editing straight from camera-native footage, there are currently no cameras whose raw format encodes ACES2065-1 code values. In that case you need an additional transcoding phase to switch to ACES colorimetry, which may not be ideal in terms of timing/storage costs. For this reason ACES, by design, supports workflows that operate on camera-native files without a mandatory, separate transcoding step to ACES.

However ―and this is the focal point, as anticipated at the beginning― the former of the two situations does not “break” ACES, because:

  • Separately transcoded rushes used during online are considered “peripheral files” in an ACES workflow; their colorimetry may therefore correctly be an output-referred color space. Specifically, it must be the color space(s) used on the reference monitor(s) of the online editor(s).

  • Even in the above color pipeline, the correctness of the ACES workflow stands as long as the transcoding of rushes to their output-referred color space always goes through a complete ACES pipeline, that is: Input Transform (IDT), then an optional LMT, then Output Transform (RRT+ODT). Product Partners’ software allows the creation of such ACES-compliant rushes, usually with a completely transparent UX. This is the way color-critical information (like CDLs and show LUTs) can be passed to editorial consistently with all the other departments.

  • The rushes are “peripheral” also because they are meant to be a visual reference/proxy for the original footage; no additional color-critical operations, not even mastering, should come out of them. Ideally, rushes may even be erased after the final cut is approved.

By the way, all the ACES film/TV productions that I supervised employed the above color pipeline for the online. As a typical SDR example, this means rushes transcoded ―with an ACES-compliant software and in one pass― into a proxy format (typically DNxHR or ProRes wrapped in MXF), color-encoded in Rec.709 or Rec.2020: the cameras’ Input Transforms, then the show LUT and/or on-set CDLs, then the Rec.709 Output Transform.
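To make that one-pass chain concrete, here is a hypothetical Python sketch of a single pixel going through it. The IDT, show LUT and Output Transform are identity placeholders (the real transforms are far more involved); only the ASC CDL math (slope/offset/power, then saturation) is the actual published formula:

```python
# Sketch of the one-pass dailies chain:
# Input Transform -> on-set CDL -> show LUT -> Output Transform.
# The idt/show_lut/output_transform callables are placeholders, NOT real ACES code.

REC709_LUMA = (0.2126, 0.7152, 0.0722)

def apply_cdl(rgb, slope, offset, power, saturation):
    """ASC CDL: out = clamp(in*slope + offset)^power, then saturation."""
    out = []
    for v, s, o, p in zip(rgb, slope, offset, power):
        v = max(v * s + o, 0.0)          # slope, offset, clamp at black
        out.append(v ** p)               # power
    luma = sum(c * w for c, w in zip(out, REC709_LUMA))
    return tuple(luma + saturation * (c - luma) for c in out)

def transcode_rush_pixel(camera_rgb, idt, cdl, show_lut, output_transform):
    """One-pass conversion of a camera-native pixel to a display-referred proxy."""
    aces = idt(camera_rgb)               # camera-native -> ACES2065-1
    graded = apply_cdl(aces, *cdl)       # on-set look (CDL)
    looked = show_lut(graded)            # show LUT (LMT-style look)
    return output_transform(looked)      # RRT+ODT -> e.g. Rec.709 proxy code values

# Identity placeholders stand in for the real transforms here:
identity = lambda rgb: rgb
neutral_cdl = ((1, 1, 1), (0, 0, 0), (1, 1, 1), 1.0)
px = transcode_rush_pixel((0.18, 0.18, 0.18), identity, neutral_cdl, identity, identity)
```

The point of the sketch is the ordering: the look (CDL + show LUT) is applied on ACES data, between the Input and Output Transforms, never on the display-referred result.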

Needless to say, were the editor to decide on additional color grading of the rushes, those metadata carried over in the project’s “bin” or via additional CDLs would break the pipeline. This should really be avoided, both because such a grade would be applied in a (possibly SDR) output-referred color space (Rec.709, Rec.2020, HDR10, …) rather than on top of an ACES color space and, above all, because color correction does not belong to the editorial department.
If, however, color correction/pre-grading during offline is either desired or required, the ACES workflow does not “impede” that. In this case the rushes need to be in ACES2065-1 code values (as is common every time ACES content is stored, even temporarily). Color grading will then take place, transparently to the editors, in either the ACEScc or the ACEScct color space.

All in all, the common mistakes for on-location and offline departments, on any ACES project, are:

  • previewing/checking camera footage on on-set/near-set monitors using non-ACES conversion software;
  • transcoding the rushes using a color pipeline that bypasses ACES completely (even using a Product Partner’s software, but not configuring it to internally pipe through ACES Input/Output Transforms, is wrong).

Either of the above will most likely end up with the DI/post/VFX departments viewing something different from what was presented to the DoP/editorial.

Thanks Walter! Sorry for the lateness of my reply. I was busy in the grading room!

I get what you’re explaining. But our main problem is keeping projects in ACES through online so that we can offer an ACES archive to our clients. There’s no point in using ACES to archive if the final master timeline cannot be used in ACES. (I.e. in Resolve, at the moment, it’s impossible to do so since titles, transitions and other basic online tools don’t work with a linear image.)

I could always archive the color-graded picture (with no online or packaging). But that’s counterproductive in the event of a “reissue” of a film. What I would want is a final film in ACES that I can sell to my clients as a fully future-proof archive.

Now, in your proposed workflow you suggest peripheral files created in ACES. Is this correct? We’re already doing this. When color is finished, files are rendered with the ODT to QuickTime ProRes 4444, where they stand as correctly display-referred files, AKA they look as intended on the onliner’s monitor. But at that point I break the ACES workflow. I cannot go back into ACES with these files. Correct?

My goal is to tell my clients: in 20 years, we’ll be able to open your archive and do a trim pass for the new projection standard…

Isn’t that the point?

Thanks!

Hi Charles.
Sorry, it was my fault: I thought you were looking for ACES-encoded footage for “offline editorial” rather than “online” ― and I replied accordingly.

I understand your first question now; it makes even more sense and raises an outstanding point.
Yes, “conforming” (online editorial) goes right into the post-production phase that, together with color correction, is usually called “finishing”. And yes: the output of finishing should really be an ACES archive, if not even a master ― thus in the ACES2065 color space (as per ST2065-1).

Your main point against mastering/archiving in ACES seems to boil down to issues implementing a color stack with Resolve, right?
In this respect an ACES pipeline is no different from traditional color-stack operations (color picking, graphic inserts, titling, etc.) in a possibly nonlinear, output-referred color space on top of a “log”-space timeline. This has always been solved by either

  • the application providing options to choose the layer where the above operations take place, or
  • by applying a “reverse Output Transform” IDT to the node (or layer) that works in the color-picking color-space, so this is brought into the timeline color-space afterwards.

Either results in the application converting, in the subtitling example, the Rec.709 white-text code values into timeline code values for the same color, so both can be rendered into the master’s color space (ACES2065 in your case, or a “log”-encoded RGB space in traditional DI film workflows).
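As a toy illustration of the second option, here is a hypothetical Python sketch of a Rec.709 insert being pushed “backwards” into a scene-linear timeline before compositing. Both stages below are crude stand-ins (a pure 2.4 gamma for the display and a simple s-curve for the tone map); the real RRT+ODT inverse is far more involved:

```python
# Toy reverse Output Transform for a Rec.709 graphic insert.
# inverse_odt and inverse_rrt are simplified placeholders, NOT the real transforms.

def inverse_odt(v):
    """Display code value -> display-linear light (placeholder 2.4 gamma)."""
    return v ** 2.4

def inverse_rrt(v):
    """Display-linear -> scene-linear (placeholder inverse of t = x / (x + 0.18))."""
    return 0.18 * v / (1.0 - v) if v < 1.0 else float("inf")

def rec709_insert_to_timeline(rgb):
    """Reverse Output Transform: inverse ODT first, then inverse RRT."""
    return tuple(inverse_rrt(inverse_odt(c)) for c in rgb)

# A mid-grey Rec.709 graphic element, brought into the (scene-linear) timeline:
timeline_rgb = rec709_insert_to_timeline((0.5, 0.5, 0.5))
```

Compositing then happens in the timeline color space, and the forward Output Transform at the end of the chain renders the insert looking exactly as authored.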

As regards the file format, again: no Apple ProRes codec supports ACES colorimetry, so ―at the very latest during mastering― you have to change the file format to either OpenEXR sequences (compliant with SMPTE Standard ST2065-4) or MXF files wrapping the above EXRs (as per ST2065-5). By the way, the latter MXFs may be instantly mastered into an ACES Interoperable Master Format (IMF) package as soon as SMPTE Specification ST2067-50 is released; this format should be good for both studio interchange and archival.
Even if you do manage to render ACES2065 code values into a ProRes file, that is not a standard ACES file format, so ACES-logoed applications are neither required, nor even expected, to support an ACES workflow that loads ACES2065-encoded ProRes files. This holds for current products, and even more so for future products (relevant to your “re-issue” concerns).
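For reference, a minimal sketch of what an ST2065-4 “ACES container” frame header carries, written as a plain Python dict rather than through an actual OpenEXR library. The AP0 chromaticities and the acesImageContainerFlag come from the standard; treat the remaining fields (channel layout, attribute spelling) as illustrative:

```python
# Illustrative ST2065-4-style header attributes for one frame of an EXR sequence.
# This builds a plain dict; a real implementation would write these via OpenEXR.

AP0_CHROMATICITIES = {
    "red":   (0.73470, 0.26530),
    "green": (0.00000, 1.00000),
    "blue":  (0.00010, -0.07700),   # outside the spectral locus by design
    "white": (0.32168, 0.33767),    # ACES white point
}

def aces_container_header(width, height):
    """Minimal ACES-container-style header for a single frame."""
    return {
        "acesImageContainerFlag": 1,           # marks the file as an ACES container
        "chromaticities": AP0_CHROMATICITIES,  # ACES2065-1 (AP0) primaries
        "channels": {c: "HALF" for c in ("R", "G", "B")},  # 16-bit float channels
        "dataWindow": (0, 0, width - 1, height - 1),
        "displayWindow": (0, 0, width - 1, height - 1),
    }

hdr = aces_container_header(1920, 1080)
```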

Charles, forgive me if this is very naive, but I wonder if this page from Arri may help? (Especially the last diagram) http://www.arri.com/camera/alexa_mini/learn/working_with_aces/

I believe that between each department of the pipeline, you are expected to convert from your working space, such as ACEScc, back into standard ACES2065-1 as your interchange format. So in the final DI step, you would start with ACES .exrs, convert them to an ACEScc working space to do your color correction and titling, and then export at least two masters ― one in ACES2065-1 .exr for archival, and another output-referred master for playback in Rec.709 or P3, etc.

The titling step seems like it would happen in the ACEScc workspace (or ACEScct as you mentioned), which is not linear data, so if Resolve is having issues there I suspect something else may be causing the problem.
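To back up the point that the working space is not linear, here is a small Python sketch of the ACEScct encoding (a linear “toe” below a breakpoint, logarithmic above it), using the constants from the ACEScct specification as I recall them:

```python
import math

# ACEScct encoding/decoding. Constants are from the ACEScct specification
# (verify against S-2016-001 before relying on them).

X_BRK = 0.0078125            # linear-side breakpoint
Y_BRK = 0.155251141552511    # encoded-side breakpoint
A, B = 10.5402377416545, 0.0729055341958355

def lin_to_acescct(x):
    """ACES2065-1 (linear) -> ACEScct code value."""
    if x <= X_BRK:
        return A * x + B                       # linear toe near black
    return (math.log2(x) + 9.72) / 17.52       # log segment

def acescct_to_lin(y):
    """ACEScct code value -> ACES2065-1 (linear)."""
    if y <= Y_BRK:
        return (y - B) / A
    return 2.0 ** (y * 17.52 - 9.72)

mid_grey = lin_to_acescct(0.18)   # ~0.4136, far from 0.18: clearly not linear
```

So anything Resolve composites in the ACEScct working space is operating on log-like code values, not on linear light.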

And as far as giving your clients a final film in ACES format, I believe the idea is that in total you would give them 1) playable output-referred masters, 2) a color-graded ACES2065-1 master that can be used to quickly generate more output-referred masters, and 3) ungraded ACES2065-1 files originating either from the camera or from VFX output, along with any ASC CDL files and/or Resolve project files, which can be used to reconstruct and modify what was done in the final DI sessions.

Thanks @walter.arrighetti!

It seems you have pinpointed the problem. I’m very grateful for your answer.

Now, please define a color stack… Do you mean any kind of color or graphic operation applied to the footage?

You seemed to also have proposed a viable solution…

  • the application providing options to choose the layer where the above operations take place,
    or
  • by applying a “reverse Output Transform” IDT to the node (or layer) that works in the color-picking color-space, so this is brought into the timeline color-space afterwards.

Solution 1: does this imply having the option of choosing “where” the operation takes place in the ACES workflow?

Solution 2: this seems to be the key. But please give me a little more info (in your best layman’s terms). Would I simply have to apply a reverse ODT (as a LUT? or a CLF?) to the graphic element? What about transfer modes? I cannot “pinpoint” those, unfortunately… They’re “part” of the footage… If I apply a reverse ODT to the footage, won’t it look weird? Or do I have to sandwich these between an inverse ODT and an ODT?

Thanks again for the info!

Cheers

Thanks Jonathon. Yes, the main issue is that these titles cannot be managed by Resolve. I would have to convert them by hand into ACES, which is not really possible in Resolve. So yes, this is in part an ACES problem and a major Resolve problem (Resolve not being an ACES Product Partner)…

Hi Charles.
A color stack is the virtual representation of your application’s internal color management: which technical and creative color processing is applied, how, and where/when. All the imaging apps I know of employ different color stacks: for many it is purely layer-based (e.g. Autodesk Lustre and Digital Vision Nucoda); for a few it is completely node-based (e.g. Foundry NUKE); many have a mixed stack, like Resolve or Baselight.

Now, trying to answer your questions in layman’s terms (although this has already been extensively discussed on ACEScentral):

  • No modifications to the ordering of graphics/titling operations within the ACES pipeline: they always take place within the proper working (or timeline) ACES color space: ACES2065-1, or it may be either ACEScg (if it’s a CG/compositing or otherwise color-linear operation) or ACEScc/ACEScct (if it’s a grading operation). Just the color-picker code values virtually “work” in a different color space. Not many products I know of provide such user control over this, though.
  • You treat titles or inserts just like the compositing of images from different source color spaces: bring them into a common colorimetry (the working/timeline color space) as the first thing. So in your case, import the (likely?) Rec.709 or sRGB inserts by applying the Input Transform from Rec.709 (or sRGB) into ACES2065-1. Since this is the inverse of what an Output Transform does, it is called a reverse Output Transform.

Detail: there’s a technical point here, small but very important for the end result: you need a “reverse Output Transform” (i.e. a reverse ODT followed by the inverse RRT), not just a “reverse ODT”. For this reason I encourage everyone to think in terms of ACES Input Transforms and Output Transforms, rather than in IDT/RRT/ODT terminology.
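A toy numeric check of this detail, in hypothetical Python with deliberately simplified stand-ins for the transforms (a 1/2.4 gamma for the ODT and a simple tone curve for the RRT; the real transforms are far more complex):

```python
# Toy demonstration that only the full chain (inverse ODT, then inverse RRT)
# undoes the Output Transform; the inverse ODT alone leaves the tone map baked in.

def rrt(x):      return x / (x + 0.18)     # toy scene-linear -> display-linear
def odt(y):      return y ** (1.0 / 2.4)   # toy display-linear -> display code value
def inv_odt(v):  return v ** 2.4
def inv_rrt(y):  return 0.18 * y / (1.0 - y)

def output_transform(x):          return odt(rrt(x))
def reverse_output_transform(v):  return inv_rrt(inv_odt(v))

x = 0.18                               # scene-linear mid grey
cv = output_transform(x)               # display code value of that grey
full = reverse_output_transform(cv)    # full chain recovers ~0.18
partial = inv_odt(cv)                  # reverse ODT only: ~0.5, not 0.18
```

Even with these toy curves, skipping the inverse RRT leaves the value badly off; with the real transforms the error shows up as the “weird” colors discussed below.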

The key point here is knowing the exact (output-referred) color space of your insert and having the appropriate transform; otherwise, weird-looking results occur. Again: there is at least one other thread here where this was discussed with real-world examples and transform names.

The problem with titles/graphics is that, most of the time, they come with an unspecified color encoding and/or are stored in a file format that is not color-managed at all. In either case, any Input Transform you apply will likely produce a different result from your application’s default color management for those unspecified/ill-posed color encodings.
This is where most “weird” colors from inverse Output Transforms usually come from: one either picks the wrong color space for the Input (reverse Output) Transform, or the source images come from a format that is not (properly) color-managed.

I hope this is clearer now.

Thank you @walter.arrighetti!

Absolutely clear. But… in the end, some of the titles and transitions are created within Resolve’s “color stack”, meaning I have no control over the way they are managed in ACES, i.e. I cannot assign an IDT… So I guess this is a Resolve problem… Basically, all of these elements should have a “flag” for ACES, or the application should know they are created within ACES…

Conundrum… I’ll have to talk with BMD.

Thanks!