ACEScc vs ACEScct

There is only a problem if you use the ACESproxy output of a camera. Since systems like LiveGrade and Prelight can apply an IDT themselves, you can take the feed from the camera in its native log format and apply your live grade in ACEScct.


Nick Shaw, thanks for your prompt response and the workaround. Indeed, it’s a very helpful response to my question.

Sorry for being late here


I am tasked with pulling VFX plates for a RED show, and I want to make sure we export in a format that the VFX vendors can return to online in the same format we delivered to them.

I need to choose the software, Resolve or REDCINE-X, to pull the preferred format of 4K 16-bit image sequences. Then I need to select the colour management settings within that software. As I am more familiar with Resolve, I have started there.

We are shooting 6K with DragonColor + REDLogFilm as a base, with REDGamma2 and a CDL applied to achieve the dailies look.

After importing the source, I select the Resolve colour management settings tab and see the two ACES options of cc and cct. Within the other colour management options of Resolve I can select the starting point in our colour workflow, DragonColor and REDLogFilm, which seems to make sense. Then there are the colour science options: ACEScc, ACEScct and DaVinci YRGB.

Colour science question


If I choose one of the ACES options, which of cc or cct will best achieve the desired result?

Settings options questions include:

Linear or not?

  1. The ‘Process Node LUTs’ setting can be toggled between AP0 Linear and AP1 Timeline Colour Space. Which should I use?

Output Transform to the HDR delivery spec requirement, or leave the image sequences in the timeline export setting of DragonColor and REDLogFilm?

  1. The ‘ACES Output Transform’ setting has a variety of options for P3-D65 (1000 to 4000 nits), which is our finishing colour space for the HDR 4K delivery. Which should I select?

Thanks

If you have applied your ASC CDL corrections in REDlogFilm, you have not been using an ACES pipeline. So the CDL values you used will not be applicable. You will need to grade from scratch if you want to finish in ACES.

The choice of ACEScc or ACEScct is a personal preference. Through the majority of the range they are identical, but the shadows respond differently to grading operations. Try both and see which you prefer the “feel” of.
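To make that concrete, here is a minimal Python sketch of the two encoding curves, using the constants from the published ACEScc and ACEScct specifications. It shows that the two are identical above the ACEScct toe break point (roughly four and a half stops below mid grey) and diverge below it, which is where the different "feel" in the shadows comes from:

```python
import math

def lin_to_acescc(x):
    # ACEScc encoding (S-2014-003): pure log2 curve with special cases near black
    if x <= 0.0:
        return (math.log2(2.0 ** -16) + 9.72) / 17.52
    if x < 2.0 ** -15:
        return (math.log2(2.0 ** -16 + x * 0.5) + 9.72) / 17.52
    return (math.log2(x) + 9.72) / 17.52

def lin_to_acescct(x):
    # ACEScct encoding (S-2016-001): same log2 curve, but with a linear "toe"
    # below the break point instead of continuing the log segment
    X_BRK = 0.0078125
    A = 10.5402377416545
    B = 0.0729055341958355
    if x <= X_BRK:
        return A * x + B
    return (math.log2(x) + 9.72) / 17.52

# Compare the two encodings at various exposures around 18% mid grey
for stops in (-8, -6, -4, -2, 0, 2, 4):
    lin = 0.18 * 2.0 ** stops
    print(f"{stops:+d} stops  lin {lin:10.6f}  "
          f"ACEScc {lin_to_acescc(lin):.5f}  ACEScct {lin_to_acescct(lin):.5f}")
```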

This only makes a difference if you are using LUTs in any nodes. LUTs need to be constructed for the colour space they will be applied in, and building a LUT for a linear working space requires a technical understanding of how to handle linear input. So if in doubt, you are probably better off choosing to apply LUTs in the timeline space.

You certainly do not want to set an Output Transform that matches your source footage, unless you have a very specific non-standard workflow in mind. You should set the Output Transform to match your required deliverable, and use a monitor set to receive a signal in that format.

ACES makes it possible to work with one type of monitor with the corresponding Output Transform, and then change Output Transform to match a different format. However, when you change Output Transform you still need to review the film on the new type of monitor, and make trims to the grade if necessary. For example, you would never want to grade on an SDR monitor, and then switch Output Transforms and deliver “sight unseen” in HDR.


Thanks for the considered response!

Then why is there an ACESproxy? I mean, we can use the camera's native log format on set, convert it into ACEScc/cct, and grade from there. I would like to understand the purpose of a camera's ACESproxy output.

If you create CDLs on set while viewing ACESproxy on the camera output signal, then those same CDL values will work correctly if you grade in ACEScc.
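The reason they carry across is that ACESproxy and ACEScc use the same log curve; ACESproxy is essentially an integer, legal-range encoding of the same values. A minimal sketch, using the constants from the ACESproxy and ACEScc specifications, illustrates the correspondence:

```python
import math

def lin_to_acescc(x):
    # ACEScc encoding (S-2014-003), ignoring the near-black special cases
    return (math.log2(max(x, 2.0 ** -15)) + 9.72) / 17.52

def lin_to_acesproxy10(x):
    # ACESproxy 10-bit encoding (S-2013-001): 50 code values per stop,
    # mid grey anchored near CV 425, clamped to the legal range 64..940
    if x <= 0.0:
        return 64
    cv = (math.log2(x) + 2.5) * 50.0 + 425.0
    return int(round(min(940.0, max(64.0, cv))))

for stops in (-4, -2, 0, 2, 4):
    lin = 0.18 * 2.0 ** stops
    cc = lin_to_acescc(lin)
    proxy = lin_to_acesproxy10(lin)
    # Normalising ACESproxy over its legal range lands on (almost) the same
    # value as ACEScc, so a CDL built against one applies cleanly to the other.
    print(f"lin {lin:8.4f}  ACEScc {cc:.4f}  ACESproxy10 {proxy}  "
          f"normalised {(proxy - 64) / 876:.4f}")
```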

I haven’t figured out though, what kind of workflow would allow them to work in ACEScct as well.

When using a live grading system capable of applying a live Input Transform to a camera log input (which all modern live grading systems can) ACESproxy is redundant.

I believe it was designed to allow live grading with a more “naive” CDL grading system, with the Output Transform applied by a LUT box downstream of it.

These days live grading systems usually concatenate the Input Transform, grade, and Output Transform into a single 3D LUT, and push that to a LUT box.
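As a rough sketch of that concatenation (not a description of any particular product): sample a grid of camera code values, push each one through an Input Transform, an ASC CDL, and an Output Transform, and write the result as a .cube file for the LUT box. The two transform functions below are hypothetical stand-ins you would replace with the real transforms, and the CDL numbers are arbitrary example values; only the CDL maths and the .cube layout follow published conventions.

```python
def camera_log_to_acescct(rgb):
    return rgb  # placeholder: substitute the real Input Transform here

def acescct_to_display(rgb):
    return rgb  # placeholder: substitute RRT + ODT for the monitored display

def apply_cdl(rgb, slope, offset, power, sat):
    # ASC CDL: per-channel slope/offset/power, then global saturation using
    # Rec.709 luma weights (shown here without the clamp, as is common when
    # the CDL is applied in a log working space)
    sop = [(c * s + o) ** p if (c * s + o) > 0 else (c * s + o)
           for c, s, o, p in zip(rgb, slope, offset, power)]
    luma = 0.2126 * sop[0] + 0.7152 * sop[1] + 0.0722 * sop[2]
    return [luma + sat * (c - luma) for c in sop]

def write_cube(path, size=33):
    with open(path, "w") as f:
        f.write(f"LUT_3D_SIZE {size}\n")
        # .cube data order: red varies fastest, then green, then blue
        for b in range(size):
            for g in range(size):
                for r in range(size):
                    rgb = [r / (size - 1), g / (size - 1), b / (size - 1)]
                    rgb = camera_log_to_acescct(rgb)          # Input Transform
                    rgb = apply_cdl(rgb, slope=(1.1, 1.0, 0.95),
                                    offset=(0.0, 0.0, 0.01),
                                    power=(1.0, 1.0, 1.0), sat=0.9)  # grade
                    rgb = acescct_to_display(rgb)             # Output Transform
                    f.write("{:.6f} {:.6f} {:.6f}\n".format(*rgb))

write_cube("show_look.cube")
```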

Thanks for your reply, that's useful for me.
I appreciate it again.

HelloI have another question about ACEScc.

I’ve read the ACES documentation, which says LMTs should be used after color grading, meaning that after you convert ACEScc to ACES you apply the LMT. But I’ve noticed another arrangement in some diagrams, where the LMT is applied in ACEScc rather than in ACES.

I also don’t know whether we can apply 3D LUTs, such as a creative look, in ACEScc. As we all know, ACEScc is a 32-bit floating point encoding. When we use LUTs, how is the LUT index calculated, and how does the lookup produce a result for each channel? I’m confused about that.

Sorry for the delay in answering


LMTs are intended as a systematic modification of the look of scenes, not as a replacement for an individual ‘grade’ of a scene. LMTs are defined as an ACES to ACES transform, but some implementations apply them as an ACEScc to ACEScc (log) transform. This is sometimes more effective when working with LUTs, as a log scale on the LUT index is more perceptual and can treat different regions accurately. The difficulty is that ACEScc has different numerical bounds, so the LUT has to be designed for that space directly, and it is up to the user to make sure everything of interest is in the appropriate range. It is also important that a LUT, if used, is applied in the correct color space, and that is not always clear in a particular implementation.

Again, the main idea is that LMT+RRT+ODT is the look of a show (or sequence), while managed looks for individual shots are handled by the color corrector through a timeline (and are what can be baked into the data for the graded final). It is possible for the LMT to be baked in as the last step before outputting an ACES file, but the conversion into ACES has to be part of that pipeline. I don't know if this helps, because it is a general description without knowing for sure which color corrector you are using.
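To put some numbers on the "appropriate range" point: a 3D LUT is indexed over a fixed input domain, usually [0, 1], and anything outside that domain is clamped before the lookup. A small sketch (the ACEScc constants are from the spec; the index calculation is just a typical implementation, not any specific product's) shows what a [0, 1] domain actually covers in ACEScc terms:

```python
import math

def acescc_to_lin(cc):
    # Inverse of the ACEScc log segment (S-2014-003), ignoring the
    # near-black special cases for simplicity
    return 2.0 ** (cc * 17.52 - 9.72)

# Full ACEScc code range for half-float linear scene values
cc_min = (math.log2(2.0 ** -16) + 9.72) / 17.52    # about -0.3584
cc_max = (math.log2(65504.0) + 9.72) / 17.52        # about  1.468

print(f"ACEScc code range: {cc_min:.4f} .. {cc_max:.4f}")
print(f"Linear value at LUT index 0.0: {acescc_to_lin(0.0):.6f}")   # ~0.0012
print(f"Linear value at LUT index 1.0: {acescc_to_lin(1.0):.2f}")   # ~222.86

def lut_index(cc, size=33):
    # Typical lattice-position computation for one axis of a 3D LUT with an
    # implicit [0, 1] input domain: out-of-range ACEScc values are clamped,
    # so any detail outside the domain is flattened by the LUT
    clamped = min(1.0, max(0.0, cc))
    return clamped * (size - 1)

for cc in (-0.2, 0.0, 0.414, 1.0, 1.3):
    print(f"ACEScc {cc:+.3f} -> lattice position {lut_index(cc):.2f}")
```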


I’ve read some of the thread, and I’m still not understanding the difference between ACEScc and ACEScct.

Hey Nick! Could you please clarify at what stage ACEScc/cct gets converted and mapped again to linear ACES2065-1?


To quote @sdyer from the top of this thread:

Thanks Nick

Although, I’m sorry to say, that info didn’t really help me get what I’m trying to understand.


Let me try differently


Could you, or anyone else here, please help me understand whether anything is wrong with this colour spaces and transfer curves tracking/mapping workflow sheet? Particularly when it comes to the LMT and the ACEScc/cct encoding/decoding: where in the process, and in real terms, does that actually happen?

Please ignore the OETF/EOTF labelling on the axes.


Thanks a lot everyone!

That chart is pretty good. If I was nit-picking I would point out that the RRT is applied to linear data, as the ACES2065-1 colour space label suggests, but the curve plot suggests a perceptual space.

Graded ACEScc(t) is transformed back to ACES2065-1 to be archived as an ACES Master (graded linear AP0) which is not shown on that chart. Then it can be repurposed in future with some as yet unknown ODT for a future display technology.

The definition of an LMT specifies a transform applied to ACES2065-1 data, before input to the RRT, which also takes ACES2065-1 input. But in a real pipeline, adjacent transforms may be combined. For example, an IDT specifies the transform to ACES2065-1, but the working space for grading is ACEScct. There is no reason that has to be implemented as two steps, so the transforms may be combined into one, going straight from camera log to ACEScct. But conceptually, ACES2065-1 still exists in the middle.

Likewise VFX pulls should be delivered as ACES2065-1 and converted only in the compositing system to ACEScg, then back to ACES2065-1 to be returned to DI. But many people unofficially use ACEScg EXRs for interchange.
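For that interchange step, the colour space conversions are just the AP0/AP1 primary matrices. A minimal sketch, using the matrix values as published in the ACES reference transforms (the EXR reading and writing is left out for brevity):

```python
# ACES2065-1 (AP0) <-> ACEScg (AP1) primary conversion matrices, both D60 white
AP0_TO_AP1 = [
    [ 1.4514393161, -0.2365107469, -0.2149285693],
    [-0.0765537734,  1.1762296998, -0.0996759264],
    [ 0.0083161484, -0.0060324498,  0.9977163014],
]

AP1_TO_AP0 = [
    [ 0.6954522414,  0.1406786965,  0.1638690622],
    [ 0.0447945634,  0.8596711185,  0.0955343182],
    [-0.0055258826,  0.0040252103,  1.0015006723],
]

def mat_mul(m, rgb):
    return [sum(m[i][j] * rgb[j] for j in range(3)) for i in range(3)]

# ACES2065-1 plate pixel -> ACEScg for comp work -> back to ACES2065-1 for DI
plate_ap0 = [0.18, 0.18, 0.18]
comp_ap1 = mat_mul(AP0_TO_AP1, plate_ap0)
back_ap0 = mat_mul(AP1_TO_AP0, comp_ap1)
print(comp_ap1, back_ap0)   # neutral values survive the round trip
```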

Make sense?

Hey Nick, thanks for your response, and sorry for my late reply!

Yeah that makes sense and I’ve updated the chart as below.

Although, could you please clarify the following: during DI, is the conversion from ACES2065-1 to ACEScc(t) a continuous, seamless process inside the grading software? Meaning, in DaVinci for example, you choose ACEScc(t) as the colour science, and then you monitor through a chosen ODT which includes, in the middle, the conversion from ACEScc(t) back to ACES2065-1, correct?

Thanks again!

Hello again!

Could anyone try to answer this question of mine?

Thanks again!

What DaVinci, or any other software, actually does internally is unknown, and doesn't matter to the end user. But the net effect is the combined result of a transform of the graded image from ACEScct to ACES2065-1, followed by the RRT and then the selected ODT.

Got it! Thanks Nick!