Colour artefacts or breakup using ACES

Yes, it is quite bright on that one sphere. I’m trying to test what happens when the exposure is raised on the image. As a further test I removed the other spheres and fed different colors into the single remaining sphere. In the image below I have rendered that sphere with the exposure raised to 5 to demonstrate the behavior. You can compare these colors in ACEScg with spi-anim. The behavior of spi-anim is what I would expect; the behavior of ACEScg seems unusual. Notice, for example, how you get a big white “highlight” around teal and magenta, and then yellow in the big “highlights” for green and red (on either side of yellow). None of that happens in spi-anim (or in sRGB with 2.2 gamma). So it seems to me that something odd is going on with ACES; I’m just trying to figure out what that may be.

Is that 5 stops? A linear multiply by 5? Or something else? “Exposure raised up to 5” is ambiguous.
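For reference, the two readings give very different gains (the 0.18 pixel value below is just a hypothetical example):

```python
linear_value = 0.18                   # hypothetical scene-linear pixel value

five_stops = linear_value * 2 ** 5    # "+5 stops" is a power-of-two gain: 32x
times_five = linear_value * 5         # "linear mult of 5" is only a 5x gain

print(five_stops, times_five)
```

So a 5-stop push is more than six times hotter than a linear multiply by 5, which matters a lot for where the highlight rendering kicks in.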

Part of what you are seeing is the difference between the overexposure handling of the picture rendering in the ACES RRT/ODT combo and that in spi-anim. There is no one “right” way to map scene-referred to display-referred. If, for example, you use an OCIOColorSpace node to transform from ACEScg to LogC Wide Gamut with increased exposure, you will see a different result again. The ARRI LUT handles the blue ball in an arguably more aesthetically pleasing way, but creates unpleasant artefacts on the red one:

Both the above images have a linear mult of 5 applied in ACEScg.

But also, I believe spi-anim expects sRGB primaries, and your image has ACEScg primaries (or at least the ACES 1.0.3 config is treating it that way). If spi-anim is then interpreting those same code values as having sRGB primaries, they will appear to be very desaturated compared to the colours they represent in ACEScg. If you apply an ACEScg to sRGB matrix to the image first (which arguably is the correct thing to do) the result of raising exposure in spi-anim is far less visually pleasing, going very quickly into clippy, saturated colours, with no visible specular.

For reference, the ACEScg to sRGB matrix, using CAT02 is:
1.70507964 -0.62423346 -0.08084618
-0.12970053 1.13846855 -0.00876802
-0.02416634 -0.12461416 1.14878050

Multiply that matrix by [0.097, 0.754, 5.473] and you get [-0.7477501, 0.797837, 6.190972]. So your CG render has produced a colour which is way outside the sRGB gamut. So it’s up for debate what the “best” way to map that to an sRGB/Rec.709 display is even before you increase the exposure.
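The arithmetic is easy to check with NumPy, using the matrix and the pixel value from this thread:

```python
import numpy as np

# ACEScg -> sRGB matrix (CAT02), as posted above.
ACEScg_to_sRGB = np.array([
    [ 1.70507964, -0.62423346, -0.08084618],
    [-0.12970053,  1.13846855, -0.00876802],
    [-0.02416634, -0.12461416,  1.14878050],
])

rgb_acescg = np.array([0.097, 0.754, 5.473])   # the bright blue sphere value
rgb_srgb_linear = ACEScg_to_sRGB @ rgb_acescg

# A negative component means the colour lies outside the sRGB gamut.
print(rgb_srgb_linear)   # ~[-0.7478, 0.7978, 6.1910]
```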

This is ACEScg = [0.097, 0.754, 5.473] plotted in CIE xy using Colour Science for Python:
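For anyone who wants to reproduce the plot coordinates by hand, the xy chromaticity can be computed directly; the AP1-to-XYZ matrix below is assumed to be the standard one from the ACES reference implementation:

```python
import numpy as np

# ACES AP1 (ACEScg primaries, D60 white) to CIE XYZ, per the ACES CTL.
AP1_to_XYZ = np.array([
    [ 0.6624541811, 0.1340042065, 0.1561876870],
    [ 0.2722287168, 0.6740817658, 0.0536895174],
    [-0.0055746495, 0.0040607335, 1.0103391003],
])

rgb = np.array([0.097, 0.754, 5.473])
X, Y, Z = AP1_to_XYZ @ rgb

# Project down to CIE xy chromaticity coordinates.
x, y = X / (X + Y + Z), Y / (X + Y + Z)
print(x, y)   # roughly (0.138, 0.112): well outside the sRGB triangle
```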

For the primaries, that has to do with the input colors on the materials, right? So rather than simply picking a color for the diffuse (which I believe would mean it has sRGB primaries), the color would need a color space conversion applied to it to make it ACEScg. Do you know what the correct input/output color spaces would need to be for a texture color? I’m thinking input: “Input - Generic - sRGB - Texture” (sRGB to linear) and output: “role_scene_linear” (ACEScg to ACES2065-1).
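The two steps involved (decode the sRGB transfer function, then re-primary into AP1) can be sketched in plain NumPy, reusing the ACEScg-to-sRGB matrix posted above so that its inverse gives the sRGB-to-ACEScg direction. This is only an illustration of the idea; the config’s actual transforms route through ACES2065-1, and the picked swatch value here is hypothetical:

```python
import numpy as np

ACEScg_to_sRGB = np.array([
    [ 1.70507964, -0.62423346, -0.08084618],
    [-0.12970053,  1.13846855, -0.00876802],
    [-0.02416634, -0.12461416,  1.14878050],
])
sRGB_to_ACEScg = np.linalg.inv(ACEScg_to_sRGB)

def srgb_decode(v):
    """Piecewise sRGB transfer function: display code value -> linear."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

swatch = np.array([0.5, 0.2, 0.8])          # hypothetical picked diffuse colour

linear_srgb = srgb_decode(swatch)           # step 1: linearise
acescg = sRGB_to_ACEScg @ linear_srgb       # step 2: re-primary to AP1
print(acescg)
```

Skipping either step is what leads to textures whose code values get reinterpreted as ACEScg primaries, as discussed above.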

I think I figured out what the problem was. I tried reading ACES from the Maya Color Management prefs. If I load ACES with integrated SynColor there and set the view transform to the OCIO view transform ACES RRT v1.0, it works fine and I do not get the blown-out color. If I instead load the ACES OCIO config (ACES v1.0.3), I get the problem shown above. So the issue appears to be the view transform.

The problem is that while I can use SynColor in Maya, I need OCIO for Nuke. I see that SynColor appears to be using the file ACES_to_sRGB_1.0.ctf. Does anyone know how to convert this .ctf file to something OCIO can read?


Hi Derek

I agree with what Nick wrote above, the whitened area will vary with the color of the object due to the different behavior of the various rendering transforms (ACES, spi-anim, ARRI, etc.) as colors move off the top of the gamut.

That said, although the OCIO ACES config is quite good, it sounds like you’re running into a known issue – as colors get very bright they lose accuracy.

I’m attaching an example image so others can follow what we’re talking about. This is a region of your EXR, with the exposure gained up by 5x. On the left is the result of the Academy sRGB Output Transform CTL (ground truth), in the middle is what Maya/Flame/Lustre would give (SynColor CTF), and on the right is what the ACES config in OCIO would give. (I specified the OCIO CPU renderer since the OCIO GPU renderer has other known problems.) You can see that the Maya result matches the CTL reference but the OCIO result is wrong in the blue highlights.

Regarding your question about Nuke, currently there is no easy way to apply a CTF in Nuke. You could bake it out into a LUT, but then you’d run into the same problems that the OCIO config has. The reason the CTF works better is because it is more of an “exact math” implementation of the ACES CTL code rather than baking down into a LUT-based representation.

However, the SynColor SDK is included in the Maya DevKit, so it is possible to write plug-ins that would allow you to apply CTFs (and therefore ACES transforms) at full accuracy (and high speed) in other applications. I know some facilities have done this but I realize it requires more work than one would like.

best regards,

Doug Walker
Autodesk


Along these lines I found this blog post from Alex Fry:

he writes: “With normally exposed images, especially from live action cameras, you’re unlikely to see any values that render differently to the OCIO config, but with CG images containing extreme intensity and saturation you will see more pleasing roll off and desaturation.”

Unfortunately he has only implemented this with the P3D60 ODT. I would need it for an sRGB monitor. So I guess I will need to wait for someone (maybe Alex?) to write that.


Hello Scott,

I have tried to apply the matrix you shared to some footage shot on a RED camera.
The footage has bright LEDs that, when transformed from the DRAGONcolor2 gamut to ACEScg, result in negative blue component values.

I assume your work was strongly related to ARRI Wide Gamut.
Could you share your process for building the matrix, so this kind of issue can be fixed for other cameras’ color science?

Let me know if you need more details (unfortunately I won’t be able to share the RED footage at this point).

Cheers

@lucienfostier are you able to at least provide the RGB values (and which color space they’re in) for one (or a few) of the offending pixels?

I’m willing to investigate but I need something measurable to start from…

Hey Scott,

thanks for your answer!

I’ll try to get you some RGB values, but it may take a bit of time.
In the meantime, could you expand on your approach to coming up with a matrix that fixes this kind of issue?

Cheers

To be honest, there wasn’t much science behind it beyond plotting the chromaticities of several offending values across a variety of problematic shots, then adjusting the primaries so that the problematic chromaticities fell into a more favorable portion of the rendering primaries.

As described here, this matrix just moves the effective rendering primaries.

Most of the problematic images that I designed with came from Alexa, but this issue exists with other cameras too. I suspect that your problematic values fall outside my effective rendering primary space. I’d be able to confirm this with actual values from your image that I could plot. Again, image context isn’t important to me; I just need to be able to calculate the chromaticities. When you get a chance, please send me at least one [R, G, B] triplet from the problematic area so I can investigate further.

Hello Derek, I am wondering whether, one year later, there is a viable solution for this. The difference between ACES RRT v1.0 and the ACES OCIO config still seems to be an issue. I am trying to start a new topic about this… Thanks!

Hi to all,

Just did my first pass in Resolve 15 using ACES 1.0.3 and it seems that this problem is still present. Wasn’t it supposed to be fixed directly inside ACES with this version?

On another point concerning this:

I ran into a situation where the LMT did not fix the issue completely.

Here are a few specs about the footage…

Sony FS7, shot at a higher frame rate, which the camera resizes from 4K to HD. The images show some heavy aliasing; the artefacts are most likely from all the compression, resizing and noise being produced.

A few examples, look at the lights on the left side of the frame:
ORIGINAL LOG IMAGE:

ZOOMED IN: As we can see, the artefacts are pretty bad…

MANUFACTURER LUT (full and zoomed in; pretty bad but not catastrophic)


ACES 1.0.3, just the RRT with the Highlights LMT applied. The artefacts are not removed completely, and it looks pretty bad to a client’s eye.


And here is a comparison of the zoomed-in image with the LMT OFF and ON:


Are there technical limitations to the type of footage we can use with ACES? Or maybe is this caused by Resolve?

It would be great to have some feedback from @ACES.

Thanks!

UPDATE:

Just tried using @Paul_Dore’s DCTL workflow, and the artefacts now seem OK, or at least close to how the LUT behaves. Should we consider this an inherent flaw of Resolve?

Hm, certainly doesn’t look too pleasing in any of those cases…

Can you provide a crop of the region of interest that you highlighted so I can probe the image values around those lights and try to duplicate your issue on my end, to see if this is a Resolve issue or an ACES issue?

How would you like this? I can zoom into the image in Resolve and export a DPX or EXR. Would that be OK?

I can also send a DPX frame of the original.

The DCTL workflow works much better. I’ll post an example tomorrow.

Thanks!

Full frame DPX would be great since it would eliminate any potential issues from resizing. I just suggested a crop because sometimes sharing a full frame (even just for testing) is difficult without clearances.

What encoding does the Sony FS7 record as (i.e. what IDT did you use? e.g. SLog3/SGamut3? other?)

Yes, SLog3-SGamut3.Cine.

I have no problem sharing the full frame. And, do you want a SLOG3 version also?

Thanks!

No, not necessary as long as I know what encoding is in the DPX file I get. If it’s SLog3-SGamut.Cine then that is the important information to know. Thanks!


A side but important note: the choice of resizing kernel matters here, as you may introduce even more artefacts when resizing down if you use a kernel with negative lobes, such as Lanczos or Sinc.
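The point about negative lobes is easy to demonstrate numerically: filtering a hard edge with a Lanczos-3 kernel overshoots above 1 and undershoots below 0, which is exactly the kind of ringing that shows up around clipped lights. A toy 1-D sketch (not Resolve’s actual resizer):

```python
import numpy as np

def lanczos(x, a=3):
    """Lanczos kernel (np.sinc is sin(pi*x)/(pi*x)); zero outside |x| < a."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

# A hard 1-D edge, like the rim of a clipped highlight, downsampled by 2.
src = np.concatenate([np.zeros(32), np.ones(32)])

out = []
for c in np.arange(0.5, len(src), 2.0):          # output samples between pixels
    n = np.arange(int(np.ceil(c - 3)), int(np.floor(c + 3)) + 1)
    n = n[(n >= 0) & (n < len(src))]             # clip taps to the signal
    w = lanczos(n - c)
    out.append(np.sum(w * src[n]) / np.sum(w))   # normalised weighted sum
out = np.array(out)

# The negative lobes produce ringing: values below 0 and above 1.
print(out.min(), out.max())
```

Those out-of-range values then get hammered by whatever display transform follows, which is why the rings around the lights look so harsh.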

@Thomas_Mansencal: what I meant is that the camera is 4K native and for “slo-mo” it resizes to HD, which introduces some heavy aliasing. I did no resizing after that.

On a side note: @nick, do you know which resizing algorithm Resolve uses for “Sharper” and “Smoother”? We don’t have as many options as VFX artists have in Nuke.

Thanks!