ACEScg vs Linear sRGB/709 CG rendering

Hey y’all, we just delivered our first full-CG, all-ACES commercial, and while it was a challenging learning process, the results were pretty fantastic. Especially with the artists being able to pump more realistic amounts of light into the scenes, and with every stage of the asset development pipeline looking through the same LUT.

That said, in hindsight, and as I gear up for a much larger project, I wonder whether there is really any advantage to pre-converting all textures and colors in a full-CG sequence into ACEScg color space versus keeping it all linear and just comping in ACEScg. Currently we’re rendering with V-Ray, so we’re either pre-converting textures or using the VRayOCIO map to shift colors and lights into ACEScg before rendering. Then we set the frame buffer to ACEScg in and sRGB out so we can look through the same ACES LUT that everyone else down the pipe will be looking through, which is great.

I’m just wondering if it’s currently worth the hassle of converting all of the maps, or inserting the OCIO nodes in our shading networks, when we on the CG side could just set the frame buffer to Utility - Linear - Rec709 in and sRGB out. I understand the color primaries are different, and that in extreme cases of highly saturated lights hitting extremely saturated colors we’d be losing a little something, or maybe even introducing some funky / bad math, but as we develop our pipeline I wonder if it’s worth the ‘risk’. I’m just trying to make my asset and character shading artists’ lives a little easier until more DCCs support ACES and OCIO color management in an artist-friendly way. They’re already struggling with understanding the effects of the LUT and “why is my blue not fully bright blue anymore?”. With time they understand the benefits, but it can be a rough learning process for non-technical people.

If we just kept all of the colors / texture maps in linear sRGB space, we’d still be able to get the nice rolloff of the ACES LUT and subsequently pump more light into our scenes. In Nuke we could still comp in ACES but interpret the CG as linear. I think it’d save us a lot of time, but I want to make sure we’re not missing anything. I have a feeling this is fine for full-CG scenes, but what about when working with plates? We could in theory convert the ACES plates to linear, do the CG in linear, and still comp in the expanded color gamut, just as long as we’re all always using the ACES LUT, right? Am I missing something?

Hi @Colin_James,

A few things:

ACEScg is intrinsically linear; what you are referring to as linear is the sRGB / BT.709 / Rec. 709 colourspace with a linear transfer function.

As you mentioned, some RGB colourspaces have gamuts that are better suited for CG rendering and will produce results that are overall closer to a ground-truth full spectral render. ACEScg / BT.2020 have been shown to give more faithful results in that regard. Whether it matters is probably context dependent: Weta Digital elected to go down the spectral rendering road because the company felt it mattered for what they were doing, while many other facilities did not and are still producing high quality imagery.

Yes! The reality is that you might have to go back and do some keying operations in the blessed camera vendor colourspace, or even the so-called “native camera space”.

Cheers,

Thomas

Thanks for the reply Thomas, and yeah, sorry about the misnomer: I meant sRGB primaries with a linear transfer function. That’s how we’ve been doing our CG for years. Before I found this forum I’d posed this question on reddit and got an in-depth reply saying that it really can matter, and that we should be putting the colors and lights into ACEScg if we’re going to be comping in ACES. Since you seem to think that, while less accurate, it’s fine to render in linear sRGB, I thought I’d share the reply and get your thoughts:

"Short answer: Yes, it makes a difference.

First, Chaos have confirmed that VRayOCIO does slow down render times, so for the fastest times you want to pre-convert. I’ve automated it using the ocioconvert command line tool, as you can point it to an OCIO config and just define input and output colorspaces (that also lets me process jpeg textures as sRGB, etc.). The linear-to-ACEScg conversion is just a color matrix, so if you find a faster/better way to do this you’re not losing anything.

Also, a quick note on the definition of color gamut. You probably know this, but for others reading: I don’t make something “more blue” by adding blue - that’s luminance. I make something more blue by removing more red and green. If I’m already at 100% saturation, the only way I can add more saturation is by going into negative values on red and green. These are obviously clamped and don’t have any effect on how it’s displayed – until I start doing something with it, like grading or converting to a different colorspace. Since ACES has a wider color gamut, the definition for 100% saturation is wider (more saturated) than in 709. Which means if I convert an ACES image to 709, I am trying to make things more saturated than 100% in the 709 gamut. This means negative values. Mathematically, this is actually totally fine, as long as I color correct it or convert back to ACES before using it. Alternatively, working with 709 images in ACES colorspace means that even pure saturated colors are desaturated a little bit (have some red and green added).
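This gamut point can be sketched numerically with the 3×3 linear sRGB to ACEScg matrix quoted later in this thread (a numpy sketch; the matrix values come from the thread, the rest is illustrative):

```python
import numpy as np

# Linear sRGB / Rec.709 -> ACEScg (AP1) matrix, as quoted later in this thread.
SRGB_TO_ACESCG = np.array([
    [0.6131324224, 0.3395380158, 0.0474166960],
    [0.0701243808, 0.9163940113, 0.0134515240],
    [0.0205876575, 0.1095745716, 0.8697854040],
])
ACESCG_TO_SRGB = np.linalg.inv(SRGB_TO_ACESCG)

# A "100% saturated" Rec.709 blue picks up a little red and green in ACEScg,
# i.e. it is no longer fully saturated in the wider gamut.
blue_709_in_acescg = SRGB_TO_ACESCG @ np.array([0.0, 0.0, 1.0])
print(blue_709_in_acescg)

# A "100% saturated" ACEScg blue goes negative in red and green when
# expressed in Rec.709: it sits outside the Rec.709 gamut.
blue_acescg_in_709 = ACESCG_TO_SRGB @ np.array([0.0, 0.0, 1.0])
print(blue_acescg_in_709)
```

Nothing here is clamped; as the reply says, the negative values are fine mathematically as long as they are handled before display.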

Now, your question is: why bother pre-converting at all? Why not just render without converting and read into comp as Utility - Linear - Rec709?

​Imagine shining a 100% blue light at a 100% blue textured sphere. Here’s that approximate setup in Nuke:

https://i.imgur.com/c6QZMm7.jpg

This means red and green have a value of 0 for both. Converting to ACES, because I’m expanding the gamut, introduces a little bit of value into red and green; for practical purposes, let’s say 0.1. Blue light on blue sphere = 0, 0, 1 output. Convert to ACES and you get 0.1, 0.1, 1. If I pre-convert the light and texture, each will have a value of 0.1, 0.1, 1. 0.1 * 0.1 = 0.01, so the final output will be 0.01, 0.01, 1: the red and green are 10% of what they’d be if I post-converted the render. This is most noticeable at extremes, but it’s proof that the math IS different, and a little more accurate, if your textures are ACEScg from the beginning.

https://i.imgur.com/D2JKGpE.jpg
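The blue-on-blue example above can be reproduced with the actual matrix instead of the rounded 0.1 values (a numpy sketch, assuming the shading is a simple per-channel light × albedo multiply):

```python
import numpy as np

# Linear sRGB / Rec.709 -> ACEScg matrix, as quoted later in this thread.
M = np.array([
    [0.6131324224, 0.3395380158, 0.0474166960],
    [0.0701243808, 0.9163940113, 0.0134515240],
    [0.0205876575, 0.1095745716, 0.8697854040],
])

light  = np.array([0.0, 0.0, 1.0])  # pure Rec.709 blue light
albedo = np.array([0.0, 0.0, 1.0])  # pure Rec.709 blue texture

# Render in Rec.709, then convert the result to ACEScg afterwards:
post_convert = M @ (light * albedo)

# Pre-convert light and texture to ACEScg, then render:
pre_convert = (M @ light) * (M @ albedo)

print(post_convert)  # red/green pick up the matrix cross-terms once
print(pre_convert)   # red/green cross-terms get multiplied together, so much smaller
```

Because light and albedo are identical here, the pre-converted red and green come out as the square of the post-converted ones, which is exactly the 0.1 vs 0.01 behaviour described above.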

Regardless of which one you pick, you want to make sure your lighting artists are viewing the frame buffer through the ACES view LUT. So if you don’t convert, you want to make sure the input colorspace in your frame buffer’s OCIO dropdown is Utility - Linear - Rec709. You need OCIO involved either way.

Oh! Don’t convert the ACEScg plates to linear and back to comp: because of the gamut conversion, you’ll end up with negative values in bright colored areas. If you then clamp them out, you negate all the benefit of using ACES; if you don’t, you’ll get wonky artifacts. Better to just stick with ACEScg in comp."

Ha! Certainly not :slight_smile:

I’m definitely leaning toward models that are the best possible representation of reality. Now, from a cost and pragmatic standpoint, there is a reason why we only use 3 primaries: it would be highly expensive to adopt display chains with more of them, although that is the only way to optimally cover the visible spectrum and thus get a better model of reality.

Spectral rendering obviously takes more time, and if you aim to do it accurately, you ideally need to get your content creation pipeline set up for it, which means more expense.

I don’t think I follow you here! This is really a question of ratio between the components: if you want to see “more blue”, you need a higher ratio of blue light with respect to red or green, i.e. more higher-frequency radiant energy.

Yes, the basis vectors are different, and BT.2020 / ACEScg produce better results, likely because the primaries are sharper and the basis vectors are rotated in a way that reduces errors. A few people (I’m one of them) wrote about this a few years ago. On that topic, I’m re-hosting the tests Anders Langlands did in that respect: Render Color Spaces

Cheers,

Thomas

hi colin,

what is your aces workflow in the meantime? do you skip the conversion process and use a “utility - linear - srgb” input colorspace in the framebuffer ocio color management?
i am asking because i have the same thoughts concerning redshift… it does not yet support texture colorspace conversion directly (like arnold does), but it does support an input colorspace in the ocio colormanagement of its own renderview…

Hey,
So what you are talking about is what is commonly called “fake ACES”, where you only use the tonemapping that ACES offers for your render.
A complete ACES workflow means that all your textures have to be converted to ACEScg, and it does make a difference (the work of Nicolas Maurel showcases this very well: https://www.artstation.com/artwork/1ndwgZ ).

For Redshift now:
This workflow is made for Maya:
• Enable OCIO through preferences as you would do with Arnold (to get the colorpicking advantage)
• Convert all your textures ColorMaps to ACEScg in your favourite DCC
• Or use the Arnold’s TX manager to convert them
• Set all of your file nodes to “Raw” to avoid errors and unwanted conversions (happens to 8-bit files)
• Set the RedshiftRenderView OCIO

There are still some more steps to get a full ACEScg render, where you need to convert some renderer features that are based on spectral models, such as:
- Light temperature
- Physical sky (impossible in Maya)
- Melanin in the hair shader
- …
For these you can use a simple matrix transform built with nodes, knowing that they have sRGB primaries (so an sRGB to ACEScg matrix).

hi liam,

thanx for the breakdown…
can you tell me how this matrix transform is done in maya using nodes (concerning the physical sky and light temperatures)?
(which nodes should be used?)

Hey,
I just did the test in Maya: Redshift nodes are built in a way that doesn’t allow reusing the output RGB color they produce, so the matrix conversion is useless for Redshift.
(So, to my knowledge, there is no way to convert Redshift’s spectral features to ACEScg.)

For Arnold it could work, but since a few versions back Arnold automatically converts spectral features to the colorspace primaries set in the preferences.

Anyway, here is how it would look for the Arnold physical sky (but don’t do it, as I said above):

Values for the multiply nodes are:
R: 0.6131324224, 0.3395380158, 0.0474166960
G: 0.0701243808, 0.9163940113, 0.0134515240
B: 0.0205876575, 0.1095745716, 0.8697854040
(sRGB to ACEScg matrix)

Color correct nodes are just for merging channels.

Credit to Onouris on discord for showing me this.
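For reference, the three multiply nodes together implement one 3×3 matrix multiply: each node computes the dot product of one matrix row with the input RGB. A numpy sketch of what the node network evaluates (the warm sky colour is a made-up test value):

```python
import numpy as np

# sRGB to ACEScg matrix, rows as listed above.
M = np.array([
    [0.6131324224, 0.3395380158, 0.0474166960],
    [0.0701243808, 0.9163940113, 0.0134515240],
    [0.0205876575, 0.1095745716, 0.8697854040],
])

srgb_rgb = np.array([1.0, 0.6, 0.3])  # hypothetical warm sky colour, sRGB primaries
acescg_rgb = M @ srgb_rgb

# What each multiply node computes: one matrix row dotted with the input RGB.
r = np.dot(M[0], srgb_rgb)
g = np.dot(M[1], srgb_rgb)
b = np.dot(M[2], srgb_rgb)
print(acescg_rgb, (r, g, b))

# Sanity check: each row sums to ~1, so neutral greys pass through
# approximately unchanged.
print(M @ np.ones(3))
```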

hi liam,

thanx for the tutorial.
in the meanwhile i also posted a thread on the redshift forum about using the physical sky in an aces workflow.
onouris shares his node asset (srgb to acescg) there for use in redshift.
he also shows a workaround for using this node asset with the physical sky (assuming it has srgb primaries)… it’s promising and seems to work…
(this is maybe also usable for light temperature conversions)

Great! Can you share the forum link?

sure
https://www.redshift3d.com/forums/viewthread/31862/

I don’t understand your chart. How do I read it using the crosshair?

Each image shows how light transport and especially indirect illumination is affected by the working space. You can then compare the RGB renders to the spectral reference.

When using a spectral reference, maybe based on a spectral render, one doesn’t necessarily need ACES (OCIO yes, but not ACES)?
Therefore all textures coming into a spectral renderer should be linear, or ACEScg (exr), correct?

There is not really a concept of RGB colourspace for a spectral renderer; it is of interest only at the input, when you recover reflectance from RGB data, or after conversion to tristimulus values.

Therefore with a spectral renderer, you can just use ACEScg (exr) files, load them, and let the renderer do the rest?
Although that is probably the case with RGB renderers as well :slightly_smiling_face: