New ACES Working Group - Gamut Mapping


ACES Community,

Today we are announcing the formation of a new architecture working group focused on gamut management within the ACES system. When out-of-gamut colors occur in the current ACES system, the default behavior is gamut clipping, which can sometimes produce visible image artifacts. The ACES Gamut Mapping Working Group will investigate and recommend improved strategies for managing out-of-gamut colors. The group will be co-chaired by Carol Payne of Netflix and Matthias Scharfenberg of Industrial Light & Magic.

We will be announcing the exact schedule for this working group soon, but in the meantime we’d love to hear your thoughts and ideas on gamut management within the ACES system, and in particular on the Working Group Proposal posted here.

This new group, as with all the recent groups, will be conducted virtually, using GoToMeeting for meetings, ACESCentral for discussions between meetings, and the ACES Workspaces for exchanging documents, code, etc.

Thank you
ACES Leadership

Working Group Proposal: ACES_Gamut_Mapping_Working_Group_Proposal_Approved.pdf (164.5 KB)

Document Workspace: https://aces.mp/GamutVWGDocs

That is great! I would be keen to participate in this one (especially having finally started to look at that in Colour); I hope the chosen time will be compatible.

Hi Thomas! We’re working on meeting time proposals now - would love to have you participate, so will reach out soon! I’m considering rotating meeting times to accommodate time zones. Open to suggestions to make this work, as well.

Definitely interested in this. When I’ve experimented with ADX, I’ve had gamut issues going to and from ACEScg, so I’ve found it useful to nudge the primaries around so that one gamut becomes a strict subset of the other, to avoid issues going back and forth.

Kevin

Looks like fun! I’ll look forward to the discussions around this.

Looking forward to participating also.

Very interested in this one! We are now experimenting with different custom gamut mapping or compression methods, and it would be great to find good solutions. Too much out-of-gamut footage is generated in commercial VFX for my taste :slight_smile:

I think gamut mapping and compression is an essential part of a successful image processing pipeline.
It is needed in different contexts at different parts of the processing stack:
Cameras can produce out-of-gamut colours because they do not meet the Luther condition, and therefore their signals cannot be transformed into a human-centric pipeline without errors and compromises.
But even if a camera produced only valid signals, creative processes like VFX and grading might reintroduce invalid values. So further gamut compression might be needed to prepare scene-referred data for display rendering, very close to the end of the scene-referred stage (maybe within an LMT).
Here we are still in a scene-referred image state, and gamut compression in the scene-referred domain might require a different design from display-referred gamut compression.
It needs to be:

  • simple (non-iterative)
  • quick to compute
  • slightly parameterised

After display rendering, we might need further gamut compression to produce the final image on a given display gamut. However, if the scene-referred data is already sensible, gamut mapping on the display side is not that big of a problem in our experience.
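As an illustration only, a compressor meeting those three requirements could look something like the sketch below: it measures each component’s distance from an achromatic axis and compresses only distances beyond a threshold with a closed-form curve. The `threshold`/`limit` parameters and the use of max(R, G, B) as the achromatic reference are my assumptions for the sketch, not a recommendation of the group.

```python
import numpy as np

def compress_distance(d, threshold=0.8, limit=1.2):
    """Non-iterative Reinhard-style compression of normalised distances.

    Distances <= threshold pass through unchanged; the curve is scaled
    so that a source distance of `limit` lands exactly on the gamut
    boundary (distance 1.0).
    """
    d = np.asarray(d, dtype=float)
    # scale chosen so that compress(limit) == 1.0 exactly
    s = (limit - threshold) * (1.0 - threshold) / (limit - 1.0)
    over = np.maximum(d - threshold, 0.0)
    return np.where(d <= threshold, d, threshold + over / (1.0 + over / s))

def gamut_compress(rgb, threshold=0.8, limit=1.2):
    """Pull out-of-gamut RGB components toward an achromatic axis.

    Assumes a positive achromatic value; max(R, G, B) serves as the
    achromatic reference (an assumption of this sketch).
    """
    rgb = np.asarray(rgb, dtype=float)
    ach = rgb.max(axis=-1, keepdims=True)
    safe = np.where(ach == 0.0, 1.0, ach)
    # distance is 0 on the achromatic axis and > 1 outside the gamut
    d = np.where(ach == 0.0, 0.0, (ach - rgb) / safe)
    return ach - compress_distance(d, threshold, limit) * ach
```

For example, `gamut_compress([1.0, 0.5, -0.2])` pulls the negative blue component up to the gamut boundary (0.0) while leaving the in-gamut red and green components untouched, so colours well inside the gamut are bit-exact through the operator.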

I am happy to join the group if this might help.

I’m interested. I can see two areas that overlap with this, both of which had been mentioned in the past as candidates for VWGs: more colorimetrically accurate IDTs, and a new rendering transform that didn’t start by hard-clipping to AP1.

If we don’t change the latter, for example, we both set ourselves a harder goal and have to decide what to do with colors that could be captured (yes, I know, highly unlikely, blah blah Pointer’s gamut blah blah) or that come out of a CG renderer or a color correction system, inside the spectral locus but outside AP1.

I presume it’s up to Carol to set the scope of her investigation, but you would be the one who knew what parallel investigations were standing in the wings.

I would also love to be involved. Gamut mapping solutions currently deployed outside of ACES have their own individual issues, and I would love to voice my concerns from a colourist’s perspective. The mapping of OOG colours can be very subjective, but I think we can find consensus on an approach that is a collective, subjective solution, rather than one based on scientific comparisons such as difference deltas to the OOG values. I believe there are simple solutions available that take the OOG values into account to display a pleasing representation in a smaller gamut/dynamic range. Proprietary solutions have existed for some time, but I think a standardised open approach could yield even better and more flexible adaptations we could all use…

While I can only provide the point of view of a colorist, I would like to help also.

Interested. Gamut mapping can be frustrating moving to an ACES pipeline.

Great! I’d like to help any way I can. I’m a plugin/tools developer working with vision science researchers on tone mapping and gamut mapping. Having lately come up against clipped colours using ACES, I would love to be part of this investigation, both to learn more and hopefully to contribute back with ideas or any testing or development that’s needed.

Not sure if this is the right place to start this discussion… I have a suggestion for scoring the results of the gamut mapping from the encoding space to AP0 or AP1. What I got from the meeting was that there’s no current metric for doing so, and that we cannot rely on the colour appearance models that display-to-display gamut mapping relies on. So what if we manually graded the shots, but only from the encoding space to AP1, just to get the clipped colours to where we can agree they look good before going through the RRT/ODT, and used that as our ground truth? Then perhaps we could apply a display-referred metric as an automatic comparison against the gamut mapping + RRT/ODT. Maybe the shots could be balanced a little as well… I’m suggesting this also because it’s hard to know what the correct colour should be, given what was said about the fuzziness/inaccuracy of those out-of-gamut LED colours, so it seems we might have to optimise from people’s preference anyway. I might be off the mark here but I wanted to start this discussion :stuck_out_tongue_winking_eye:
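To make the idea concrete, the display-referred comparison could be as simple as a mean colour difference between the manually graded ground truth and the automatic result, both after the RRT/ODT. The sketch below assumes sRGB display encoding and uses CIE76 ΔE purely for simplicity; a real evaluation would more likely use ΔE2000 or an image-difference metric.

```python
import numpy as np

# sRGB (D65) to XYZ matrix
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def srgb_to_lab(rgb):
    """Convert display-referred sRGB (0..1) to CIE L*a*b* (D65 white)."""
    rgb = np.asarray(rgb, dtype=float)
    # sRGB decoding to linear light
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = lin @ M.T
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])  # normalise to D65 white
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def mean_delta_e(reference, candidate):
    """Mean CIE76 colour difference between two display-referred images."""
    d = srgb_to_lab(reference) - srgb_to_lab(candidate)
    return np.sqrt((d ** 2).sum(axis=-1)).mean()
```

An identical pair scores 0, and the score grows with the visual difference, so lower is better when ranking candidate gamut mapping algorithms against the graded ground truth.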

Hi,

I would be happy to contribute. I have implemented some GMAs in the past ( https://www.researchgate.net/publication/281039764_Gamut_Mapping_for_Digital_Cinema ). However, in retrospect, the key finding of this paper for me was that the space in which the gamut mapping is performed is more important than the individual gamut mapping algorithm.

Best,

Jan

Hi Jan and Welcome!

I haven’t read your papers yet (but will), but quite coincidentally I was suggesting yesterday during the Working Group meeting that we might want to look at 3D LUTs (especially with CLF around) to represent complex models that are impractical to run in real time. Another positive point of that approach is that, in the case of camera “gamut” mapping, it could also be done by the vendor without disclosing too much IP in the process. The Working Group’s mission is to solve that particular problem, but I reckon that nothing should prevent a vendor from proposing their own mapping, in which case 3D LUTs would be a good candidate to represent the model.
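To show how cheap a baked model is to evaluate, applying a 3D LUT is just a trilinear interpolation over a lattice. The sketch below is illustrative only; the `(N, N, N, 3)` layout and the `[0, 1]` input domain are assumptions of the example, not part of CLF.

```python
import numpy as np

def sample_lut3d(lut, rgb):
    """Trilinearly interpolate a 3D LUT.

    lut: (N, N, N, 3) array, indexed [r, g, b], covering the input
         domain [0, 1] on each axis (assumed layout for this sketch).
    rgb: (..., 3) input values; components are clamped to [0, 1].
    """
    lut = np.asarray(lut, dtype=float)
    n = lut.shape[0]
    x = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0) * (n - 1)
    lo = np.floor(x).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    f = x - lo  # fractional position within the enclosing cell
    out = 0.0
    # accumulate the 8 corner contributions of the enclosing cell
    for corner in range(8):
        idx = [(hi if (corner >> c) & 1 else lo)[..., c] for c in range(3)]
        w = np.prod([f[..., c] if (corner >> c) & 1 else 1.0 - f[..., c]
                     for c in range(3)], axis=0)
        out = out + w[..., None] * lut[idx[0], idx[1], idx[2]]
    return out
```

A quick sanity check is that an identity lattice (each node storing its own coordinates) reproduces the input exactly, since trilinear interpolation is exact for linear functions.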

Cheers,

Thomas

Hi Jan :wink:

I think 3D LUTs can only be used if source and destination are known (like in Jan’s paper).
With scene referred data coming from all sorts of cameras and modified by all sorts of image processes I would prefer an unbounded gamut mapping algorithm.

Not to mention invertibility… :slight_smile:

Best regards
Daniele

Hi,

I guess it depends where you are in the chain; in the context of cameras/IDTs, the source and destination are known, i.e. the current camera sensitivities and the Standard Observer.

Cheers,

Thomas

I’d love to participate in the group as well. This VWG just slipped off my radar, but I’m happy to contribute.

It would be cool to have you in the group; lots of smart talk. As a humble colorist, I just listen :slight_smile: If you want to catch up, you can view the recordings of the meetings and check the papers on the Dropbox.