LMTs Part 1: What are they and what can they do for me?

Look Modification Transforms (LMTs) are a very powerful component of the Academy Color Encoding System and offer extraordinary flexibility in ACES-based workflows. This multi-part post will explain what LMTs are, why they are so important, and how they can be constructed and applied.

For those interested in perusing existing LMT documentation, please see Academy TB-2014-010.

So what exactly are LMTs?

LMTs provide a mechanism to apply an infinite variety of “looks” to images in ACES-based workflows. LMTs are always ACES-to-ACES transforms. In other words, ACES2065-1 data directly translated from camera-native image data via an Input Transform is manipulated by the LMT to output new ACES2065-1 data (designated in diagrams as ACES’) that is then viewed through an ACES Output Transform.

LMTs can be used to set a “starting look” that is different from the default rendering associated with an ACES Output Transform. Any adjustment away from the starting default reference rendering is considered a “look” within an ACES framework. By this definition, a “look” can be as simple as ASC CDL values preserved from set to define a starting grade.
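To make that concrete, here is a minimal sketch of the ASC CDL math (per-channel slope, offset and power, followed by saturation). The function name and the clamp-before-power behaviour are my own simplifications, and a real LMT would also define the working space in which the CDL values are applied.

```python
import numpy as np

def asc_cdl(rgb, slope, offset, power, saturation=1.0):
    """Sketch of ASC CDL: per-channel slope/offset/power, then saturation.

    rgb is an array shaped (..., 3).  The working space (e.g. ACEScct) is
    left to the pipeline; only the arithmetic is shown here.
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    out = rgb * np.asarray(slope) + np.asarray(offset)
    # Power is undefined for negatives; clamping to 0 here is a simplification.
    out = np.where(out < 0.0, 0.0, out) ** np.asarray(power)
    # Saturation is applied around a Rec.709-weighted luma, per the CDL definition.
    luma = out @ np.array([0.2126, 0.7152, 0.0722])
    return luma[..., np.newaxis] + saturation * (out - luma[..., np.newaxis])
```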

LMTs can also be used project-wide to reduce contrast and saturation across all shots, providing what might be a more preferred starting point for colorists accustomed to flatter and more muted starting images.
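As an illustration only, a project-wide “soften” LMT along these lines might look like the sketch below; the pivot-based contrast curve and the AP0 luminance weights are my own choices, not an official ACES transform.

```python
import numpy as np

# Luminance weights for ACES AP0 primaries (the Y row of the AP0-to-XYZ matrix).
AP0_LUMA = np.array([0.3439664498, 0.7281660966, -0.0721325464])

def soften_look(aces, contrast=0.9, saturation=0.9, pivot=0.18):
    """Reduce contrast about an 18% grey pivot and desaturate, ACES in/out.

    Purely illustrative: negative ACES values are clamped before the power
    function, which a production-grade LMT would handle more carefully.
    """
    rgb = np.maximum(np.asarray(aces, dtype=np.float64), 1e-6)
    flat = pivot * (rgb / pivot) ** contrast      # gentler tone curve
    luma = (flat @ AP0_LUMA)[..., np.newaxis]
    return luma + saturation * (flat - luma)      # global desaturation
```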

LMTs can also encapsulate a preset creative look. Very complex creative looks, such as film print emulation, are more easily modeled in a systematically derived transform than by asking a colorist to manually match using only the controls in a color corrector. Having a preset for a complex creative look can make colorists’ work more efficient by allowing them to quickly get the creative look they (and the filmmaker) want, and to spend more time on shot- and/or region-specific creative color requests from clients.

Over time, LMTs may be collected into a “library of looks” that can be used to apply an initial grade or a “show LUT” look on any given project. Further grading adjustments can be made on top of the new starting point. LMTs are image-wide; they cannot capture power windows, mattes, etc.

Rendering

Generally speaking, camera-native image files from today’s digital motion picture cameras do not look “correct” when viewed directly, as they are “unrendered.” Camera-native image files are typically encoded with a manufacturer-specific RGB primary set and log encoding, designed to preserve as much as possible of the scene information captured by the camera’s image sensor.

For example, ARRI uses ALEXA WideGamut/Log C, Canon has Cinema Gamut/Canon Log 2, Panasonic created V-Gamut/V-Log, RED defines REDWideGamutRGB/Log3G10, Sony utilizes S-Gamut3/S-Log3, and so on.

These camera-native images are not intended to be viewed directly without some amount of processing. In order to create pleasing images, camera-native image files must be processed through a camera-specific rendering transform.

Film
For example, traditional film scans (Cineon Log) were viewed through print film emulations (PFEs), most of which were proprietary to a particular post-production facility and often part of that facility’s “secret sauce.”

Let’s take a look at a Cineon-encoded DPX film scan. The image on the left appears flat and desaturated because it is presented in its native state – as an unrendered log-encoded image. But Cineon images were typically viewed through a PFE, so on the right is the same image as rendered by a PFE. Note that the rendered image has more contrast and saturation, improving our perception of the image as being a more “accurate” and pleasing reproduction of the photographed scene.

Digital
Now let’s look at a digitally acquired image. The image on the left appears flat and desaturated because it is presented in its native state – as an unrendered log-encoded image with camera-specific RGB encoding primaries. But after applying a popular manufacturer-supplied viewing LUT for Rec. 709 displays as its rendering, we get the more “accurate” and pleasing image presented on the right.

These two examples of renderings illustrate that the basic concept of a default scene-referred to display-referred transform is not new. Traditional non-video image processing pipelines have always required a rendering to create a visually pleasing image; ACES is no different in that regard. In fact, ACES reinforces the concept of preserving the original captured image data in as pristine a state as possible. ACES images must also be processed through an appropriate rendering for proper viewing.

ACES Rendering

ACES files (ACES2065-1 data stored in OpenEXR containers, per SMPTE ST 2065-4) are also not intended to be viewed directly. They require processing through an ACES Output Transform in order to be perceived as “correct.” The ACES-supplied rendering is different from that of any of the camera manufacturers or a PFE, just as any individual camera manufacturer’s rendering differs from another. ACES is also different in that there are Input Transforms for all major camera types that convert camera-native image encodings into ACES2065-1. This means that any camera-native image can be viewed through the ACES processing path. To illustrate this point, the same two images from the previous examples are shown again, but this time as they appear when sent through the default ACES rendering.


To produce the above images, each of the native images was put through its appropriate Input Transform to translate it to ACES2065-1, then processed through the ACES Output Transform for Rec. 709 display. The processing chain for each image is illustrated in the flow diagrams below.
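For those who use OpenColorIO, the same chain can be sketched with the v2 Python bindings and one of the ACES OCIO configs. The colour space names below are those used in the aces_1.x reference configs and may differ in other configs, so treat this only as an illustration.

```python
import PyOpenColorIO as ocio

# Assumes an ACES OCIO config on disk; colour space names follow the
# aces_1.x reference configs and may differ in other configs.
config = ocio.Config.CreateFromFile("config.ocio")

# Input Transform: camera-native Log C / ALEXA Wide Gamut -> ACES2065-1.
to_aces = config.getProcessor(
    "Input - ARRI - V3 LogC (EI800) - Wide Gamut",
    "ACES - ACES2065-1").getDefaultCPUProcessor()

# Output Transform: ACES2065-1 -> Rec.709 display.
to_rec709 = config.getProcessor(
    "ACES - ACES2065-1",
    "Output - Rec.709").getDefaultCPUProcessor()

pixel = [0.5, 0.5, 0.5]            # a camera-native code value
aces = to_aces.applyRGB(pixel)     # ACES2065-1
video = to_rec709.applyRGB(aces)   # display-referred Rec.709
```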

Compared to their “native” processing paths, the images appear quite different. Below, the pairs are shown again side by side for easier comparison. The native camera processing is on the left, and the ACES rendering is on the right.

That the rendered images appear different is not a surprise. The ACES rendering is not intended to match either the PFE or the manufacturer-provided viewing LUT, or any other rendering for that matter. It is intended only to be a starting point from which any desired creative look can be achieved.

Matching an Existing Look

An LMT can be inserted into the ACES rendering chain to move the starting look toward a camera-specific rendering or any other desired starting point, while still remaining within the context of an ACES-based workflow and viewing through the ACES Output Transform.
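In code terms the change is nothing more than one extra ACES-to-ACES step between the Input Transform and the Output Transform. The function names in this sketch are placeholders for whatever implementations (CTL, CLF, OCIO, etc.) a given pipeline provides.

```python
def render(camera_native, input_transform, output_transform, lmt=None):
    """Hypothetical viewing chain: camera-native -> ACES -> (ACES') -> display."""
    aces = input_transform(camera_native)   # camera-native -> ACES2065-1
    if lmt is not None:
        aces = lmt(aces)                    # ACES2065-1 -> ACES2065-1' (the look)
    return output_transform(aces)           # ACES -> display-referred output
```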

Transferring Looks

Furthermore, once in ACES, well-designed LMTs are interchangeable. This means that LMTs designed to match a camera-specific rendering can be used for other cameras, or print film emulation LMTs can be used with digital cameras. The possibilities are endless.

The best part about swapping out LMTs is that doing so does not require creating any new transforms when using a different input. LMTs are compatible with any well-formed ACES images, whereas non-ACES renderings or customized LUTs are only compatible with particular inputs and outputs. For example, one can’t just apply a PFE to Log C or S-Log3 images and expect the result to look anywhere near correct. Furthermore, such LUTs bake in a particular output display device.

In ACES-based workflows, disparate camera-specific encodings are translated into a common encoding space: ACES2065-1. That enables looks to be reused and applied in any context, regardless of image source.

By now you’re hopefully seeing the power and flexibility afforded by LMTs. Over time, a library of looks can be accumulated and reused on any ACES-based production. If you already have a library of looks in some other format, it may take a bit of effort to recreate or translate those looks into ACES LMTs. But this is a process that should only need to be done once.

In the next post, we’ll take a look at what exactly goes into an LMT, explore how LMTs work, and show how to make them.


Excellent read @sdyer!

Thanks a lot for such an informative article. Awaiting part 2.

Amazing information, thank you so much! Really looking forward to part 2!!

Thank you, very interesting. More, please :slight_smile:

The wait for part 2, oh my god, it is too much!!

Great read. I’ll be looking forward to seeing how to create them and what file format can be used for LMTs.

Thank you all for the positive feedback and eagerness expressed for Part 2.

Part 2 is necessarily a bit more in depth than Part 1 (and also isn’t the only task on my plate) so it’s taking a bit longer than I would have liked.

I just wanted to reply to assure you all that Part 2 is nearing completion and should be out soon.


@sdyer Do you have any PFE for ACES that you could publish? A lot of people are eager to get their hands on these…

In Resolve 14 Beta it appears that Blackmagic has added support for the Common LUT Format (CLF) and provides an example LMT based on Kodak 2383: https://www.dropbox.com/s/qn62wg07f21jydp/LMT%20Kodak%202383%20Print%20Emulation.xml?dl=0

I will also be providing a number of example LMTs with the Part 2 post (Part 2 is done but “in review” before it gets published), including the PFE look I used here in Part 1.

Wow! Ask and you shall receive… I’ll test that ASAP. Thanks @sdyer!

Is there a targeted release date for part 2? It’s over 4 months since the first part, and it seems an interesting, if not very important, topic. Especially now that Resolve seems to support it.

Thanks!

Peter

From reading Academy TB-2014-010, it seems that the LMT is placed after the grade. At least the drawings show it in a box together with the grade. Does that mean it’s simply a LUT that gets ACEScc or ACEScct data?

On the other hand, Part 1 of this LMT article says it’s always ACES linear in and out? Are both possible, or which one is correct? ACES’ also sounds like it’s the cc or cct variant. At least from color science courses I took, I know that a prime symbol means it’s no longer linear.

But if it’s always linear, how can a LUT handle values over 1 and negative values? As far as I know, 3D LUTs can’t do that.

And where does Resolve put the new CLF in the pipeline? Before the cc/cct transform, or after?

So I am pretty confused so far :slight_smile:

Thanks for clarification.

Peter

LMTs are transforms that modify default ACES data to look different - that’s it.
Sometimes they are implemented as LUTs, but LUTs are always sampled from some mathematical transform. LMTs can be as simple as an exposure or contrast adjustment or model more complex modifications (e.g. a print film emulation). At least in my opinion, if something modifies default ACES data to look like something else then it can be considered a “look modification transform” because it’s a transform that is modifying the look!
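For instance, a pure exposure adjustment is just a scale in linear ACES. A toy sketch (not an official transform):

```python
def exposure_lmt(aces_rgb, stops=0.5):
    """An LMT can be this simple: scale linear ACES2065-1 by 2**stops."""
    gain = 2.0 ** stops
    return [c * gain for c in aces_rgb]
```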

Yes, this is how it’s drawn in the diagrams and defined because it’s the simplest representation. Internally, there may be an ACES-to-ACEScc/ACEScct and an ACEScc/ACEScct-to-ACES transform tacked onto the front and back of the transform. One always needs to be careful that the input/output spaces match the expectations of a transform.
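To make that wrapping explicit, here is a sketch (in Python, with names of my own choosing) of how a grade defined in ACEScct, a CDL for example, can be packaged as a true ACES-to-ACES transform. The log constants come from Academy S-2016-001 and the matrices from the ACES reference CTL; the structure itself is only illustrative.

```python
import numpy as np

# AP0 <-> AP1 matrices from the ACES reference CTL (AP0_2_AP1_MAT / AP1_2_AP0_MAT).
AP0_TO_AP1 = np.array([[ 1.4514393161, -0.2365107469, -0.2149285693],
                       [-0.0765537734,  1.1762296998, -0.0996759264],
                       [ 0.0083161484, -0.0060324498,  0.9977163014]])
AP1_TO_AP0 = np.array([[ 0.6954522414,  0.1406786965,  0.1638690622],
                       [ 0.0447945634,  0.8596711185,  0.0955343182],
                       [-0.0055258826,  0.0040252103,  1.0015006723]])

def lin_to_acescct(x):
    """ACEScct encoding curve (Academy S-2016-001)."""
    return np.where(x <= 0.0078125,
                    10.5402377416545 * x + 0.0729055341958355,
                    (np.log2(np.maximum(x, 1e-10)) + 9.72) / 17.52)

def acescct_to_lin(y):
    """Inverse ACEScct curve (the 65504 upper clamp is omitted here)."""
    return np.where(y <= 0.155251141552511,
                    (y - 0.0729055341958355) / 10.5402377416545,
                    2.0 ** (y * 17.52 - 9.72))

def lmt_from_acescct_grade(grade):
    """Wrap a grade defined in ACEScct (e.g. a CDL) so that the result is an
    ACES2065-1-in / ACES2065-1-out transform, i.e. a well-formed LMT."""
    def lmt(aces):
        aces = np.asarray(aces, dtype=np.float64)
        acescct = lin_to_acescct(aces @ AP0_TO_AP1.T)  # AP0 linear -> ACEScct
        graded = grade(acescct)                        # look applied in ACEScct
        return acescct_to_lin(graded) @ AP1_TO_AP0.T   # back to AP0 linear
    return lmt
```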

Some LUT formats are limited to the 0-1 range, but many support floating point, including negatives and values over 1. Many problems are caused by “automatic LUT generators” that only allow 0-1 values to be gridded and sent through a transform to generate the sampled outputs. I have seen many an occasion where improperly shaped LUTs have cut off critical values at the low and high ends. To deal with these limitations, shapers can usually be used cleverly to sculpt the input data into the 0-1 range.
Take a look at LMT ACES v0.1.1.cube in the Resolve LUT folder, and you will see that it has a 1D shaper covering linear ACES from 0.0000019180 to 15.7761012587, using a Cineon-style log curve to map that range to between 0.0927061729 and 1.0196260353, which is the space in which the 3D LUT nodes are distributed.
Making quality LUTs is a bit of an artform and careful shaping of the lattice points can help minimize interpolation errors. But that’s another topic entirely…
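To make the shaper idea concrete, here is a rough sketch of a 1D log shaper that maps a wide linear range into 0-1 before a 3D LUT is indexed. The linear range matches the one quoted above for LMT ACES v0.1.1.cube, but the log2 curve itself is only illustrative; the actual file uses a Cineon-style log.

```python
import numpy as np

# Linear ACES range covered by the shaper; these are the values quoted above
# for "LMT ACES v0.1.1.cube".  The log2 curve is illustrative -- any
# monotonic log curve does the same job.
LIN_MIN, LIN_MAX = 0.0000019180, 15.7761012587

def shaper_forward(lin):
    """Map linear ACES in [LIN_MIN, LIN_MAX] to [0, 1] ahead of the 3D LUT."""
    lin = np.clip(np.asarray(lin, dtype=np.float64), LIN_MIN, LIN_MAX)
    return (np.log2(lin) - np.log2(LIN_MIN)) / (np.log2(LIN_MAX) - np.log2(LIN_MIN))

def shaper_inverse(t):
    """Map a [0, 1] shaped value back to linear ACES."""
    return 2.0 ** (np.asarray(t) * (np.log2(LIN_MAX) - np.log2(LIN_MIN)) + np.log2(LIN_MIN))
```

With the lattice points distributed in log space like this, the 3D LUT spends its resolution where the image data actually lives instead of wasting most of it above 1.0.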

I’m still in the process of testing, but this depends on where you apply it. If you apply it to a clip using the contextual menu (similar to how you’d apply an IDT), then it seems to take linear in and output linear. I believe the conversion from ACES (linear/AP0) to ACEScc/cct happens immediately prior to the node graph, and then a reverse conversion is applied back to ACES at the end of the node graph.

I am currently trying to “reverse engineer” where all the color space conversions occur, so if anyone else has any insight on this (@nick ?) then I’d be most interested in hearing others’ experiences.

Not had an opportunity to test the Resolve 14 beta yet, but my understanding of the order of operations in v12.5 matches @sdyer’s. I know that right up to the release candidate beta, v12.5 applied LUTs to linearised ACES, rather than using the working space. While this may follow the letter of the spec of LMTs, it made it hard for people to make their own look LUTs, as linear input requires a shaper, and as @sdyer says, automatic LUT generators (save grade as LUT in Resolve, for example) don’t include shapers. v12.5 reverted to applying LUTs to the working space when released.

Thanks for your explanations.

So if the CLF LUTs are linear in and out before the nodes, that sounds like a good thing for the forthcoming LMTs in Part 2 :slight_smile:

Accordingly, I suppose that LUTs applied within the node tree expect the working space. That’s probably the case because that’s the “normal” behaviour when you’re not using ACES in Resolve.

And as we learned from the other thread (Free Series On ACES For Resolve Users), you have to make LUTs without the RRT/ODT, which means selecting cc/cct as the “RRT/ODT” when creating your own LUTs.

And if that’s all true, one cannot use node-tree-created LUTs via the context menu of the clips.

Not bad overall, as long as one knows what’s going on. :slight_smile:

Peter

Very good explanation