Diffraction grating samples, Nuke script to plot into CIE Yxy, and some questions


Examples and tools:

The following link contains raw files and a Nuke script that can be used to plot colors onto a CIE-Yxy diagram. There’s also a QuickTime example of the script’s output.

Source of raw image(s):

These images were captured by filming a tungsten light source through pinholes at the end of a tube lined with blocking material, with a diffraction grating placed against the front of the lens (Master Prime 50mm). This method produces narrowband values; a better approach would be to sample light from a monochromator at known wavelengths and bandwidths.

Notes about the Nuke script:

  • The color space chosen to debayer into was a direct result of each vendor SDK’s behavior of clamping gamuts. ARRI does not clamp their ACES output. RED clamps all of their gamuts, including Red Wide, but it’s the largest gamut option.
  • The plotting uses Nuke’s 3D system and the PositionToPoints node, and is therefore painfully slow at full resolution. There is also some sort of refresh bug that I attempted to work around; if that fix doesn’t work, you have to restart Nuke. The plotting would be a relatively simple calculation as a Blink script for someone more familiar with it. It would be great if someone felt inclined to contribute to this tool, or to provide a more accessible tool in Nuke that gives participants of this group a way to test things.
  • I may not have a complete understanding of the colorspace conversions and would be open to suggestions for better methods if errors are found. Gamuts are converted by my GamutConvert node, which chromatically adapts gamuts to a D65 XYZ space and back out. I did include an exception for the ACES gamuts, which should not be adapted to the D65 white point when plotting.
  • The drawing of the spectral locus is not mathematical. I would prefer it to be, but had trouble finding a list of the x,y coordinates, or an expression to generate them.
  • I chose to convert from XYZ to Yxy with an Expression node that eliminates negative values in the luminance channel for proper display.
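For readers without the script handy, the XYZ-to-Yxy conversion with negative-luminance removal can be sketched outside Nuke as follows (a minimal Python sketch; the function name and the zero-guard epsilon are my own choices, not taken from the script):

```python
def xyz_to_Yxy(X, Y, Z, eps=1e-10):
    """Convert CIE XYZ to Yxy; negative luminance is clamped to 0 for display."""
    s = X + Y + Z
    if abs(s) < eps:  # avoid division by zero for black / empty pixels
        return 0.0, 0.0, 0.0
    x = X / s
    y = Y / s
    return max(Y, 0.0), x, y
```

In a Nuke Expression node the same math is three per-channel expressions; the clamp on Y is what keeps out-of-gamut samples from producing negative display values.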

Some topics I would like to hear discussed by participants of this group:

  • Can we say, as a group, that we have a thorough understanding of, or experience with, the published papers that propose methods for gamut mapping/compression/expansion? Below is a list of just some of the papers released in the last ten years or so; I’m sure there are many more.

    • Color image enhancement with high saturation using piecewise linear gamut mapping
    • Vision Models for Wide Color Gamut Imaging in Cinema
    • High-Efficiency Image Color Gamut Mapping Based on Spherical Coordinates
    • Colour gamut mapping using vividness scale
    • Chromaticity coordinates for graphic arts based on CIE 2006 LMS with even spacing of Munsell colours
    • Compression Efficiency of High Dynamic Range and Wide Color Gamut Pixel’s Representation
    • Color gamut mapping in a hue-linearized CIELAB color space
    • Color gamut mapping between small and large color gamuts (part 1 and 2)
    • Gamut Extension for Cinema 2
    • Colour gamut conversion from Recommendation ITU-R BT.2020 to Recommendation ITU-R BT.709
    • Automatic, Fast and Perceptually Accurate Gamut Mapping Based on Vision Science Models
    • A Gamut Mapping Framework for Color-Accurate Reproduction of HDR Images
    • Variational Methods for Gamut Mapping in Cinema and Television
    • Gamut Mapping for Digital Cinema
    • A Gamut Compression Algorithm Based on the Image Spatial Characteristics
    • Gamut Mapping in Cinematography through Perceptually-based Contrast Modification
    • Creating cinematic wide gamut HDR-video for the evaluation of tone mapping operators and HDR-displays
    • Gamut relativity- A new computational approach to brightness and lightness perception
    • Perceived Image Gamut Based on Human Visual System and Wavelet Transform
    • Image Gamut Visualization Based on Ball-Pivoting Algorithm
  • Is access to these papers a limiting factor with our ability to test them? (many of them are behind paywalls)

  • Should we perhaps compile a list of papers we would like to try and convert the math into Nuke nodes for testing?

  • Would gamut mapping methods considered by this group…

    • …cause non-linearization?
    • …be reversible without data loss?
    • …prevent freely converting between gamuts, as is currently done with 3x3 matrices?
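For context on that last question: conversion between gamuts with 3x3 matrices goes through XYZ and is exactly invertible. A minimal sketch, assuming the standard D65 primaries matrices for BT.709 and BT.2020 (values rounded to 6 decimal places; all function names are my own):

```python
# RGB-to-XYZ matrices for BT.709 and BT.2020, both with a D65 white point.
BT709_TO_XYZ = [
    [0.412391, 0.357584, 0.180481],
    [0.212639, 0.715169, 0.072192],
    [0.019331, 0.119195, 0.950532],
]

BT2020_TO_XYZ = [
    [0.636958, 0.144617, 0.168881],
    [0.262700, 0.677998, 0.059302],
    [0.000000, 0.028073, 1.060985],
]

def mat_mul_vec(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def mat_inv(m):
    """Invert a 3x3 matrix via the adjugate and determinant."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [
        [ (e * i - f * h), -(b * i - c * h),  (b * f - c * e)],
        [-(d * i - f * g),  (a * i - c * g), -(a * f - c * d)],
        [ (d * h - e * g), -(a * h - b * g),  (a * e - b * d)],
    ]
    return [[adj[r][k] / det for k in range(3)] for r in range(3)]

def bt709_to_bt2020(rgb):
    """Convert linear BT.709 RGB to linear BT.2020 RGB through XYZ."""
    xyz = mat_mul_vec(BT709_TO_XYZ, rgb)
    return mat_mul_vec(mat_inv(BT2020_TO_XYZ), xyz)
```

Since both matrices share the D65 white point, BT.709 white maps to BT.2020 white, and running the chain in reverse recovers the original values exactly (up to float precision). The open question is whether a gamut mapping step layered on top would preserve this free, lossless interchange.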

Collection of data:

Should we, as a group, decide what kind of image data we would need to properly test our proposals? This could include not only monochromator samples but also other samples of the kind camera vendors use to produce IDTs (color charts / human models?). Should we start by creating a list of the tests we would like to see performed and the equipment needed to gather that data? Would it be possible to assemble the required equipment in Los Angeles, and could several participants of this group attend?

Justin Johnson


Hi Justin,

Thanks for the images! I haven’t had time to look at them, but I’m probably familiar with a few of them, as we discussed that in the past.

Don’t quote me on what follows as I’m not a decision maker :slight_smile:

I don’t think it should be, and if there are papers that seem promising we should source them accordingly.

Good question! While I think Nuke is a great exploratory tool, I don’t think it would be sane to do more than small-scale tests with it. It is a commercial tool, which means it is not really great for reproducible research, and the non-commercial version is crippled to the point of being almost unusable. I think the group would be better off using a programming language, e.g. C/C++, R, Python, Julia (I’m obviously biased toward Python :]). Houdini is probably actually better than Nuke for reproducible research, e.g. https://iopscience.iop.org/article/10.1088/1538-3873/aa51b3

If by non-linearization you mean generating non-linear data, then ideally I would say no in the context of cameras/scene-referred data. For gamut mapping in the display-referred portion of the chain it probably does not matter as much, since non-linear compressions will most likely be happening there anyway…

I hope we can come up with a model (or models) that is invertible.

I’m not sure I understand your last point about the 3x3 matrices; gamut mapping could be seen as finishing the work of the 3x3 matrices.
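To illustrate what “finishing the work of the 3x3 matrices” could mean: after the matrix transform, out-of-range values can be pulled back toward the target gamut with a smooth, invertible compression curve. This is a toy sketch, not any published algorithm from the paper list above; the threshold/limit parameters and the rational curve are arbitrary choices for illustration:

```python
def compress(d, threshold=0.8, limit=1.2):
    """Invertible soft compression of a 1D distance value.

    Values below `threshold` pass through untouched; values above it are
    squeezed asymptotically toward `limit` (never reaching it), so the
    mapping stays smooth and one-to-one.
    """
    if d <= threshold:
        return d
    s = limit - threshold
    e = d - threshold
    return threshold + s * e / (s + e)

def uncompress(dc, threshold=0.8, limit=1.2):
    """Exact inverse of compress() for dc in [0, limit)."""
    if dc <= threshold:
        return dc
    s = limit - threshold
    ec = dc - threshold
    return threshold + s * ec / (s - ec)
```

A real scene-referred mapper would apply something like this per-pixel to a distance from the achromatic axis rather than to a raw channel, but the invertibility property Thomas asks for is the same: `uncompress(compress(d)) == d`.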

Great idea! Maybe a Dropbox Paper document for the WG; I haven’t tried whether you can concurrently edit a document there, but Google Docs would work.

I will look at generating some spectral data for many cameras; it should be relatively trivial to generate a ton of it programmatically, which I will do in the coming days. Getting pretty renders, e.g. with Mitsuba, is another task, but it might not be warranted at this point, except for the prettiness factor.

Cheers,

Thomas

Hi Justin,

Thanks for the interesting list of papers. As far as I can tell, from knowing them or from looking at the abstracts, they are based on color appearance models. Like Daniele, I see great difficulties in assigning color appearance correlates to scene-referred data. It’s not only hue but even more lightness that is impossible to fix before the image is color-graded and rendered.

When you are looking for colorimetric data, such as the 1931 (or other) observers, http://www.cvrl.org is your source. Click on “Chromaticity coordinates” in the left panel and scroll down; there you will find the CIE 1931 2-deg chromaticity coordinates.
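The CVRL tabulations also make the spectral locus easy to generate yourself: each colour-matching-function sample projects to a chromaticity coordinate. A minimal sketch, using three approximate CIE 1931 2-deg CMF values as stand-ins for the full table you would download from cvrl.org:

```python
def cmf_to_xy(xbar, ybar, zbar):
    """Project a colour-matching-function sample (X, Y, Z) to CIE xy."""
    s = xbar + ybar + zbar
    return xbar / s, ybar / s

# Approximate CIE 1931 2-deg CMF samples (nm -> (xbar, ybar, zbar));
# a real plot would iterate over the whole 360-830 nm tabulation.
samples = {
    450: (0.3362, 0.0380, 1.7721),
    550: (0.4334, 0.9950, 0.0087),
    600: (1.0622, 0.6310, 0.0008),
}
locus = {wl: cmf_to_xy(*cmf) for wl, cmf in samples.items()}
```

Sweeping this over the full wavelength range traces out the horseshoe, which could replace the hand-drawn locus in the Nuke script.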

Regarding your other question: yes, we need test images, and not only the edge cases. The challenge is that we would need similar images from different motion-picture cameras. I think it would be necessary to produce such images (probably in LA).

You are right, those other papers would be better suited for the display-referred side of things.

I did find a paper that reviewed and implemented various methods for going from camera RGB to XYZ.

Including:

  • Linear Colour Correction
  • Polynomial Colour Correction
  • Root-Polynomial Colour Correction
  • Hue Plane Preserving Colour Correction
  • Colour Correction by Angular Minimisation
  • Homography Colour Correction
  • Maximum Ignorance Colour Correction and Maximum Ignorance with Positivity Colour Correction
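The distinguishing idea of the root-polynomial method is that every expanded term has degree one in intensity, so the fitted correction commutes with exposure changes, unlike a plain polynomial fit. A hedged sketch of the degree-2 expansion (the function name is mine; the remaining step, not shown, is fitting a 3x6 matrix against measured XYZ by least squares):

```python
from math import sqrt

def root_poly_expand(r, g, b):
    """Degree-2 root-polynomial expansion of a linear RGB triplet.

    Each term is degree 1 in intensity (e.g. sqrt(r*g) scales linearly
    when r, g, b are all scaled), so the expansion is exposure-invariant:
    scaling the input by k scales every term by k.
    """
    return [r, g, b, sqrt(r * g), sqrt(g * b), sqrt(r * b)]
```

A plain degree-2 polynomial would instead include terms like r*g, which scale by k² under an exposure change of k and therefore make the fitted matrix exposure-dependent.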


Thanks Justin, looks like lots of interesting reading there. I have read the Root-Polynomial one many times, since I think it was the method used for the old Canon IDT.