Results from an IDT evaluation

Oh and would it be possible to push the report to Zenodo please?

Cheers,

Thomas

Once @sdyer has made all the data and any scripts available, I’ll DOI it all.

1 Like

Awesome, thanks @Alexander_Forsythe!

Oh, and the report might also need some form of basic header, at least a title, author, and date, if you don’t want to adopt the standard AMPAS header.

Big takeaways for me were:

  • Even with their inherent limitations, 3x3 matrices seem to do a very respectable job (a minimal sketch of how such a matrix can be fitted follows this list).
  • Cameras tend to produce results that match each other more closely than they match scene colorimetry (i.e. the RICD), even when scene colorimetry is the target. This just shows that cameras aren’t great colorimeters.
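As a point of reference for what such a matrix is, here is a minimal sketch of the most basic way a 3x3 IDT matrix can be fitted: a plain linear least-squares regression from camera RGB exposures to target ACES values over a set of training spectra. The arrays below are placeholders, and the actual Academy procedure typically minimises error in a more perceptual way than a plain RGB least squares, so treat this only as an illustration.

```python
import numpy as np

# Placeholder training data (not the report's data):
# camera_rgb : (N, 3) white-balanced camera exposures of N training spectra
# aces_rgb   : (N, 3) corresponding scene-referred ACES values for the same spectra
camera_rgb = np.random.rand(190, 3)
aces_rgb = np.random.rand(190, 3)

# Solve for B (3x3) minimising || camera_rgb @ B - aces_rgb ||^2 ...
B, residuals, rank, _ = np.linalg.lstsq(camera_rgb, aces_rgb, rcond=None)
# ... so that, for a single pixel, aces ≈ M @ camera with M = B.T.
M = B.T
print(M)
```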

Thank you very much for doing this! It’s always good to reaffirm our assumptions.

This post prompted me to post my slides from IBC 2019, where I looked at the other implications of using 3x3 matrices here:

I very much look forward to regenerating those diagrams with the data made available here, to validate my own assumptions and make sure I didn’t do any accidental black-box voodoo.

2 Likes

@SeanCooper: I think your last slides are correct; at least, I would not expect anything massively different, except for the regions folding back to the illuminant above the line of purples. Anyway, this is not important compared to this question:

Is ACEScg sufficient?

My understanding is that the AMPAS intent is to have a dedicated Working Group specifically answering this question. However, the question should be refined: sufficient for what?

  • For rendering? Probably: it tends to behave nicely, as has been shown a few times.
  • For compositing? Probably not, as cameras will likely always tend to exceed its gamut.

Is the answer to the second question yet another gamut? My gut feeling, and I could be wrong, is no: there might always be a camera that crosses over its bounds, and the gamut would likely need to be quite large. It starts to remind me of scRGB, and that is not a good memory :wink:

I will try to find some spare cycles to plot Jinwei’s Spectral Database and the Raw to ACES one similarly.

Hey @Thomas_Mansencal,

I didn’t want to take over this thread, hence my creating a separate post for the slides.

But, in general, I think the simple point pertaining to this IDT post is that the present best-practice 3x3 IDT matrices and an AP1 working space don’t play nicely together in all cases, and the question is what should or could be done to help.

This dataset, once public, could provide a very useful testing ground for analyzing the nature of the 3x3 IDT over a larger variety of spectral stimuli, and allow for analysis of other solutions that may yield a more favorable “spectral hull” response.

To your points specifically:

Seeing how there is potentially a movement to add more colorspaces to the ACES definition to allow for “camera-native” grading via CDLs in AMF… I would say yes, this is a very real possibility to explore, because there are people actively being “hurt” by AP1-based working spaces.

I know the reasons for the above are more complicated than just negatives in your image, but that is still a serious issue.

In my personal opinion, there are three inter-related items that could/should be explored.

  1. Alternate IDT methods that would improve colorimetric response and provide a better “spectral-hull” response, while keeping other image quality factors, like noise, in mind.
  2. Explore and propose best practices for handling non-colorimetric data (handling negatives gracefully, gamut mapping, etc.).
  3. Accept the present reality of 3x3 IDTs that produce out-of-spectral-locus values, and of poor handling of negative RGB, and look to accommodating them in a new Wide Gamut-esque working space.

The interesting bit is that the outcome is entirely dependent on the interplay between your base spectral sensitivity curves, your IDT regression method, and your training data. So it would be a great topic to explore, as it could end up being not that crazy, who knows?

Plenty there to debate over :slight_smile: I’m sure no one will take it the wrong way :slight_smile:

I guess that one of the first things to do would be to enumerate the practical problems caused by having colours outside the spectral locus/working space, e.g. it makes it hard to key anything, it potentially generates nuclear colours, etc.

There are maybe different solutions and recommended approaches for each of those problems, e.g. for the former, go back to the native camera space, where nothing is negative by definition, key there, and transform back; for the latter, gamut-map the potential nuclear colour back inside the spectral locus.
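As a rough sketch of that first workaround, and purely as an illustration (the matrix and pixels below are placeholders, and a real pipeline may not reduce to a single invertible 3x3), going back to camera-native space is just applying the inverse of the camera-to-working-space matrix, keying there, and transforming back:

```python
import numpy as np

# Hypothetical combined camera-native -> working-space (e.g. ACEScg) 3x3 matrix.
M_cam_to_work = np.array([[ 0.95,  0.10, -0.05],
                          [-0.02,  1.01,  0.01],
                          [ 0.03, -0.08,  1.05]])
M_work_to_cam = np.linalg.inv(M_cam_to_work)

# Stand-in working-space pixels; footage that really came from this camera through
# M_cam_to_work maps back to non-negative camera-native values.
working_pixels = np.array([[1.2, -0.1, 0.4],
                           [0.3,  0.6, 0.9]])

camera_pixels = working_pixels @ M_work_to_cam.T   # back to camera-native space
# ... key / pull the matte here, then return to the working space:
roundtrip = camera_pixels @ M_cam_to_work.T
```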

Agreed with all three points, with the caveat that point 3 is a race we will never win without making huge compromises, as there will eventually be a vendor that breaks our assumptions. The only guaranteed way is to be able to go back to the native camera space; I’m obviously disregarding camera noise here. That being said, maybe a new gamut that handles 95% of all use cases is good enough; I’m just worried about what the remaining 5% will cause :slight_smile:

Cheers,

Thomas

As promised, I have organized and stripped down the data to make it easier to share.

For those who just want the spectral sensitivity data, each directory has a file named “ss_CameraManufacturer_Model.txt”, which is tab-delimited, covering 380-780 nm in 2 nm increments (interpolated from the 5 nm sampled monochromator measurements).

For those who want to duplicate the experiment, I have uploaded the camera raw files. You can process these yourself to linear TIFFs using dcraw and then extract the RGB values from each per-wavelength capture to derive the spectral sensitivities.
I also included the power measurement file from the monochromator rig, as well as the lens transmission.
“sampled_rgb.txt” is the average of each of the 91 monochromator files, representing wavelengths from 350-800 nm in 5 nm increments.
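For anyone wanting to get at this in Python, here is a minimal sketch of loading the files, assuming a wavelength column followed by R, G, B columns; the exact column layouts, the file names used below, and the normalisation of the averaged captures by the source power are assumptions on my part, not a description of the report’s script.

```python
import numpy as np

# Stand-in name following the ss_CameraManufacturer_Model.txt pattern;
# described as tab-delimited, 380-780 nm in 2 nm increments.
ss = np.loadtxt("ss_Canon_5D_Mark_III.txt")
wavelengths, sensitivities = ss[:, 0], ss[:, 1:4]

# Hedged guess at re-deriving relative sensitivities from the averaged captures:
# sensitivity(wavelength) ~ averaged RGB(wavelength) / monochromator power(wavelength).
sampled = np.loadtxt("sampled_rgb.txt")        # 91 rows, 350-800 nm in 5 nm steps (assumed layout)
power = np.loadtxt("power_measurement.txt")    # hypothetical file name for the rig's power data
relative_ss = sampled[:, 1:4] / power[:, 1:2]
```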

The entire package can be browsed and selectively downloaded, or downloaded as a whole, from this Dropbox link.


@nick @Thomas_Mansencal The Python will have to wait, but there is interest in redoing it that way for similar analysis work that will likely be done with motion picture cameras as part of the imminent IDT VWG (although that group will be focusing on the inconsistency in ISO ratings and on recommendations for getting nominally exposed images to come in closer to [0.18, 0.18, 0.18], i.e. less need for the one-time correction I mention was needed in the conclusion).

2 Likes

Thanks @sdyer! I will try to squeeze in some time and see if I can put up a Google Colab notebook to reproduce your results.

Cheers,

Thomas

@Thomas_Mansencal I agree it’s finicky, and it would only take the next camera model to throw the whole thing out the window…

I would just prefer to be able to tell an average user of ACES that ACES actually does solve their issues rather than create more, and it’s a bit difficult to say that straight-faced as things are right now. The RRT/ODT-side gamut mapping that has been mentioned would be a huge win, but it only solves one community’s issues (in a way).

The 3x3 needs to be improved. Nailing exposure may be important, but we should wean RAW and ACES away from the concept of exposure. All video should use RAW, or it will have burned-in color science. Maybe we need some sort of universal RAW color pipeline, like IPP2, for all cameras.

The hue response over exposure needs to be recalibrated in such a way that, when viewing a color chart and changing the exposure, the hue lines stay straight and do not warp in the YUV vectorscope, essentially being better than the camera’s factory default transform LUT.

If we can combine RAW, decouple ACES from exposure, and then perfect the hue response over exposure, we can use any exposure we wish and compensate perfectly (with the desired ISO response) to get accurate hues across any choice of camera or exposure.

This will require a larger matrix and larger data sets but will be universal.

1 Like

Welcome Robert! Thanks for your first post!

Thanks for providing camera and lens data.

I have a few questions:

If I just want pure image sensor sensitivities, would it make sense to simply divide camera values by lens transmission values for each wavelength?

What kind of interpolation was used on the data?

And finally: how much do you think individual copies of the same camera will differ in terms of spectral sensitivity?

Yes. At the bottom of page 4 of the report I mention: “If one wanted an IDT exact to the camera sensitivities only (i.e. no lens), then [lens transmittance] data could theoretically be factored out of the spectral calculations.”
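A tiny sketch of that factoring-out, assuming the sensitivity and lens transmission data are on the same wavelength grid (the file names and column layouts below are placeholders):

```python
import numpy as np

ss = np.loadtxt("ss_Canon_5D_Mark_III.txt")    # assumed columns: wavelength, R, G, B
lens = np.loadtxt("lens_transmission.txt")     # hypothetical name; assumed columns: wavelength, T

# Divide the lens transmission out of each channel to approximate sensor-only sensitivities.
# (If the two files use different wavelength grids, resample one first; see the next sketch.)
sensor_only = ss[:, 1:4] / lens[:, 1:2]
```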

The data was captured in 5 nm increments from 350 nm to 800 nm. I’d have to check my script, but I think I did all my calculations at that increment. If I did interpolate to, say, 2 nm increments, I would have used simple linear interpolation, as the curves here are broad and not spiky.
If I wanted more precise data, it would be most accurate to capture the characterization images directly at 2 nm or even 1 nm increments, but for this type of investigation that seemed like overkill. (It already took long enough for each camera without doubling the number of measurements and images needing to be processed.)
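For reference, resampling from the 5 nm measurement grid to a 2 nm grid with simple linear interpolation is a one-liner per channel in NumPy (the arrays below are stand-ins, not the actual measurements):

```python
import numpy as np

wl_5nm = np.arange(350, 805, 5)              # measurement grid: 350-800 nm, 91 samples
rgb_5nm = np.random.rand(wl_5nm.size, 3)     # stand-in for the measured sensitivities
wl_2nm = np.arange(380, 782, 2)              # published grid: 380-780 nm

rgb_2nm = np.column_stack(
    [np.interp(wl_2nm, wl_5nm, rgb_5nm[:, c]) for c in range(3)]
)
```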

That would depend on many factors. I don’t have any empirical evidence to support a claim one way or another. I think it comes down to “what’s good enough?”, which is a question only you can answer for your particular use case or application. One could theoretically take the time to generate a custom IDT for each individual camera (and lens!.. and capture SPD!) combination, but in practical use does this make enough of a difference to be worth the effort required? One is far more likely to make mistakes than to reap any discernible benefit. Even a basic generic IDT using best principles puts us miles beyond the starting point of trying to match cameras from scratch without color management helping us to put them in a similar encoding.

1 Like

Thanks for the response Scott!

Ah, thanks! I remember reading that when the report first came out, and I was trying to find where you said that again (I was wrongly searching only in this thread).

Ok no problems with this then :slight_smile:

You’re right, this is miles ahead of nothing, even if individual cameras differ. If I can find two cameras of the same model, I will try to compare how similar their spectral responses are (using a Star Analyser 100 filter).

I think this is interesting. I plotted the entire possible gamut and spectral locus for all of these cameras:

(Plots of the possible gamut and spectral locus for: eyes, 5D Mark II, 5D Mark III, Sony A7, Nikon D810; images not reproduced here.)

I used the Adobe daylight matrices, but could redo this with the IDT ones included in the Dropbox.
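For anyone wanting to reproduce these plots, here is a minimal sketch of the general idea, not necessarily @ilia3101’s exact code: treat each wavelength’s camera response as a monochromatic stimulus, push it through a camera-RGB-to-XYZ 3x3 matrix, and project to xy chromaticities to trace the camera’s “spectral locus”. The matrix values below are placeholders, not an actual Adobe or IDT matrix.

```python
import numpy as np

ss = np.loadtxt("ss_Canon_5D_Mark_III.txt")    # assumed columns: wavelength, R, G, B
camera_rgb = ss[:, 1:4]

# Hypothetical camera RGB -> XYZ matrix (e.g. an Adobe daylight matrix, or an IDT
# chained with the AP0 -> XYZ matrix); placeholder values only.
M_rgb_to_xyz = np.array([[0.70, 0.20, 0.10],
                         [0.30, 0.60, 0.10],
                         [0.00, 0.10, 0.90]])

xyz = camera_rgb @ M_rgb_to_xyz.T
xy = xyz[:, :2] / (np.sum(xyz, axis=1, keepdims=True) + 1e-12)   # per-wavelength chromaticity

# Plotting xy (e.g. with matplotlib) against the CIE spectral locus shows how the
# camera's "locus" compares with the human one.
```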

1 Like

@ilia3101 Just so I understand, did you use a method similar to Holm’s Capture Color Analysis Gamuts?

Looks like I did use the same method. I was not aware of that document though. Thanks for linking it.

Interesting to see how extremely different the Adobe DNG matrices are from error minimisation matrices. Slightly worrying, too. So much software uses those Adobe matrices. I wonder what the reason is for the difference.

I will upload the code I used for generating these diagrams soon, once it’s a bit more usable and clean.