ACES Implementation Language


(Walter Arrighetti) #10

I also completely agree with Kevin’s thoughts about a Python implementation.

Nothing against CUDA/OpenGL/GLSL, but ACES really needs to be technology-neutral, so a reference implementation that “lives” within one family of GPU technologies is far from ideal.
Just think of how ACES is implemented in LUT boxes, monitors, camera feeds, etc…

Using a real-world language (mostly OS-agnostic), with hard references to math primitives, is to me THE solution.
One should be very, very wary of Python’s own implementation though (e.g. CPython), in order to reproduce the finest details of, say, bitwise floating-point arithmetic.
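To make that concern concrete, here is a minimal standard-library-only sketch (the computation below is an illustrative stand-in, not an ACES transform) of how two environments could be compared for bit-exact agreement on an intermediate result:

```python
# Minimal sketch: dump the IEEE-754 bit pattern of an intermediate result so
# that two interpreters/platforms can be compared for bit-exact agreement.
# The computation is only a stand-in resembling a log encoding.
import math
import struct

def float_bits(x):
    """Return the IEEE-754 double-precision bit pattern of x as a hex string."""
    return struct.pack('>d', x).hex()

value = (math.log2(0.18) + 9.72) / 17.52   # illustrative only
print(value, float_bits(value))
```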


(Thomas Mansencal) #11

One should be very, very wary of Python’s own implementation though

If you are going down the Python road, and because you might want to use NumPy / SciPy, CPython would be the only real implementation; as a matter of fact, it is the de facto canonical Python version. For Python, I would be more concerned about package/dependency versions, e.g. which version of MKL/LAPACK/BLAS you are using, and the platform, e.g. Windows vs macOS vs Linux.
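For what it’s worth, a minimal sketch of the kind of environment bookkeeping this implies, i.e. recording the interpreter, platform and numerical back-end versions alongside any generated reference data (the packages used are only the ones mentioned above):

```python
# Minimal sketch: record interpreter, platform and numerical back-end versions
# alongside any generated reference data, so discrepancies can be traced later.
import platform
import sys

import numpy as np

print("Python   :", sys.version)
print("Platform :", platform.platform())
print("NumPy    :", np.__version__)
np.show_config()   # reports which BLAS/LAPACK/MKL build NumPy was linked against
```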


(Walter Arrighetti) #12

Yes Thomas, that’s what I’m saying.
I’d be tempted to add (at least for the ACES Reference Implementation only) a restriction to straight Python, with no third-party modules. This means writing a handful of modules that rely only on primitives from the standard math module and a few other standard-library modules.
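As a rough illustration (a sketch only, not an actual ACES module) of what such a standard-library-only primitive could look like:

```python
# Minimal sketch of a standard-library-only color primitive: a 3x3 matrix
# applied to an RGB triplet. The identity matrix is a placeholder, not an
# actual ACES matrix.
def apply_matrix_3x3(matrix, rgb):
    """Multiply a 3x3 matrix (three 3-element rows) by an RGB triplet."""
    return tuple(sum(row[i] * rgb[i] for i in range(3)) for row in matrix)

IDENTITY = [[1.0, 0.0, 0.0],
            [0.0, 1.0, 0.0],
            [0.0, 0.0, 1.0]]

print(apply_matrix_3x3(IDENTITY, (0.18, 0.18, 0.18)))   # -> (0.18, 0.18, 0.18)
```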


(Thomas Mansencal) #13

The inconvenience of doing so is that it would render your reference implementation pretty much useless as far as image processing goes. CTL is able to process images; a naive, native Python implementation would not be, at least not in a way that is practical.
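To put a rough number on that, a sketch (timings depend entirely on the machine; the resolution and gain below are arbitrary) comparing a per-pixel pure-Python loop with the NumPy equivalent on an HD-sized frame:

```python
# Rough sketch of the speed gap: applying a per-pixel gain to an HD frame in
# pure Python vs NumPy. Absolute timings vary by machine; the order of
# magnitude of the difference is the point.
import time

import numpy as np

WIDTH, HEIGHT = 1920, 1080
frame = np.random.rand(HEIGHT, WIDTH, 3).astype(np.float32)

t0 = time.perf_counter()
out_numpy = frame * 2.0                      # vectorised
t1 = time.perf_counter()

pixels = frame.tolist()                      # nested Python lists
t2 = time.perf_counter()
out_python = [[[c * 2.0 for c in px] for px in row] for row in pixels]
t3 = time.perf_counter()

print(f"NumPy      : {t1 - t0:.4f} s")
print(f"Pure Python: {t3 - t2:.4f} s")
```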


(Greg Cotten) #14

I kind of like the idea of using a meta-language or just plain math for a spec. Meta-language might be better because you could write a cross-compiler for various languages. When the spec changes, all you need to do is recompile. Would be great to compile for Metal, GLSL/HLSL, OpenCL, and CUDA without a lot of heavy lifting.


(Greg Cotten) #15

Now that I’ve thought about it, maybe keeping CTL is fine. There’s already a great interpreter. Perhaps we could leverage this interpreter’s resulting AST to compile to SPIR-V (or LLVM!) as Sean mentioned in the original post.


(Walter Arrighetti) #16

Thomas,
the reference implementation is just that: a reference. It can process real-world image files with the same non-production UX as ctlrender does.
There is some Python code publicly available (not relying on anything but default modules) for handling EXR, TIFF, even ARRIRAW files (I wrote some myself over the last decade).

Of course you don’t want a reference implementation to read/write Apple ProRes or MXF files, only basic still-image formats; one only needs it for color science and benchmark image processing.
You may change the read/write modules to add file formats, if/when needed, without recompiling.
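As an illustration of the kind of standard-library-only I/O meant here (a sketch that only probes a TIFF header, nothing more):

```python
# Minimal sketch: probe a TIFF file's header using only the standard library,
# as an example of stdlib-only still-image handling.
import struct

def tiff_byte_order(path):
    """Return '<' (little-endian) or '>' (big-endian) if path looks like a TIFF."""
    with open(path, 'rb') as stream:
        header = stream.read(8)
    if header[:2] == b'II' and struct.unpack('<H', header[2:4])[0] == 42:
        return '<'
    if header[:2] == b'MM' and struct.unpack('>H', header[2:4])[0] == 42:
        return '>'
    raise ValueError('not a TIFF file')
```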

Python was just a suggested language. Anything which is OS-agnostic and does not rely exclusively on GPU technologies/code to run may fit.

A pseudo-language is almost good enough, yet not quite, as one still needs an actual binary to generate the reference results from the pseudo-code.


(Thomas Mansencal) #17

Hi @walter.arrighetti,

It is not a reference, it is THE reference.

Anyway, what I was underlining is that it needs to have good processing speed, otherwise it will be very painful to implement or test anything, for that matter. As the creator of the reference you obviously want to test what you are writing, which involves small unit test cases but also real-world data processing, e.g. frames with multiple millions of pixels.

That is where I don’t think a pseudo-language is suitable. You need a language that allows you to generate actual data in a timely fashion.

Cheers,

Thomas


(Greg Cotten) #18

All,

What’s wrong with CTL? It has a compiler, and the interpreter uses SIMD, so it’s not super slow by any means… good enough for the reference implementation. My thought was that we could leverage the CTL interpreter to create a modern cross-compiler that could spit out GPU- or CPU-compatible code.


(Thomas Mansencal) #19

@gregcotten: To me the two main issues with CTL are that:

  • It is an unmaintained language.
  • It is a bit convoluted to test or make a test suite for it.

There is already a cross-compiler: Kimball’s CTLCC.


(Paul Dore) #20

Would it not be more practical to keep CTL as the main language and use it as the basis for conversion to other mediums that are more expedient for testing and real-time playback? Having done this myself with separate translations to DCTL (DaVinci Resolve specific) and CUDA (for OFX plugins), I can attest to the validity of CTL as a viable and practical source for translation to other languages.

This thread appears to have sustained its line of enquiry seemingly oblivious to some recent and arguably quite relevant developments, though of course I may well have missed or misinterpreted some elements that would explain this.


(Greg Cotten) #21

I think we just need some brave soul to extend the capabilities of ctlcc. If we can keep CTL as the reference language and have a tool to automatically cross-compile to various other popular shader/interpreted/compiled languages (ala ctlcc) I think that should be fine. Whatever it generates would need to be MIT or 3-Clause BSD licensed.

Not really interested in having any manual translation take place, as any additions or corrections would introduce further possibilities of translation error.


(Greg Cotten) #22

Paul,

I think the issue at the moment is we have a reference implementation that is hard to implement unless you just want to use ctlrender. Yes, you could manually translate (as I assume you did?) to various shader languages, but that’s prohibitively difficult for some (there are 930 CTL files in ACES 1.0.3) and leaves a lot of room for implementation error.


(Thomas Mansencal) #23

@Paul_Dore, there have been many “manual” translations of ACES CTL to various other languages (I have had my fair share of porting chunks of it to Python for personal research, or to HLSL for Unity). The issue is long-term maintenance: any upstream update is painful to back-port without major trauma. Not only that, but most of the implementations are not unit or regression tested; as a matter of fact, the CTL reference implementation itself is not!

That is absolutely what I would be keen to avoid!

The point @SeanCooper raised (which we have discussed many times) is that everybody re-implements the CTL reference in one way or another, which is wasted energy. Kimball’s CTLCC would probably have been the way to go, but unfortunately at this stage it is also unmaintained (not that Kimball would not be able to jump back in). Florian Kainz’s last commit to CTL was almost a decade ago, and the last commit to the repo was 4 years ago. This should make people nervous about using CTL; it makes me nervous :slight_smile:


(Greg Cotten) #24

Doesn’t scare me the least bit. Why should it? C99 was around for over a decade before C11 came in to replace it. Not to say CTL is as stable as C, but there is little evidence to say it is unstable. Unless there are changes you’d like to make to the language, I don’t see a reason to replace it.

No matter what, we’re going to need an interpreter that can spit out an AST that a cross-compiler can use to target various other languages. We already have a stable CTL interpreter that can give us an AST, so why not try to reignite CTL cross-compilation support instead of starting over completely?


(Thomas Mansencal) #25

The major difference is that ISO/IEC etc. are steering C, preventing it from rotting and ensuring it will keep working for decades to come. The user base and scope are entirely different, almost orthogonal; C will never go away, so the comparison is a bit moot to me.

Afaik, there is nobody steering CTL and vouching that it will work in the future. If that were to change, it would make a lot of people happy; I for sure would be one of them.


(Thomas Mansencal) #26

Given Kimball is probably the most knowledgeable about this, I will poke him at work to get his opinion on all of that.


(Greg Cotten) #27

That’s a good point. Perhaps then I would propose C or C++ as a long-term base implementation language. Obviously you could create a cross-compiler from that language, though finding the code entry point (a main function with in/out) would work a bit differently. And arbitrary execution would rely on the end user having a C or C++ compiler on their system.


(Alex Forsythe) #28

The one thing that is really nice about CTL is that it is interpreted. Combine this with the fact that it has primitives for color operations, and it is well suited to the tasks we use it for. Is it slower than we might like? Sure. To me the biggest flaw is the lack of an xUnit-style framework for unit testing. That just further complicates the job of anyone trying to re-implement it in other languages.

It’s worth noting that CTL is not ctlrender. ctlrender was built out of necessity because we needed a more generalized tool to apply CTL modules to arbitrary images. It works well enough but has bugs. A while back I approached Larry Gritz about integrating CTL support into OpenImageIO and retiring ctlrender. He agreed it would be a good idea, but for a variety of reasons that never happened.

I am 100% in favor of a CTLcc approach but it still needs a unit testing framework.
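For illustration, a minimal sketch of the kind of xUnit-style test being described, using Python’s unittest against made-up reference values (the encode function and the numbers are placeholders, not ACES data):

```python
# Minimal sketch of an xUnit-style regression test for a transform. `encode`
# and the expected values are placeholders; in practice the expected values
# would be generated once by the reference implementation.
import unittest

def encode(x):
    return x * 2.0 + 0.1   # placeholder standing in for a real transform

class TestEncode(unittest.TestCase):
    REFERENCE = [(0.0, 0.1), (0.18, 0.46), (1.0, 2.1)]

    def test_against_reference_values(self):
        for source, expected in self.REFERENCE:
            with self.subTest(source=source):
                self.assertAlmostEqual(encode(source), expected, places=6)

if __name__ == '__main__':
    unittest.main()
```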


(Jslomka) #29

The discussion going on here is the result of the system’s design expectations not being borne out in reality.

I was fully on board with CTL being the default language of ACES. The idea, in my mind, was that it represents the platform-independent, invariant target that all other implementations strive to emulate.

The reality is that most manufacturers lack the resources, in time and talent, to devote developers to what is essentially multi-path development. Well-meaning manufacturers and developers have done their best to take CTL and make it real-time, as opposed to making machine-specific implementations.

Looking back, it seems ultimately naive to expect that platform developers would take the time to develop their own versions. Additionally, the lack of any kind of tolerance specification makes it impossible to know whether anything except the exact CTL code from the Academy is acceptable.
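To make the idea of a tolerance specification concrete, a sketch (the 1e-5 bound below is arbitrary and purely illustrative; no such number is actually specified anywhere):

```python
# Sketch of a tolerance check between a candidate implementation's output and
# the reference output for the same frame. The 1e-5 bound is arbitrary and
# only illustrates the idea of a published tolerance.
import numpy as np

def within_tolerance(candidate, reference, max_abs_error=1e-5):
    """True if every code value is within max_abs_error of the reference."""
    candidate = np.asarray(candidate, dtype=np.float64)
    reference = np.asarray(reference, dtype=np.float64)
    return bool(np.max(np.abs(candidate - reference)) <= max_abs_error)
```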

With a decade of hindsight, it appears to me that all of the transforms should be reference-specified in a well-established and well-supported GPU language. Any language that is well supported will enable our community to solicit help from a number of non-film-industry resources. It would provide immediately usable code that runs at production speed, as well as debugging tools that can be applied to final implementations.

I understand the desire for a pure-math implementation, as I share it; I very much like stopping down and being able to focus atomically on just one detail of a transformation with no ‘stuff’ in the way. But I also see how the project has been implemented and adopted in the real world. CTL should be retired unless a GPU manufacturer wants to implement it in hardware and provide a full set of debugging tools.