I’d be tempted to add --at least for the ACES Reference Implementation only-- straight Python, with no third-party modules. This means writing a few modules that rely only on primitives from the standard
math module and a handful of other standard modules.
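To make the idea concrete, here is a minimal sketch of what such a stdlib-only module could look like. The matrix and log-curve constants below are placeholders for illustration, not an actual published ACES transform:

```python
import math

# Placeholder 3x3 RGB-to-RGB matrix -- illustrative values only; a real
# reference module would carry the published ACES constants.
MATRIX = (
    (0.6954522414, 0.1406786965, 0.1638690622),
    (0.0447945634, 0.8596711185, 0.0955343182),
    (-0.0055258826, 0.0040252103, 1.0015006723),
)

def apply_matrix(rgb, m=MATRIX):
    """Multiply an (r, g, b) triplet by a 3x3 matrix."""
    r, g, b = rgb
    return tuple(m[i][0] * r + m[i][1] * g + m[i][2] * b for i in range(3))

def lin_to_log2(x, offset=9.72, scale=17.52):
    """Toy log encoding using only math.log2; parameters are illustrative."""
    x = max(x, 1e-10)  # clamp to avoid log of non-positive values
    return (math.log2(x) + offset) / scale

def transform_pixel(rgb):
    """One pixel through the placeholder matrix + log curve."""
    return tuple(lin_to_log2(c) for c in apply_matrix(rgb))
```

Nothing here needs anything beyond the standard math module, which is the whole point of the suggestion.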
Yes Thomas, that’s what I’m saying.
The inconvenience of doing so is that it would render your reference implementation pretty much useless as far as image processing goes. CTL is able to process images, a naive native Python implementation would not be, at least not in a way that is practical.
I kind of like the idea of using a meta-language or just plain math for a spec. Meta-language might be better because you could write a cross-compiler for various languages. When the spec changes, all you need to do is recompile. Would be great to compile for Metal, GLSL/HLSL, OpenCL, and CUDA without a lot of heavy lifting.
Now that I’ve thought about it, maybe keeping CTL is fine. There’s already a great interpreter. Perhaps we could leverage this interpreter’s result AST to compile to SPIR-V (or LLVM!) as Sean mentioned in the original post.
the reference implementation is just that: a reference. It can process real-world image files with the same non-production UX as
There are some Python codes publicly available (not including anything but default modules) for handling EXR, TIFF, even ARRIRAW files (I wrote sone myself over the last decade).
Of course you don’t want a reference implementation to read/write Apple ProRes or MXF files, but only read/write capabilities of basic still-image files; one only needs it for color-science and benchmark image processing.
You may change the read/write modules to add file formats, if/when needed, without re-compiling.
Python was just a suggested language. Anything which is OS-agnostic and does not rely exclusively on GPU technologies/code to run may fit.
Pseudo-language is almost good – yet it’s not, as one still needs an actual binary to generate the reference results for the pseudo-code.
It is not a reference, it is THE reference.
Anyway, what I was underlining is that it needs to have good processing speed, otherwise it will be very painful to implement or test anything for that matter. As the creator of the reference, you obviously do want to test what you are writing, which involves small unit test cases but also real-world data processing, e.g. frames with multiple millions of pixels.
That is where I don’t think a pseudo-language is suitable. You need a language that allows you to generate actual data in a timely fashion.
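For a sense of scale, here is a back-of-the-envelope timing in plain Python; the per-pixel function is a trivial stand-in for a real transform step:

```python
import math
import time

def encode(x):
    """Stand-in for a real per-pixel transform step."""
    return math.sqrt(max(x, 0.0))

# A single 1920x1080 RGB frame: ~6.2 million float samples,
# modest by modern standards (UHD frames and film scans are far larger).
width, height = 1920, 1080
frame = [0.18] * (width * height * 3)

start = time.perf_counter()
result = [encode(v) for v in frame]
elapsed = time.perf_counter() - start
print("processed %d samples in %.2f s" % (len(result), elapsed))
```

Even this trivial operation takes on the order of seconds per frame in naive interpreted Python, which is the practicality concern raised above.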
What’s wrong with CTL? It has a compiler and the interpreter uses SIMD so it’s not super slow by any means… Good enough for the reference implementation. My thought was that we could leverage the CTL interpreter to create a modern cross-compiler that could spit out GPU or CPU compatible code.
@gregcotten: To me the two main issues with CTL are that:
- It is an unmaintained language.
- It is a bit convoluted to test or make a test suite for it.
There is a cross-compiler:
Would it not be more practical to keep CTL as the main language and use it as the basis for converting to other mediums more amenable to testing and realtime playback? Having done this myself with separate translations to DCTL (DaVinci Resolve specific) and CUDA (for OFX plugins), I can attest to the validity of CTL as a viable and practical source for translation to other languages.
This thread appears to have sustained its line of enquiry seemingly oblivious to some recent and arguably quite relevant developments, though of course I may well have missed or misinterpreted some elements that would explain this.
I think we just need some brave soul to extend the capabilities of ctlcc. If we can keep CTL as the reference language and have a tool to automatically cross-compile to various other popular shader/interpreted/compiled languages (ala ctlcc) I think that should be fine. Whatever it generates would need to be MIT or 3-Clause BSD licensed.
Not really interested in having any manual translation take place, as any additions or corrections would introduce further possibilities of translation error.
I think the issue at the moment is we have a reference implementation that is hard to implement unless you just want to use ctlrender. Yes, you could manually translate (as I assume you did?) to various shader languages, but that’s prohibitively difficult for some (there are 930 CTL files in ACES 1.0.3) and leaves a lot of room for implementation error.
@Paul_Dore, there have been many “manual” translations of the ACES CTL to various other languages (I have had my fair share of porting chunks of it to Python for personal research or HLSL for Unity). The issue is long-term maintenance: any update to upstream is painful to back-port without major trauma. Not only that, but most of the implementations are not unit or regression tested; as a matter of fact, the CTL reference implementation itself is not!
Absolutely what I would be keen to avoid!
The point @SeanCooper raised (which we have discussed many times) is that everybody reimplements the CTL reference in one way or another, which is wasted energy. Kimball’s CTLCC would probably have been the way to go but unfortunately at this stage it is also unmaintained (not that Kimball would not be able to jump back in). Florian Kainz’s last commit to CTL was almost a decade ago; the last commit to the repo was 4 years ago. This should make people nervous about using CTL, it makes me nervous.
Doesn’t scare me the least bit. Why should it? C99 was around for decades before C11 came in to replace it. Not to say CTL is as stable as C, but there is little evidence to say it is unstable. Unless there are some changes you’d like to make to the language, I don’t see a reason to replace it.
No matter what, we’re going to need an interpreter that can spit out an AST that a cross-compiler can use to write to various other languages. We already have a stable CTL interpreter that can give us an AST - why not try to reignite CTL cross-compilation support instead of starting over completely?
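To sketch what “AST in, shader code out” could look like, here is a toy emitter. The tuple-based node shapes are invented for illustration and bear no relation to the CTL interpreter’s actual AST types:

```python
# A toy expression AST and a GLSL-flavoured string emitter, hinting at how
# a cross-compiler could walk an interpreter's AST. A real tool would
# consume the CTL interpreter's actual node classes instead.

def emit(node):
    """Recursively render a nested-tuple expression node as source text."""
    kind = node[0]
    if kind == "num":
        return repr(float(node[1]))
    if kind == "var":
        return node[1]
    if kind == "binop":
        _, op, lhs, rhs = node
        return "(%s %s %s)" % (emit(lhs), op, emit(rhs))
    if kind == "call":
        _, name, args = node
        return "%s(%s)" % (name, ", ".join(emit(a) for a in args))
    raise ValueError("unknown node kind: %r" % kind)

# (r * 0.5) + pow(g, 2.2) as a nested-tuple AST
ast = ("binop", "+",
       ("binop", "*", ("var", "r"), ("num", 0.5)),
       ("call", "pow", [("var", "g"), ("num", 2.2)]))
print(emit(ast))  # ((r * 0.5) + pow(g, 2.2))
```

Swapping the string templates per target is roughly how one emitter could back several output languages (GLSL, HLSL, Metal, and so on).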
The major difference is that ISO/IEC etc… are steering C, preventing it from rotting and ensuring it will work for the decades to come. The user base and scope are entirely different, almost orthogonal; C will never ever go away, so the comparison is a bit moot to me.
Afaik, there is nobody steering CTL and vouching that it will work in the future. If that were to change, it would make a lot of people happy; I for sure would be one of them.
Given that Kimball is probably the most knowledgeable about this, I will poke him at work to get his opinion on all that.
That’s a good point. Perhaps then I would propose C or C++ as a long term base implementation language. Obviously you could create a cross-compiler from that language though finding the code entry point (main function with in/out) would work a bit differently. And arbitrary execution would rely on the end-user having a C or C++ compiler on the system.
The one thing that is really nice about CTL is that it is interpreted. Combine this with the fact that it has primitives for color operations, and it is well suited to the tasks we use it for. Is it slower than we might like? Sure. To me the biggest flaw is the lack of an xUnit-style framework for unit testing. This just further complicates the job of anyone trying to reimplement it in other languages.
It’s worth noting that CTL is not ctlrender. ctlrender was built out of necessity because we needed a more generalized tool for applying CTL modules to arbitrary images. It works well enough but has bugs. A while back I approached Larry Gritz about integrating CTL support into OpenImageIO and retiring ctlrender. He agreed it would be a good idea, but for a variety of reasons that never happened.
I am 100% in favor of a CTLcc approach but it still needs a unit testing framework.
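As a sketch of what an xUnit-style harness around a transform could look like: the encoding function, reference values, and tolerance below are all illustrative, not pulled from any actual CTL test suite.

```python
import math
import unittest

def acescc_encode(lin):
    """Candidate implementation under test (illustrative log encoding only,
    not the full published ACEScc curve)."""
    if lin <= 0.0:
        lin = 2.0 ** -16  # placeholder handling of non-positive input
    return (math.log2(lin) + 9.72) / 17.52

class TransformRegressionTest(unittest.TestCase):
    # Reference values would normally be captured from the CTL reference
    # (e.g. via ctlrender); here they are computed from the same formula
    # purely to show the harness shape.
    CASES = [
        (0.18, (math.log2(0.18) + 9.72) / 17.52),
        (1.00, 9.72 / 17.52),
    ]
    TOLERANCE = 1e-5  # a published tolerance spec would pin this down

    def test_against_reference(self):
        for lin, expected in self.CASES:
            self.assertAlmostEqual(acescc_encode(lin), expected,
                                   delta=self.TOLERANCE)

# Run the single test case directly (a real suite would use a runner/CI)
suite = unittest.TestLoader().loadTestsFromTestCase(TransformRegressionTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The hard part is not the harness but generating and versioning the reference values, which is exactly where the missing tolerance spec bites.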
This discussion going on here is a result of the system-design expectations not being represented in reality.
I was fully on board with CTL being the default language of ACES. The idea, in my mind, was that it represents the platform-independent invariant target that all other implementations strive to emulate.
The reality is that most manufacturers lack the resources in time and talent to devote developers to what is essentially multi-path development. Well-meaning manufacturers and developers have done their best to take CTL and make it real-time, as opposed to making machine-specific implementations.
Looking back it seems ultimately naive to expect that platform developers would take the time to develop their own versions. Additionally the lack of any kind of tolerance specifications makes it impossible to know if anything except the exact CTL code from the academy is acceptable.
With a decade of hindsight, it appears to me that all of the transforms should be specified in a well-established and supported GPU language. Any language that is well supported will enable our community to solicit help from a number of non-film-industry resources. It would provide immediately usable code that runs at production speed, as well as debugging tools that can be applied to final implementations.
I understand the desire for a pure-math implementation, as I share it; I very much like stopping down and being able to focus atomically on just one detail of a transformation with no ‘stuff’ in the way. But I see how the project has been implemented and adopted in the real world. CTL should be retired unless a GPU manufacturer wants to implement it on hardware and provide a full set of debugging tools.
To some degree, what tolerance specs exist are for the Logo Program and are around making
A “well established and supported GPU” language would be nice, but there is serious vendor lock-in happening with those (DirectX 12, Metal, Vulkan). If we had picked something a decade ago, it might have become a backwater. We did consider CUDA at the time, but it wasn’t open enough.
In retrospect, we would have been better off building a “Pixel API” library and a “Color Processing API” and writing the functions in C. That is a fair amount of work though, and for a volunteer effort it was a bridge too far. Even today, no company has stepped up to take over CTL after ILM developed it.
The question at the moment though is, for ACESNext, what development work is needed that would facilitate growth and the future of ACES in a hardware environment that is changing each decade.
There are technology trends that are changing as well. Dynamic metadata rendering instead of static LUTs. The transition to machine-learning centric GPUs has already started. What effect would AI techniques have on colorist work and color appearance modeling? At the TV manufacturer level, they cannot do some transforms because of complexity and time when processing 4K/8K images. Could more efficient processes allow better color rendition in a TV?
These directions somewhat overlap the use of ACES in a production context, and unfortunately, the motion picture industry is not big enough to drive the technology very far (compared to gaming for example). However, in the matter of influencing useful imaging for other areas, it can – looking at the usages of ACES concepts at NVIDIA for example.
With the start of the motion picture open-source foundation, is there a project that could get support for making a more usable production tool than CTL? After all, its biggest defect is just performance, which is really a solvable problem (and yes, the second biggest being debugging).
Just to play the devil’s advocate, HLSL and GLSL have both existed for about 15 years (DirectX 9 and OpenGL 2.0); they are not going to disappear any time soon, and ACES already runs (or can run) on both.