Archiving and future proofing

You have a point! And, I hope I didn’t sound too harsh! :wink:

But still, most American TV shows have a lot more budget than what we have in Quebec/Canada. That being said, I doubt that TV producers would like to archive to EXR. Some might. Most would find it overkill (in the current context). That will obviously change with the growth of HDR.

Thanks @jim_houston for taking the time to help us out!

Cheers

I don’t think that having “less” damage in this context is a good thing. I would rather know immediately that the data is trashed, with an obvious defect, than find out that we are missing some lines at the bottom when we’re about to push a trailer out. And if it is compressed, well…, it will be faster to send back. :smiley:

From an on-set perspective, as an ICG Digital Imaging Technician, and given the state of computing: generating lossless OpenEXR would be no larger a data footprint than Alexa65 ARRIRAW OG already is for the DIT team on-set/near-set. It has just been deemed too expensive.

ACESclip, as a plug for a SMPTE standard in BMD DaVinci Resolve 15, sounds like a spectacular idea for us DITs to show to our DPs, especially during the prep of a show :slight_smile:

I don’t think anybody is suggesting that transcodes to EXR should be done on-set/near-set. That would mean having to transcode all of every take (or make selection decisions you wouldn’t want to be forced into in the pressured environment of a set) and do so at the highest possible quality. I don’t see that as viable or necessary for now. As available processing power, storage capabilities and transfer speeds increase, who knows.

I certainly never thought that.

EXRs are more of a deliverable to VFX in their earliest form, because they keep all of the image quality from the camera. Only rarely do images go right from set to VFX. I am aware that some places prefer to work with the RAW files directly.

I’ve been dabbling with the DWAA compression in Resolve. It’s very efficient. It cuts down the size of frames by a factor of about 3.5 (depending on the compression ratio)…

So… I’m wondering. I know that @ACES wants us to use uncompressed. But what would/could happen if I decided to use DWAA compression to enable playback in 4K+?

312 MB/s vs. 1.2 GB/s is a HUGE difference.

At a compression ratio of 25 the images are visually lossless…
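
For anyone who wants to sanity-check those numbers, here is a quick back-of-the-envelope sketch in Python (the frame size, frame rate and DWAA factor are just my assumptions from what I’m seeing in Resolve, not spec values):

```python
# Back-of-the-envelope data rates for 4K ACES2065-1 half-float RGB EXRs.
# Assumptions (mine, not from any spec): 4096x2160 frames, 3 channels,
# 2 bytes per sample, 24 fps, and the ~3.5:1 reduction observed with DWAA.
WIDTH, HEIGHT, CHANNELS, BYTES_PER_SAMPLE, FPS = 4096, 2160, 3, 2, 24
DWAA_FACTOR = 3.5  # varies with compression level and content

frame_mb = WIDTH * HEIGHT * CHANNELS * BYTES_PER_SAMPLE / 1e6
uncompressed_mb_s = frame_mb * FPS           # pixel data only, no EXR header overhead
dwaa_mb_s = uncompressed_mb_s / DWAA_FACTOR  # rough estimate

print(f"frame size       : {frame_mb:.1f} MB")                   # ~53 MB
print(f"uncompressed rate: {uncompressed_mb_s / 1000:.2f} GB/s")  # ~1.3 GB/s
print(f"DWAA estimate    : {dwaa_mb_s:.0f} MB/s")                 # ~360 MB/s
```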

Cheers!

This thread is still entitled Archiving and future proofing, so, despite being personally in favor of speccing ACES-compliant compressed OpenEXRs (as I already declared elsewhere on ACEScentral), from this thread’s perspective neither playback quality nor on-set operations are relevant here.

That said, I agree that most facilities would rather keep the original raw files up until the first post-production rendering (whether for offline editorial, VFX plates, intermediate-grade renders, mezzanines or final masters). Despite “being allowed” in the above cases as well, the ST2065-4 format (i.e. “uncompressed, 16bpc, ACES2065-1 EXRs” for short) is not mandated to be created prior to on-set or playback operations.
As per Jim’s comments, most facilities would rather do grading/playback from raw sources to keep maximum quality (* cf. the bullet point below). If your infrastructure can sustain this (which means working from Bayer-pattern, usually compressed footage), chances are ST2065-4 is well within your technological capacity as well (in terms of storage, bandwidth, CPU/GPU power).

  • This is usually good practice, unless other concerns demand an early render from raw files, like debayering concerns or other processes now called “digital negative development”.

Thanks @walter.arrighetti!

Yes, we always start from the camera source. My issue is with archiving at 50 MB a frame in 4K.

Most of our clients (very independent producers) are asking us to archive to ProRes 4444.

I’d like to offer them the EXR option and future proof their projects. But it’s not viable with uncompressed EXR.
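
Just to put a number on “not viable”, here is a rough sketch of what uncompressed EXRs add up to for a whole show (the frame size and runtime are assumptions of mine; handles and alternate takes would push this even higher):

```python
# Rough archive footprint for a feature stored as uncompressed ACES2065-1 EXRs.
# Assumptions (mine): ~50 MB per 4K frame, 24 fps, 100-minute runtime.
FRAME_MB, FPS, RUNTIME_MIN = 50, 24, 100

frames = FPS * 60 * RUNTIME_MIN
total_tb = frames * FRAME_MB / 1e6

print(f"{frames} frames -> ~{total_tb:.1f} TB uncompressed")  # ~7.2 TB
```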

Thanks!

Yes, I understand, @chuckyboilo; there’s this problem in Italy as well – and not just for independent/low-budget productions. I’ve been having these discussions countless times with clients.

I think a mathematically lossless compression scheme for ACES-compliant OpenEXRs will be introduced sooner or later – for sure.

However, codecs that are less than mathematically lossless (which also means an irreversible compression process) should never be considered viable for any archival purposes (whether they are ACES or not). This is, also, a view that some clients do still share with the community; and, fortunately, not just big studios.

Yes I totally agree! Thanks for the feedback!

Hi to all,

I’d like to revisit some of the issues I had with the concept of archiving into ACES and see if maybe I could get some more insights from you all. Especially @walter.arrighetti who has been so helpful!

I recently attended a one-day SMPTE workshop on HDR, and the topic of future-proofing movies for HDR “remastering” came up. It was implied that the best way to do this would be to produce the archive “master” once picture lock was done and all important elements (VFX etc…) were conformed into the color grading session. A “flat pass” EXR could be produced into ACES AP0, with all other graphic elements (titles etc…) included as “sidecar” elements (bitmapped files, PSD, TIFF etc…).

Essentially this archive master would be on a per-shot/folder basis (with handles and in chronological order). A reference QT (or other) and an EDL or XML could be included for future conform purposes.

So essentially, this would be a “pre-grade” archive of the original “media managed” essential shots of the camera negative in EXR format.
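
To make the structure concrete, here is roughly what I picture for each shot folder (the naming, handle length and sidecar formats below are purely my own illustration, not something any spec mandates):

```python
# Hypothetical per-shot pre-grade archive layout (illustrative only).
SHOT = "R01_SH0010"
HANDLES = 12  # frames of handles on each side of the cut (arbitrary choice)

layout = [
    f"{SHOT}/aces2065-1/{SHOT}.%07d.exr",    # flat-pass ACES AP0 frames, including handles
    f"{SHOT}/sidecar/{SHOT}_titles.psd",     # graphics/titles kept as separate elements
    f"{SHOT}/sidecar/{SHOT}_matte.tif",      # any mattes worth keeping for remastering
    "reference/picture_lock_reference.mov",  # reference QuickTime of the locked cut
    "conform/picture_lock.edl",              # EDL/XML for a future re-conform
]

for entry in layout:
    print(entry)
print(f"(each shot carries {HANDLES} frames of handles on each side)")
```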

I’m wondering if this is a good approach… Or is the point/purpose of archiving to EXR to have a “textless”, finished, “start to finish” sequence (with no handles)? Would having the grade baked into the EXRs be limiting to future remastering (for HDR or any crazy technology we’ll have 20 years from now)? Is it better to retain the original images in ACES AP0?

Hopefully, I’m clear enough!

Cheers!

There are multiple answers to this one. For myself,

As it stands now, a flat-pass EXR merged in IMF with sound, subtitles, and captions is a deliverable that could be expected out of current standards. It is also true that many shows need both the ‘texted’ version (which can include the VFX versions of road signs) and the ‘non-texted’ version.

Your ideas about a per-shot basis are also sound, though. Having handles later in the process could help in several situations. Having more metadata in the archive could ease repurposing. For example, having the mattes tracked by the colorist, on say the sky, could assist in later mapping to a different HDR range.

I would think of the pre-grade archive as most appropriate for the per shot approach. In theory this should hold all of the camera range that was available. All of the sidecar elements would apply.

It would be nice if there was a fully metadata driven color mastering process, but so far, that isn’t a sharable thing.

The post-grade version could be the IMF assembly mentioned above.

In either, I think keeping the original images in AP0 is a good idea.

Thank you @jim_houston!

And thanks for validating my idea and adding a few variables I should look into.

I’m curious to know more about the ACES IMF concept. I understand pretty well what an IMF does and its practical nature. But I had no idea they could be used with EXRs… I thought it was JPEG2000 only.

At the moment it’s impossible to even think about having a completely mastered/packaged sequence when using ACES in Resolve. Some graphical elements and cross dissolves just react very badly, probably due to where they are implemented in the ACES workflow within the software. (I touched on this above.)

So the IMF workflow is not a possibility for all users of Resolve. You would need to master in different software…

That being said, I find the IMF approach very interesting technically. But I don’t see how practical it is in real life. And maybe you could give me some more insight on this…

  • Who would want to access this type of encoding? Could you send this to a content provider like Netflix? It seems pretty heavy (in data) to manage…

  • What would be the benefits of producing a color-graded master in ACES AP0? Would it be possible to open it up at a later date, apply an ODT for a future 30,000-nit screen (in the year 2100), and look at “the same movie”? (Obviously, I know that this is wishful thinking.) This is something that we discussed at a recent SMPTE workshop on HDR as a potential problem, i.e. this is why I’m trying to design this archive workflow “pre-grade”.

  • Would you be able to extract the EXRs from the IMF?

The complete metadata approach would be amazing. Never rendering anything and just having all of the grade as a sidecar is a workflow geek’s dream! :wink: I guess ACES would have to work very closely with developers on this. But seeing how BMD is not really on the bandwagon is discouraging, at least for the future development of this approach.

Thank you!

The completion of the ACES IMF is very recent, so implementations may not have happened yet.

I understand there are some limitations with Resolve.

Yes, the EXRs can be wrapped into MXF and then unwrapped. The amount of data is no worse than bringing back all of the DPXs for a movie (10 TB or so for a 4K feature).

You could possibly send it to Netflix, though only if they really want the highest-quality uncompressed version – which is not usually on their list.

If a grade is done in ACES, and you are careful, you should have the same dynamic range as the original camera footage. Yes, you could apply other ODTs. If the range was captured, then you should be able to display the result from the camera. There are a few cases where you can’t – for example, if you are using ASC CDL Power (gamma in linear) to time, then you are limiting the max ACES value to about 222.0.
I think, though, that 30K nits is about 11 (display) stops above mid-grey, so you would need a better camera than anything we have right now.
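
If you want to check those two figures yourself, here is a quick sketch; I am reading the ~222 ceiling as the point where the ACEScc log encoding reaches code value 1.0, and assuming mid-grey sits around 15 nits – both are working assumptions, not gospel:

```python
# Quick sanity checks for the two figures above (my reading, not authoritative).
from math import log2

# ACEScc encoding: cc = (log2(lin) + 9.72) / 17.52, so cc = 1.0 corresponds to
# lin = 2**(17.52 - 9.72) ~= 222.86 -- one plausible source of the ~222 ceiling.
aces_cc_max_linear = 2 ** (1.0 * 17.52 - 9.72)

# 30,000 nits relative to an assumed ~15 nit mid-grey.
stops_above_mid_grey = log2(30000 / 15)

print(f"ACEScc = 1.0 -> linear ~ {aces_cc_max_linear:.1f}")              # ~222.9
print(f"30,000 nits ~ {stops_above_mid_grey:.1f} stops above mid-grey")  # ~11.0
```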

We are quite a way from a full metadata color grade, but I understand the dream.

Thanks @jim_houston!

Very insightful. It’s unfortunate that we cannot use Resolve to online a movie in ACES. On the flip side, creating a pre-grade archive is appealing to producers who don’t have the means of keeping all their source footage…

Cheers!

Hi Charles and Jim.
Thanks for starting this very refreshing topic. And thanks above all to you, @chuckyboilo, for thinking about possible archival strategies.

Operational points first:

  • I think that archiving may selectively fulfill a rather wide range of objectives and, depending on each goal, the archive will be different (different assets to archive, with different encodings, file formats, naming conventions, etc.). As Jim said, we’re still far away from a fully-tagged, metadata-driven “übermaster” that completely describes itself, so we’re still at the point where tactics are heavily dependent on specific archival objectives; different facilities or productions may thus require different types of archival copies.
    I’m not implying that every project should be archived in a different way: but there may be a very few selected standards tailored to the main families of objectives (see the two items below).
    I see IMF as a technology going in the right direction though, especially due to its XML-based expandability and sub-standards specified as “Applications”.
  • Archival masters for either legal/compliance, re-broadcast, or film-heritage purposes, for example, may need neither ungraded, nor textless, nor “handle-frame” clips; their presence in the package might even be detrimental when the primary goal is an asset faithful to a specific edition of the content (e.g. the original domestic theatrical release).
  • Archival copies for preservation, localization or technological re-release, instead, may very well benefit from such “open-structure” packages allowing things such as re-editing and re-grading.

Technical comments now:

  • IMF prescribes the use of the MXF container format for storing the essences; each Application may demand specific metadata and encodings for the essences wrapped within the MXFs of the package. JPEG2000 is used for some Applications – not all.
  • Using a standard colorimetry for every master/archival copy – as is the case with ACES2065 (AP0 primaries) for ACES – is in my opinion better than preserving the original camera-native files, because of the wider gamut and because there won’t be any need in the future for software that can interpret raw file formats with possibly-obsolesced, proprietary and/or undisclosed features such as the Bayer pattern, camera metadata, camera-native colorimetry, etc…
  • Standard ST2067-50, IMF Application #5: ACES, for example, prescribes that video be frame-wrapped, with each frame encoded as OpenEXR. The details of the underlying EXR and of the MXF structure on top of it for App #5 are referenced in the ACES family of standards, ST2065-4 and ST2065-5 respectively (see the rough package sketch after this list).
  • In IMF App #5, reference frames can be added, as individual PNGs or TIFFs, encoded in (possibly different) output-referred color spaces. They relate to the reference display(s) used during mastering of specific versions. They encode the code values of those frames after applying the relevant Output Transform(s), to serve as a visual and numeric reference even years in the future. For this reason, using at least one open-spec colorimetry (that can exist or be replicated even when no such display technologies exist any longer) is good practice, in my opinion.
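
To give a rough idea of what an App #5 package looks like on disk, here is a purely illustrative sketch (the file names are invented placeholders and the list is not exhaustive; the normative structure is in the ST 2067 family of standards):

```python
# Illustrative contents of an IMF Application #5 (ACES) package.
# File names are invented placeholders; refer to the ST 2067 family for the
# actual, normative structure.
package = [
    "ASSETMAP.xml",     # maps asset UUIDs to the files in the package
    "PKL_<uuid>.xml",   # Packing List: every asset with its hash and size
    "CPL_<uuid>.xml",   # Composition Playlist: how the track files are sequenced
    "IMG_<uuid>.mxf",   # frame-wrapped ACES2065-1 OpenEXR image track (ST 2065-5)
    "AUD_<uuid>.mxf",   # PCM audio track file
    "REF_<uuid>.tiff",  # output-referred target/reference frame(s), per mastered version
]

for item in package:
    print(item)
```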

By the way: it’s not yet confirmed, but I should be giving a talk at this year’s Cinema Ritrovato festival (June 2018) about archiving with ACES and IMF from a preservationist’s perspective.

Oh, and by the way, I did give the talk at Il Cinema Ritrovato 2018 on June 26th, and it started many interesting discussions with both students of the FIAF Summer School of film restoration and other professionals in the room.