Hi @JesseKorosi ,
sorry for the late reply, it has been a busy week.
I think both @walter.arrighetti and @jim_houston have described and clarified my points better than I did. I like the <verticalScaling> and <horizontalScaling> idea a lot; I think it solves both the unusual-squeeze problems and pixel-accurate scaling.
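To make sure I read the proposal the same way everyone else does, here is a minimal sketch of the scaling step, assuming the two nodes simply carry decimal scale factors applied to the extracted pixel dimensions (that reading, and the function name, are my assumptions, not anything already in the spec):

```python
# Hypothetical reading of <verticalScaling>/<horizontalScaling>:
# each node carries a decimal factor applied to the extracted
# pixel dimensions (my assumption, not the AMF spec).

def apply_scaling(width, height, horizontal_scaling, vertical_scaling):
    """Scale an extracted area, rounding to whole pixels."""
    scaled_w = round(width * horizontal_scaling)
    scaled_h = round(height * vertical_scaling)
    return scaled_w, scaled_h

# e.g. a 2x anamorphic desqueeze of a 2868x2160 extraction:
print(apply_scaling(2868, 2160, 2.0, 1.0))  # (5736, 2160)
```

If that is the intended semantics, it indeed covers both desqueeze and pixel-accurate resizes with the same pair of numbers.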
However, I still don’t think it’s enough. Please don’t hate me if I try once more (and one last time, I promise) to express my doubts.
@jim_houston mentioning “active image area” in his last email gives me the chance to argue again for the need for two additional (optional) nodes in the framing tree.
My argument is mainly an attempt to clarify (mostly to myself) what the aim of this framing metadata is, and what we refer to when we talk about extraction. I might be stating the obvious here, and I’m sure you have all considered the following points already, but for the sake of clarifying what I have in mind, I’ll go further.
I think it’s very important to distinguish between how an extraction guide is used on set and how it is used in post.
I think we all agree that extraction guide lines designed for on-set purposes should specify what the operator has to frame for, and nothing more, especially if we aim to translate those numbers into a camera-compatible frame-line file. In 99.99% of cases operators only want to see what they need to frame for; too many lines in the viewfinder make their life impossible and distract them.
Post-production framing, on the other hand, in my personal experience, specifies what the working image area should be, or, in other words, how a given frame should be processed to fit the current post-production stage. Most of the time (I would say 80% of the time) the extraction isn’t the target frame itself; it is the area the image needs to be cropped and adjusted to, and the target area is then obtained within it. In other words, in post-production we never crop for the target area, but rather for a larger working area.
I know that different AMFs will be generated for different stages of production, so the concept of extraction can shift from set to post, from the target area to what needs to be pulled for VFX or DI. Still, I think a single instruction to define both a frame and a working area isn’t enough. I strongly believe there is a need for a multi-layer system.
If this framing metadata is meant to automate post workflows, I think it needs to account for these instructions:
TARGET AREA (V+H)
ACTIVE AREA (V+H)
We’ve got the first three nailed down; allow me to argue that we need the fourth one to make the whole thing work, and possibly the fifth one to account for every scenario I’ve ever had to deal with.
TARGET AREA: I previously referred to this as the “blanking” instruction, but after reading Jim and Walter’s posts I think we could call it the “target area”. Conceptually they are two different approaches to the same result: once an image has been cropped and scaled, we use this instruction to tell the software which portion of the frame matches what was framed on set. Implementations would then leave it to the software/user to decide what to do with it (blank it, draw a reference line, create a different canvas). This is also what would be used on set to calculate the frame lines.
ACTIVE AREA would mostly be used for VFX, when the workflow requires the vendor to receive, and deliver back to post, an image different from (most of the time bigger than) the rendered CG area. To elaborate: what happens 99% of the time on my projects is that we have to account for a 5–10% extra area outside the target area to allow post-production to stabilise, reframe, make a 3D release, make an IMAX release and so on. For these reasons, VFX CG needs to be rendered outside the main target frame, so that once VFX pulls go back to DI, VFX shots have the same extra room as drama shots, and the CG has been rendered to account for all needs, like different releases (i.e. IMAX) or further post-production stages (i.e. 3D post-conversion). You don’t want to have to render twice, right?
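To show how mechanical this relationship is, here is a sketch of deriving an active area from a target area by padding a percentage overage on each dimension, the 5–10% described above. The node name, the symmetric-padding choice and the even-pixel rounding are my assumptions for illustration, not a proposal for exact spec wording:

```python
import math

# Sketch: derive an ACTIVE AREA from a TARGET AREA by adding a
# percentage overage on each dimension (e.g. 10% for stabilisation,
# reframes, IMAX or 3D deliveries). Symmetric padding and even-pixel
# rounding are my assumptions, chosen so the centre stays aligned.

def active_area(target_w, target_h, overage=0.10):
    """Return the render area with `overage` extra per dimension,
    rounded up to even pixel counts."""
    pad_w = math.ceil(target_w * overage / 2) * 2
    pad_h = math.ceil(target_h * overage / 2) * 2
    return target_w + pad_w, target_h + pad_h

# 10% overage around a UHD target frame:
print(active_area(3840, 2160, 0.10))  # (4224, 2376)
```

The point is that once both areas are carried as explicit nodes, the renderer, the VFX vendor and DI all compute the same numbers instead of passing them around in framing charts.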
I reckon that by adding those two extra nodes we could really account for every need, both on set and in post.
I have a bunch of projects I can mention and provide documentation for, if required. But I’m sure you all know what I’m talking about here…
Sorry for the long email.