The whole subject is a bit more subtle:
The ideal space for an operation depends on what you want to simulate.
If you are after modelling physical phenomena, then some sort of scene-linear encoding is probably the right domain. If you want to model perceptual phenomena, then a perceptual space (like a quasi-log or opponent space) is typically the right choice.
- If you want to do white balance/exposure, multiplication in the linear domain is the right thing to do.
- If you are after simulating an optical blur, linear light is also the right space.
- If you want to sharpen an image, then log is the right domain, because there is no physical process that sharpens a scene. (Sharpening probably happens in our neural pathways.)
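The first bullet above can be sketched in a few lines. The image and gain values are made-up illustrations, assuming a scene-linear RGB array with channels on the last axis:

```python
import numpy as np

# a tiny scene-linear RGB image (two pixels, values are arbitrary examples)
rgb = np.array([[0.05, 0.18, 0.40],
                [0.90, 0.10, 0.02]])

# white balance: a per-channel multiplication in linear light
wb_gains = np.array([1.2, 1.0, 0.8])  # hypothetical warm shift
balanced = rgb * wb_gains

# exposure: +1 stop is a multiplication by 2 in linear...
exposed = balanced * 2.0

# ...which becomes a plain additive offset once the data is log-encoded
assert np.allclose(np.log2(exposed), np.log2(balanced) + 1.0)
```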
It becomes tricky when you do things that combine several aspects at the same time. Scaling, for example, has both a physical component (anti-aliasing) and a perceptual component (sharpening). So there is not really a right or wrong way. Most of the time you scale in quasi-log, because the sharpening component produces artefacts (overshoots) in linear.
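The overshoot problem can be seen on a one-dimensional edge. This is a minimal sketch with a made-up 3-tap unsharp mask, not the filter of any particular tool:

```python
import numpy as np

def unsharp(x, amount=1.0):
    # simple 3-tap blur, then boost the detail (x minus its blur)
    blur = np.convolve(x, [0.25, 0.5, 0.25], mode="same")
    return x + amount * (x - blur)

edge = np.array([0.01] * 4 + [1.0] * 4)  # a hard scene-linear edge

sharp_lin = unsharp(edge)                  # sharpen in linear light
sharp_log = 10 ** unsharp(np.log10(edge))  # sharpen in log10, then decode

# in linear, the dark side of the edge overshoots below zero: negative "light"
# in log, the result stays positive after decoding back to linear
```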
Modern colour correctors tend to develop in the direction that each operator works in the space appropriate for its design goals.
For the argument of addition in ACEScc vs multiplication in linear:
The result might be identical for a white balance or exposure change. But you should not forget that ACEScc puts the image into a state that overemphasises the energy in the shadows. Subsequent operations like saturation changes or scaling might not work correctly in ACEScc with lots of “dark energy” :-).
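Above the shadow toe, that equivalence can be checked directly. This sketch uses only the pure log2 segment of the ACEScc transform (constants from S-2014-003) and deliberately ignores the toe below lin = 2**-15; the sample values are arbitrary:

```python
import numpy as np

def lin_to_acescc(lin):
    # pure log2 segment of ACEScc, valid for lin >= 2**-15
    return (np.log2(lin) + 9.72) / 17.52

def acescc_to_lin(cc):
    # inverse of the log2 segment
    return 2.0 ** (cc * 17.52 - 9.72)

lin = np.array([0.05, 0.18, 0.9])  # scene-linear values above the toe

offset = 0.1  # an additive offset applied in ACEScc
shifted = acescc_to_lin(lin_to_acescc(lin) + offset)

# the same move expressed as a plain gain in linear light
gain = 2.0 ** (17.52 * offset)
assert np.allclose(shifted, lin * gain)
```

Once any values fall into the toe segment, the equivalence breaks down, which is one source of the “dark energy” problem above.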
So ideally you do white balance in linear and then convert to a quasi-log representation for things like sharpening or scaling.
I think ACEScc is generally quite difficult to use because it is neither a physical nor a perceptual space.
Then there are legacy tools with lots of “assumptions” built into them.
FilmGrade (Offset/Contrast) assumes that the image is encoded in a Cineon-like quasi-log space; that is what this particular tool was developed for.
The same is true for legacy keyers and other tools. In the same way, most compositing tools were designed with the assumption that the image data is presented in linear light.
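For illustration, an offset/contrast operator over Cineon-like log data might look like the sketch below. The pivot value and parameter names are hypothetical, not the internals of the actual FilmGrade tool:

```python
import numpy as np

def film_grade(log_img, offset=0.0, contrast=1.0, pivot=0.435):
    # Hypothetical offset/contrast on log-encoded data. The pivot
    # (roughly mid-grey in a Cineon-like encoding) is illustrative.
    return (log_img - pivot) * contrast + pivot + offset

log_img = np.array([0.2, 0.435, 0.7])  # made-up Cineon-like code values

# offset shifts every code value equally; on log data this behaves
# like an exposure change (addition in log = multiplication in linear)
lifted = film_grade(log_img, offset=0.1)

# contrast pivots around mid-grey, so the pivot itself stays put
contrasty = film_grade(log_img, contrast=1.5)
```

Apply the same operator to linear or ACEScc data and the offset no longer behaves like exposure, which is exactly the kind of built-in assumption described above.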
I hope this helps.