On Fri, Aug 13, 2021 at 10:42:12AM +0530, Sharma, Shashank wrote:
> Hello Brian,
> (+Uma in cc)
>
> Thanks for your comments. Let me try to fill in for Harry to keep the
> design discussion going. Please find my comments inline.
>
> On 8/2/2021 10:00 PM, Brian Starkey wrote:
-- snip --
> > Android doesn't blend in linear space, so any API shouldn't be built
> > around an assumption of linear blending.
>
> If I am not wrong, we still need linear buffers for accurate gamut
> transformation (sRGB -> BT.2020 or the other way around), don't we?
Yeah, you need to transform the buffer to linear for color gamut conversions, but then back to non-linear (probably sRGB or gamma 2.2) for actual blending.
This is why I'd like to have the per-plane "OETF/GAMMA" separate from tone-mapping, so that the composition transfer function is independent.
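To make that concrete, something like this (a userspace-style sketch;
the matrix is the usual linear-light sRGB/BT.709-to-BT.2020 primaries
conversion, and all the names are mine, not a proposed API):

#include <math.h>

/* sRGB EOTF: non-linear encoding -> linear light. */
static float srgb_eotf(float v)
{
        return v <= 0.04045f ? v / 12.92f
                             : powf((v + 0.055f) / 1.055f, 2.4f);
}

/* Inverse EOTF: linear light -> non-linear encoding for blending. */
static float srgb_inv_eotf(float v)
{
        return v <= 0.0031308f ? v * 12.92f
                               : 1.055f * powf(v, 1.0f / 2.4f) - 0.055f;
}

/* 1. linearize, 2. convert gamut, 3. re-encode for the blender. */
static void srgb_to_bt2020_pixel(const float in[3], float out[3])
{
        /* Linear-light sRGB(D65) -> BT.2020(D65) primaries. */
        static const float m[3][3] = {
                { 0.6274f, 0.3293f, 0.0433f },
                { 0.0691f, 0.9195f, 0.0114f },
                { 0.0164f, 0.0880f, 0.8956f },
        };
        float lin[3];
        int i;

        for (i = 0; i < 3; i++)
                lin[i] = srgb_eotf(in[i]);
        for (i = 0; i < 3; i++)
                out[i] = srgb_inv_eotf(m[i][0] * lin[0] +
                                       m[i][1] * lin[1] +
                                       m[i][2] * lin[2]);
}

Tone-mapping would slot in between steps 2 and 3; the final
inverse-EOTF stage stays whatever the blender expects, regardless of
what the tone-mapping does.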
...
> > > +Tonemapping in this case could be a simple nits value or `EDR`_ to describe
> > > +how to scale the :ref:`SDR luminance`.
> > >
> > > +Tonemapping could also include the ability to use a 3D LUT which might be
> > > +accompanied by a 1D shaper LUT. The shaper LUT is required in order to
> > > +ensure a 3D LUT with limited entries (e.g. 9x9x9, or 17x17x17) operates
> > > +in perceptual (non-linear) space, so as to spread the limited
> > > +entries evenly across the perceived space.
> > Some terminology care may be needed here - up until this point, I
> > think you've been talking about "tonemapping" being luminance
> > adjustment, whereas I'd expect 3D LUTs to be used for gamut
> > adjustment.
> IMO, what Harry wants to say here is that which HW block gets picked,
> and how tone mapping is achieved, can be a very driver/HW-specific
> thing: one driver can use a 1D/fixed-function block, whereas another
> one can choose more complex HW like a 3D LUT for the same job.
>
> The DRM layer only needs to define the property that hooks the API
> into the core driver, and the driver can decide which HW to pick and
> configure for the job. So when we have a tonemapping property, we
> might not have a separate 3D-LUT property, or the driver may fail
> atomic_check() if both of them are programmed for conflicting usages.
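Right - I follow the fallback you describe. In driver terms it would
be something like this (the state fields, the wrapper and both
properties are made up for the example; nothing like them exists in
DRM today):

#include <drm/drm_atomic.h>

/* Hypothetical driver state: one backing HW block, two ways to
 * program it. */
struct foo_plane_state {
        struct drm_plane_state base;
        u32 tonemap_nits;                     /* hypothetical property */
        struct drm_property_blob *lut3d_blob; /* hypothetical 3D LUT   */
};

#define to_foo_plane_state(s) \
        container_of(s, struct foo_plane_state, base)

static int foo_plane_atomic_check(struct drm_plane *plane,
                                  struct drm_atomic_state *state)
{
        struct drm_plane_state *new_state =
                drm_atomic_get_new_plane_state(state, plane);
        struct foo_plane_state *fs = to_foo_plane_state(new_state);

        /* The single tone-mapping block can serve either the simple
         * nits-based property or the 3D LUT, but not both at once. */
        if (fs->tonemap_nits && fs->lut3d_blob)
                return -EINVAL;

        return 0;
}

But that implicit aliasing between properties is part of what makes me
uneasy: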
I still think that directly exposing the HW blocks and their capabilities is the right approach, rather than a "magic" tonemapping property.
Yes, userspace would need to have a good understanding of how to use that hardware, but if the pipeline model is standardised that's the kind of thing a cross-vendor library could handle.
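For illustration only - again, nothing like this exists in DRM today -
"exposing the blocks" could be as simple as each plane advertising an
ordered list of operations:

enum color_block_type {
        COLOR_BLOCK_1D_CURVE,     /* degamma / gamma / EOTF       */
        COLOR_BLOCK_3X3_MATRIX,   /* CTM, gamut conversion        */
        COLOR_BLOCK_1D_LUT,       /* e.g. a shaper                */
        COLOR_BLOCK_3D_LUT,       /* e.g. 9x9x9 or 17x17x17       */
};

struct color_block_caps {
        enum color_block_type type;
        unsigned int lut_size;    /* entries per dimension, if a LUT */
        bool can_bypass;          /* may userspace disable it?       */
};

/* A vendor pipeline is then just an ordered array of these, which a
 * cross-vendor library can inspect and program per plane. */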
It would definitely be good to get some compositor opinions here.
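One more thing, on the shaper LUT text quoted above: the mechanics are
easier to see in code. Roughly (sizes, names and the nearest-entry
lookup are all just for illustration):

#define SHAPER_SIZE 256
#define LUT_DIM     17

struct color3 { float r, g, b; };

static float shaper[SHAPER_SIZE];   /* 1D, applied per channel */
static struct color3 lut3d[LUT_DIM][LUT_DIM][LUT_DIM];

/* Linearly interpolate the 1D shaper LUT at v in [0, 1]. */
static float apply_shaper(float v)
{
        float x = v * (SHAPER_SIZE - 1);
        int i = (int)x;

        if (i >= SHAPER_SIZE - 1)
                return shaper[SHAPER_SIZE - 1];
        return shaper[i] + (x - i) * (shaper[i + 1] - shaper[i]);
}

static struct color3 lookup_3dlut(float r, float g, float b)
{
        /* The shaper runs first, so the 17 grid points per axis are
         * spread over a perceptual space rather than linear light. */
        r = apply_shaper(r);
        g = apply_shaper(g);
        b = apply_shaper(b);

        /* Nearest entry for brevity; real HW interpolates between
         * the surrounding grid points (trilinear or tetrahedral). */
        return lut3d[(int)(r * (LUT_DIM - 1) + 0.5f)]
                    [(int)(g * (LUT_DIM - 1) + 0.5f)]
                    [(int)(b * (LUT_DIM - 1) + 0.5f)];
}

Without the shaper, feeding the cube linear-light values would spend
most of those 17 steps on bright regions where we can't perceive the
difference, and starve the darks.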
Cheers,
-Brian