On Wed, 03 Jun 2020 20:20:44 +0000 (UTC) Jonas Karlman <jonas@kwiboo.se> wrote:
> Hi,
>
> On 2020-06-03 11:12, Pekka Paalanen wrote:
>> On Wed, 3 Jun 2020 10:50:28 +0530 Yogish Kulkarni <yogishkulkarni@gmail.com> wrote:
>>> ...
>>> The primary reason content producers master video/gfx content as limited range is compatibility with sinks that only support limited range, not that video decoders are incapable of decoding full-range content.
>> What I was asking is: even if the video content is limited range, why would one not always decode it into full-range pixels, and if the sink needs limited range, convert again in hardware? When done right, it makes no difference in output compared to using limited range throughout, if both the content and the sink use limited range.
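To make that concrete, here is a minimal C sketch of the 8-bit luma case (limited range Y' in [16, 235]); the helper names are hypothetical, not taken from any existing library. Expanding limited range to full range and compressing back recovers every code exactly, because the expansion maps 220 codes injectively into 256:

#include <assert.h>
#include <math.h>
#include <stdint.h>

/* Hypothetical helpers for 8-bit luma only; chroma and the actual
 * YUV->RGB matrix are left out to keep the range math visible. */
static uint8_t limited_to_full(uint8_t y)
{
    /* expand Y' in [16, 235] to [0, 255] */
    return (uint8_t)lround((y - 16) * 255.0 / 219.0);
}

static uint8_t full_to_limited(uint8_t y)
{
    /* compress Y' in [0, 255] to [16, 235] */
    return (uint8_t)(lround(y * 219.0 / 255.0) + 16);
}

int main(void)
{
    /* limited -> full -> limited round-trips exactly at equal
     * bit depth, so decoding to full range loses nothing */
    for (int y = 16; y <= 235; y++)
        assert(full_to_limited(limited_to_full((uint8_t)y)) == y);
    return 0;
}

The opposite direction is where precision goes missing, which is what Jonas describes below.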
> For the Allwinner/Amlogic/Rockchip ARM devices I mainly play with, the video decoder does not support range conversion (to my knowledge) and will produce NV12/YU12 framebuffers in the range the video was encoded in.
> These devices typically lack a high-performance GPU/3D accelerator and may have limited CSC capabilities in the display controller. The HDMI block can usually do simple RGB/YUV and full/limited conversions, but using these conversions typically produces banding effects.
>
> Being able to pass decoded framebuffers through the entire pipeline, from decoder through display controller to HDMI block, typically produces the best results.
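That banding is easy to see in the numbers: compressing 8-bit full range into limited range squeezes 256 input codes into only 220 output codes, so runs of adjacent input values collapse into one output value. Continuing inside the same main() as the sketch above:

    /* count adjacent full-range codes that collapse to the same
     * limited-range code: 256 inputs map onto only 220 outputs */
    int collisions = 0;
    for (int y = 1; y < 256; y++)
        if (full_to_limited((uint8_t)y) == full_to_limited((uint8_t)(y - 1)))
            collisions++;
    /* collisions ends up as 36: 36 gradient steps vanish, which is
     * visible as banding unless the conversion dithers or runs at
     * a higher bit depth */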
This is very helpful. It means I really do need to take range into account in the Wayland protocol and make sure it can be communicated.
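For the record, something like the following is the shape I have in mind; the interface and enum names are purely hypothetical, written as the C client header that wayland-scanner would generate rather than as any existing protocol:

/* Hypothetical extension sketch; a real protocol would be defined
 * in XML and run through wayland-scanner. */

struct wp_color_range_v1;

enum wp_color_range_v1_range {
    /* pixel values span the full code range, e.g. 0-255 at 8 bpc */
    WP_COLOR_RANGE_V1_RANGE_FULL = 0,
    /* pixel values span the limited/video range, e.g. Y' in 16-235 */
    WP_COLOR_RANGE_V1_RANGE_LIMITED = 1,
};

/* Hypothetical request: the client declares which range the surface
 * content is encoded in, so the compositor can convert where needed,
 * or pass the buffer straight through when the sink matches. */
void wp_color_range_v1_set_range(struct wp_color_range_v1 *range_obj,
                                 enum wp_color_range_v1_range range);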
Thanks,
pq