We are looking to enable HDR support for a couple of single-plane and multi-plane scenarios. To do this effectively we recommend new interfaces to drm_plane. The first patch gives a bit of background on HDR and why we propose these interfaces.
This update only changes the documentation, not the code. We feel we are not close to anything resembling consensus on a DRM/KMS API for (multi-plane) HDR support and would like to further the discussion.
The most important bits in the RFC document are probably the sections on defining HW details and defining SW intentions. We are worried that defining intricate HW details at the DRM/KMS level leads to a lot of complexity for compositors, which could be avoided by defining SW intentions instead.
I will be off for the entire month of August with little time to follow this thread, but I would like to get my updated thoughts out for discussion anyway. Shashank Sharma will help support this discussion.
v3:
* Only doc updates (patch 1)
* Add sections on single-plane and multi-plane HDR
* Describe approach to define HW details vs approach to define SW intentions
* Link Jeremy Cline's excellent HDR summaries
* Outline intention behind overly verbose doc
* Describe FP16 use-case
* Clean up links
v2:
* Moved RFC from cover letter to kernel doc (Daniel Vetter)
* Created new color space property instead of abusing color_encoding property (Ville)
* Elaborated on need for named transfer functions
* Expanded on reason for SDR luminance definition
* Dropped 'color' from transfer function naming
* Added output_transfer_function on crtc
Bhawanpreet Lakha (3):
  drm/color: Add transfer functions for HDR/SDR on drm_plane
  drm/color: Add sdr boost property
  drm/color: Add color space plane property

Harry Wentland (3):
  drm/doc: Color Management and HDR10 RFC
  drm/color: Add output transfer function to crtc
  drm/amd/display: reformat YCbCr-RGB conversion matrix
 Documentation/gpu/rfc/color_intentions.drawio |   1 +
 Documentation/gpu/rfc/color_intentions.svg    |   3 +
 Documentation/gpu/rfc/colorpipe               |   1 +
 Documentation/gpu/rfc/colorpipe.svg           |   3 +
 Documentation/gpu/rfc/hdr-wide-gamut.rst      | 580 ++++++++++++++++++
 Documentation/gpu/rfc/index.rst               |   1 +
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  17 +-
 drivers/gpu/drm/amd/display/dc/inc/hw/dpp.h   |  28 +-
 .../gpu/drm/arm/display/komeda/komeda_crtc.c  |   7 +-
 .../gpu/drm/arm/display/komeda/komeda_plane.c |   6 +-
 drivers/gpu/drm/arm/malidp_crtc.c             |   7 +-
 drivers/gpu/drm/arm/malidp_planes.c           |   6 +-
 drivers/gpu/drm/armada/armada_crtc.c          |   5 +-
 drivers/gpu/drm/armada/armada_overlay.c       |   6 +-
 .../gpu/drm/atmel-hlcdc/atmel_hlcdc_crtc.c    |   7 +-
 drivers/gpu/drm/drm_atomic_uapi.c             |   8 +
 drivers/gpu/drm/drm_color_mgmt.c              | 177 +++++-
 drivers/gpu/drm/i915/display/intel_color.c    |  11 +-
 drivers/gpu/drm/i915/display/intel_color.h    |   2 +-
 drivers/gpu/drm/i915/display/intel_crtc.c     |   4 +-
 drivers/gpu/drm/i915/display/intel_sprite.c   |   6 +-
 .../drm/i915/display/skl_universal_plane.c    |   6 +-
 drivers/gpu/drm/ingenic/ingenic-drm-drv.c     |   9 +-
 drivers/gpu/drm/mediatek/mtk_drm_crtc.c       |   8 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c      |   9 +-
 drivers/gpu/drm/nouveau/dispnv04/overlay.c    |   6 +-
 drivers/gpu/drm/nouveau/dispnv50/head.c       |  13 +-
 drivers/gpu/drm/omapdrm/omap_crtc.c           |  10 +-
 drivers/gpu/drm/omapdrm/omap_plane.c          |   6 +-
 drivers/gpu/drm/rcar-du/rcar_du_crtc.c        |   7 +-
 drivers/gpu/drm/rockchip/rockchip_drm_vop.c   |   5 +-
 drivers/gpu/drm/stm/ltdc.c                    |   8 +-
 drivers/gpu/drm/sun4i/sun8i_vi_layer.c        |  10 +-
 drivers/gpu/drm/tidss/tidss_crtc.c            |   9 +-
 drivers/gpu/drm/tidss/tidss_plane.c           |  10 +-
 drivers/gpu/drm/vc4/vc4_crtc.c                |  16 +-
 include/drm/drm_color_mgmt.h                  |  49 +-
 include/drm/drm_crtc.h                        |  20 +
 include/drm/drm_plane.h                       |  47 +-
 39 files changed, 1074 insertions(+), 60 deletions(-)
 create mode 100644 Documentation/gpu/rfc/color_intentions.drawio
 create mode 100644 Documentation/gpu/rfc/color_intentions.svg
 create mode 100644 Documentation/gpu/rfc/colorpipe
 create mode 100644 Documentation/gpu/rfc/colorpipe.svg
 create mode 100644 Documentation/gpu/rfc/hdr-wide-gamut.rst
--
2.32.0
Use the new DRM RFC doc section to capture the RFC previously only described in the cover letter at https://patchwork.freedesktop.org/series/89506/
v3:
* Add sections on single-plane and multi-plane HDR
* Describe approach to define HW details vs approach to define SW intentions
* Link Jeremy Cline's excellent HDR summaries
* Outline intention behind overly verbose doc
* Describe FP16 use-case
* Clean up links
v2: create this doc
v1: n/a
Signed-off-by: Harry Wentland <harry.wentland@amd.com>
---
 Documentation/gpu/rfc/color_intentions.drawio |   1 +
 Documentation/gpu/rfc/color_intentions.svg    |   3 +
 Documentation/gpu/rfc/colorpipe               |   1 +
 Documentation/gpu/rfc/colorpipe.svg           |   3 +
 Documentation/gpu/rfc/hdr-wide-gamut.rst      | 580 ++++++++++++++++++
 Documentation/gpu/rfc/index.rst               |   1 +
 6 files changed, 589 insertions(+)
 create mode 100644 Documentation/gpu/rfc/color_intentions.drawio
 create mode 100644 Documentation/gpu/rfc/color_intentions.svg
 create mode 100644 Documentation/gpu/rfc/colorpipe
 create mode 100644 Documentation/gpu/rfc/colorpipe.svg
 create mode 100644 Documentation/gpu/rfc/hdr-wide-gamut.rst
diff --git a/Documentation/gpu/rfc/color_intentions.drawio b/Documentation/gpu/rfc/color_intentions.drawio new file mode 100644 index 000000000000..d62f3b24e1ec --- /dev/null +++ b/Documentation/gpu/rfc/color_intentions.drawio @@ -0,0 +1 @@ +<mxfile host="Electron" modified="2021-07-27T17:06:00.446Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/14.6.13 Chrome/91.0.4472.124 Electron/13.1.7 Safari/537.36" etag="5FhBvRxDzJPI4Jsj_73y" version="14.6.13" type="device"><diagram id="5iAH_SKWfpT4d7aif5q5" name="Page-1">7VhJl9owDP41OTIvC2E5DjDQvravtMybGY6GmMR9Tpw6Dkt/fZXEWUwgw1CWSy8QfbZkWfokBTRr6G8nHIXeN+Zgqpm6s9WskWaatt2HzwTYZYBhtK0McTlxJFYCM/IHS1CXaEwcHCkbBWNUkFAFlywI8FIoGOKcbdRtK0bVU0Pk4howWyJaR1+JI7z8GrpeLnzCxPXk0T1bLvgo3yyByEMO21Qg60mzhpwxkT352yGmSfDyuGR64yOrhWMcB+IUhcfJ5y/0ZUKC53AxfYnmr31j1TJkNtaIxvLG0luxy0OAHYiIFAMWwNeAszhwcGJYB4lx4TGXBYh+ZSwE0ADwFxZiJ/OJYsEA8oRP5Sr4zHdvif4DxCwH5gC09Ae9REZbeUYm7arSFHPiY4G5BDPHE2+PRkhCEYv5EjeFRTINcReLhn3dIo9QAJiBN3wHehxTJMha9QNJJrrFvkJ1ygh4aOqyavqSMXnN5AzKLWRuSaUy4/BQ8aKEUh58hBMHKNGhEIaBQ9YKNTq/44S9A4G3ooUocQPNekyKEQIOaSnW4cmV36mdKERBjo058vEiXq1SBZptG1d3VODUARU9AMV0H6GkyYWfk0Hv/bPrNpqtRmD2HKsA1i+wf/e9ElXrceMRgWchSim+gb6s1p5M1IjiVWlsjTlksblw6kSXCm2VsLm4qTTMvF961V6pH68MhdQfZbDVvk0T2xKR9TBbSvPKStm6EiHvXBdsUtdpPoatJrOWpSt3n26t+wwoDhwSuP/G+svT3OjVeV5M+yrPO9fiudG7+/Tudrv709vu3Ht69+86vot313vN736NFQeG7IKfNjZPGPfpIKkN+6mexOGd8Xm+9R/Xsz14NnWz2fe9IX3nxmR0zhvABXj5CWzXODgiUUhR4vX3WISxuBkR4T3sPxNvxUT9yNvDNagIYvnzOeue5Z8Q1tNf</diagram></mxfile> \ No newline at end of file diff --git a/Documentation/gpu/rfc/color_intentions.svg b/Documentation/gpu/rfc/color_intentions.svg new file mode 100644 index 000000000000..2f6b5f5813a3 --- /dev/null +++ b/Documentation/gpu/rfc/color_intentions.svg @@ -0,0 +1,3 @@ +<?xml version="1.0" encoding="UTF-8"?> +<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd"> +<svg xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" version="1.1" width="242px" height="362px" viewBox="-0.5 -0.5 242 362" content="<mxfile host="Electron" modified="2021-07-27T15:46:56.623Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/14.6.13 Chrome/91.0.4472.124 Electron/13.1.7 Safari/537.36" etag="yRHplrf8g5DRVJVrDlPI" version="14.6.13" type="device"><diagram id="5iAH_SKWfpT4d7aif5q5" name="Page-1">7Vhbl5owEP41PLqHy+LlcdXV9rQ9td09u+tjlAjpCYSGoNhf3wHCJaKsa7289EWYL5nJMPPNDKhZIz+ZchR635iDqWbqTqJZY800bXsAvymwzYG+ZeeAy4mTQ0YFPJE/WIK6RGPi4EjZKBijgoQquGRBgJdCwRDnbKNuWzGqnhoiFzeApyWiTfSVOMKTqKHr1cInTFxPHt235YKPis0SiDzksE0Nsh41a8QZE/mdn4wwTWNXxCXXmxxYLR3jOBDHKDxMP3+hL1MSPIeL2Us0fx0Yq45h5WbWiMbyiaW3YluEADsQESkGLIDLkLM4cHBqWAeJceExlwWIfmUsBNAA8BcWYivziWLBAPKET+Uq+My3b6n+HcSsAOYAdPQ7vULGiTwjl7Z1aYY58bHAXIK546m3ByMkoYjFfInbwiKZhriLRcu+XplH4D9m4A3fgh7HFAmyVv1Akoluua9UnTECHpq6LJqBZIwsGaNgUGEhd0sqVRmHm5oXFZTx4COc2EOJLoUwDB2yVqjR/R2n7B0KnIgOosQNNOshLUYIOKSlXIc7V14zO1GIggKbcOTjRbxaZQo03zap76jBmQMqugeK6S5CSZsLP6fD/vtnN220W43A7ClWAWw+wO6z75SoWo8bjwj8FKKM4htoy2rtyUSNKV5VxtaYQxbbC6dJdKlwrxK2EDe1hln0S6/eK/XDlaGQ+qMMtu6v08QSIvIeZktpXlupWlcqFJ3rjE3qMs3HsNVkNrJ04e7Ta3SfIcWBQwL331h/fpob/SbPy2lf53n3Ujw3+jef3r1eb3d6291bT+/BTcd3+e56q/k9aLBiz5Bd8OPG5hHjPhskjWE/09M4vDM+T7f+43K2h8+mbrb7vjOkb9yYjO5pA7gEzz+B7QYHxyQKKUq9/h6LMBZXIyK8h/1n4rWYqB94e7gEFUGsPp/z7ln9B2E9/gU=</diagram></mxfile>" style="background-color: rgb(255, 255, 255);"><defs/><g><path d="M 58.07 88 L 58.15 139.95" fill="none" stroke="#000000" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 58.16 145.2 L 54.65 138.21 L 58.15 139.95 L 61.65 138.2 Z" fill="#000000" stroke="#000000" stroke-miterlimit="10" pointer-events="all"/><rect x="8" y="8" width="100" height="80" fill="#ffffff" stroke="#000000" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject style="overflow: visible; text-align: left;" pointer-events="none" width="100%" height="100%" 
requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe flex-start; width: 98px; height: 1px; padding-top: 48px; margin-left: 10px;"><div style="box-sizing: border-box; font-size: 0; text-align: left; "><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: #000000; line-height: 1.2; pointer-events: all; white-space: normal; word-wrap: normal; "><div style="text-align: center"><span>Framebuffer</span></div><div><ul><li><span>RGB8</span></li><li><span>sRGB</span></li></ul></div></div></div></div></foreignObject><text x="10" y="52" fill="#000000" font-family="Helvetica" font-size="12px">Framebuffer...</text></switch></g><path d="M 118 208 L 118 241.63" fill="none" stroke="#000000" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 118 246.88 L 114.5 239.88 L 118 241.63 L 121.5 239.88 Z" fill="#000000" stroke="#000000" stroke-miterlimit="10" pointer-events="all"/><rect x="8" y="148" width="220" height="60" fill="#ffffff" stroke="#000000" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject style="overflow: visible; text-align: left;" pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 218px; height: 1px; padding-top: 178px; margin-left: 9px;"><div style="box-sizing: border-box; font-size: 0; text-align: center; "><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: #000000; line-height: 1.2; pointer-events: all; white-space: normal; word-wrap: normal; ">Blending</div></div></div></foreignObject><text x="118" y="182" fill="#000000" font-family="Helvetica" font-size="12px" text-anchor="middle">Blending</text></switch></g><path d="M 178.54 108 L 178.87 138.27" 
fill="none" stroke="#000000" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 178.93 143.52 L 175.35 136.56 L 178.87 138.27 L 182.35 136.48 Z" fill="#000000" stroke="#000000" stroke-miterlimit="10" pointer-events="all"/><rect x="128" y="8" width="100" height="100" fill="#ffffff" stroke="#000000" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject style="overflow: visible; text-align: left;" pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 98px; height: 1px; padding-top: 58px; margin-left: 129px;"><div style="box-sizing: border-box; font-size: 0; text-align: center; "><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: #000000; line-height: 1.2; pointer-events: all; white-space: normal; word-wrap: normal; ">Framebuffer<br /><ul><li style="text-align: left">P010</li><li style="text-align: left">PQ</li><li style="text-align: left">BT2020</li></ul></div></div></div></foreignObject><text x="178" y="62" fill="#000000" font-family="Helvetica" font-size="12px" text-anchor="middle">Framebuffer...</text></switch></g><rect x="68" y="248" width="100" height="100" fill="#ffffff" stroke="#000000" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject style="overflow: visible; text-align: left;" pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 98px; height: 1px; padding-top: 298px; margin-left: 69px;"><div style="box-sizing: border-box; font-size: 0; text-align: center; "><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: #000000; line-height: 1.2; pointer-events: 
all; white-space: normal; word-wrap: normal; ">Display Output<br /><ul><li style="text-align: left">RGB10</li><li style="text-align: left">PQ</li><li style="text-align: left">BT2020</li></ul></div></div></div></foreignObject><text x="118" y="302" fill="#000000" font-family="Helvetica" font-size="12px" text-anchor="middle">Display Output...</text></switch></g></g><switch><g requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"/><a transform="translate(0,-5)" xlink:href="https://www.diagrams.net/doc/faq/svg-export-text-problems" target="_blank"><text text-anchor="middle" font-size="10px" x="50%" y="100%">Viewer does not support full SVG 1.1</text></a></switch></svg> \ No newline at end of file diff --git a/Documentation/gpu/rfc/colorpipe b/Documentation/gpu/rfc/colorpipe new file mode 100644 index 000000000000..2d12490eddec --- /dev/null +++ b/Documentation/gpu/rfc/colorpipe @@ -0,0 +1 @@ +<mxfile host="Electron" modified="2021-07-27T14:17:36.119Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/14.6.13 Chrome/91.0.4472.124 Electron/13.1.7 Safari/537.36" etag="Gs2TTWVtoGmka55_Cxxx" version="14.6.13" type="device"><diagram id="5iAH_SKWfpT4d7aif5q5" 
name="Page-1">7ZpdU+IwFIZ/DTPuhUyb9AMuFZXdWZ1xBtTlMtLQdqdtOiEI7K/f1Cb9ICrdUUkclgtoTptwes7zpsmBHhylmzFFeXRDApz0gBVsevCiB4DrDvl7YdiWhgF0S0NI46A02bVhEv/BwmgJ6yoO8LJ1ISMkYXHeNs5JluE5a9kQpWTdvmxBkva35ijEimEyR4lqfYgDFgmrbVn1ie84DiPx1QNXnEiRvFgYlhEKyLphgpc9OKKEsPIo3YxwUsROxqXsd/XK2coxijPWpcPZ+MfP5H4cZ9P88fZ+OXsY2otTG5bDPKFkJe5YeMu2MgQ44BERzYxk/OOcklUW4GJgi7cIZREJSYaSa0JybrS58TdmbCvyiVaMcFPE0kSc5T7T7a+if9+VzZkY7rlxsWm1tqJVOld49GoUhGlJVnSO37p1QROiIWZvXAeqXHHGMUkx94f3ozhBLH5q+4EEbWF1XZ0QfiBy8i/5UdJzRVGKH1eLBaZKptppWUcxw5McPUdhzdXZToEYGVOGN28HU7152UGyLcRtO6K9bkhFKiVqqESK5OPj5R4tz6Ajz45OnoGSngCfzu7uywmTxhvjmK7m731MO5/GtHO0TDsdmXZ1Mq2m54RDPUZpir71gJdwx88f+WTthcXR9d3UPMaH2hn3DoP0JmYNonlr1jhT81w0tk249crA7SgDT6cM1CfvaHrTK0i74u+jycg47KGnHXv/K2DPA9ACv2/Zzh74n1u3mMY8Tnyh+uGK8DoqwtepCE9RxJQnNEV5HmehcWpw5M5cmxqg8xXUoOsh4HdEHlovp/0wzPsK8+cJzgITgXctdfqvVvtN4L1Pm/4HSrQOowD9NNvDjji/ltQD1V+GSoZMKsBUc7ExBRipnyNEuqpP70Ua6kRaDmxwDWYXa/01GEnxMWINu2KttQoD1F8yTC/DKJhrL8OA/yvwvYB3UYLWbScwvxKzS77+Sgz4EgVI3/dNq8SArvvS92pCdL0lMXexJmkw7INBCya+r+sPHKt6yd8E5aClp2KcHWwqx95BkroFNqnssys9/WUfoO6xjmVtBa2O4oFA5xMFqpu6kwbThi6v7MFOvUf78graSlAMfMhoEwPoKgat+2fppsHLq13yPe3LK/jC7szkrdluBP1PfEryZv0vwHIVUv+VEl7+BQ==</diagram></mxfile> \ No newline at end of file diff --git a/Documentation/gpu/rfc/colorpipe.svg b/Documentation/gpu/rfc/colorpipe.svg new file mode 100644 index 000000000000..f6b8ece2499d --- /dev/null +++ b/Documentation/gpu/rfc/colorpipe.svg @@ -0,0 +1,3 @@ +<?xml version="1.0" encoding="UTF-8"?> +<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd"> +<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" version="1.1" width="241px" height="657px" viewBox="-0.5 -0.5 241 657" content="<mxfile host="Electron" modified="2021-07-27T14:17:39.137Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/14.6.13 Chrome/91.0.4472.124 Electron/13.1.7 Safari/537.36" etag="ffS0jb-Ry8iliDxn_T30" version="14.6.13" 
type="device"><diagram id="5iAH_SKWfpT4d7aif5q5" name="Page-1">7ZpdU+IwFIZ/DTPuhUyb9AMuFZXdWZ1xBtTlMtLQdqdtOiEI7K/f1Cb9ICrdUUkclgtoTptwes7zpsmBHhylmzFFeXRDApz0gBVsevCiB4DrDvl7YdiWhgF0S0NI46A02bVhEv/BwmgJ6yoO8LJ1ISMkYXHeNs5JluE5a9kQpWTdvmxBkva35ijEimEyR4lqfYgDFgmrbVn1ie84DiPx1QNXnEiRvFgYlhEKyLphgpc9OKKEsPIo3YxwUsROxqXsd/XK2coxijPWpcPZ+MfP5H4cZ9P88fZ+OXsY2otTG5bDPKFkJe5YeMu2MgQ44BERzYxk/OOcklUW4GJgi7cIZREJSYaSa0JybrS58TdmbCvyiVaMcFPE0kSc5T7T7a+if9+VzZkY7rlxsWm1tqJVOld49GoUhGlJVnSO37p1QROiIWZvXAeqXHHGMUkx94f3ozhBLH5q+4EEbWF1XZ0QfiBy8i/5UdJzRVGKH1eLBaZKptppWUcxw5McPUdhzdXZToEYGVOGN28HU7152UGyLcRtO6K9bkhFKiVqqESK5OPj5R4tz6Ajz45OnoGSngCfzu7uywmTxhvjmK7m731MO5/GtHO0TDsdmXZ1Mq2m54RDPUZpir71gJdwx88f+WTthcXR9d3UPMaH2hn3DoP0JmYNonlr1jhT81w0tk249crA7SgDT6cM1CfvaHrTK0i74u+jycg47KGnHXv/K2DPA9ACv2/Zzh74n1u3mMY8Tnyh+uGK8DoqwtepCE9RxJQnNEV5HmehcWpw5M5cmxqg8xXUoOsh4HdEHlovp/0wzPsK8+cJzgITgXctdfqvVvtN4L1Pm/4HSrQOowD9NNvDjji/ltQD1V+GSoZMKsBUc7ExBRipnyNEuqpP70Ua6kRaDmxwDWYXa/01GEnxMWINu2KttQoD1F8yTC/DKJhrL8OA/yvwvYB3UYLWbScwvxKzS77+Sgz4EgVI3/dNq8SArvvS92pCdL0lMXexJmkw7INBCya+r+sPHKt6yd8E5aClp2KcHWwqx95BkroFNqnssys9/WUfoO6xjmVtBa2O4oFA5xMFqpu6kwbThi6v7MFOvUf78graSlAMfMhoEwPoKgat+2fppsHLq13yPe3LK/jC7szkrdluBP1PfEryZv0vwHIVUv+VEl7+BQ==</diagram></mxfile>" style="background-color: rgb(255, 255, 255);"><defs/><g><path d="M 58 58 L 58 81.63" fill="none" stroke="#000000" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 58 86.88 L 54.5 79.88 L 58 81.63 L 61.5 79.88 Z" fill="#000000" stroke="#000000" stroke-miterlimit="10" pointer-events="all"/><rect x="8" y="8" width="100" height="50" fill="#ffffff" stroke="#000000" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject style="overflow: visible; text-align: left;" pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 
98px; height: 1px; padding-top: 33px; margin-left: 9px;"><div style="box-sizing: border-box; font-size: 0; text-align: center; "><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: #000000; line-height: 1.2; pointer-events: all; white-space: normal; word-wrap: normal; ">Framebuffer</div></div></div></foreignObject><text x="58" y="37" fill="#000000" font-family="Helvetica" font-size="12px" text-anchor="middle">Framebuffer</text></switch></g><path d="M 58 128 L 58 151.63" fill="none" stroke="#000000" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 58 156.88 L 54.5 149.88 L 58 151.63 L 61.5 149.88 Z" fill="#000000" stroke="#000000" stroke-miterlimit="10" pointer-events="all"/><rect x="8" y="88" width="100" height="40" fill="#ffffff" stroke="#000000" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject style="overflow: visible; text-align: left;" pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 98px; height: 1px; padding-top: 108px; margin-left: 9px;"><div style="box-sizing: border-box; font-size: 0; text-align: center; "><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: #000000; line-height: 1.2; pointer-events: all; white-space: normal; word-wrap: normal; ">de-YUV matrix</div></div></div></foreignObject><text x="58" y="112" fill="#000000" font-family="Helvetica" font-size="12px" text-anchor="middle">de-YUV matrix</text></switch></g><path d="M 58 198 L 58 221.63" fill="none" stroke="#000000" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 58 226.88 L 54.5 219.88 L 58 221.63 L 61.5 219.88 Z" fill="#000000" stroke="#000000" stroke-miterlimit="10" pointer-events="all"/><rect x="8" y="158" width="100" height="40" fill="#ffffff" stroke="#000000" 
pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject style="overflow: visible; text-align: left;" pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 98px; height: 1px; padding-top: 178px; margin-left: 9px;"><div style="box-sizing: border-box; font-size: 0; text-align: center; "><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: #000000; line-height: 1.2; pointer-events: all; white-space: normal; word-wrap: normal; ">(de-Gamma)<br />LUT</div></div></div></foreignObject><text x="58" y="182" fill="#000000" font-family="Helvetica" font-size="12px" text-anchor="middle">(de-Gamma)...</text></switch></g><path d="M 58 268 L 58 296.63" fill="none" stroke="#000000" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 58 301.88 L 54.5 294.88 L 58 296.63 L 61.5 294.88 Z" fill="#000000" stroke="#000000" stroke-miterlimit="10" pointer-events="all"/><rect x="8" y="228" width="100" height="40" fill="#ffffff" stroke="#000000" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject style="overflow: visible; text-align: left;" pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 98px; height: 1px; padding-top: 248px; margin-left: 9px;"><div style="box-sizing: border-box; font-size: 0; text-align: center; "><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: #000000; line-height: 1.2; pointer-events: all; white-space: normal; word-wrap: normal; ">CTM / CSC</div></div></div></foreignObject><text x="58" y="252" fill="#000000" font-family="Helvetica" font-size="12px" 
text-anchor="middle">CTM / CSC</text></switch></g><path d="M 58 343 L 57.46 362.47" fill="none" stroke="#000000" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 57.31 367.72 L 54.01 360.63 L 57.46 362.47 L 61 360.82 Z" fill="#000000" stroke="#000000" stroke-miterlimit="10" pointer-events="all"/><rect x="8" y="303" width="100" height="40" fill="#ffffff" stroke="#000000" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject style="overflow: visible; text-align: left;" pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 98px; height: 1px; padding-top: 323px; margin-left: 9px;"><div style="box-sizing: border-box; font-size: 0; text-align: center; "><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: #000000; line-height: 1.2; pointer-events: all; white-space: normal; word-wrap: normal; ">Tonemapping</div></div></div></foreignObject><text x="58" y="327" fill="#000000" font-family="Helvetica" font-size="12px" text-anchor="middle">Tonemapping</text></switch></g><path d="M 118 428 L 118 451.63" fill="none" stroke="#000000" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 118 456.88 L 114.5 449.88 L 118 451.63 L 121.5 449.88 Z" fill="#000000" stroke="#000000" stroke-miterlimit="10" pointer-events="all"/><rect x="8" y="368" width="220" height="60" fill="#ffffff" stroke="#000000" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject style="overflow: visible; text-align: left;" pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 218px; height: 1px; padding-top: 398px; margin-left: 
9px;"><div style="box-sizing: border-box; font-size: 0; text-align: center; "><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: #000000; line-height: 1.2; pointer-events: all; white-space: normal; word-wrap: normal; ">Blending</div></div></div></foreignObject><text x="118" y="402" fill="#000000" font-family="Helvetica" font-size="12px" text-anchor="middle">Blending</text></switch></g><path d="M 178 58 L 178 81.63" fill="none" stroke="#000000" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 178 86.88 L 174.5 79.88 L 178 81.63 L 181.5 79.88 Z" fill="#000000" stroke="#000000" stroke-miterlimit="10" pointer-events="all"/><rect x="128" y="8" width="100" height="50" fill="#ffffff" stroke="#000000" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject style="overflow: visible; text-align: left;" pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 98px; height: 1px; padding-top: 33px; margin-left: 129px;"><div style="box-sizing: border-box; font-size: 0; text-align: center; "><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: #000000; line-height: 1.2; pointer-events: all; white-space: normal; word-wrap: normal; ">Framebuffer</div></div></div></foreignObject><text x="178" y="37" fill="#000000" font-family="Helvetica" font-size="12px" text-anchor="middle">Framebuffer</text></switch></g><path d="M 178 128 L 178 151.63" fill="none" stroke="#000000" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 178 156.88 L 174.5 149.88 L 178 151.63 L 181.5 149.88 Z" fill="#000000" stroke="#000000" stroke-miterlimit="10" pointer-events="all"/><rect x="128" y="88" width="100" height="40" fill="#ffffff" stroke="#000000" pointer-events="all"/><g transform="translate(-0.5 
-0.5)"><switch><foreignObject style="overflow: visible; text-align: left;" pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 98px; height: 1px; padding-top: 108px; margin-left: 129px;"><div style="box-sizing: border-box; font-size: 0; text-align: center; "><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: #000000; line-height: 1.2; pointer-events: all; white-space: normal; word-wrap: normal; ">de-YUV matrix</div></div></div></foreignObject><text x="178" y="112" fill="#000000" font-family="Helvetica" font-size="12px" text-anchor="middle">de-YUV matrix</text></switch></g><path d="M 178 198 L 178 221.63" fill="none" stroke="#000000" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 178 226.88 L 174.5 219.88 L 178 221.63 L 181.5 219.88 Z" fill="#000000" stroke="#000000" stroke-miterlimit="10" pointer-events="all"/><rect x="128" y="158" width="100" height="40" fill="#ffffff" stroke="#000000" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject style="overflow: visible; text-align: left;" pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 98px; height: 1px; padding-top: 178px; margin-left: 129px;"><div style="box-sizing: border-box; font-size: 0; text-align: center; "><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: #000000; line-height: 1.2; pointer-events: all; white-space: normal; word-wrap: normal; ">(de-Gamma)<br />LUT</div></div></div></foreignObject><text x="178" y="182" fill="#000000" font-family="Helvetica" font-size="12px" 
text-anchor="middle">(de-Gamma)...</text></switch></g><path d="M 178 268 L 178 296.63" fill="none" stroke="#000000" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 178 301.88 L 174.5 294.88 L 178 296.63 L 181.5 294.88 Z" fill="#000000" stroke="#000000" stroke-miterlimit="10" pointer-events="all"/><rect x="128" y="228" width="100" height="40" fill="#ffffff" stroke="#000000" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject style="overflow: visible; text-align: left;" pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 98px; height: 1px; padding-top: 248px; margin-left: 129px;"><div style="box-sizing: border-box; font-size: 0; text-align: center; "><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: #000000; line-height: 1.2; pointer-events: all; white-space: normal; word-wrap: normal; ">CTM / CSC</div></div></div></foreignObject><text x="178" y="252" fill="#000000" font-family="Helvetica" font-size="12px" text-anchor="middle">CTM / CSC</text></switch></g><path d="M 178 343 L 178.71 362.48" fill="none" stroke="#000000" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 178.9 367.72 L 175.15 360.85 L 178.71 362.48 L 182.14 360.6 Z" fill="#000000" stroke="#000000" stroke-miterlimit="10" pointer-events="all"/><rect x="128" y="303" width="100" height="40" fill="#ffffff" stroke="#000000" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject style="overflow: visible; text-align: left;" pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 98px; height: 1px; padding-top: 323px; 
margin-left: 129px;"><div style="box-sizing: border-box; font-size: 0; text-align: center; "><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: #000000; line-height: 1.2; pointer-events: all; white-space: normal; word-wrap: normal; ">Tonemapping</div></div></div></foreignObject><text x="178" y="327" fill="#000000" font-family="Helvetica" font-size="12px" text-anchor="middle">Tonemapping</text></switch></g><path d="M 118 498 L 118 521.63" fill="none" stroke="#000000" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 118 526.88 L 114.5 519.88 L 118 521.63 L 121.5 519.88 Z" fill="#000000" stroke="#000000" stroke-miterlimit="10" pointer-events="all"/><rect x="68" y="458" width="100" height="40" fill="#ffffff" stroke="#000000" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject style="overflow: visible; text-align: left;" pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 98px; height: 1px; padding-top: 478px; margin-left: 69px;"><div style="box-sizing: border-box; font-size: 0; text-align: center; "><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: #000000; line-height: 1.2; pointer-events: all; white-space: normal; word-wrap: normal; ">(Tonemapping)<br />LUT</div></div></div></foreignObject><text x="118" y="482" fill="#000000" font-family="Helvetica" font-size="12px" text-anchor="middle">(Tonemapping)...</text></switch></g><path d="M 118 568 L 118 596.63" fill="none" stroke="#000000" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 118 601.88 L 114.5 594.88 L 118 596.63 L 121.5 594.88 Z" fill="#000000" stroke="#000000" stroke-miterlimit="10" pointer-events="all"/><rect x="68" y="528" width="100" height="40" fill="#ffffff" stroke="#000000" pointer-events="all"/><g 
transform="translate(-0.5 -0.5)"><switch><foreignObject style="overflow: visible; text-align: left;" pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 98px; height: 1px; padding-top: 548px; margin-left: 69px;"><div style="box-sizing: border-box; font-size: 0; text-align: center; "><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: #000000; line-height: 1.2; pointer-events: all; white-space: normal; word-wrap: normal; ">CTM / CSC</div></div></div></foreignObject><text x="118" y="552" fill="#000000" font-family="Helvetica" font-size="12px" text-anchor="middle">CTM / CSC</text></switch></g><rect x="68" y="603" width="100" height="40" fill="#ffffff" stroke="#000000" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject style="overflow: visible; text-align: left;" pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 98px; height: 1px; padding-top: 623px; margin-left: 69px;"><div style="box-sizing: border-box; font-size: 0; text-align: center; "><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: #000000; line-height: 1.2; pointer-events: all; white-space: normal; word-wrap: normal; ">(Gamma)<br />LUT</div></div></div></foreignObject><text x="118" y="627" fill="#000000" font-family="Helvetica" font-size="12px" text-anchor="middle">(Gamma)...</text></switch></g></g><switch><g requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"/><a transform="translate(0,-5)" xlink:href="https://www.diagrams.net/doc/faq/svg-export-text-problems" target="_blank"><text text-anchor="middle" 
font-size="10px" x="50%" y="100%">Viewer does not support full SVG 1.1</text></a></switch></svg> \ No newline at end of file diff --git a/Documentation/gpu/rfc/hdr-wide-gamut.rst b/Documentation/gpu/rfc/hdr-wide-gamut.rst new file mode 100644 index 000000000000..e463670191ab --- /dev/null +++ b/Documentation/gpu/rfc/hdr-wide-gamut.rst @@ -0,0 +1,580 @@ +============================== +HDR & Wide Color Gamut Support +============================== + +.. role:: wy-text-strike + +ToDo +==== + +* :wy-text-strike:`Reformat as RST kerneldoc` - done +* :wy-text-strike:`Don't use color_encoding for color_space definitions` - done +* :wy-text-strike:`Update SDR luminance description and reasoning` - done +* :wy-text-strike:`Clarify 3D LUT required for some color space transformations` - done +* :wy-text-strike:`Highlight need for named color space and EOTF definitions` - done +* :wy-text-strike:`Define transfer function API` - done +* :wy-text-strike:`Draft upstream plan` - done +* :wy-text-strike:`Reference to wayland plan` - done +* Reference to Chrome plans +* Sketch view of HW pipeline for couple of HW implementations + + +Upstream Plan +============= + +* Reach consensus on DRM/KMS API +* Implement support in amdgpu +* Implement IGT tests +* Add API support to Weston, ChromiumOS, or other canonical open-source project interested in HDR +* Merge user-space +* Merge kernel patches + + +History +======= + +v3: + +* Add sections on single-plane and multi-plane HDR +* Describe approach to define HW details vs approach to define SW intentions +* Link Jeremy Cline's excellent HDR summaries +* Outline intention behind overly verbose doc +* Describe FP16 use-case +* Clean up links + +v2: create this doc + +v1: n/a + + +Introduction +============ + +We are looking to enable HDR support for a couple of single-plane and +multi-plane scenarios. To do this effectively we recommend new interfaces +to drm_plane. 
Below I'll give a bit of background on HDR and why we +propose these interfaces. + +As an RFC doc this document is more verbose than what we would want from +an eventual uAPI doc. This is intentional in order to ensure interested +parties are all on the same page and to facilitate discussion if there +is disagreement on aspects of the intentions behind the proposed uAPI. + + +Overview and background +======================= + +Jeremy Cline did a much better job describing this background; I highly +recommend you read `Jeremy Cline's HDR primer`_. + +.. _Jeremy Cline's HDR primer: https://www.jcline.org/blog/fedora/graphics/hdr/2021/05/07/hdr-in-linux-p1.h... + +Defining a pixel's luminance +---------------------------- + +The luminance space of pixels in a framebuffer/plane presented to the +display is not well defined in the DRM/KMS APIs. It is usually assumed to +be in a 2.2 or 2.4 gamma space and has no mapping to an absolute luminance +value; it is interpreted in relative terms. + +Luminance can be measured and described in absolute terms as candela +per meter squared, or cd/m2, or nits. Even though a pixel value can be +mapped to luminance in a linear fashion, doing so without losing a lot of +detail requires 16-bpc color depth. The reason for this is that human +perception can distinguish luminance deltas of roughly 0.5-1%. A +linear representation is suboptimal, wasting precision in the highlights +and losing precision in the shadows. + +A gamma curve is a decent approximation of a human's perception of +luminance, but the `PQ (perceptual quantizer) function`_ improves on +it. It also defines luminance values in absolute terms, with the +highest value being 10,000 nits and the lowest 0.0005 nits. + +Using content that is defined in PQ space we can approximate the real +world much more closely. + +Here are some examples of real-life objects and their approximate +luminance values: + +..
_PQ (perceptual quantizer) function: https://en.wikipedia.org/wiki/High-dynamic-range_video#Perceptual_Quantizer + +.. flat-table:: + :header-rows: 1 + + * - Object + - Luminance in nits + + * - Fluorescent light + - 10,000 + + * - Highlights + - 1,000 - sunlight + + * - White Objects + - 250 - 1,000 + + * - Typical Objects + - 1 - 250 + + * - Shadows + - 0.01 - 1 + + * - Ultra Blacks + - 0 - 0.0005 + + +Transfer functions +------------------ + +Traditionally we used the terms gamma and de-gamma to describe the +encoding of a pixel's luminance value and the operation to transfer from +a linear luminance space to the non-linear space used to encode the +pixels. Since some newer encodings don't use a gamma curve, I suggest +we refer to non-linear encodings using the terms `EOTF, and OETF`_, or +simply as transfer functions in general. + +The EOTF (Electro-Optical Transfer Function) describes how to transfer +from an electrical signal to an optical signal. This was traditionally +done by the de-gamma function. + +The OETF (Opto-Electronic Transfer Function) describes how to transfer +from an optical signal to an electrical signal. This was traditionally +done by the gamma function. + +More generally we can name the transfer function describing the transform +between scanout and blending space the **input transfer function**, and +the transfer function describing the transform from blending space to the +output space the **output transfer function**. + + +.. _EOTF, and OETF: https://en.wikipedia.org/wiki/Transfer_functions_in_imaging + +Mastering Luminances +-------------------- + +Even though we are able to describe the absolute luminance of a pixel +using the PQ 2084 EOTF, we are presented with physical limitations of the +display technologies on the market today. Here are a few examples of +luminance ranges of displays. + +..
flat-table:: + :header-rows: 1 + + * - Display + - Luminance range in nits + + * - Typical PC display + - 0.3 - 200 + + * - Excellent LCD HDTV + - 0.3 - 400 + + * - HDR LCD w/ local dimming + - 0.05 - 1,500 + +Since no display can currently show the full 0.0005 to 10,000 nits +luminance range of PQ the display will need to tone-map the HDR content, +i.e. to fit the content within a display's capabilities. To assist +with tone-mapping, HDR content is usually accompanied by metadata +that describes (among other things) the minimum and maximum mastering +luminance, i.e. the minimum and maximum luminance of the display that +was used to master the HDR content. + +The HDR metadata is currently defined on the drm_connector via the +hdr_output_metadata blob property. + +It might be useful to define per-plane HDR metadata, as different planes +might have been mastered differently. + +.. _SDR Luminance: + +SDR Luminance +------------- + +Traditional SDR content's maximum white luminance is not well defined. +Some like to define it at 80 nits, others at 200 nits. It also depends +to a large extent on the environmental viewing conditions. In practice +this means that we need to define the maximum SDR white luminance, either +in nits, or as a ratio. + +`One Windows API`_ defines it as a ratio against 80 nits. + +`Another Windows API`_ defines it as a nits value. + +The `Wayland color management proposal`_ uses Apple's definition of EDR as a +ratio of the HDR range vs the SDR range. + +If a display's maximum HDR white level is correctly reported it is trivial +to convert between all of the above representations of SDR white level. If +it is not, defining SDR luminance as a nits value, or as a ratio vs a fixed +nits value, is preferred, assuming we are blending in linear space. + +It is our experience that many HDR displays do not report their maximum +white level correctly. + +.. _One Windows API: https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/dispmprt/ns-di... +..
_Another Windows API: https://docs.microsoft.com/en-us/uwp/api/windows.graphics.display.advancedco... +.. _Wayland color management proposal: https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable... + +Let There Be Color +------------------ + +So far we've only talked about luminance, ignoring colors altogether. Just +like in the luminance space, traditionally the color space of display +outputs has not been well defined. Similar to how an EOTF defines a +mapping of pixel data to an absolute luminance value, the color space +maps color information for each pixel onto the CIE 1931 chromaticity +space. This can be thought of as a mapping to an absolute, real-life, +color value. + +A color space is defined by its primaries and white point. The primaries +and white point are expressed as coordinates in the CIE 1931 color +space. Think of the red primary as the reddest red that can be displayed +within the color space. Same for green and blue. + +Examples of color spaces are: + +.. flat-table:: + :header-rows: 1 + + * - Color Space + - Description + + * - BT 601 + - similar to BT 709 + + * - BT 709 + - used by sRGB content; ~53% of BT 2020 + + * - DCI-P3 + - used by most HDR displays; ~72% of BT 2020 + + * - BT 2020 + - standard for most HDR content + + + +Color Primaries and White Point +------------------------------- + +Just like displays can currently not represent the entire 0.0005 - +10,000 nits HDR range of the PQ 2084 EOTF, they are currently not capable +of representing the entire BT.2020 color gamut. For this reason video +content will often specify the color primaries and white point used to +master the video, in order to allow displays to map the image as +faithfully as possible onto the display's gamut. + + +Displays and Tonemapping +------------------------ + +External displays are able to do their own tone and color mapping, based +on the mastering luminance, color primaries, and white point defined in +the HDR metadata.
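For reference, the hdr_output_metadata blob mentioned above has a fixed layout in the uAPI headers. Below is a hedged sketch of how userspace might fill it for PQ BT.2020 content; the struct definitions are abbreviated copies from include/uapi/drm/drm_mode.h only so the example is self-contained, and the mastering values are illustrative:

```c
#include <stdint.h>
#include <string.h>

/*
 * Abbreviated copy of the layout in include/uapi/drm/drm_mode.h,
 * reproduced here only to keep the example self-contained.
 */
struct hdr_metadata_infoframe {
	uint8_t eotf;
	uint8_t metadata_type;
	struct { uint16_t x, y; } display_primaries[3];
	struct { uint16_t x, y; } white_point;
	uint16_t max_display_mastering_luminance; /* units of 1 cd/m2 */
	uint16_t min_display_mastering_luminance; /* units of 0.0001 cd/m2 */
	uint16_t max_cll;
	uint16_t max_fall;
};

struct hdr_output_metadata {
	uint32_t metadata_type;
	struct hdr_metadata_infoframe hdmi_metadata_type1;
};

#define HDMI_EOTF_SMPTE_ST2084 2 /* PQ, per CTA-861-G */

/*
 * Fill connector HDR metadata for PQ BT.2020 content mastered between
 * min_nits and max_nits. Chromaticity coordinates are encoded in units
 * of 0.00002, per CTA-861-G. The primaries are listed here in R, G, B
 * order for readability; consult CTA-861-G/ST 2086 when producing real
 * metadata.
 */
static void fill_hdr_output_metadata(struct hdr_output_metadata *m,
				     double max_nits, double min_nits)
{
	struct hdr_metadata_infoframe *i = &m->hdmi_metadata_type1;

	memset(m, 0, sizeof(*m));
	i->eotf = HDMI_EOTF_SMPTE_ST2084;
	/* BT.2020 primaries: R(0.708, 0.292) G(0.170, 0.797) B(0.131, 0.046) */
	i->display_primaries[0].x = 35400; i->display_primaries[0].y = 14600;
	i->display_primaries[1].x = 8500;  i->display_primaries[1].y = 39850;
	i->display_primaries[2].x = 6550;  i->display_primaries[2].y = 2300;
	/* D65 white point (0.3127, 0.3290) */
	i->white_point.x = 15635;
	i->white_point.y = 16450;
	i->max_display_mastering_luminance = (uint16_t)(max_nits + 0.5);
	i->min_display_mastering_luminance = (uint16_t)(min_nits * 10000.0 + 0.5);
}
```

Userspace would then wrap the struct in a property blob (e.g. via libdrm's drmModeCreatePropertyBlob) and attach it to the connector's HDR_OUTPUT_METADATA property.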
+ +Some internal panels might not include the complex HW to do tone and color +mapping on their own and will require the display driver to perform +appropriate mapping. + + +How are we solving the problem? +=============================== + +Single-plane +------------ + +If a single drm_plane is used, no further work is required. The compositor +will provide one HDR plane alongside a drm_connector's hdr_output_metadata +and the display HW will output this plane without further processing if +no CRTC LUTs are provided. + +If desired, a compositor can use the CRTC LUTs for HDR content, but without +support for PWL or multi-segmented LUTs the quality of the operation is +expected to be subpar for HDR content. + + +Multi-plane +----------- + +In multi-plane configurations we need to solve the problem of blending +HDR and SDR content. This blending should be done in linear space and +therefore requires framebuffer data that is presented in linear space +or a way to convert non-linear data to linear space. Additionally, +we need a way to define the luminance of any SDR content in relation +to the HDR content. + +In order to present framebuffer data in linear space without losing a +lot of precision, it needs to be presented using 16 bpc precision. + + +Defining HW Details +------------------- + +One way to take full advantage of modern HW's color pipelines is by +defining a "generic" pipeline that matches all capable HW. Something +like this, which I took `from Uma Shankar`_ and expanded on: + +.. _from Uma Shankar: https://patchwork.freedesktop.org/series/90826/ + +.. kernel-figure:: colorpipe.svg + +I intentionally put de-Gamma and Gamma in parentheses in my graph +as they describe the intention of the block but not necessarily a +strict definition of how a userspace implementation is required to +use them. + +The de-Gamma and Gamma blocks are named LUT, but in some HW +implementations they could be fixed-function curves with no programmable +LUT available.
See +the definitions for AMD's `latest dGPU generation`_ as an example. + +.. _latest dGPU generation: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/driver... + +I renamed the "Plane Gamma LUT" and "CRTC De-Gamma LUT" to "Tonemapping" +as we generally don't want to re-apply gamma before blending, or do +de-gamma post blending. These blocks are generally intended for +tonemapping purposes. + +Tonemapping in this case could be a simple nits value or `EDR`_ to describe +how to scale the :ref:`SDR luminance`. + +Tonemapping could also include the ability to use a 3D LUT, which might be +accompanied by a 1D shaper LUT. The shaper LUT is required in order to +ensure a 3D LUT with limited entries (e.g. 9x9x9, or 17x17x17) operates +in perceptual (non-linear) space, so as to spread the limited +entries evenly across the perceived space. + +.. _EDR: https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable... + +Creating a model that is flexible enough to define color pipelines for +a wide variety of HW is challenging, though not impossible. Implementing +support for such a flexible definition in userspace, though, amounts +to essentially writing color pipeline drivers for each HW. + + +Defining SW Intentions +---------------------- + +An alternative to describing the HW color pipeline in enough detail to +be useful for color management and HDR purposes is to instead define +SW intentions. + +.. kernel-figure:: color_intentions.svg + +This greatly simplifies the API and lets the driver do what a driver +does best: figure out how to program the HW to achieve the desired +effect. + +The above diagram could include white point, primaries, and maximum +peak and average white levels in order to facilitate tone mapping. + +At this point I suggest keeping tonemapping (other than an SDR luminance +adjustment) out of the current DRM/KMS API. Most HDR displays are capable +of tonemapping.
If for some reason tonemapping is still desired on +a plane, a shader might be a better way of doing that instead of relying +on display HW. + +In some ways this mirrors how various userspace APIs treat HDR: + * Gstreamer's `GstVideoTransferFunction`_ + * EGL's `EGL_EXT_gl_colorspace_bt2020_pq`_ extension + * Vulkan's `VkColorSpaceKHR`_ + +.. _GstVideoTransferFunction: https://gstreamer.freedesktop.org/documentation/video/video-color.html?gi-la... +.. _EGL_EXT_gl_colorspace_bt2020_pq: https://www.khronos.org/registry/EGL/extensions/EXT/EGL_EXT_gl_colorspace_bt... +.. _VkColorSpaceKHR: https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/vkspec.htm... + + +A hybrid approach to the API +---------------------------- + +Our current proposal takes a hybrid approach, defining an API to specify +input and output transfer functions, as well as an SDR boost and an +input color space definition. + +We would like to solicit feedback and encourage discussion around the +merits and weaknesses of these approaches. This question is at the core +of defining a good API and we'd like to get it right. + + +Input and Output Transfer functions +----------------------------------- + +We define an input transfer function on drm_plane to describe the +transform from framebuffer to blending space. + +We define an output transfer function on drm_crtc to describe the +transform from blending space to display space. + +The transfer function can be a pre-defined function, such as the PQ EOTF, or +a custom LUT. A driver will be able to specify support for specific +transfer functions, including custom ones. + +Defining the transfer function in this way allows us to support it on HW +that uses ROMs to implement these transforms, as well as on HW that uses +LUT definitions that are complex and don't map easily onto a standard LUT +definition. + +We will not define per-plane LUTs in this patchset as the scope of our +current work only deals with pre-defined transfer functions.
This API has +the flexibility to add custom 1D or 3D LUTs at a later date. + +In order to support the existing 1D de-gamma and gamma LUTs on the drm_crtc +we will include a "custom 1D" enum value to indicate that the custom gamma and +de-gamma 1D LUTs should be used. + +Possible transfer functions: + +.. flat-table:: + :header-rows: 1 + + * - Transfer Function + - Description + + * - Gamma 2.2 + - a simple 2.2 gamma function + + * - sRGB + - 2.4 gamma with small initial linear section + + * - PQ 2084 + - SMPTE ST 2084; used for HDR video and allows for up to 10,000 nit support + + * - Linear + - Linear relationship between pixel value and luminance value + + * - Custom 1D + - Custom 1D de-gamma and gamma LUTs; one LUT per color + + * - Custom 3D + - Custom 3D LUT (to be defined) + + +Describing SDR Luminance +------------------------ + +Since many displays do not correctly advertise their HDR white level we +propose to define the SDR white level in nits. + +We define a new drm_plane property to specify the white level of an SDR +plane. + + +Defining the color space +------------------------ + +We propose to add a new color space property to drm_plane to define a +plane's color space. + +While some color space conversions can be performed with a simple color +transformation matrix (CTM), others require a 3D LUT. + + +Defining mastering color space and luminance +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +ToDo + + + +Pixel Formats +~~~~~~~~~~~~~ + +The pixel formats, such as ARGB8888, ARGB2101010, P010, or FP16, are +unrelated to color space and EOTF definitions. HDR pixels can be formatted +in different ways, but in order not to lose precision HDR content requires +at least 10 bpc of precision. For this reason ARGB2101010, P010, and FP16 are +the obvious candidates for HDR. ARGB2101010 and P010 have the advantage +of requiring only half the bandwidth of FP16, while FP16 has the advantage +of enough precision to operate in a linear space, i.e. without an EOTF.
+ + +Use Cases +========= + +RGB10 HDR plane - composited HDR video & desktop +------------------------------------------------ + +A single, composited plane of HDR content. The use-case is a video player +on a desktop with the compositor owning the composition of SDR and HDR +content. The content shall be PQ BT.2020 formatted. The drm_connector's +hdr_output_metadata shall be set. + + +P010 HDR video plane + RGB8 SDR desktop plane +--------------------------------------------- +A normal 8 bpc desktop plane, with a P010 HDR video plane underlaid. The +HDR plane shall be PQ BT.2020 formatted. The desktop plane shall specify +an SDR boost value. The drm_connector's hdr_output_metadata shall be set. + + +One XRGB8888 SDR Plane - HDR output +----------------------------------- + +In order to support a smooth transition we recommend that an OS supporting +HDR output provide the hdr_output_metadata on the drm_connector to +configure the output for HDR, even when the content is only SDR. This will +allow for a smooth transition between SDR-only and HDR content. In this +use-case the SDR max luminance value should be provided on the drm_plane. + +In DCN we will de-PQ or de-Gamma all input in order to blend in linear +space. For SDR content we will also apply any desired boost before +blending. After blending we will then re-apply the PQ EOTF and do RGB +to YCbCr conversion if needed. + +FP16 HDR linear planes +---------------------- + +These will require a transformation into the display's encoding (e.g. PQ) +using the CRTC LUT. Current CRTC LUTs lack the precision in the +dark areas to do the conversion without losing detail. + +One of the newly defined output transfer functions, or a PWL or `multi-segmented +LUT`_, can be used to facilitate the conversion to PQ, HLG, or another +encoding supported by displays. + +..
_multi-segmented LUT: https://patchwork.freedesktop.org/series/90822/ + + +User Space +========== + +Gnome & GStreamer +----------------- + +See Jeremy Cline's `HDR in Linux: Part 2`_. + +.. _HDR in Linux: Part 2: https://www.jcline.org/blog/fedora/graphics/hdr/2021/06/28/hdr-in-linux-p2.h... + + +Wayland +------- + +See `Wayland Color Management and HDR Design Goals`_. + +.. _Wayland Color Management and HDR Design Goals: https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable... + + +ChromeOS Ozone +-------------- + +ToDo + + +HW support +========== + +ToDo, describe pipeline on a couple different HW platforms + + +Further Reading +=============== + +* https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable... +* http://downloads.bbc.co.uk/rd/pubs/whp/whp-pdf-files/WHP309.pdf +* https://app.spectracal.com/Documents/White%20Papers/HDR_Demystified.pdf +* https://www.jcline.org/blog/fedora/graphics/hdr/2021/05/07/hdr-in-linux-p1.h... +* https://www.jcline.org/blog/fedora/graphics/hdr/2021/06/28/hdr-in-linux-p2.h... + + diff --git a/Documentation/gpu/rfc/index.rst b/Documentation/gpu/rfc/index.rst index 05670442ca1b..8d8430cfdde1 100644 --- a/Documentation/gpu/rfc/index.rst +++ b/Documentation/gpu/rfc/index.rst @@ -19,3 +19,4 @@ host such documentation: .. toctree::
i915_gem_lmem.rst + hdr-wide-gamut.rst
Hi,
Thanks for having a stab at this; it's a massively complex topic to solve.
Do you have the HTML rendered somewhere for convenience?
On Fri, Jul 30, 2021 at 04:41:29PM -0400, Harry Wentland wrote:
Use the new DRM RFC doc section to capture the RFC previously only described in the cover letter at https://patchwork.freedesktop.org/series/89506/
v3:
- Add sections on single-plane and multi-plane HDR
- Describe approach to define HW details vs approach to define SW intentions
- Link Jeremy Cline's excellent HDR summaries
- Outline intention behind overly verbose doc
- Describe FP16 use-case
- Clean up links
v2: create this doc
v1: n/a
Signed-off-by: Harry Wentland harry.wentland@amd.com
Documentation/gpu/rfc/color_intentions.drawio | 1 + Documentation/gpu/rfc/color_intentions.svg | 3 + Documentation/gpu/rfc/colorpipe | 1 + Documentation/gpu/rfc/colorpipe.svg | 3 + Documentation/gpu/rfc/hdr-wide-gamut.rst | 580 ++++++++++++++++++ Documentation/gpu/rfc/index.rst | 1 + 6 files changed, 589 insertions(+) create mode 100644 Documentation/gpu/rfc/color_intentions.drawio create mode 100644 Documentation/gpu/rfc/color_intentions.svg create mode 100644 Documentation/gpu/rfc/colorpipe create mode 100644 Documentation/gpu/rfc/colorpipe.svg create mode 100644 Documentation/gpu/rfc/hdr-wide-gamut.rst
-- snip --
+Mastering Luminances +--------------------
+Even though we are able to describe the absolute luminance of a pixel +using the PQ 2084 EOTF we are presented with physical limitations of the +display technologies on the market today. Here are a few examples of +luminance ranges of displays.
+.. flat-table::
- :header-rows: 1
- Display
- Luminance range in nits
- Typical PC display
- 0.3 - 200
- Excellent LCD HDTV
- 0.3 - 400
- HDR LCD w/ local dimming
- 0.05 - 1,500
+Since no display can currently show the full 0.0005 to 10,000 nits +luminance range of PQ the display will need to tone-map the HDR content, +i.e. to fit the content within a display's capabilities. To assist +with tone-mapping, HDR content is usually accompanied by metadata +that describes (among other things) the minimum and maximum mastering +luminance, i.e. the minimum and maximum luminance of the display that +was used to master the HDR content.
+The HDR metadata is currently defined on the drm_connector via the +hdr_output_metadata blob property.
+It might be useful to define per-plane hdr metadata, as different planes +might have been mastered differently.
I think this only applies to the approach where all the processing is decided in the kernel right?
If we directly expose each pipeline stage, and userspace controls everything, there's no need for the kernel to know the mastering luminance of any of the input content. The kernel would only need to know the eventual *output* luminance range, which might not even match any of the input content!
...
+How are we solving the problem? +===============================
+Single-plane +------------
+If a single drm_plane is used no further work is required. The compositor +will provide one HDR plane alongside a drm_connector's hdr_output_metadata +and the display HW will output this plane without further processing if +no CRTC LUTs are provided.
+If desired a compositor can use the CRTC LUTs for HDR content but without +support for PWL or multi-segmented LUTs the quality of the operation is +expected to be subpar for HDR content.
+Multi-plane +-----------
+In multi-plane configurations we need to solve the problem of blending +HDR and SDR content. This blending should be done in linear space and +therefore requires framebuffer data that is presented in linear space +or a way to convert non-linear data to linear space. Additionally +we need a way to define the luminance of any SDR content in relation +to the HDR content.
Android doesn't blend in linear space, so any API shouldn't be built around an assumption of linear blending.
+In order to present framebuffer data in linear space without losing a +lot of precision it needs to be presented using 16 bpc precision.
+Defining HW Details +-------------------
+One way to take full advantage of modern HW's color pipelines is by +defining a "generic" pipeline that matches all capable HW. Something +like this, which I took `from Uma Shankar`_ and expanded on:
+.. _from Uma Shankar: https://patchwork.freedesktop.org/series/90826/
+.. kernel-figure:: colorpipe.svg
I don't think this pipeline is expressive enough, in part because of Android's non-linear blending as I mentioned above, but also because the "tonemapping" block is a bit of a monolithic black-box.
I'd be in favour of splitting what you've called "Tonemapping" into a separate luminance adjustment (I've seen that called OOTF) and a pre-blending OETF (GAMMA); with a similar split post-blending as well:
Before blending:
FB --> YUV-to-RGB --> EOTF (DEGAMMA) --> CTM/CSC (and/or 3D LUT) --> OOTF --> OETF (GAMMA) --> To blending
After blending:
From blending --> EOTF (DEGAMMA) --> CTM/CSC (and/or 3D LUT) --> OOTF --> OETF (GAMMA) --> RGB-to-YUV --> To cable
This separates the logical pipeline stages a bit better to me.
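Spelled out as data, that proposed ordering might look like this (purely illustrative pseudo-definitions with made-up names, not proposed uAPI):

```c
/* Illustrative encoding of the stage ordering proposed above. */
enum color_stage {
	STAGE_YUV_TO_RGB, /* FB color encoding to RGB */
	STAGE_EOTF,       /* DEGAMMA: decode to linear light */
	STAGE_CTM_3DLUT,  /* CTM/CSC and/or 3D LUT: gamut mapping */
	STAGE_OOTF,       /* luminance adjustment */
	STAGE_OETF,       /* GAMMA: encode for blending or the cable */
	STAGE_RGB_TO_YUV, /* RGB back to the cable's encoding */
};

/* Per-plane, before blending. */
static const enum color_stage pre_blending[] = {
	STAGE_YUV_TO_RGB, STAGE_EOTF, STAGE_CTM_3DLUT, STAGE_OOTF, STAGE_OETF,
};

/* Post-blending, on the way to the cable. */
static const enum color_stage post_blending[] = {
	STAGE_EOTF, STAGE_CTM_3DLUT, STAGE_OOTF, STAGE_OETF, STAGE_RGB_TO_YUV,
};
```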
+I intentionally put de-Gamma, and Gamma in parentheses in my graph +as they describe the intention of the block but not necessarily a +strict definition of how a userspace implementation is required to +use them.
+De-Gamma and Gamma blocks are named LUT, but they could be non-programmable +LUTs in some HW implementations with no programmable LUT available. See +the definitions for AMD's `latest dGPU generation`_ as an example.
+.. _latest dGPU generation: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/driver...
+I renamed the "Plane Gamma LUT" and "CRTC De-Gamma LUT" to "Tonemapping" +as we generally don't want to re-apply gamma before blending, or do +de-gamma post blending. These blocks are generally intended for +tonemapping purposes.
Sorry for repeating myself (again) - but I don't think this is true in Android.
+Tonemapping in this case could be a simple nits value or `EDR`_ to describe +how to scale the :ref:`SDR luminance`.
+Tonemapping could also include the ability to use a 3D LUT, which might be +accompanied by a 1D shaper LUT. The shaper LUT is required in order to +ensure a 3D LUT with limited entries (e.g. 9x9x9, or 17x17x17) operates +in perceptual (non-linear) space, so as to spread the limited +entries evenly across the perceived space.
Some terminology care may be needed here - up until this point, I think you've been talking about "tonemapping" being luminance adjustment, whereas I'd expect 3D LUTs to be used for gamut adjustment.
+.. _EDR: https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable...
+Creating a model that is flexible enough to define color pipelines for +a wide variety of HW is challenging, though not impossible. Implementing +support for such a flexible definition in userspace, though, amounts +to essentially writing color pipeline drivers for each HW.
Without this, it seems like it would be hard/impossible for a general-purpose compositor to use the display hardware.
There will always be cases where compositing needs to fall back to a GPU pass instead of using HW. If userspace has no idea what the kernel/hardware is doing, it has no hope of matching the processing and there will be significant visual differences between the two paths.
This is perhaps less relevant for post-blending stuff, which I expect would be applied by HW in both cases.
+Defining SW Intentions +----------------------
+An alternative to describing the HW color pipeline in enough detail to +be useful for color management and HDR purposes is to instead define +SW intentions.
+.. kernel-figure:: color_intentions.svg
+This greatly simplifies the API and lets the driver do what a driver +does best: figure out how to program the HW to achieve the desired +effect.
+The above diagram could include white point, primaries, and maximum +peak and average white levels in order to facilitate tone mapping.
+At this point I suggest keeping tonemapping (other than an SDR luminance +adjustment) out of the current DRM/KMS API. Most HDR displays are capable +of tonemapping. If for some reason tonemapping is still desired on +a plane, a shader might be a better way of doing that instead of relying +on display HW.
+In some ways this mirrors how various userspace APIs treat HDR:
- Gstreamer's `GstVideoTransferFunction`_
- EGL's `EGL_EXT_gl_colorspace_bt2020_pq`_ extension
- Vulkan's `VkColorSpaceKHR`_
+.. _GstVideoTransferFunction: https://gstreamer.freedesktop.org/documentation/video/video-color.html?gi-la... +.. _EGL_EXT_gl_colorspace_bt2020_pq: https://www.khronos.org/registry/EGL/extensions/EXT/EGL_EXT_gl_colorspace_bt... +.. _VkColorSpaceKHR: https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/vkspec.htm...
These (at least the Khronos ones) are application-facing APIs, rather than APIs that a compositor would use. They only communicate content hints to "the platform" so that the compositor can do-the-right-thing.
I think that this enum approach makes sense for an app, but not for implementing a compositor, which would want direct, explicit control.
+A hybrid approach to the API +----------------------------
+Our current proposal takes a hybrid approach, defining an API to specify +input and output transfer functions, an SDR boost, and an +input color space definition.
+We would like to solicit feedback and encourage discussion around the +merits and weaknesses of these approaches. This question is at the core +of defining a good API and we'd like to get it right.
+Input and Output Transfer functions +-----------------------------------
+We define an input transfer function on drm_plane to describe the +transform from framebuffer to blending space.
+We define an output transfer function on drm_crtc to describe the +transform from blending space to display space.
+The transfer function can be a pre-defined function, such as PQ EOTF, or +a custom LUT. A driver will be able to specify support for specific +transfer functions, including custom ones.
+Defining the transfer function in this way allows us to support it on HW +that uses ROMs for these transforms, as well as on HW that uses +LUT definitions that are complex and don't map easily onto a standard LUT +definition.
+We will not define per-plane LUTs in this patchset as the scope of our +current work only deals with pre-defined transfer functions. This API has +the flexibility to add custom 1D or 3D LUTs at a later date.
+In order to support the existing 1D de-gamma and gamma LUTs on the drm_crtc +we will include a "custom 1D" enum value to indicate that the custom gamma and +de-gamma 1D LUTs should be used.
+Possible transfer functions:
+.. flat-table::
- :header-rows: 1
- Transfer Function
- Description
- Gamma 2.2
- a simple 2.2 gamma function
- sRGB
- 2.4 gamma with small initial linear section
- PQ 2084
- SMPTE ST 2084; used for HDR video and allows for up to 10,000 nit support
- Linear
- Linear relationship between pixel value and luminance value
- Custom 1D
- Custom 1D de-gamma and gamma LUTs; one LUT per color
- Custom 3D
- Custom 3D LUT (to be defined)
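As an illustrative, non-normative sketch of two of the named curves above (constants taken from the published standards), the PQ (SMPTE ST 2084) EOTF and the sRGB EOTF look like this:

```python
# SMPTE ST 2084 (PQ) constants, as published in the standard.
PQ_M1 = 2610 / 16384       # 0.1593017578125
PQ_M2 = 2523 / 4096 * 128  # 78.84375
PQ_C1 = 3424 / 4096        # 0.8359375
PQ_C2 = 2413 / 4096 * 32   # 18.8515625
PQ_C3 = 2392 / 4096 * 32   # 18.6875

def pq_eotf(e):
    """Map a non-linear PQ code value in [0, 1] to luminance in nits."""
    ep = e ** (1 / PQ_M2)
    y = (max(ep - PQ_C1, 0.0) / (PQ_C2 - PQ_C3 * ep)) ** (1 / PQ_M1)
    return 10000.0 * y  # PQ is an absolute curve with a 10,000 nit peak

def srgb_eotf(e):
    """sRGB: roughly 2.4 gamma with a small linear segment near black."""
    if e <= 0.04045:
        return e / 12.92
    return ((e + 0.055) / 1.055) ** 2.4
```

A driver or compositor selecting the "PQ 2084" input transfer function would linearize with the former, and "sRGB" with the latter; the "Gamma 2.2" entry is the plain power function without the linear segment.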
+Describing SDR Luminance +------------------------------
+Since many displays do not correctly advertise the HDR white level we +propose to define the SDR white level in nits.
+We define a new drm_plane property to specify the white level of an SDR +plane.
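A hypothetical sketch of how such a property could be used (the exact semantics and the 200 nit default are assumptions for illustration, not part of the proposal): SDR content is linearized and then scaled so that SDR white lands at the given luminance in an absolute-luminance blending space.

```python
def sdr_to_blending_space(srgb_value, sdr_white_nits=200.0):
    """Linearize an sRGB-encoded value and scale it so that SDR white
    (1.0) lands at sdr_white_nits in an absolute-luminance space.
    The 200 nit default is an assumption for illustration only."""
    if srgb_value <= 0.04045:
        linear = srgb_value / 12.92
    else:
        linear = ((srgb_value + 0.055) / 1.055) ** 2.4
    return linear * sdr_white_nits
```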
+Defining the color space +------------------------
+We propose to add a new color space property to drm_plane to define a +plane's color space.
What is this used/useful for?
+While some color space conversions can be performed with a simple color +transformation matrix (CTM), others require a 3D LUT.
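For example, converting BT.709 content into a BT.2020 blending space only needs a 3x3 matrix applied in linear light (the coefficients below are the commonly published BT.709-to-BT.2020 values, rounded to four decimals); a 3D LUT only becomes necessary for non-linear operations such as gamut compression:

```python
# BT.709 -> BT.2020 primaries conversion, applied to linear RGB.
BT709_TO_BT2020 = [
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
]

def apply_ctm(rgb, matrix=BT709_TO_BT2020):
    """Apply a 3x3 color transformation matrix to a linear RGB triple."""
    r, g, b = rgb
    return tuple(row[0] * r + row[1] * g + row[2] * b for row in matrix)
```

Each row sums to roughly 1.0, so white is preserved by the conversion.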
+Defining mastering color space and luminance +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ToDo
I don't think this is necessary at all (in the kernel API) if we expose the full pipeline.
Cheers, -Brian
Hello Brian, (+Uma in cc)
Thanks for your comments, Let me try to fill-in for Harry to keep the design discussion going. Please find my comments inline.
On 8/2/2021 10:00 PM, Brian Starkey wrote:
Hi,
Thanks for having a stab at this, it's a massive complex topic to solve.
Do you have the the HTML rendered somewhere for convenience?
On Fri, Jul 30, 2021 at 04:41:29PM -0400, Harry Wentland wrote:
Use the new DRM RFC doc section to capture the RFC previously only described in the cover letter at https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fpatchwork....
v3:
- Add sections on single-plane and multi-plane HDR
- Describe approach to define HW details vs approach to define SW intentions
- Link Jeremy Cline's excellent HDR summaries
- Outline intention behind overly verbose doc
- Describe FP16 use-case
- Clean up links
v2: create this doc
v1: n/a
Signed-off-by: Harry Wentland harry.wentland@amd.com
Documentation/gpu/rfc/color_intentions.drawio | 1 + Documentation/gpu/rfc/color_intentions.svg | 3 + Documentation/gpu/rfc/colorpipe | 1 + Documentation/gpu/rfc/colorpipe.svg | 3 + Documentation/gpu/rfc/hdr-wide-gamut.rst | 580 ++++++++++++++++++ Documentation/gpu/rfc/index.rst | 1 + 6 files changed, 589 insertions(+) create mode 100644 Documentation/gpu/rfc/color_intentions.drawio create mode 100644 Documentation/gpu/rfc/color_intentions.svg create mode 100644 Documentation/gpu/rfc/colorpipe create mode 100644 Documentation/gpu/rfc/colorpipe.svg create mode 100644 Documentation/gpu/rfc/hdr-wide-gamut.rst
-- snip --
+Mastering Luminances +--------------------
+Even though we are able to describe the absolute luminance of a pixel +using the PQ 2084 EOTF we are presented with physical limitations of the +display technologies on the market today. Here are a few examples of +luminance ranges of displays.
+.. flat-table::
- :header-rows: 1
- Display
- Luminance range in nits
- Typical PC display
- 0.3 - 200
- Excellent LCD HDTV
- 0.3 - 400
- HDR LCD w/ local dimming
- 0.05 - 1,500
+Since no display can currently show the full 0.0005 to 10,000 nits +luminance range of PQ the display will need to tone-map the HDR content, +i.e. fit the content within the display's capabilities. To assist +with tone-mapping, HDR content is usually accompanied by metadata +that describes (among other things) the minimum and maximum mastering +luminance, i.e. the minimum and maximum luminance of the display that +was used to master the HDR content.
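As a purely illustrative sketch (this is not a proposed kernel algorithm, and the knee placement is an arbitrary assumption), a display could use the mastering peak from the metadata to soft-clip content onto its own peak:

```python
def tonemap(nits, content_max=4000.0, display_max=1500.0):
    """Pass luminance through up to a knee, then roll off smoothly so
    that the content's mastering peak maps onto the display's peak.
    content_max/display_max defaults are arbitrary example values."""
    knee = 0.75 * display_max  # arbitrary knee for illustration
    if nits <= knee:
        return nits
    x = (nits - knee) / (content_max - knee)
    # 2x/(x+1) reaches 1.0 at x == 1, i.e. at the content peak.
    return knee + (display_max - knee) * (2 * x) / (x + 1)
```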
+The HDR metadata is currently defined on the drm_connector via the +hdr_output_metadata blob property.
+It might be useful to define per-plane hdr metadata, as different planes +might have been mastered differently.
I think this only applies to the approach where all the processing is decided in the kernel right?
If we directly expose each pipeline stage, and userspace controls everything, there's no need for the kernel to know the mastering luminance of any of the input content. The kernel would only need to know the eventual *output* luminance range, which might not even match any of the input content!
Yes, you are right. In an approach where a compositor controls everything, we might not need this property, as the compositor can directly prepare the color correction pipeline with the required matrices and kernel can just follow it.
The reason why we introduced this property here is that there may be hardware which implements a fixed-function de-gamma or tone-mapping unit, and this property might make it easier for those drivers to implement.

So the whole idea was to plant a seed for thought for those drivers, and see if it makes sense to have such a property.
...
+How are we solving the problem? +===============================
+Single-plane +------------
+If a single drm_plane is used no further work is required. The compositor +will provide one HDR plane alongside a drm_connector's hdr_output_metadata +and the display HW will output this plane without further processing if +no CRTC LUTs are provided.
+If desired a compositor can use the CRTC LUTs for HDR content but without +support for PWL or multi-segmented LUTs the quality of the operation is +expected to be subpar for HDR content.
+Multi-plane +-----------
+In multi-plane configurations we need to solve the problem of blending +HDR and SDR content. This blending should be done in linear space and +therefore requires framebuffer data that is presented in linear space +or a way to convert non-linear data to linear space. Additionally +we need a way to define the luminance of any SDR content in relation +to the HDR content.
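To illustrate why the blending space matters (a sketch, not driver code): a 50/50 blend of black and white comes out visibly different depending on whether it is computed on sRGB-encoded values or in linear light.

```python
def srgb_decode(e):
    """sRGB-encoded signal to linear light."""
    return e / 12.92 if e <= 0.04045 else ((e + 0.055) / 1.055) ** 2.4

def srgb_encode(l):
    """Linear light to sRGB-encoded signal."""
    return l * 12.92 if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

def blend(fg, bg, alpha, linear=True):
    """Alpha-blend two sRGB-encoded values, optionally in linear light."""
    if linear:
        out = alpha * srgb_decode(fg) + (1 - alpha) * srgb_decode(bg)
        return srgb_encode(out)
    return alpha * fg + (1 - alpha) * bg
```

blend(1.0, 0.0, 0.5, linear=False) gives 0.5, while the linear-light result encodes back to roughly 0.735; the naive sRGB-space blend is noticeably too dark.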
Android doesn't blend in linear space, so any API shouldn't be built around an assumption of linear blending.
If I am not wrong, we still need linear buffers for accurate Gamut transformation (SRGB -> BT2020 or other way around) isn't it ?
+In order to present framebuffer data in linear space without losing a +lot of precision it needs to be presented using 16 bpc precision.
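A quick sketch of the precision problem: with only 8 bits per channel in linear space, the smallest non-zero code already corresponds to about a 5% sRGB signal, so all darker shades collapse into a single step.

```python
def srgb_encode(l):
    """Linear light to sRGB-encoded signal."""
    return l * 12.92 if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

# Smallest non-zero value in an 8-bit linear encoding...
smallest_linear = 1 / 255
# ...covers the first ~13 sRGB code values on its own.
print(round(srgb_encode(smallest_linear) * 255))  # -> 13
```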
+Defining HW Details +-------------------
+One way to take full advantage of modern HW's color pipelines is by +defining a "generic" pipeline that matches all capable HW. Something +like this, which I took `from Uma Shankar`_ and expanded on:
+.. _from Uma Shankar: https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fpatchwork....
+.. kernel-figure:: colorpipe.svg
I don't think this pipeline is expressive enough, in part because of Android's non-linear blending as I mentioned above, but also because the "tonemapping" block is a bit of a monolithic black-box.
I'd be in favour of splitting what you've called "Tonemapping" to separate luminance adjustment (I've seen that called OOTF) and pre-blending OETF (GAMMA); with similar post-blending as well:
Before blending:
FB --> YUV-to-RGB --> EOTF (DEGAMMA) --> CTM/CSC (and/or 3D LUT) --> OOTF --> OETF (GAMMA) --> To blending
After blending:
From blending --> EOTF (DEGAMMA) --> CTM/CSC (and/or 3D LUT) --> OOTF --> OETF (GAMMA) --> RGB-to-YUV --> To cable
This separates the logical pipeline stages a bit better to me.
I agree, seems like a good logical separation, and also provides rooms for flexible color correction.
+I intentionally put de-Gamma, and Gamma in parentheses in my graph +as they describe the intention of the block but not necessarily a +strict definition of how a userspace implementation is required to +use them.
+The De-Gamma and Gamma blocks are named LUT, but in some HW implementations +they could be non-programmable (fixed) LUTs, with no programmable LUT +available. See the definitions for AMD's `latest dGPU generation`_ as an +example.
+.. _latest dGPU generation: https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgit.kernel...
+I renamed the "Plane Gamma LUT" and "CRTC De-Gamma LUT" to "Tonemapping" +as we generally don't want to re-apply gamma before blending, or do +de-gamma post blending. These blocks are generally intended for +tonemapping purposes.
Sorry for repeating myself (again) - but I don't think this is true in Android.
Same as above
+Tonemapping in this case could be a simple nits value or `EDR`_ to describe +how to scale the :ref:`SDR luminance`.
+Tonemapping could also include the ability to use a 3D LUT which might be +accompanied by a 1D shaper LUT. The shaper LUT is required in order to +ensure a 3D LUT with limited entries (e.g. 9x9x9, or 17x17x17) operates +in perceptual (non-linear) space, so as to spread the limited +entries evenly across the perceived space.
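The need for a shaper can be sketched with the PQ inverse EOTF standing in as the perceptual curve: if the 17 entries of a 17x17x17 LUT were spaced linearly in light, the very first entry after black would already sit about 70% of the way up the perceptual range, leaving almost no resolution for dark content.

```python
def pq_encode(y):
    """PQ inverse EOTF: normalized linear light [0, 1] to a perceptual
    code value in [0, 1]; constants from SMPTE ST 2084."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    yp = y ** m1
    return ((c1 + c2 * yp) / (1 + c3 * yp)) ** m2

# Perceptual positions of 17 linearly spaced LUT entries.
positions = [pq_encode(i / 16) for i in range(17)]
print(round(positions[1], 2))  # -> 0.7
```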
Some terminology care may be needed here - up until this point, I think you've been talking about "tonemapping" being luminance adjustment, whereas I'd expect 3D LUTs to be used for gamut adjustment.
IMO, what Harry wants to say here is that which HW block gets picked and how tone mapping is achieved can be a very driver/HW-specific thing, where one driver can use a 1D/fixed-function block, whereas another one can choose more complex HW like a 3D LUT for the same.
DRM layer needs to define only the property to hook the API with core driver, and the driver can decide which HW to pick and configure for the activity. So when we have a tonemapping property, we might not have a separate 3D-LUT property, or the driver may fail the atomic_check() if both of them are programmed for different usages.
+.. _EDR: https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgitlab.fre...
+Creating a model that is flexible enough to define color pipelines for +a wide variety of HW is challenging, though not impossible. Implementing +support for such a flexible definition in userspace, though, amounts +to essentially writing color pipeline drivers for each HW.
Without this, it seems like it would be hard/impossible for a general-purpose compositor to use the display hardware.
Agree
There will always be cases where compositing needs to fall back to a GPU pass instead of using HW. If userspace has no idea what the kernel/hardware is doing, it has no hope of matching the processing and there will be significant visual differences between the two paths.
Indeed, I find this another interesting and complex problem to solve. Need many more inputs from compositor developers as well (considering I am not an actual one :)).
This is perhaps less relevant for post-blending stuff, which I expect would be applied by HW in both cases.
+Defining SW Intentions +----------------------
+An alternative to describing the HW color pipeline in enough detail to +be useful for color management and HDR purposes is to instead define +SW intentions.
+.. kernel-figure:: color_intentions.svg
+This greatly simplifies the API and lets the driver do what a driver +does best: figure out how to program the HW to achieve the desired +effect.
+The above diagram could include white point, primaries, and maximum +peak and average white levels in order to facilitate tone mapping.
+At this point I suggest keeping tonemapping (other than an SDR luminance +adjustment) out of the current DRM/KMS API. Most HDR displays are capable +of tonemapping. If for some reason tonemapping is still desired on +a plane, a shader might be a better way of doing that than relying +on display HW.
+In some ways this mirrors how various userspace APIs treat HDR:
- Gstreamer's `GstVideoTransferFunction`_
- EGL's `EGL_EXT_gl_colorspace_bt2020_pq`_ extension
- Vulkan's `VkColorSpaceKHR`_
+.. _GstVideoTransferFunction: https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgstreamer.... +.. _EGL_EXT_gl_colorspace_bt2020_pq: https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.khrono... +.. _VkColorSpaceKHR: https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.khrono...
These (at least the Khronos ones) are application-facing APIs, rather than APIs that a compositor would use. They only communicate content hints to "the platform" so that the compositor can do-the-right-thing.
I think that this enum approach makes sense for an app, but not for implementing a compositor, which would want direct, explicit control.
Agree, we can fine tune this part and come back with something else.
+A hybrid approach to the API +----------------------------
+Our current proposal takes a hybrid approach, defining an API to specify +input and output transfer functions, an SDR boost, and an +input color space definition.
+We would like to solicit feedback and encourage discussion around the +merits and weaknesses of these approaches. This question is at the core +of defining a good API and we'd like to get it right.
+Input and Output Transfer functions +-----------------------------------
+We define an input transfer function on drm_plane to describe the +transform from framebuffer to blending space.
+We define an output transfer function on drm_crtc to describe the +transform from blending space to display space.
+The transfer function can be a pre-defined function, such as PQ EOTF, or +a custom LUT. A driver will be able to specify support for specific +transfer functions, including custom ones.
+Defining the transfer function in this way allows us to support it on HW +that uses ROMs for these transforms, as well as on HW that uses +LUT definitions that are complex and don't map easily onto a standard LUT +definition.
+We will not define per-plane LUTs in this patchset as the scope of our +current work only deals with pre-defined transfer functions. This API has +the flexibility to add custom 1D or 3D LUTs at a later date.
+In order to support the existing 1D de-gamma and gamma LUTs on the drm_crtc +we will include a "custom 1D" enum value to indicate that the custom gamma and +de-gamma 1D LUTs should be used.
+Possible transfer functions:
+.. flat-table::
- :header-rows: 1
- Transfer Function
- Description
- Gamma 2.2
- a simple 2.2 gamma function
- sRGB
- 2.4 gamma with small initial linear section
- PQ 2084
- SMPTE ST 2084; used for HDR video and allows for up to 10,000 nit support
- Linear
- Linear relationship between pixel value and luminance value
- Custom 1D
- Custom 1D de-gamma and gamma LUTs; one LUT per color
- Custom 3D
- Custom 3D LUT (to be defined)
+Describing SDR Luminance +------------------------------
+Since many displays do not correctly advertise the HDR white level we +propose to define the SDR white level in nits.
+We define a new drm_plane property to specify the white level of an SDR +plane.
+Defining the color space +------------------------
+We propose to add a new color space property to drm_plane to define a +plane's color space.
What is this used/useful for?
+While some color space conversions can be performed with a simple color +transformation matrix (CTM) others require a 3D LUT.
+Defining mastering color space and luminance +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ToDo
I don't think this is necessary at all (in the kernel API) if we expose the full pipeline.
As you can observe, both the colorspace and mastering luminance properties were introduced as part of the hybrid approach, where the compositor need not set the whole color pipeline for HDR blending, but can just set the target/current color space of a plane being flipped, and the driver can internally prepare the pipeline for blending.

This is in order to reduce the complexity for the compositor and offload some work onto the driver. At the same time, I agree that it would be difficult to design at first.
- Shashank
Cheers, -Brian
On Fri, Aug 13, 2021 at 10:42:12AM +0530, Sharma, Shashank wrote:
Hello Brian, (+Uma in cc)
Thanks for your comments, Let me try to fill-in for Harry to keep the design discussion going. Please find my comments inline.
On 8/2/2021 10:00 PM, Brian Starkey wrote:
-- snip --
Android doesn't blend in linear space, so any API shouldn't be built around an assumption of linear blending.
If I am not wrong, we still need linear buffers for accurate Gamut transformation (SRGB -> BT2020 or other way around) isn't it ?
Yeah, you need to transform the buffer to linear for color gamut conversions, but then back to non-linear (probably sRGB or gamma 2.2) for actual blending.
This is why I'd like to have the per-plane "OETF/GAMMA" separate from tone-mapping, so that the composition transfer function is independent.
...
+Tonemapping in this case could be a simple nits value or `EDR`_ to describe +how to scale the :ref:`SDR luminance`.
+Tonemapping could also include the ability to use a 3D LUT which might be +accompanied by a 1D shaper LUT. The shaper LUT is required in order to +ensure a 3D LUT with limited entries (e.g. 9x9x9, or 17x17x17) operates +in perceptual (non-linear) space, so as to spread the limited +entries evenly across the perceived space.
Some terminology care may be needed here - up until this point, I think you've been talking about "tonemapping" being luminance adjustment, whereas I'd expect 3D LUTs to be used for gamut adjustment.
IMO, what Harry wants to say here is that which HW block gets picked and how tone mapping is achieved can be a very driver/HW-specific thing, where one driver can use a 1D/fixed-function block, whereas another one can choose more complex HW like a 3D LUT for the same.
DRM layer needs to define only the property to hook the API with core driver, and the driver can decide which HW to pick and configure for the activity. So when we have a tonemapping property, we might not have a separate 3D-LUT property, or the driver may fail the atomic_check() if both of them are programmed for different usages.
I still think that directly exposing the HW blocks and their capabilities is the right approach, rather than a "magic" tonemapping property.
Yes, userspace would need to have a good understanding of how to use that hardware, but if the pipeline model is standardised that's the kind of thing a cross-vendor library could handle.
It would definitely be good to get some compositor opinions here.
Cheers, -Brian
On 2021-08-16 7:10 a.m., Brian Starkey wrote:
On Fri, Aug 13, 2021 at 10:42:12AM +0530, Sharma, Shashank wrote:
Hello Brian, (+Uma in cc)
Thanks for your comments, Let me try to fill-in for Harry to keep the design discussion going. Please find my comments inline.
Thanks, Shashank. I'm back at work now. Had to cut my trip short due to rising Covid cases and concern for my kids.
On 8/2/2021 10:00 PM, Brian Starkey wrote:
-- snip --
Android doesn't blend in linear space, so any API shouldn't be built around an assumption of linear blending.
This seems incorrect but I guess ultimately the OS is in control of this. If we want to allow blending in non-linear space with the new API we would either need to describe the blending space or the pre/post-blending gamma/de-gamma.
Any idea if this blending behavior in Android might get changed in the future?
If I am not wrong, we still need linear buffers for accurate Gamut transformation (SRGB -> BT2020 or other way around) isn't it ?
Yeah, you need to transform the buffer to linear for color gamut conversions, but then back to non-linear (probably sRGB or gamma 2.2) for actual blending.
This is why I'd like to have the per-plane "OETF/GAMMA" separate from tone-mapping, so that the composition transfer function is independent.
...
+Tonemapping in this case could be a simple nits value or `EDR`_ to describe +how to scale the :ref:`SDR luminance`.
+Tonemapping could also include the ability to use a 3D LUT which might be +accompanied by a 1D shaper LUT. The shaper LUT is required in order to +ensure a 3D LUT with limited entries (e.g. 9x9x9, or 17x17x17) operates +in perceptual (non-linear) space, so as to spread the limited +entries evenly across the perceived space.
Some terminology care may be needed here - up until this point, I think you've been talking about "tonemapping" being luminance adjustment, whereas I'd expect 3D LUTs to be used for gamut adjustment.
IMO, what Harry wants to say here is that which HW block gets picked and how tone mapping is achieved can be a very driver/HW-specific thing, where one driver can use a 1D/fixed-function block, whereas another one can choose more complex HW like a 3D LUT for the same.
DRM layer needs to define only the property to hook the API with core driver, and the driver can decide which HW to pick and configure for the activity. So when we have a tonemapping property, we might not have a separate 3D-LUT property, or the driver may fail the atomic_check() if both of them are programmed for different usages.
I still think that directly exposing the HW blocks and their capabilities is the right approach, rather than a "magic" tonemapping property.
Yes, userspace would need to have a good understanding of how to use that hardware, but if the pipeline model is standardised that's the kind of thing a cross-vendor library could handle.
One problem with cross-vendor libraries is that they might struggle to really be cross-vendor when it comes to unique HW behavior. Or they might pick sub-optimal configurations as they're not aware of the power impact of a configuration. What's an optimal configuration might differ greatly between different HW.
We're seeing this problem with "universal" planes as well.
It would definitely be good to get some compositor opinions here.
For this we'll probably have to wait for Pekka's input when he's back from his vacation.
Cheers, -Brian
On 2021-08-16 14:40, Harry Wentland wrote:
On 2021-08-16 7:10 a.m., Brian Starkey wrote:
On Fri, Aug 13, 2021 at 10:42:12AM +0530, Sharma, Shashank wrote:
Hello Brian, (+Uma in cc)
Thanks for your comments, Let me try to fill-in for Harry to keep the design discussion going. Please find my comments inline.
Thanks, Shashank. I'm back at work now. Had to cut my trip short due to rising Covid cases and concern for my kids.
On 8/2/2021 10:00 PM, Brian Starkey wrote:
-- snip --
Android doesn't blend in linear space, so any API shouldn't be built around an assumption of linear blending.
This seems incorrect but I guess ultimately the OS is in control of this. If we want to allow blending in non-linear space with the new API we would either need to describe the blending space or the pre/post-blending gamma/de-gamma.
Any idea if this blending behavior in Android might get changed in the future?
There is lots of software which blends in sRGB space, and designers adjusted to the incorrect blending in a way that makes the result look right. Blending in linear space would result in incorrect-looking images.
If I am not wrong, we still need linear buffers for accurate Gamut transformation (SRGB -> BT2020 or other way around) isn't it ?
Yeah, you need to transform the buffer to linear for color gamut conversions, but then back to non-linear (probably sRGB or gamma 2.2) for actual blending.
This is why I'd like to have the per-plane "OETF/GAMMA" separate from tone-mapping, so that the composition transfer function is independent.
...
+Tonemapping in this case could be a simple nits value or `EDR`_ to describe +how to scale the :ref:`SDR luminance`.
+Tonemapping could also include the ability to use a 3D LUT which might be +accompanied by a 1D shaper LUT. The shaper LUT is required in order to +ensure a 3D LUT with limited entries (e.g. 9x9x9, or 17x17x17) operates +in perceptual (non-linear) space, so as to spread the limited +entries evenly across the perceived space.
Some terminology care may be needed here - up until this point, I think you've been talking about "tonemapping" being luminance adjustment, whereas I'd expect 3D LUTs to be used for gamut adjustment.
IMO, what Harry wants to say here is that which HW block gets picked and how tone mapping is achieved can be a very driver/HW-specific thing, where one driver can use a 1D/fixed-function block, whereas another one can choose more complex HW like a 3D LUT for the same.
DRM layer needs to define only the property to hook the API with core driver, and the driver can decide which HW to pick and configure for the activity. So when we have a tonemapping property, we might not have a separate 3D-LUT property, or the driver may fail the atomic_check() if both of them are programmed for different usages.
I still think that directly exposing the HW blocks and their capabilities is the right approach, rather than a "magic" tonemapping property.
Yes, userspace would need to have a good understanding of how to use that hardware, but if the pipeline model is standardised that's the kind of thing a cross-vendor library could handle.
One problem with cross-vendor libraries is that they might struggle to really be cross-vendor when it comes to unique HW behavior. Or they might pick sub-optimal configurations as they're not aware of the power impact of a configuration. What's an optimal configuration might differ greatly between different HW.
We're seeing this problem with "universal" planes as well.
I'm repeating what has been said before but apparently it has to be said again: if a property can't be replicated exactly in a shader the property is useless. If your hardware is so unique that it can't give us the exact formula we expect you cannot expose the property.
Maybe my view on power consumption is simplistic but I would expect enum < 1d lut < 3d lut < shader. Is there more to it?
Either way if the fixed KMS pixel pipeline is not sufficient to expose the intricacies of real hardware the right move would be to make the KMS pixel pipeline more dynamic, expose more hardware specifics and create a hardware specific user space like mesa. Moving the whole compositing with all its policies and decision making into the kernel is exactly the wrong way to go.
Laurent Pinchart put this very well: https://lists.freedesktop.org/archives/dri-devel/2021-June/311689.html
It would definitely be good to get some compositor opinions here.
For this we'll probably have to wait for Pekka's input when he's back from his vacation.
Cheers, -Brian
-----Original Message----- From: sebastian@sebastianwick.net sebastian@sebastianwick.net Sent: Monday, August 16, 2021 7:07 PM To: Harry Wentland harry.wentland@amd.com Cc: Brian Starkey brian.starkey@arm.com; Sharma, Shashank shashank.sharma@amd.com; amd-gfx@lists.freedesktop.org; dri- devel@lists.freedesktop.org; ppaalanen@gmail.com; mcasas@google.com; jshargo@google.com; Deepak.Sharma@amd.com; Shirish.S@amd.com; Vitaly.Prosyak@amd.com; aric.cyr@amd.com; Bhawanpreet.Lakha@amd.com; Krunoslav.Kovac@amd.com; hersenxs.wu@amd.com; Nicholas.Kazlauskas@amd.com; laurentiu.palcu@oss.nxp.com; ville.syrjala@linux.intel.com; nd@arm.com; Shankar, Uma uma.shankar@intel.com Subject: Re: [RFC PATCH v3 1/6] drm/doc: Color Management and HDR10 RFC
On 2021-08-16 14:40, Harry Wentland wrote:
On 2021-08-16 7:10 a.m., Brian Starkey wrote:
On Fri, Aug 13, 2021 at 10:42:12AM +0530, Sharma, Shashank wrote:
Hello Brian, (+Uma in cc)
Thanks Shashank for cc'ing me. Apologies for being late here. Now seems all stakeholders are back so we can resume the UAPI discussion on color.
Thanks for your comments, Let me try to fill-in for Harry to keep the design discussion going. Please find my comments inline.
Thanks, Shashank. I'm back at work now. Had to cut my trip short due to rising Covid cases and concern for my kids.
On 8/2/2021 10:00 PM, Brian Starkey wrote:
-- snip --
Android doesn't blend in linear space, so any API shouldn't be built around an assumption of linear blending.
This seems incorrect but I guess ultimately the OS is in control of this. If we want to allow blending in non-linear space with the new API we would either need to describe the blending space or the pre/post-blending gamma/de-gamma.
Any idea if this blending behavior in Android might get changed in the future?
There is lots of software which blends in sRGB space, and designers adjusted to the incorrect blending in a way that makes the result look right. Blending in linear space would result in incorrect-looking images.
I feel we should just leave it to userspace to decide rather than forcing linear or non-linear blending in the driver.
If I am not wrong, we still need linear buffers for accurate Gamut transformation (SRGB -> BT2020 or other way around) isn't it ?
Yeah, you need to transform the buffer to linear for color gamut conversions, but then back to non-linear (probably sRGB or gamma 2.2) for actual blending.
This is why I'd like to have the per-plane "OETF/GAMMA" separate from tone-mapping, so that the composition transfer function is independent.
...
+Tonemapping in this case could be a simple nits value or `EDR`_ +to describe +how to scale the :ref:`SDR luminance`.
+Tonemapping could also include the ability to use a 3D LUT which might be +accompanied by a 1D shaper LUT. The shaper LUT is required in order to +ensure a 3D LUT with limited entries (e.g. 9x9x9, or 17x17x17) operates +in perceptual (non-linear) space, so as to spread the limited +entries evenly across the perceived space.
Some terminology care may be needed here - up until this point, I think you've been talking about "tonemapping" being luminance adjustment, whereas I'd expect 3D LUTs to be used for gamut adjustment.
IMO, what Harry wants to say here is that which HW block gets picked and how tone mapping is achieved can be a very driver/HW-specific thing, where one driver can use a 1D/fixed-function block, whereas another one can choose more complex HW like a 3D LUT for the same.
DRM layer needs to define only the property to hook the API with core driver, and the driver can decide which HW to pick and configure for the activity. So when we have a tonemapping property, we might not have a separate 3D-LUT property, or the driver may fail the atomic_check() if both of them are programmed for different usages.
I still think that directly exposing the HW blocks and their capabilities is the right approach, rather than a "magic" tonemapping property.
Yes, userspace would need to have a good understanding of how to use that hardware, but if the pipeline model is standardised that's the kind of thing a cross-vendor library could handle.
One problem with cross-vendor libraries is that they might struggle to really be cross-vendor when it comes to unique HW behavior. Or they might pick sub-optimal configurations as they're not aware of the power impact of a configuration. What's an optimal configuration might differ greatly between different HW.
We're seeing this problem with "universal" planes as well.
I'm repeating what has been said before but apparently it has to be said again: if a property can't be replicated exactly in a shader the property is useless. If your hardware is so unique that it can't give us the exact formula we expect you cannot expose the property.
Maybe my view on power consumption is simplistic but I would expect enum < 1d lut < 3d lut < shader. Is there more to it?
Either way if the fixed KMS pixel pipeline is not sufficient to expose the intricacies of real hardware the right move would be to make the KMS pixel pipeline more dynamic, expose more hardware specifics and create a hardware specific user space like mesa. Moving the whole compositing with all its policies and decision making into the kernel is exactly the wrong way to go.
I agree here; we can give userspace the flexibility to decide how it wants to use the hardware blocks. So exposing the hardware capability to userspace and then servicing requests on its behalf would be the right way to go for the driver, I believe. Any compositor or userspace can define its own policy and drive the hardware.
We have already done that with the CRTC-level color properties. We can do the same for plane color; HDR will just be an extension that way.
Laurent Pinchart put this very well: https://lists.freedesktop.org/archives/dri-devel/2021-June/311689.html
It would definitely be good to get some compositor opinions here.
For this we'll probably have to wait for Pekka's input when he's back from his vacation.
Yeah, Pekka's input would be really useful here.
We can work together, Harry, to come up with unified UAPIs which cater to a general-purpose color hardware pipeline. Just floated an RFC series with a UAPI proposal, link below: https://patchwork.freedesktop.org/series/90826/
Please check and share your feedback.
Regards, Uma Shankar
Cheers, -Brian
On Mon, 16 Aug 2021 15:37:23 +0200 sebastian@sebastianwick.net wrote:
On 2021-08-16 14:40, Harry Wentland wrote:
On 2021-08-16 7:10 a.m., Brian Starkey wrote:
On Fri, Aug 13, 2021 at 10:42:12AM +0530, Sharma, Shashank wrote:
Hello Brian, (+Uma in cc)
Thanks for your comments. Let me try to fill in for Harry to keep the design discussion going. Please find my comments inline.
Thanks, Shashank. I'm back at work now. Had to cut my trip short due to rising Covid cases and concern for my kids.
On 8/2/2021 10:00 PM, Brian Starkey wrote:
-- snip --
Android doesn't blend in linear space, so any API shouldn't be built around an assumption of linear blending.
This seems incorrect but I guess ultimately the OS is in control of this. If we want to allow blending in non-linear space with the new API we would either need to describe the blending space or the pre/post-blending gamma/de-gamma.
Any idea if this blending behavior in Android might get changed in the future?
There is lots of software which blends in sRGB space and designers adjusted to the incorrect blending in a way that the result looks right. Blending in linear space would result in incorrect-looking images.
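The difference between the two blending choices is easy to quantify. A minimal sketch, using a pure power-law 2.2 curve as a stand-in for the real (piecewise) sRGB transfer function:

```python
# Illustrative only: pure power-law gamma 2.2, not the piecewise sRGB curve.
def encode(linear):
    return linear ** (1 / 2.2)

def decode(encoded):
    return encoded ** 2.2

# Composite 50% white over black, once per blending space.
alpha = 0.5
blended_nonlinear = alpha * encode(1.0) + (1 - alpha) * encode(0.0)  # 0.5 encoded
blended_linear = encode(alpha * 1.0 + (1 - alpha) * 0.0)             # ~0.73 encoded

# The same blend lands at very different on-screen luminances:
luminance_nonlinear = decode(blended_nonlinear)  # ~0.22 of white luminance
luminance_linear = alpha                         # 0.5 of white, physically correct
```

Compositing 50% white over black yields roughly 22% of white luminance when blended on encoded values, versus the physically correct 50% when blended linearly; that delta is what designers have historically compensated for by hand.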
Hi,
yes, and I'm guilty of that too, at least by negligence. ;-)
All Wayland compositors do it, since that's what everyone has always been doing, more or less. It's still physically wrong, but when all you have is sRGB and black window shadows and rounded corners as the only use case, you don't mind.
When you start blending with colors other than black (gradients!), when you go to wide gamut, or especially with HDR, I believe the problems start to become painfully obvious.
But as long as you're stuck with sRGB only, people expect the "wrong" result and deviating from that is a regression.
Similarly, once Weston starts doing color management and people turn it on and install monitor profiles, I expect to get reports saying "all old apps look really dull now". That's how sRGB is defined to look; they've been looking at something else for all that time. :-)
Maybe we need a sRGB "gamut boost" similar to SDR luminance boost. ;-)
I still think that directly exposing the HW blocks and their capabilities is the right approach, rather than a "magic" tonemapping property.
Yes, userspace would need to have a good understanding of how to use that hardware, but if the pipeline model is standardised that's the kind of thing a cross-vendor library could handle.
One problem with cross-vendor libraries is that they might struggle to really be cross-vendor when it comes to unique HW behavior. Or they might pick sub-optimal configurations as they're not aware of the power impact of a configuration. What's an optimal configuration might differ greatly between different HW.
We're seeing this problem with "universal" planes as well.
I'm repeating what has been said before but apparently it has to be said again: if a property can't be replicated exactly in a shader the property is useless. If your hardware is so unique that it can't give us the exact formula we expect you cannot expose the property.
From desktop perspective, yes, but I'm nowadays less adamant about it. If kernel developers are happy to maintain multiple alternative UAPIs, then I'm not going to try to NAK that - I'll just say when I can and cannot make use of them. Also everything is always up to some precision, and ultimately here it is a question of whether people can see the difference.
Entertainment end user audience is also much more forgiving than professional color management audience. For the latter, I'd hesitate to use non-primary KMS planes at all.
Either way if the fixed KMS pixel pipeline is not sufficient to expose the intricacies of real hardware the right move would be to make the KMS pixel pipeline more dynamic, expose more hardware specifics and create a hardware specific user space like mesa. Moving the whole compositing with all its policies and decision making into the kernel is exactly the wrong way to go.
Laurent Pinchart put this very well: https://lists.freedesktop.org/archives/dri-devel/2021-June/311689.html
Thanks for digging that up, saved me the trouble. :-)
Thanks, pq
On 2021-09-15 10:36, Pekka Paalanen wrote:
On Mon, 16 Aug 2021 15:37:23 +0200 sebastian@sebastianwick.net wrote:
On 2021-08-16 14:40, Harry Wentland wrote:
On 2021-08-16 7:10 a.m., Brian Starkey wrote:
On Fri, Aug 13, 2021 at 10:42:12AM +0530, Sharma, Shashank wrote:
Hello Brian, (+Uma in cc)
Thanks for your comments. Let me try to fill in for Harry to keep the design discussion going. Please find my comments inline.
Thanks, Shashank. I'm back at work now. Had to cut my trip short due to rising Covid cases and concern for my kids.
On 8/2/2021 10:00 PM, Brian Starkey wrote:
-- snip --
Android doesn't blend in linear space, so any API shouldn't be built around an assumption of linear blending.
This seems incorrect but I guess ultimately the OS is in control of this. If we want to allow blending in non-linear space with the new API we would either need to describe the blending space or the pre/post-blending gamma/de-gamma.
Any idea if this blending behavior in Android might get changed in the future?
There is lots of software which blends in sRGB space and designers adjusted to the incorrect blending in a way that the result looks right. Blending in linear space would result in incorrect-looking images.
Hi,
yes, and I'm guilty of that too, at least by negligence. ;-)
All Wayland compositors do it, since that's what everyone has always been doing, more or less. It's still physically wrong, but when all you have is sRGB and black window shadows and rounded corners as the only use case, you don't mind.
When you start blending with colors other than black (gradients!), when you go to wide gamut, or especially with HDR, I believe the problems start to become painfully obvious.
But as long as you're stuck with sRGB only, people expect the "wrong" result and deviating from that is a regression.
Similarly, once Weston starts doing color management and people turn it on and install monitor profiles, I expect to get reports saying "all old apps look really dull now". That's how sRGB is defined to look; they've been looking at something else for all that time. :-)
Maybe we need a sRGB "gamut boost" similar to SDR luminance boost. ;-)
I wonder how other OSes deal with this change in expectations.
I also have a Chromebook with a nice HDR OLED panel but an OS that doesn't really do HDR and seems to output to the full gamut (I could be wrong on this) and luminance range of the display. It makes content seem really vibrant but I'm equally worried how users will perceive it if there's ever proper color management.
I still think that directly exposing the HW blocks and their capabilities is the right approach, rather than a "magic" tonemapping property.
Yes, userspace would need to have a good understanding of how to use that hardware, but if the pipeline model is standardised that's the kind of thing a cross-vendor library could handle.
One problem with cross-vendor libraries is that they might struggle to really be cross-vendor when it comes to unique HW behavior. Or they might pick sub-optimal configurations as they're not aware of the power impact of a configuration. What's an optimal configuration might differ greatly between different HW.
We're seeing this problem with "universal" planes as well.
I'm repeating what has been said before but apparently it has to be said again: if a property can't be replicated exactly in a shader the property is useless. If your hardware is so unique that it can't give us the exact formula we expect you cannot expose the property.
From desktop perspective, yes, but I'm nowadays less adamant about it. If kernel developers are happy to maintain multiple alternative UAPIs, then I'm not going to try to NAK that - I'll just say when I can and cannot make use of them. Also everything is always up to some precision, and ultimately here it is a question of whether people can see the difference.
Entertainment end user audience is also much more forgiving than professional color management audience. For the latter, I'd hesitate to use non-primary KMS planes at all.
Either way if the fixed KMS pixel pipeline is not sufficient to expose the intricacies of real hardware the right move would be to make the KMS pixel pipeline more dynamic, expose more hardware specifics and create a hardware specific user space like mesa. Moving the whole compositing with all its policies and decision making into the kernel is exactly the wrong way to go.
Laurent Pinchart put this very well: https://lists.freedesktop.org/archives/dri-devel/2021-June/311689.html
Thanks for digging that up, saved me the trouble. :-)
Really good summary. I can see the parallel to the camera subsystem. Maybe now is a good time for libdisplay, or a "mesa" for display HW.
Btw, I fully agree on the need to have clear ground rules (like the newly formalized requirement for driver properties) to keep this from becoming an unmaintainable mess.
Harry
Thanks, pq
On Fri, 30 Jul 2021 16:41:29 -0400 Harry Wentland harry.wentland@amd.com wrote:
Use the new DRM RFC doc section to capture the RFC previously only described in the cover letter at https://patchwork.freedesktop.org/series/89506/
v3:
- Add sections on single-plane and multi-plane HDR
- Describe approach to define HW details vs approach to define SW intentions
- Link Jeremy Cline's excellent HDR summaries
- Outline intention behind overly verbose doc
- Describe FP16 use-case
- Clean up links
v2: create this doc
v1: n/a
Signed-off-by: Harry Wentland harry.wentland@amd.com
Hi Harry,
I finally managed to go through this, comments below. Excellent to have pictures included. I wrote this reply over several days, sorry if it's not quite coherent.
Documentation/gpu/rfc/color_intentions.drawio |   1 +
Documentation/gpu/rfc/color_intentions.svg    |   3 +
Documentation/gpu/rfc/colorpipe               |   1 +
Documentation/gpu/rfc/colorpipe.svg           |   3 +
Documentation/gpu/rfc/hdr-wide-gamut.rst      | 580 ++++++++++++++++++
Documentation/gpu/rfc/index.rst               |   1 +
6 files changed, 589 insertions(+)
create mode 100644 Documentation/gpu/rfc/color_intentions.drawio
create mode 100644 Documentation/gpu/rfc/color_intentions.svg
create mode 100644 Documentation/gpu/rfc/colorpipe
create mode 100644 Documentation/gpu/rfc/colorpipe.svg
create mode 100644 Documentation/gpu/rfc/hdr-wide-gamut.rst
...
diff --git a/Documentation/gpu/rfc/hdr-wide-gamut.rst b/Documentation/gpu/rfc/hdr-wide-gamut.rst
new file mode 100644
index 000000000000..e463670191ab
--- /dev/null
+++ b/Documentation/gpu/rfc/hdr-wide-gamut.rst
@@ -0,0 +1,580 @@
+==============================
+HDR & Wide Color Gamut Support
+==============================
+.. role:: wy-text-strike
+ToDo
+====
+* :wy-text-strike:`Reformat as RST kerneldoc` - done
+* :wy-text-strike:`Don't use color_encoding for color_space definitions` - done
+* :wy-text-strike:`Update SDR luminance description and reasoning` - done
+* :wy-text-strike:`Clarify 3D LUT required for some color space transformations` - done
+* :wy-text-strike:`Highlight need for named color space and EOTF definitions` - done
+* :wy-text-strike:`Define transfer function API` - done
+* :wy-text-strike:`Draft upstream plan` - done
+* :wy-text-strike:`Reference to wayland plan` - done
+* Reference to Chrome plans
+* Sketch view of HW pipeline for couple of HW implementations
+Upstream Plan
+=============
+* Reach consensus on DRM/KMS API
+* Implement support in amdgpu
+* Implement IGT tests
+* Add API support to Weston, ChromiumOS, or other canonical open-source project interested in HDR
+* Merge user-space
+* Merge kernel patches
The order is: review acceptance of userspace but don't merge, merge kernel, merge userspace.
+History
+=======
+v3:
+* Add sections on single-plane and multi-plane HDR
+* Describe approach to define HW details vs approach to define SW intentions
+* Link Jeremy Cline's excellent HDR summaries
+* Outline intention behind overly verbose doc
+* Describe FP16 use-case
+* Clean up links
+v2: create this doc
+v1: n/a
+Introduction
+============
+We are looking to enable HDR support for a couple of single-plane and
+multi-plane scenarios. To do this effectively we recommend new interfaces
+to drm_plane. Below I'll give a bit of background on HDR and why we
+propose these interfaces.
+As an RFC doc this document is more verbose than what we would want from
+an eventual uAPI doc. This is intentional in order to ensure interested
+parties are all on the same page and to facilitate discussion if there
+is disagreement on aspects of the intentions behind the proposed uAPI.
I would recommend keeping the discussion parts of the document as well, but if you think they hurt the readability of the uAPI specification, then split things into normative and informative sections.
+Overview and background
+=======================
+I highly recommend you read `Jeremy Cline's HDR primer`_
+Jeremy Cline did a much better job describing this. I highly recommend
+you read it at [1]:
+.. _Jeremy Cline's HDR primer: https://www.jcline.org/blog/fedora/graphics/hdr/2021/05/07/hdr-in-linux-p1.h...
That's a nice write-up I didn't know about, thanks.
I just wish such write-ups would be somehow peer-reviewed for correctness and curated for proper referencing. Perhaps like we develop code: at least some initial peer review and then fixes when anyone notices something to improve. Like... what you are doing here! :-)
The post is perhaps a bit too narrow with OETF/EOTF terms, accidentally implying that OETF = EOTF^-1 which is not generally true, but that all depends on which O-to-E or E-to-O functions one is talking about. Particularly there is a difference between functions used for signal compression which needs an exact matching inverse function, and functions containing tone-mapping and artistic effects that when concatenated result in the (non-identity) OOTF.
Nothing in the post seems to disagree with my current understanding, FWIW.
+Defining a pixel's luminance
+----------------------------
+The luminance space of pixels in a framebuffer/plane presented to the
+display is not well defined in the DRM/KMS APIs. It is usually assumed to
+be in a 2.2 or 2.4 gamma space and has no mapping to an absolute luminance
+value; it is interpreted in relative terms.
+Luminance can be measured and described in absolute terms as candela
+per meter squared, or cd/m2, or nits. Even though a pixel value can be
+mapped to luminance in a linear fashion, doing so without losing a lot of
+detail requires 16-bpc color depth. The reason for this is that human
+perception can distinguish a luminance delta of roughly 0.5-1%. A
+linear representation is suboptimal, wasting precision in the highlights
+and losing precision in the shadows.
+A gamma curve is a decent approximation to a human's perception of
+luminance, but the `PQ (perceptual quantizer) function`_ improves on
+it. It also defines the luminance values in absolute terms, with the
+highest value being 10,000 nits and the lowest 0.0005 nits.
+Using content that's defined in PQ space we can approximate the real
+world in a much better way.
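The PQ curve can be sketched directly from the ST 2084 constants. The following is an illustrative Python transcription of the encode/decode pair, not the fixed-point form a driver would use:

```python
# ST 2084 (PQ) constants
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_inverse_eotf(nits):
    """Encode an absolute luminance (0..10000 nits) to a PQ signal in 0..1."""
    y = nits / 10000.0
    yp = y ** M1
    return ((C1 + C2 * yp) / (1 + C3 * yp)) ** M2

def pq_eotf(signal):
    """Decode a PQ signal in 0..1 back to absolute luminance in nits."""
    ep = signal ** (1 / M2)
    y = (max(ep - C1, 0.0) / (C2 - C3 * ep)) ** (1 / M1)
    return 10000.0 * y
```

For example, 100 nits encodes to a signal value of roughly 0.508, i.e. SDR white already sits near the middle of the PQ code range.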
Or HLG. It is said that HLG puts the OOTF in the display, while in a PQ system OOTF is baked into the transmission. However, a monitor that consumes PQ will likely do some tone-mapping to fit it to the display capabilities, so it is adding an OOTF of its own. In a HLG system I would think artistic adjustments are done before transmission baking them in, adding its own OOTF in addition to the sink OOTF. So both systems necessarily have some O-O mangling on both sides of transmission.
There is a HLG presentation at https://www.w3.org/Graphics/Color/Workshop/talks.html#intro
+Here are some examples of real-life objects and their approximate
+luminance values:
+.. _PQ (perceptual quantizer) function: https://en.wikipedia.org/wiki/High-dynamic-range_video#Perceptual_Quantizer
+.. flat-table::
+   :header-rows: 1
+
+   * - Object
+     - Luminance in nits
+   * - Fluorescent light
+     - 10,000
+   * - Highlights
+     - 1,000 - sunlight
Did fluorescent and highlights get swapped here?
+   * - White Objects
+     - 250 - 1,000
+   * - Typical Objects
+     - 1 - 250
+   * - Shadows
+     - 0.01 - 1
+   * - Ultra Blacks
+     - 0 - 0.0005
+Transfer functions
+------------------
+Traditionally we used the terms gamma and de-gamma to describe the
+encoding of a pixel's luminance value and the operation to transfer from
+a linear luminance space to the non-linear space used to encode the
+pixels. Since some newer encodings don't use a gamma curve I suggest
+we refer to non-linear encodings using the terms `EOTF, and OETF`_, or
+simply as transfer function in general.
Yeah, gamma could mean lots of things. If you have e.g. OETF gamma 1/2.2 and EOTF gamma 2.4, the result is OOTF gamma 1.09.
OETF, EOTF and OOTF are not unambiguous either, since there is always the question of whose function is it.
Two different EOTFs are of interest in composition for display: - the display EOTF (since display signal is electrical) - the content EOTF (since content is stored in electrical encoding)
+The EOTF (Electro-Optical Transfer Function) describes how to transfer
+from an electrical signal to an optical signal. This was traditionally
+done by the de-gamma function.
+The OETF (Opto Electronic Transfer Function) describes how to transfer
+from an optical signal to an electronic signal. This was traditionally
+done by the gamma function.
+More generally we can name the transfer function describing the transform
+between scanout and blending space as the **input transfer function**, and
"scanout space" makes me think of cable/signal values, not framebuffer values. Or, I'm not sure. I'd recommend replacing the term "scanout space" with something less ambiguous like framebuffer values.
+the transfer function describing the transform from blending space to the
+output space as **output transfer function**.
You're talking about "spaces" here, but what you are actually talking about are value encodings, not (color) spaces. An EOTF or OETF is not meant to modify the color space.
When talking about blending, what you're actually interested in is linear vs. non-linear color value encoding. This matches your talk about EOTF and OETF, although you need to be careful to specify which EOTF and OETF you mean. For blending, color values need to be linear in light intensity, and the inverse of the E-to-O mapping before blending is exactly the same as the O-to-E mapping after blending. Otherwise you would alter even opaque pixels.
OETF is often associated with cameras, not displays. Maybe use EOTF^-1 instead?
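The constraint above is easy to demonstrate: as long as the post-blend encoding is the exact inverse of the pre-blend decoding, a fully opaque pixel passes through blending unchanged, while any mismatched pair shifts it. A sketch with hypothetical power-law curves standing in for a real EOTF:

```python
# Hypothetical matched pair: power-law curves standing in for a real EOTF.
def eotf(encoded):          # electrical -> optical (linearize before blending)
    return encoded ** 2.2

def eotf_inverse(linear):   # optical -> electrical (re-encode after blending)
    return linear ** (1 / 2.2)

src = 0.3                                # fully opaque pixel, encoded value
out_matched = eotf_inverse(eotf(src))    # alpha = 1.0 blend is a no-op
out_mismatched = eotf(src) ** (1 / 2.4)  # re-encoding with the wrong curve shifts it
```

With the matched pair the opaque pixel comes back bit-identical (up to float rounding); re-encoding with gamma 2.4 instead of 2.2 visibly brightens it.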
Btw. another terminology thing: color space vs. color model. RGB and YCbCr are color models. sRGB, BT.601 and BT.2020 are color spaces. These two are orthogonal concepts.
+.. _EOTF, and OETF: https://en.wikipedia.org/wiki/Transfer_functions_in_imaging
+Mastering Luminances
+--------------------
+Even though we are able to describe the absolute luminance of a pixel
+using the PQ 2084 EOTF we are presented with physical limitations of the
+display technologies on the market today. Here are a few examples of
+luminance ranges of displays.
+.. flat-table::
+   :header-rows: 1
+
+   * - Display
+     - Luminance range in nits
+   * - Typical PC display
+     - 0.3 - 200
+   * - Excellent LCD HDTV
+     - 0.3 - 400
+   * - HDR LCD w/ local dimming
+     - 0.05 - 1,500
+Since no display can currently show the full 0.0005 to 10,000 nits
+luminance range of PQ the display will need to tone-map the HDR content,
+i.e. to fit the content within a display's capabilities. To assist
+with tone-mapping, HDR content is usually accompanied by metadata
+that describes (among other things) the minimum and maximum mastering
+luminance, i.e. the maximum and minimum luminance of the display that
+was used to master the HDR content.
+The HDR metadata is currently defined on the drm_connector via the
+hdr_output_metadata blob property.
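To make the tone-mapping step concrete: given the mastering luminance from the metadata and the display's peak capability, a sink (or driver) might apply something like an extended-Reinhard roll-off. This is only one illustrative operator of many; real displays apply proprietary curves:

```python
def tonemap_extended_reinhard(nits, mastering_max_nits, display_max_nits):
    """Compress [0, mastering_max] into [0, display_max]: near-identity for
    dark content, rolling off the highlights. Illustrative only."""
    l = nits / display_max_nits
    l_white = mastering_max_nits / display_max_nits
    mapped = l * (1 + l / (l_white * l_white)) / (1 + l)
    return mapped * display_max_nits
```

With 4,000-nit mastered content on a 1,000-nit display, the mastering peak maps exactly to the display peak while a 10-nit shadow detail passes through nearly unchanged.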
HDR_OUTPUT_METADATA, all caps.
+It might be useful to define per-plane hdr metadata, as different planes
+might have been mastered differently.
+.. _SDR Luminance:
+SDR Luminance
+-------------
+Traditional SDR content's maximum white luminance is not well defined.
+Some like to define it at 80 nits, others at 200 nits. It also depends
+to a large extent on the environmental viewing conditions. In practice
+this means that we need to define the maximum SDR white luminance, either
+in nits, or as a ratio.
+`One Windows API`_ defines it as a ratio against 80 nits.
+`Another Windows API`_ defines it as a nits value.
+The `Wayland color management proposal`_ uses Apple's definition of EDR as a
+ratio of the HDR range vs SDR range.
+If a display's maximum HDR white level is correctly reported it is trivial
+to convert between all of the above representations of SDR white level. If
+it is not, defining SDR luminance as a nits value, or a ratio vs a fixed
+nits value is preferred, assuming we are blending in linear space.
+It is our experience that many HDR displays do not report maximum white
+level correctly.
Which value do you refer to as "maximum white", and how did you measure it?
You also need to define who is "us" since kernel docs tend to get lots of authors over time.
+.. _One Windows API: https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/dispmprt/ns-di...
+.. _Another Windows API: https://docs.microsoft.com/en-us/uwp/api/windows.graphics.display.advancedco...
+.. _Wayland color management proposal: https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable...
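Converting between the three representations is indeed trivial once the relevant white levels are known. A sketch (the function names are mine, not taken from any of the linked APIs):

```python
SDR_REF_NITS = 80.0  # reference white assumed by the ratio-based definition

def sdr_white_ratio_to_nits(ratio):
    """Ratio against an 80-nit reference -> absolute nits."""
    return ratio * SDR_REF_NITS

def sdr_white_nits_to_ratio(nits):
    """Absolute nits -> ratio against an 80-nit reference."""
    return nits / SDR_REF_NITS

def edr_headroom(display_max_nits, sdr_white_nits):
    """EDR-style value: how much HDR range sits above SDR white."""
    return display_max_nits / sdr_white_nits
```

For example, a 200-nit SDR white is a ratio of 2.5 against 80 nits, and on a 1,000-nit display it leaves an EDR headroom of 5.0. All of this falls apart, as the text notes, when the display misreports its maximum white level.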
+Let There Be Color
+------------------
+So far we've only talked about luminance, ignoring colors altogether. Just
+like in the luminance space, traditionally the color space of display
+outputs has not been well defined. Similar to how an EOTF defines a
+mapping of pixel data to an absolute luminance value, the color space
+maps color information for each pixel onto the CIE 1931 chromaticity
+space. This can be thought of as a mapping to an absolute, real-life,
+color value.
+A color space is defined by its primaries and white point. The primaries
+and white point are expressed as coordinates in the CIE 1931 color
+space. Think of the red primary as the reddest red that can be displayed
+within the color space. Same for green and blue.
+Examples of color spaces are:
+.. flat-table::
+   :header-rows: 1
+
+   * - Color Space
+     - Description
+   * - BT 601
+     - similar to BT 709
+   * - BT 709
+     - used by sRGB content; ~53% of BT 2020
+   * - DCI-P3
+     - used by most HDR displays; ~72% of BT 2020
+   * - BT 2020
+     - standard for most HDR content
+Color Primaries and White Point
+-------------------------------
+Just like displays can currently not represent the entire 0.0005 -
+10,000 nits HDR range of the PQ 2084 EOTF, they are currently not capable
"PQ" or "ST 2084".
+of representing the entire BT.2020 color gamut. For this reason video
+content will often specify the color primaries and white point used to
+master the video, in order to allow displays to be able to map the image
+as best as possible onto the display's gamut.
+Displays and Tonemapping
+------------------------
+External displays are able to do their own tone and color mapping, based
+on the mastering luminance, color primaries, and white point defined in
+the HDR metadata.
HLG does things differently wrt. metadata and tone-mapping than PQ.
+Some internal panels might not include the complex HW to do tone and color
+mapping on their own and will require the display driver to perform
+appropriate mapping.
+How are we solving the problem?
+===============================
+Single-plane
+------------
+If a single drm_plane is used no further work is required. The compositor
+will provide one HDR plane alongside a drm_connector's hdr_output_metadata
+and the display HW will output this plane without further processing if
+no CRTC LUTs are provided.
+If desired a compositor can use the CRTC LUTs for HDR content but without
+support for PWL or multi-segmented LUTs the quality of the operation is
+expected to be subpar for HDR content.
Explain/expand PWL.
Do you have references to these subpar results? I'm interested in when and how they appear. I may want to use that information to avoid using KMS LUTs when they are inadequate.
+Multi-plane
+-----------
+In multi-plane configurations we need to solve the problem of blending
+HDR and SDR content. This blending should be done in linear space and
+therefore requires framebuffer data that is presented in linear space
+or a way to convert non-linear data to linear space. Additionally
+we need a way to define the luminance of any SDR content in relation
+to the HDR content.
+In order to present framebuffer data in linear space without losing a
+lot of precision it needs to be presented using 16 bpc precision.
Integer or floating-point?
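The integer-versus-floating-point question above matters a great deal. A back-of-the-envelope sketch, assuming a 10,000-nit range and the roughly 0.5-1% visibility threshold mentioned earlier in the doc:

```python
def rel_step_linear_uint(bits, nits, max_nits=10000.0):
    """Relative size of one code step of an unsigned-integer linear encoding,
    evaluated at a given luminance. Steps below ~0.005-0.01 are invisible."""
    step_nits = max_nits / (2 ** bits - 1)
    return step_nits / nits

# FP16 keeps ~11 significant bits at every magnitude, so its relative step
# stays roughly 2**-11 (~0.05%) across the whole range.
FP16_REL_STEP = 2.0 ** -11

shadow_err_16bit_int = rel_step_linear_uint(16, 0.1)   # >100% error in shadows
bright_err_16bit_int = rel_step_linear_uint(16, 5000)  # fine in highlights
```

A 16-bit *integer* linear encoding over a 10,000-nit range has a code step of ~0.15 nits, which is larger than the entire signal in deep shadows, while FP16 stays well below the visibility threshold everywhere. This is one reason FP16 linear framebuffers are attractive for this use case.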
+Defining HW Details
+-------------------
+One way to take full advantage of modern HW's color pipelines is by
+defining a "generic" pipeline that matches all capable HW. Something
+like this, which I took `from Uma Shankar`_ and expanded on:
+.. _from Uma Shankar: https://patchwork.freedesktop.org/series/90826/
+.. kernel-figure:: colorpipe.svg
Btw. there will be interesting issues with alpha-premult, filtering, and linearisation if your planes have alpha channels. That's before HDR is even considered.
+I intentionally put de-Gamma and Gamma in parentheses in my graph
+as they describe the intention of the block but not necessarily a
+strict definition of how a userspace implementation is required to
+use them.
+De-Gamma and Gamma blocks are named LUT, but they could be non-programmable
+LUTs in some HW implementations with no programmable LUT available. See
+the definitions for AMD's `latest dGPU generation`_ as an example.
+.. _latest dGPU generation: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/driver...
+I renamed the "Plane Gamma LUT" and "CRTC De-Gamma LUT" to "Tonemapping"
+as we generally don't want to re-apply gamma before blending, or do
+de-gamma post blending. These blocks are generally intended for
+tonemapping purposes.
Right.
+Tonemapping in this case could be a simple nits value or `EDR`_ to describe
+how to scale the :ref:`SDR luminance`.
I do wonder how that will turn out in the end... but on Friday there will be HDR Compositing and Tone-mapping live Q&A session: https://www.w3.org/Graphics/Color/Workshop/talks.html#compos
+Tonemapping could also include the ability to use a 3D LUT which might be
+accompanied by a 1D shaper LUT. The shaper LUT is required in order to
+ensure a 3D LUT with limited entries (e.g. 9x9x9, or 17x17x17) operates
+in perceptual (non-linear) space, so as to spread the limited
+entries evenly across the perceived space.
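The effect of a PQ-style shaper is easy to see numerically: spacing the 17 nodes of a 17x17x17 LUT evenly in PQ signal places most of them where the eye is most sensitive. An illustrative calculation using the ST 2084 decode:

```python
# ST 2084 (PQ) constants
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_eotf(signal):
    """Decode a PQ signal in 0..1 to absolute luminance in nits."""
    ep = signal ** (1 / M2)
    y = (max(ep - C1, 0.0) / (C2 - C3 * ep)) ** (1 / M1)
    return 10000.0 * y

# 17 LUT nodes spaced evenly in PQ signal land at these luminances (in nits):
nodes = [pq_eotf(i / 16) for i in range(17)]

first_interval = nodes[1] - nodes[0]   # a fraction of a nit: dense in shadows
last_interval = nodes[16] - nodes[15]  # thousands of nits: sparse in highlights
```

The first interval covers about a tenth of a nit while the last spans several thousand nits; evenly spaced *linear* nodes would instead waste almost the entire grid on the highlights.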
+.. _EDR: https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable...
+Creating a model that is flexible enough to define color pipelines for
+a wide variety of HW is challenging, though not impossible. Implementing
+support for such a flexible definition in userspace, though, amounts
+to essentially writing color pipeline drivers for each HW.
My thinking right now is that userspace has its own pipeline model with the elements it must have. Then it attempts to map that pipeline to what elements the KMS pipeline happens to expose. If there is a mapping, good. If not, fall back to shaders on GPU.
To help that succeed more often, I'm using the current KMS abstract pipeline as a guide in designing the Weston internal color pipeline.
+Defining SW Intentions
+----------------------
+An alternative to describing the HW color pipeline in enough detail to
+be useful for color management and HDR purposes is to instead define
+SW intentions.
+.. kernel-figure:: color_intentions.svg
+This greatly simplifies the API and lets the driver do what a driver
+does best: figure out how to program the HW to achieve the desired
+effect.
+The above diagram could include white point, primaries, and maximum
+peak and average white levels in order to facilitate tone mapping.
+At this point I suggest keeping tonemapping (other than an SDR luminance
+adjustment) out of the current DRM/KMS API. Most HDR displays are capable
+of tonemapping. If for some reason tonemapping is still desired on
+a plane, a shader might be a better way of doing that instead of relying
+on display HW.
"Non-programmable LUT" as you referred to them is an interesting departure from the earlier suggestion, where you intended to describe color spaces and encodings of content and display and let the hardware do whatever wild magic in between. Now it seems like you have shifted to programming transformations instead. They may be programmable or enumerated, but still transformations rather than source and destination descriptions. If the enumerated transformations follow standards, even better.
I think this is a step in the right direction.
However, you wrote in the heading "Intentions" which sounds like your old approach.
Conversion from one additive linear color space to another is a matter of matrix multiplication. That is simple and easy to define, just load a matrix. The problem is gamut mapping: you may end up outside of the destination gamut, or maybe you want to use more of the destination gamut than what the color space definitions imply. There are many conflicting goals and ways to do this, and I suspect the room for secret sauce is here (and in tone-mapping).
There is also a difference between color space (signal) gamut and device gamut. A display may accept BT.2020 signal, but the gamut it can show is usually much less.
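The "just load a matrix" part can be made concrete. The sketch below derives RGB-to-XYZ matrices from the BT.709 and BT.2020 primaries and the D65 white point and chains them into a BT.709-to-BT.2020 conversion; it deliberately says nothing about the gamut-mapping problem raised above:

```python
def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_inv(m):
    # 3x3 inverse via the adjugate
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    return [[adj[r][s] / det for s in range(3)] for r in range(3)]

def rgb_to_xyz(primaries, white):
    # Columns are the XYZ of each primary (unit Y), scaled so that
    # RGB = (1, 1, 1) lands exactly on the white point.
    cols = [[x / y, 1.0, (1 - x - y) / y] for x, y in primaries]
    m = [[cols[j][i] for j in range(3)] for i in range(3)]
    xw, yw = white
    s = mat_vec(mat_inv(m), [xw / yw, 1.0, (1 - xw - yw) / yw])
    return [[m[i][j] * s[j] for j in range(3)] for i in range(3)]

BT709 = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
BT2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]
D65 = (0.3127, 0.3290)

# linear BT.709 RGB -> linear BT.2020 RGB
M_709_TO_2020 = mat_mul(mat_inv(rgb_to_xyz(BT2020, D65)), rgb_to_xyz(BT709, D65))
```

The resulting matrix matches the one specified in ITU-R BT.2087 (top row approximately 0.6274, 0.3293, 0.0433), and it maps white to white exactly since both spaces share the D65 white point.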
+In some ways this mirrors how various userspace APIs treat HDR:
- Gstreamer's `GstVideoTransferFunction`_
- EGL's `EGL_EXT_gl_colorspace_bt2020_pq`_ extension
- Vulkan's `VkColorSpaceKHR`_
+.. _GstVideoTransferFunction: https://gstreamer.freedesktop.org/documentation/video/video-color.html?gi-la...
+.. _EGL_EXT_gl_colorspace_bt2020_pq: https://www.khronos.org/registry/EGL/extensions/EXT/EGL_EXT_gl_colorspace_bt...
+.. _VkColorSpaceKHR: https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/vkspec.htm...
+A hybrid approach to the API +----------------------------
+We currently attempt a hybrid approach, defining API to specify +input and output transfer functions, as well as an SDR boost and an +input color space definition.
Using a color space definition in the KMS UAPI brings us back to the old problem.
Using descriptions of content (color spaces) instead of prescribing transformations seems to be designed to allow vendors to make use of their secret hardware sauce: how to best realise the intent. Since it is secret sauce, by definition it cannot be fully replicated in software or shaders. One might even get sued for succeeding.
General purpose (read: desktop) compositors need to adapt to any scenegraph and they want to make the most of the hardware under all situations. This means that it is not possible to guarantee that a certain window is always going to be using a KMS plane. Maybe a small change in the scenegraph, a moving window or cursor, suddenly causes the KMS plane to become unsuitable for the window, or in the opposite case the KMS plane suddenly becomes available for the window. This means that a general purpose compositor will be doing frame-by-frame decisions on which window to put on which KMS plane, and which windows need to be composited with shaders.
Not being able to replicate what the hardware does means that shaders cannot produce the same image on screen as the KMS plane would. When KMS plane assignments change, the window appearance would change as well. I imagine end users would be complaining of such glitches.
However, there are other use cases where I can imagine this descriptive design working perfectly. Any non-general, non-desktop compositor, or a closed system, could probably guarantee that the scenegraph will always map in a specific way to the KMS planes. The window would always map to the KMS plane, meaning that it would never need to be composited with shaders, and therefore cannot change color unexpectedly from end user point of view. TVs, set-top-boxes, etc., maybe even phones. Some use cases have a hard requirement of putting a specific window on a specific KMS plane, or the system simply cannot display it (performance, protection...).
Is it worth having two fundamentally different KMS UAPIs for HDR composition support, where one interface supports only a subset of use cases and the other (per-plane LUT, CTM, LUT, and more, freely programmable by userspace) supports all use cases?
That's a genuine question. Are the benefits worth the kernel developers' efforts to design, implement, and forever maintain both mutually exclusive interfaces?
Now, someone might say that the Wayland protocol design for HDR aims to be descriptive and not prescriptive, so why should KMS UAPI be different? The reason is explained above: *some* KMS clients may switch frame by frame between KMS and shaders, but Wayland clients pick one path and stick to it. Wayland clients have no reason that I can imagine to switch arbitrarily in flight.
+We would like to solicit feedback and encourage discussion around the +merits and weaknesses of these approaches. This question is at the core +of defining a good API and we'd like to get it right.
+Input and Output Transfer functions +-----------------------------------
+We define an input transfer function on drm_plane to describe the +transform from framebuffer to blending space.
+We define an output transfer function on drm_crtc to describe the +transform from blending space to display space.
Here is again the terminology problem between transfer function and (color) space.
+The transfer function can be a pre-defined function, such as PQ EOTF, or +a custom LUT. A driver will be able to specify support for specific +transfer functions, including custom ones.
This sounds good.
+Defining the transfer function in this way allows us to support it on HW +that uses ROMs to implement these transforms, as well as on HW that uses +LUT definitions that are complex and don't map easily onto a standard LUT +definition.
+We will not define per-plane LUTs in this patchset as the scope of our +current work only deals with pre-defined transfer functions. This API has +the flexibility to add custom 1D or 3D LUTs at a later date.
Ok.
+In order to support the existing 1D de-gamma and gamma LUTs on the drm_crtc +we will include a "custom 1D" enum value to indicate that the custom gamma and +de-gamma 1D LUTs should be used.
Sounds fine.
+Possible transfer functions:
+.. flat-table::
- :header-rows: 1
- Transfer Function
- Description
- Gamma 2.2
- a simple 2.2 gamma function
- sRGB
- 2.4 gamma with small initial linear section
Maybe rephrase to: The piece-wise sRGB transfer function with the small initial linear section, approximately corresponding to 2.4 gamma function.
I recall some debate, too, whether with a digital flat panel you should use a pure 2.4 gamma function or the sRGB function. (Which one do displays expect?)
- PQ 2084
- SMPTE ST 2084; used for HDR video and allows for up to 10,000 nit support
Perceptual Quantizer (PQ), or ST 2084. There is no PQ 2084.
- Linear
- Linear relationship between pixel value and luminance value
- Custom 1D
- Custom 1D de-gamma and gamma LUTs; one LUT per color
- Custom 3D
- Custom 3D LUT (to be defined)
Adding HLG transfer function to this set would be interesting, because it requires a parameter I believe. How would you handle parameterised transfer functions?
It's worth noting that while PQ is absolute in luminance (providing cd/m² values), everything else here is relative for both SDR and HDR. You cannot blend content in PQ with content in something else together, until you practically define the absolute luminance for all non-PQ content or vice versa.
A further complication is that you could have different relative-luminance transfer functions, meaning that the (absolute) luminance they are relative to varies. The obvious case is blending SDR content with HDR content when both have relative-luminance transfer function.
Then you have HLG which is more like scene-referred than display-referred, but that might be solved with the parameter I mentioned, I'm not quite sure.
PQ is said to be display-referred, but it's usually referred to someone else's display than yours, which means it needs the HDR metadata to be able to tone-map suitably to your display. This seems to be a similar problem as with signal gamut vs. device gamut.
The traditional relative-luminance transfer functions, well, the content implied by them, is display-referred by the time it arrives at the KMS or compositor level. There the question of "whose display" doesn't matter much because it's SDR and narrow gamut, and we probably don't even notice when we see an image wrong. With HDR the mismatch might be noticeable.
+Describing SDR Luminance +------------------------------
+Since many displays do not correctly advertise the HDR white level we +propose to define the SDR white level in nits.
This means that even if you had no content using PQ, you still need to define the absolute luminance for all the (HDR) relative-luminance transfer functions.
There probably needs to be something to relate everything to a single, relative or absolute, luminance range. That is necessary for any composition (KMS and software) since the output is a single image.
Is it better to go with relative or absolute metrics? Right now I would tend to say relative, because relative is unitless. Absolute values are numerically equivalent, but they might not have anything to do with actual physical measurements, making them actually relative. This happens when your monitor does not support PQ mode or does tone-mapping to your image, for instance.
The concept we have played with in Wayland so far is EDR, but then you have the question of "what does zero mean", i.e. the luminance of darkest black could vary between contents as well, not just the luminance of extreme white.
+We define a new drm_plane property to specify the white level of an SDR +plane.
+Defining the color space +------------------------
+We propose to add a new color space property to drm_plane to define a +plane's color space.
+While some color space conversions can be performed with a simple color +transformation matrix (CTM) others require a 3D LUT.
+Defining mastering color space and luminance +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ToDo
+Pixel Formats +~~~~~~~~~~~~~
+The pixel formats, such as ARGB8888, ARGB2101010, P010, or FP16, are +unrelated to color space and EOTF definitions. HDR pixels can be formatted
Yes!
+in different ways but in order to not lose precision HDR content requires +at least 10 bpc precision. For this reason ARGB2101010, P010, and FP16 are +the obvious candidates for HDR. ARGB2101010 and P010 have the advantage +of requiring only half the bandwidth of FP16, while FP16 has the advantage +of enough precision to operate in a linear space, i.e. without an EOTF.
This reminds me of something interesting said during the W3C WCG & HDR Q&A session yesterday. Unfortunately I forget his name (transcriptions should become available at some point), but someone said that pixel depth or bit precision should be thought of as setting the noise floor. When you quantize values, always do dithering. Then the precision only changes your noise floor level. Then something about how audio realized this ages ago and we are just catching up.
If you don't dither, you get banding artifacts in gradients. If you do dither, it's just noise.
+Use Cases +=========
+RGB10 HDR plane - composited HDR video & desktop +------------------------------------------------
+A single, composited plane of HDR content. The use-case is a video player +on a desktop with the compositor owning the composition of SDR and HDR +content. The content shall be PQ BT.2020 formatted. The drm_connector's +hdr_output_metadata shall be set.
+P010 HDR video plane + RGB8 SDR desktop plane +--------------------------------------------- +A normal 8bpc desktop plane, with a P010 HDR video plane underlayed. The +HDR plane shall be PQ BT.2020 formatted. The desktop plane shall specify +an SDR boost value. The drm_connector's hdr_output_metadata shall be set.
+One XRGB8888 SDR Plane - HDR output +-----------------------------------
+In order to support a smooth transition we recommend that an OS supporting +HDR output provide the hdr_output_metadata on the drm_connector to +configure the output for HDR, even when the content is only SDR. This +allows a smooth transition between SDR-only and HDR content. In this
Agreed, but this also kind of contradicts the idea of pushing HDR metadata from video all the way to the display in the RGB10 HDR plane case - something you do not seem to suggest here at all, but I would have expected that to be a prime use case for you.
A set-top-box might want to push the video HDR metadata all the way to the display when supported, and then adapt all the non-video graphics to that.
Thanks, pq
+use-case the SDR max luminance value should be provided on the drm_plane.
+In DCN we will de-PQ or de-Gamma all input in order to blend in linear +space. For SDR content we will also apply any desired boost before +blending. After blending we will then re-apply the inverse PQ EOTF and do +RGB to YCbCr conversion if needed.
+FP16 HDR linear planes +----------------------
+These will require a transformation into the display's encoding (e.g. PQ) +using the CRTC LUT. Current CRTC LUTs lack the precision in the +dark areas to do the conversion without losing detail.
+One of the newly defined output transfer functions or a PWL or `multi-segmented +LUT`_ can be used to facilitate the conversion to PQ, HLG, or another +encoding supported by displays.
+.. _multi-segmented LUT: https://patchwork.freedesktop.org/series/90822/
+User Space +==========
+Gnome & GStreamer +-----------------
+See Jeremy Cline's `HDR in Linux: Part 2`_.
+.. _HDR in Linux: Part 2: https://www.jcline.org/blog/fedora/graphics/hdr/2021/06/28/hdr-in-linux-p2.h...
+Wayland +-------
+See `Wayland Color Management and HDR Design Goals`_.
+.. _Wayland Color Management and HDR Design Goals: https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable...
+ChromeOS Ozone +--------------
+ToDo
+HW support +==========
+ToDo, describe pipeline on a couple different HW platforms
+Further Reading +===============
+* https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable... +* http://downloads.bbc.co.uk/rd/pubs/whp/whp-pdf-files/WHP309.pdf +* https://app.spectracal.com/Documents/White%20Papers/HDR_Demystified.pdf +* https://www.jcline.org/blog/fedora/graphics/hdr/2021/05/07/hdr-in-linux-p1.h... +* https://www.jcline.org/blog/fedora/graphics/hdr/2021/06/28/hdr-in-linux-p2.h...
diff --git a/Documentation/gpu/rfc/index.rst b/Documentation/gpu/rfc/index.rst index 05670442ca1b..8d8430cfdde1 100644 --- a/Documentation/gpu/rfc/index.rst +++ b/Documentation/gpu/rfc/index.rst @@ -19,3 +19,4 @@ host such documentation: .. toctree::
i915_gem_lmem.rst
+ hdr-wide-gamut.rst
On Wed, 2021-09-15 at 17:01 +0300, Pekka Paalanen wrote:
On Fri, 30 Jul 2021 16:41:29 -0400 Harry Wentland harry.wentland@amd.com wrote:
Use the new DRM RFC doc section to capture the RFC previously only described in the cover letter at https://patchwork.freedesktop.org/series/89506/
v3: * Add sections on single-plane and multi-plane HDR * Describe approach to define HW details vs approach to define SW intentions * Link Jeremy Cline's excellent HDR summaries * Outline intention behind overly verbose doc * Describe FP16 use-case * Clean up links
v2: create this doc
v1: n/a
Signed-off-by: Harry Wentland harry.wentland@amd.com
Hi Harry,
I finally managed to go through this, comments below. Excellent to have pictures included. I wrote this reply over several days, sorry if it's not quite coherent.
<snip>
+Overview and background +=======================
+I highly recommend you read `Jeremy Cline's HDR primer`_
+.. _Jeremy Cline's HDR primer: https://www.jcline.org/blog/fedora/graphics/hdr/2021/05/07/hdr-in-linux-p1.h...
That's a nice write-up I didn't know about, thanks.
I just wish such write-ups would be somehow peer-reviewed for correctness and curated for proper referencing. Perhaps like we develop code: at least some initial peer review and then fixes when anyone notices something to improve. Like... what you are doing here! :-)
The post is perhaps a bit too narrow with OETF/EOTF terms, accidentally implying that OETF = EOTF^-1 which is not generally true, but that all depends on which O-to-E or E-to-O functions one is talking about. Particularly there is a difference between functions used for signal compression which needs an exact matching inverse function, and functions containing tone-mapping and artistic effects that when concatenated result in the (non-identity) OOTF.
Nothing in the post seems to disagree with my current understanding FWIW.
I'm more than happy to update things that are incorrect or mis-leading since the last thing I want to do is muddy the waters. Personally, I would much prefer that any useful content from it be peer-reviewed and included directly in the documentation since, well, it's being hosted out of my laundry room and the cats have a habit of turning off the UPS...
Do let me know if I can be of any assistance there; I'm no longer employed to do anything HDR-related, but I do like clear documentation so I could dedicate a bit of free time to it.
- Jeremy
On 2021-09-15 10:01, Pekka Paalanen wrote:
On Fri, 30 Jul 2021 16:41:29 -0400
Harry Wentland harry.wentland@amd.com wrote:
Use the new DRM RFC doc section to capture the RFC previously only described in the cover letter at https://patchwork.freedesktop.org/series/89506/
v3:
- Add sections on single-plane and multi-plane HDR
- Describe approach to define HW details vs approach to define SW intentions
- Link Jeremy Cline's excellent HDR summaries
- Outline intention behind overly verbose doc
- Describe FP16 use-case
- Clean up links
v2: create this doc
v1: n/a
Signed-off-by: Harry Wentland harry.wentland@amd.com
Hi Harry,
I finally managed to go through this, comments below. Excellent to have pictures included. I wrote this reply over several days, sorry if it's not quite coherent.
Hi Pekka,
Thanks for taking the time to go through this.
My reply is also a multi-day endeavor (due to other interruptions) so please bear with me as well if it looks a bit disjointed in places.
Documentation/gpu/rfc/color_intentions.drawio | 1 + Documentation/gpu/rfc/color_intentions.svg | 3 + Documentation/gpu/rfc/colorpipe | 1 + Documentation/gpu/rfc/colorpipe.svg | 3 + Documentation/gpu/rfc/hdr-wide-gamut.rst | 580 ++++++++++++++++++ Documentation/gpu/rfc/index.rst | 1 + 6 files changed, 589 insertions(+) create mode 100644 Documentation/gpu/rfc/color_intentions.drawio create mode 100644 Documentation/gpu/rfc/color_intentions.svg create mode 100644 Documentation/gpu/rfc/colorpipe create mode 100644 Documentation/gpu/rfc/colorpipe.svg create mode 100644 Documentation/gpu/rfc/hdr-wide-gamut.rst
...
diff --git a/Documentation/gpu/rfc/hdr-wide-gamut.rst b/Documentation/gpu/rfc/hdr-wide-gamut.rst new file mode 100644 index 000000000000..e463670191ab --- /dev/null +++ b/Documentation/gpu/rfc/hdr-wide-gamut.rst @@ -0,0 +1,580 @@ +============================== +HDR & Wide Color Gamut Support +==============================
+.. role:: wy-text-strike
+ToDo +====
+* :wy-text-strike:`Reformat as RST kerneldoc` - done +* :wy-text-strike:`Don't use color_encoding for color_space definitions` - done +* :wy-text-strike:`Update SDR luminance description and reasoning` - done +* :wy-text-strike:`Clarify 3D LUT required for some color space transformations` - done +* :wy-text-strike:`Highlight need for named color space and EOTF definitions` - done +* :wy-text-strike:`Define transfer function API` - done +* :wy-text-strike:`Draft upstream plan` - done +* :wy-text-strike:`Reference to wayland plan` - done +* Reference to Chrome plans +* Sketch view of HW pipeline for couple of HW implementations
+Upstream Plan +=============
+* Reach consensus on DRM/KMS API +* Implement support in amdgpu +* Implement IGT tests +* Add API support to Weston, ChromiumOS, or other canonical open-source project interested in HDR +* Merge user-space +* Merge kernel patches
The order is: review acceptance of userspace but don't merge, merge kernel, merge userspace.
Updated for v4
+History +=======
+v3:
+* Add sections on single-plane and multi-plane HDR +* Describe approach to define HW details vs approach to define SW intentions +* Link Jeremy Cline's excellent HDR summaries +* Outline intention behind overly verbose doc +* Describe FP16 use-case +* Clean up links
+v2: create this doc
+v1: n/a
+Introduction +============
+We are looking to enable HDR support for a couple of single-plane and +multi-plane scenarios. To do this effectively we recommend new interfaces +to drm_plane. Below I'll give a bit of background on HDR and why we +propose these interfaces.
+As an RFC doc this document is more verbose than what we would want from +an eventual uAPI doc. This is intentional in order to ensure interested +parties are all on the same page and to facilitate discussion if there +is disagreement on aspects of the intentions behind the proposed uAPI.
I would recommend keeping the discussion parts of the document as well, but if you think they hurt the readability of the uAPI specification, then split things into normative and informative sections.
Good point. Let me think how to organize this in a way that preserves readability of the spec and also preserves (key) discussions for posterity. The history behind an API can often be more informative than the API doc itself.
+Overview and background +=======================
+I highly recommend you read `Jeremy Cline's HDR primer`_
+.. _Jeremy Cline's HDR primer: https://www.jcline.org/blog/fedora/graphics/hdr/2021/05/07/hdr-in-linux-p1.h...
That's a nice write-up I didn't know about, thanks.
I just wish such write-ups would be somehow peer-reviewed for correctness and curated for proper referencing. Perhaps like we develop code: at least some initial peer review and then fixes when anyone notices something to improve. Like... what you are doing here! :-)
The post is perhaps a bit too narrow with OETF/EOTF terms, accidentally implying that OETF = EOTF^-1 which is not generally true, but that all depends on which O-to-E or E-to-O functions one is talking about. Particularly there is a difference between functions used for signal compression which needs an exact matching inverse function, and functions containing tone-mapping and artistic effects that when concatenated result in the (non-identity) OOTF.
Nothing in the post seems to disagree with my current understanding FWIW.
+Defining a pixel's luminance +----------------------------
+The luminance space of pixels in a framebuffer/plane presented to the +display is not well defined in the DRM/KMS APIs. It is usually assumed to +be in a 2.2 or 2.4 gamma space and has no mapping to an absolute luminance +value; it is interpreted in relative terms.
+Luminance can be measured and described in absolute terms as candela +per meter squared, or cd/m2, or nits. Even though a pixel value can be +mapped to luminance in a linear fashion, doing so without losing a lot of +detail requires 16-bpc color depth. The reason for this is that human +perception can distinguish luminance deltas of roughly 0.5-1%. A +linear representation is suboptimal, wasting precision in the highlights +and losing precision in the shadows.
+A gamma curve is a decent approximation to a human's perception of +luminance, but the `PQ (perceptual quantizer) function`_ improves on +it. It also defines the luminance values in absolute terms, with the +highest value being 10,000 nits and the lowest 0.0005 nits.
+Using content that's defined in PQ space we can approximate the real +world in a much better way.
Or HLG. It is said that HLG puts the OOTF in the display, while in a PQ system OOTF is baked into the transmission. However, a monitor that consumes PQ will likely do some tone-mapping to fit it to the display capabilities, so it is adding an OOTF of its own. In a HLG system I would think artistic adjustments are done before transmission baking them in, adding its own OOTF in addition to the sink OOTF. So both systems necessarily have some O-O mangling on both sides of transmission.
There is a HLG presentation at https://www.w3.org/Graphics/Color/Workshop/talks.html#intro
Thanks for sharing. I spent some time on Friday to watch them all and found them very informative, especially the HLG talk and the talk about linear vs composited HDR pipelines.
+Here are some examples of real-life objects and their approximate +luminance values:
+.. _PQ (perceptual quantizer) function: https://en.wikipedia.org/wiki/High-dynamic-range_video#Perceptual_Quantizer
+.. flat-table::
- :header-rows: 1
- Object
- Luminance in nits
- Fluorescent light
- 10,000
- Highlights
- 1,000 - sunlight
Did fluorescent and highlights get swapped here?
No, though at first glance it can look like that. This is pulled from an internal doc I didn't write, but I think the intention is to show that fluorescent lights can be up to 10,000 nits and highlights are usually 1,000+ nits.
I'll clarify this in v4.
A quick google search seems to show that there are even fluorescent lights with 46,000 nits. I guess these numbers provide a ballpark view more than anything.
- White Objects
- 250 - 1,000
- Typical Objects
- 1 - 250
- Shadows
- 0.01 - 1
- Ultra Blacks
- 0 - 0.0005
+Transfer functions +------------------
+Traditionally we used the terms gamma and de-gamma to describe the +encoding of a pixel's luminance value and the operation to transfer from +a linear luminance space to the non-linear space used to encode the +pixels. Since some newer encodings don't use a gamma curve I suggest +we refer to non-linear encodings using the terms `EOTF, and OETF`_, or +simply as transfer function in general.
Yeah, gamma could mean lots of things. If you have e.g. OETF gamma 1/2.2 and EOTF gamma 2.4, the result is OOTF gamma 1.09.
OETF, EOTF and OOTF are not unambiguous either, since there is always the question of whose function is it.
Yeah, I think both gamma and EO/OE/OO/EETF are all somewhat problematic.
I tend to think about these more in terms of input and output transfer functions but then you have the ambiguity about what your input and output mean. I see the input TF between framebuffer and blender, and the output TF between blender and display.
You also have the challenge that input and output transfer functions fulfill multiple roles, e.g. an output transfer as defined above might do linear-to-PQ conversion but could also fill the role of tone mapping in the case where the input content spans a larger range than the display space.
Two different EOTFs are of interest in composition for display:
- the display EOTF (since display signal is electrical)
- the content EOTF (since content is stored in electrical encoding)
+The EOTF (Electro-Optical Transfer Function) describes how to transfer +from an electrical signal to an optical signal. This was traditionally +done by the de-gamma function.
+The OETF (Opto-Electronic Transfer Function) describes how to transfer +from an optical signal to an electronic signal. This was traditionally +done by the gamma function.
+More generally we can name the transfer function describing the transform +between scanout and blending space as the **input transfer function**, and
"scanout space" makes me think of cable/signal values, not framebuffer values. Or, I'm not sure. I'd recommend replacing the term "scanout space" with something less ambiguous like framebuffer values.
Framebuffer space/values is much better than scanout space.
+the transfer function describing the transform from blending space to the +output space as **output transfer function**.
You're talking about "spaces" here, but what you are actually talking about are value encodings, not (color) spaces. An EOTF or OETF is not meant to modify the color space.
When talking about blending, what you're actually interested in is linear vs. non-linear color value encoding. This matches your talk about EOTF and OETF, although you need to be careful to specify which EOTF and OETF you mean. For blending, color values need to be linear in light intensity, and the inverse of the E-to-O mapping before blending is exactly the same as the O-to-E mapping after blending. Otherwise you would alter even opaque pixels.
I struggle a bit with finding the right term to talk about color value encoding in general. Concrete examples can be PQ-encoded, Gamma 2.2, or linearly encoded spaces but I was grasping for a more general term; something that could potentially include TFs that also tone-map.
Interestingly, the Canvas API changes presented by Christopher Cameron also seem to use the new colorSpace property to deal with both color space, as well as EOTF.
https://www.youtube.com/watch?v=fHbLbVacYw4
OETF is often associated with cameras, not displays. Maybe use EOTF^-1 instead?
Good point. Fixed for v4.
Btw. another terminology thing: color space vs. color model. RGB and YCbCr are color models. sRGB, BT.601 and BT.2020 are color spaces. These two are orthogonal concepts.
Thanks for clarifying.
+.. _EOTF, and OETF: https://en.wikipedia.org/wiki/Transfer_functions_in_imaging
+Mastering Luminances +--------------------
+Even though we are able to describe the absolute luminance of a pixel +using the PQ (ST 2084) EOTF we are presented with physical limitations of the +display technologies on the market today. Here are a few examples of +luminance ranges of displays.
+.. flat-table::
- :header-rows: 1
- Display
- Luminance range in nits
- Typical PC display
- 0.3 - 200
- Excellent LCD HDTV
- 0.3 - 400
- HDR LCD w/ local dimming
- 0.05 - 1,500
+Since no display can currently show the full 0.0005 to 10,000 nits +luminance range of PQ, the display will need to tone-map the HDR content, +i.e. to fit the content within the display's capabilities. To assist +with tone-mapping, HDR content is usually accompanied by metadata +that describes (among other things) the minimum and maximum mastering +luminance, i.e. the maximum and minimum luminance of the display that +was used to master the HDR content.
+The HDR metadata is currently defined on the drm_connector via the +hdr_output_metadata blob property.
HDR_OUTPUT_METADATA, all caps.
+It might be useful to define per-plane hdr metadata, as different planes +might have been mastered differently.
+.. _SDR Luminance:
+SDR Luminance +-------------
+Traditional SDR content's maximum white luminance is not well defined. +Some like to define it at 80 nits, others at 200 nits. It also depends +to a large extent on the environmental viewing conditions. In practice +this means that we need to define the maximum SDR white luminance, either +in nits, or as a ratio.
+`One Windows API`_ defines it as a ratio against 80 nits.
+`Another Windows API`_ defines it as a nits value.
+The `Wayland color management proposal`_ uses Apple's definition of EDR as a +ratio of the HDR range vs SDR range.
+If a display's maximum HDR white level is correctly reported it is trivial +to convert between all of the above representations of SDR white level. If +it is not, defining SDR luminance as a nits value, or a ratio vs a fixed +nits value is preferred, assuming we are blending in linear space.
+It is our experience that many HDR displays do not report maximum white +level correctly.
Which value do you refer to as "maximum white", and how did you measure it?
Good question. I haven't played with those displays myself but I'll try to find out a bit more background behind this statement.
You also need to define who is "us" since kernel docs tend to get lots of authors over time.
Good point. Changed in v4
+.. _One Windows API: https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/dispmprt/ns-di... +.. _Another Windows API: https://docs.microsoft.com/en-us/uwp/api/windows.graphics.display.advancedco... +.. _Wayland color management proposal: https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable...
+Let There Be Color +------------------
+So far we've only talked about luminance, ignoring colors altogether. Just +like in the luminance space, traditionally the color space of display +outputs has not been well defined. Similar to how an EOTF defines a +mapping of pixel data to an absolute luminance value, the color space +maps color information for each pixel onto the CIE 1931 chromaticity +space. This can be thought of as a mapping to an absolute, real-life, +color value.
+A color space is defined by its primaries and white point. The primaries +and white point are expressed as coordinates in the CIE 1931 color +space. Think of the red primary as the reddest red that can be displayed +within the color space. Same for green and blue.
+Examples of color spaces are:
+.. flat-table::
- :header-rows: 1
- Color Space
- Description
- BT 601
- similar to BT 709
- BT 709
- used by sRGB content; ~53% of BT 2020
- DCI-P3
- used by most HDR displays; ~72% of BT 2020
- BT 2020
- standard for most HDR content
+Color Primaries and White Point +-------------------------------
+Just like displays can currently not represent the entire 0.0005 - +10,000 nits HDR range of the PQ 2084 EOTF, they are currently not capable
"PQ" or "ST 2084".
Fixed in v4
+of representing the entire BT.2020 color gamut. For this reason video +content will often specify the color primaries and white point used to +master the video, in order to allow displays to map the image +as best as possible onto the display's gamut.
+Displays and Tonemapping
+------------------------
+External displays are able to do their own tone and color mapping, based
+on the mastering luminance, color primaries, and white point defined in
+the HDR metadata.
HLG does things differently wrt. metadata and tone-mapping than PQ.
As mentioned above I had some time to watch the HLG presentation and that indeed has interesting implications. With HLG we also have relative-luminance HDR content. One challenge is how to tone-map HLG content alongside SDR (sRGB) content and PQ content.
I think ultimately this means that we can't rely on display tonemapping when we are dealing with mixed content on the screen. In that case we would probably want to output to the display in the EDID-referred space and tone-map all incoming buffers to the EDID-referred space.
I think the doc needs a lot more pictures. I wonder if I can do that without polluting git with large files.
+Some internal panels might not include the complex HW to do tone and color +mapping on their own and will require the display driver to perform +appropriate mapping.
+How are we solving the problem?
+===============================
+Single-plane
+------------
+If a single drm_plane is used, no further work is required. The compositor
+will provide one HDR plane alongside a drm_connector's hdr_output_metadata
+and the display HW will output this plane without further processing if
+no CRTC LUTs are provided.
+If desired, a compositor can use the CRTC LUTs for HDR content, but without
+support for PWL or multi-segmented LUTs the quality of the operation is
+expected to be subpar for HDR content.
Explain/expand PWL.
Updated in v4.
Do you have references to these subpar results? I'm interested in when and how they appear. I may want to use that information to avoid using KMS LUTs when they are inadequate.
I don't have any actual results or data to back up this statement at this point.
+Multi-plane
+-----------
+In multi-plane configurations we need to solve the problem of blending +HDR and SDR content. This blending should be done in linear space and +therefore requires framebuffer data that is presented in linear space +or a way to convert non-linear data to linear space. Additionally +we need a way to define the luminance of any SDR content in relation +to the HDR content.
+In order to present framebuffer data in linear space without losing a +lot of precision it needs to be presented using 16 bpc precision.
Integer or floating-point?
Floating point. Fixed in v4.
I doubt integer would work since we'd lose too much precision in the dark areas. Though, maybe 16-bit would let us map those well enough? I don't know for sure. Either way, I think anybody doing linear is using FP16.
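Since linear blending keeps coming up, here is a tiny sketch (not from the patch; a pure 2.2 gamma stands in for whatever encoding the planes actually use) of why the blend must happen on linear light values:

```python
# Sketch: why blending must happen in linear space. Blending two pixels
# in gamma-encoded space produces a different (darker) result than
# blending their linear light values. A pure 2.2 gamma is assumed here
# purely for illustration.

def decode(v, gamma=2.2):   # electrical -> linear light
    return v ** gamma

def encode(v, gamma=2.2):   # linear light -> electrical
    return v ** (1.0 / gamma)

black, white, alpha = 0.0, 1.0, 0.5

# Wrong: blend the gamma-encoded values directly.
nonlinear_blend = alpha * white + (1 - alpha) * black          # 0.5

# Right: decode, blend in linear light, re-encode.
linear_blend = encode(alpha * decode(white) + (1 - alpha) * decode(black))

print(nonlinear_blend)  # 0.5
print(linear_blend)     # ~0.73, a visibly lighter, correct midpoint
```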
+Defining HW Details
+-------------------
+One way to take full advantage of modern HW's color pipelines is by +defining a "generic" pipeline that matches all capable HW. Something +like this, which I took `from Uma Shankar`_ and expanded on:
+.. _from Uma Shankar: https://patchwork.freedesktop.org/series/90826/
+.. kernel-figure:: colorpipe.svg
Btw. there will be interesting issues with alpha-premult, filtering, and linearisation if your planes have alpha channels. That's before HDR is even considered.
Could you expand on this a bit?
+I intentionally put de-Gamma and Gamma in parentheses in my graph
+as they describe the intention of the block but not necessarily a
+strict definition of how a userspace implementation is required to
+use them.
+The de-Gamma and Gamma blocks are named LUTs, but in some HW
+implementations they could be fixed-function (non-programmable)
+transforms with no programmable LUT available. See the definitions
+for AMD's `latest dGPU generation`_ as an example.
+.. _latest dGPU generation: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/driver...
+I renamed the "Plane Gamma LUT" and "CRTC De-Gamma LUT" to "Tonemapping"
+as we generally don't want to re-apply gamma before blending, or do
+de-gamma post blending. These blocks are generally intended for
+tonemapping purposes.
Right.
+Tonemapping in this case could be a simple nits value or `EDR`_ to describe +how to scale the :ref:`SDR luminance`.
I do wonder how that will turn out in the end... but on Friday there will be HDR Compositing and Tone-mapping live Q&A session: https://www.w3.org/Graphics/Color/Workshop/talks.html#compos
I didn't manage to join the compositing and tone-mapping live Q&A. Did anything interesting emerge from that?
I've watched Timo Kunkel's talk and it's been very eye opening. He does a great job of highlighting the challenges of compositing HDR content.
+Tonemapping could also include the ability to use a 3D LUT which might be
+accompanied by a 1D shaper LUT. The shaper LUT is required in order to
+ensure a 3D LUT with limited entries (e.g. 9x9x9, or 17x17x17) operates
+in perceptual (non-linear) space, so as to spread the limited entries
+evenly across the perceived space.
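To illustrate the shaper argument, a quick sketch (assuming a gamma 2.2 shaper and 17 taps; both are stand-ins for whatever the HW actually uses) showing how much denser perceptually-spaced taps are near black:

```python
# Sketch: why a 3D LUT wants a 1D shaper in front of it. With 17 taps
# spaced evenly over linear light, almost no taps land in the dark
# region where the eye is most sensitive. Spacing the taps evenly in a
# perceptual domain (gamma 2.2 here, as a stand-in for the real shaper
# curve) packs them much more densely near black.

TAPS = 17
GAMMA = 2.2

linear_taps = [i / (TAPS - 1) for i in range(TAPS)]
shaped_taps = [(i / (TAPS - 1)) ** GAMMA for i in range(TAPS)]

# Width of the first interpolation interval near black:
print(linear_taps[1] - linear_taps[0])   # 0.0625
print(shaped_taps[1] - shaped_taps[0])   # ~0.0022, roughly 28x finer
```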
+.. _EDR: https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable...
+Creating a model that is flexible enough to define color pipelines for +a wide variety of HW is challenging, though not impossible. Implementing +support for such a flexible definition in userspace, though, amounts +to essentially writing color pipeline drivers for each HW.
My thinking right now is that userspace has its own pipeline model with the elements it must have. Then it attempts to map that pipeline to what elements the KMS pipeline happens to expose. If there is a mapping, good. If not, fall back to shaders on GPU.
To help that succeed more often, I'm using the current KMS abstract
pipeline as a guide in designing the Weston internal color pipeline.
I feel I should know, but is this pipeline documented? Is it merely the plane > crtc > connector model, or does it go beyond that?
+Defining SW Intentions
+----------------------
+An alternative to describing the HW color pipeline in enough detail to +be useful for color management and HDR purposes is to instead define +SW intentions.
+.. kernel-figure:: color_intentions.svg
+This greatly simplifies the API and lets the driver do what a driver +does best: figure out how to program the HW to achieve the desired +effect.
+The above diagram could include white point, primaries, and maximum +peak and average white levels in order to facilitate tone mapping.
+At this point I suggest keeping tonemapping (other than an SDR luminance
+adjustment) out of the current DRM/KMS API. Most HDR displays are capable
+of tonemapping. If for some reason tonemapping is still desired on
+a plane, a shader might be a better way of doing that than relying
+on display HW.
"Non-programmable LUT" as you referred to them is an interesting departure from the earlier suggestion, where you intended to describe color spaces and encodings of content and display and let the hardware do whatever wild magic in between. Now it seems like you have shifted to programming transformations instead. They may be programmable or enumerated, but still transformations rather than source and destination descriptions. If the enumerated transformations follow standards, even better.
I think this is a step in the right direction.
However, you wrote in the heading "Intentions" which sounds like your
old approach.
Conversion from one additive linear color space to another is a matter of matrix multiplication. That is simple and easy to define, just load a matrix. The problem is gamut mapping: you may end up outside of the destination gamut, or maybe you want to use more of the destination gamut than what the color space definitions imply. There are many conflicting goals and ways to do this, and I suspect the room for secret sauce is here (and in tone-mapping).
There is also a difference between color space (signal) gamut and device gamut. A display may accept BT.2020 signal, but the gamut it can show is usually much less.
True.
+In some ways this mirrors how various userspace APIs treat HDR:
- Gstreamer's `GstVideoTransferFunction`_
- EGL's `EGL_EXT_gl_colorspace_bt2020_pq`_ extension
- Vulkan's `VkColorSpaceKHR`_
+.. _GstVideoTransferFunction: https://gstreamer.freedesktop.org/documentation/video/video-color.html?gi-la...
+.. _EGL_EXT_gl_colorspace_bt2020_pq: https://www.khronos.org/registry/EGL/extensions/EXT/EGL_EXT_gl_colorspace_bt...
+.. _VkColorSpaceKHR: https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/vkspec.htm...
+A hybrid approach to the API
+----------------------------
+Our current proposal takes a hybrid approach, defining an API to specify
+input and output transfer functions, as well as an SDR boost and an
+input color space definition.
Using a color space definition in the KMS UAPI brings us back to the old problem.
Using descriptions of content (color spaces) instead of prescribing transformations seems to be designed to allow vendors to make use of their secret hardware sauce: how to best realise the intent. Since it is secret sauce, by definition it cannot be fully replicated in software or shaders. One might even get sued for succeeding.
General purpose (read: desktop) compositors need to adapt to any scenegraph and they want to make the most of the hardware under all situations. This means that it is not possible to guarantee that a certain window is always going to be using a KMS plane. Maybe a small change in the scenegraph, a moving window or cursor, suddenly causes the KMS plane to become unsuitable for the window, or in the opposite case the KMS plane suddenly becomes available for the window. This means that a general purpose compositor will be doing frame-by-frame decisions on which window to put on which KMS plane, and which windows need to be composited with shaders.
Not being able to replicate what the hardware does means that shaders cannot produce the same image on screen as the KMS plane would. When KMS plane assignments change, the window appearance would change as well. I imagine end users would be complaining of such glitches.
I see your point.
However, there are other use cases where I can imagine this descriptive design working perfectly. Any non-general, non-desktop compositor, or a closed system, could probably guarantee that the scenegraph will always map in a specific way to the KMS planes. The window would always map to the KMS plane, meaning that it would never need to be composited with shaders, and therefore cannot change color unexpectedly from the end user's point of view. TVs, set-top-boxes, etc., maybe even phones. Some use cases have a hard requirement of putting a specific window on a specific KMS plane, or the system simply cannot display it (performance, protection...).
Is it worth having two fundamentally different KMS UAPIs for HDR composition support, where one interface supports only a subset of use cases and the other (per-plane LUT, CTM, LUT, and more, freely programmable by userspace) supports all use cases?
That's a genuine question. Are the benefits worth the kernel developers' efforts to design, implement, and forever maintain both mutually exclusive interfaces?
Tbh, I'm personally less interested in use-cases where specific windows always map to a KMS plane. From an AMD HW point of view we can't really guarantee that a KMS plane is always available in most scenarios. So this would have to work for a general desktop compositor scenario where KMS plane usage could change frame to frame.
Now, someone might say that the Wayland protocol design for HDR aims to be descriptive and not prescriptive, so why should KMS UAPI be different? The reason is explained above: *some* KMS clients may switch frame by frame between KMS and shaders, but Wayland clients pick one path and stick to it. Wayland clients have no reason that I can imagine to switch arbitrarily in flight.
I'm a bit confused about this paragraph. Wouldn't the Wayland compositor decide whether to use a KMS plane or shader and not the client?
+We would like to solicit feedback and encourage discussion around the +merits and weaknesses of these approaches. This question is at the core +of defining a good API and we'd like to get it right.
+Input and Output Transfer functions
+-----------------------------------
+We define an input transfer function on drm_plane to describe the +transform from framebuffer to blending space.
+We define an output transfer function on drm_crtc to describe the +transform from blending space to display space.
Here is again the terminology problem between transfer function and (color) space.
Color value encoding? Or luminance space? Or maybe there's a different term altogether to describe this?
+The transfer function can be a pre-defined function, such as PQ EOTF, or +a custom LUT. A driver will be able to specify support for specific +transfer functions, including custom ones.
This sounds good.
+Defining the transfer function in this way allows us to support it on HW
+that uses ROMs to implement these transforms, as well as on HW that uses
+complex LUT definitions which don't map easily onto a standard LUT
+definition.
+We will not define per-plane LUTs in this patchset as the scope of our +current work only deals with pre-defined transfer functions. This API has +the flexibility to add custom 1D or 3D LUTs at a later date.
Ok.
+In order to support the existing 1D de-gamma and gamma LUTs on the drm_crtc +we will include a "custom 1D" enum value to indicate that the custom gamma and +de-gamma 1D LUTs should be used.
Sounds fine.
+Possible transfer functions:
+.. flat-table::
+   :header-rows: 1
+
+   * - Transfer Function
+     - Description
+   * - Gamma 2.2
+     - a simple 2.2 gamma function
+   * - sRGB
+     - 2.4 gamma with small initial linear section
Maybe rephrase to: The piece-wise sRGB transfer function with the small initial linear section, approximately corresponding to 2.4 gamma function.
I recall some debate, too, whether with a digital flat panel you should use a pure 2.4 gamma function or the sRGB function. (Which one do displays expect?)
Updated in v4.
+   * - PQ 2084
+     - SMPTE ST 2084; used for HDR video and allows for up to 10,000 nit support
Perceptual Quantizer (PQ), or ST 2084. There is no PQ 2084.
Fixed in v4
+   * - Linear
+     - Linear relationship between pixel value and luminance value
+   * - Custom 1D
+     - Custom 1D de-gamma and gamma LUTs; one LUT per color
+   * - Custom 3D
+     - Custom 3D LUT (to be defined)
Adding HLG transfer function to this set would be interesting, because it requires a parameter I believe. How would you handle parameterised transfer functions?
Good question. I haven't really explored HLG so far but it looks like it's important to arrive at a sensible design.
It's worth noting that while PQ is absolute in luminance (providing cd/m² values), everything else here is relative for both SDR and HDR. You cannot blend content in PQ with content in something else until you practically define the absolute luminance for all non-PQ content, or vice versa.
A further complication is that you could have different relative-luminance transfer functions, meaning that the (absolute) luminance they are relative to varies. The obvious case is blending SDR content with HDR content when both have relative-luminance transfer function.
Good points. It sounds like we would need something akin to EDR (or max-SDR nits) for any relative-luminance TF, i.e. a way to arbitrarily scale the luminance of the respective plane.
Then you have HLG which is more like scene-referred than display-referred, but that might be solved with the parameter I mentioned, I'm not quite sure.
PQ is said to be display-referred, but it's usually referred to someone else's display than yours, which means it needs the HDR metadata to be able to tone-map suitably to your display. This seems to be a similar problem as with signal gamut vs. device gamut.
The traditional relative-luminance transfer functions, well, the content implied by them, is display-referred when it arrived at KMS or compositor level. There the question of "whose display" doesn't matter much because it's SDR and narrow gamut, and we probably don't even notice when we see an image wrong. With HDR the mismatch might be noticeable.
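Since PQ is the one absolute-luminance transfer function in this discussion, here is a reference sketch of the ST 2084 EOTF, using the constants published in the spec:

```python
# Sketch: the SMPTE ST 2084 (PQ) EOTF, mapping a non-linear signal
# value in [0, 1] to an absolute luminance in cd/m2 (nits). The
# constants are the ones published in the standard.

M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_eotf(signal):
    """Electrical PQ signal [0, 1] -> luminance in nits [0, 10000]."""
    e = signal ** (1.0 / M2)
    num = max(e - C1, 0.0)
    den = C2 - C3 * e
    return 10000.0 * (num / den) ** (1.0 / M1)

print(pq_eotf(1.0))   # 10000.0 nits: full signal hits the PQ ceiling
print(pq_eotf(0.0))   # 0.0 nits
```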
+Describing SDR Luminance
+------------------------
+Since many displays do not correctly advertise the HDR white level we
+propose to define the SDR white level in nits.
This means that even if you had no content using PQ, you still need to define the absolute luminance for all the (HDR) relative-luminance transfer functions.
There probably needs to be something to relate everything to a single, relative or absolute, luminance range. That is necessary for any composition (KMS and software) since the output is a single image.
Is it better to go with relative or absolute metrics? Right now I would tend to say relative, because relative is unitless. Absolute values are numerically equivalent, but they might not have anything to do with actual physical measurements, making them actually relative. This happens when your monitor does not support PQ mode or does tone-mapping to your image, for instance.
It sounds like PQ is the outlier here in defining luminance in absolute units. Though it's also currently the most commonly used TF for HDR content.
Wouldn't you use the absolute luminance definition for PQ if you relate everything to a relative range?
Would it make sense to relate everything to a common output luminance range? If that output is PQ then an input PQ buffer is still output as PQ and relative-luminance buffers can be scaled.
Would that scaling (EDR or similar) be different for SDR (sRGB) content vs other HDR relative-luminance content?
The concept we have played with in Wayland so far is EDR, but then you have the question of "what does zero mean", i.e. the luminance of darkest black could vary between contents as well, not just the luminance of extreme white.
This is a good question. For AMD HW we have a way to scale SDR content but I don't think that includes an ability to set the black point (unless you go and define a LUT for it).
+We define a new drm_plane property to specify the white level of an SDR +plane.
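A minimal sketch of how such a white level could be used to place SDR content into an absolute-luminance blending space; the 203 nits value (BT.2408 reference white) and the function name are illustrative only, not proposed UAPI:

```python
# Sketch: using an SDR white level to lift linear SDR values into an
# absolute-luminance space so they can be blended against PQ content
# whose values are already in nits. The 203 nits default is the
# BT.2408 reference white and is only an example.

def sdr_to_nits(linear_value, sdr_white_level_nits=203.0):
    """Scale a linear SDR value in [0, 1] to absolute luminance."""
    return linear_value * sdr_white_level_nits

# Full-brightness SDR white lands exactly at the chosen white level.
print(sdr_to_nits(1.0))   # 203.0
print(sdr_to_nits(0.5))   # 101.5
```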
+Defining the color space
+------------------------
+We propose to add a new color space property to drm_plane to define a +plane's color space.
+While some color space conversions can be performed with a simple color +transformation matrix (CTM) others require a 3D LUT.
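As an example of the simple CTM case, a sketch applying the standard linear BT.709-to-BT.2020 matrix (per BT.2087); note it only works on linear values, so on real HW it would sit between de-gamma and gamma blocks:

```python
# Sketch: a color space conversion a CTM can handle. The 3x3 matrix is
# the standard linear BT.709 -> BT.2020 conversion (per BT.2087). It
# applies to *linear* RGB only.

BT709_TO_BT2020 = [
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
]

def apply_ctm(m, rgb):
    """Multiply a 3x3 matrix with a linear RGB triple."""
    return [sum(m[r][c] * rgb[c] for c in range(3)) for r in range(3)]

# White maps to white (each row sums to 1.0)...
print(apply_ctm(BT709_TO_BT2020, [1.0, 1.0, 1.0]))
# ...while a saturated BT.709 red uses only part of the BT.2020 gamut:
print(apply_ctm(BT709_TO_BT2020, [1.0, 0.0, 0.0]))  # [0.6274, 0.0691, 0.0164]
```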
+Defining mastering color space and luminance
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ToDo
+Pixel Formats
+~~~~~~~~~~~~~
+The pixel formats, such as ARGB8888, ARGB2101010, P010, or FP16, are
+unrelated to color space and EOTF definitions. HDR pixels can be formatted
Yes!
+in different ways but in order to not lose precision HDR content requires
+at least 10 bpc precision. For this reason ARGB2101010, P010, and FP16 are
+the obvious candidates for HDR. ARGB2101010 and P010 have the advantage
+of requiring only half the bandwidth of FP16, while FP16 has the advantage
+of enough precision to operate in a linear space, i.e. without EOTF.
This reminds me of something interesting said during the W3C WCG & HDR Q&A session yesterday. Unfortunately I forget his name (transcriptions should become available at some point), but someone said that pixel depth or bit precision should be thought of as setting the noise floor. When you quantize values, always do dithering; then the precision only changes your noise floor level. Then something about how audio realized this ages ago and we are just catching up.
If you don't dither, you get banding artifacts in gradients. If you do dither, it's just noise.
That's a great way to think about it.
On AMD HW we basically always dither (if programmed correctly) and have done so for ages.
+Use Cases
+=========
+RGB10 HDR plane - composited HDR video & desktop
+------------------------------------------------
+A single, composited plane of HDR content. The use-case is a video player +on a desktop with the compositor owning the composition of SDR and HDR +content. The content shall be PQ BT.2020 formatted. The drm_connector's +hdr_output_metadata shall be set.
+P010 HDR video plane + RGB8 SDR desktop plane
+---------------------------------------------
+
+A normal 8bpc desktop plane, with a P010 HDR video plane underlayed. The
+HDR plane shall be PQ BT.2020 formatted. The desktop plane shall specify
+an SDR boost value. The drm_connector's hdr_output_metadata shall be set.
+One XRGB8888 SDR Plane - HDR output
+-----------------------------------
+In order to support a smooth transition we recommend an OS that supports +HDR output to provide the hdr_output_metadata on the drm_connector to +configure the output for HDR, even when the content is only SDR. This will +allow for a smooth transition between SDR-only and HDR content. In this
Agreed, but this also kind of contradicts the idea of pushing HDR metadata from video all the way to the display in the RGB10 HDR plane case - something you do not seem to suggest here at all, but I would have expected that to be a prime use case for you.
A set-top-box might want to push the video HDR metadata all the way to the display when supported, and then adapt all the non-video graphics to that.
Initially I was hoping to find a quick way to allow pushing video straight from decoder through a KMS plane to the output. Increasingly I'm realizing that this is probably not going to work well for a general desktop compositor, hence the statement here to pretty much say the Wayland plan is the correct plan for this: single-plane HDR (with shader composition) first, then KMS offloading for power saving.
On some level I'm still interested in the direct decoder-to-KMS-to-display path but am afraid we won't get the API right if we don't deal with the general desktop compositor use-case first.
Apologies, again, if some of my response is a bit incoherent. I've been writing the responses over Friday and today.
Harry
Thanks, pq
+use-case the SDR max luminance value should be provided on the drm_plane.
+In DCN we will de-PQ or de-Gamma all input in order to blend in linear
+space. For SDR content we will also apply any desired boost before
+blending. After blending we will then re-apply the PQ encoding (inverse
+EOTF) and do RGB to YCbCr conversion if needed.
+FP16 HDR linear planes
+----------------------
+These will require a transformation into the display's encoding (e.g. PQ)
+using the CRTC LUT. Current CRTC LUTs lack the precision in the
+dark areas to do the conversion without losing detail.
+One of the newly defined output transfer functions or a PWL or `multi-segmented +LUT`_ can be used to facilitate the conversion to PQ, HLG, or another +encoding supported by displays.
+.. _multi-segmented LUT: https://patchwork.freedesktop.org/series/90822/
+User Space
+==========
+Gnome & GStreamer
+-----------------
+See Jeremy Cline's `HDR in Linux: Part 2`_.
+.. _HDR in Linux: Part 2: https://www.jcline.org/blog/fedora/graphics/hdr/2021/06/28/hdr-in-linux-p2.h...
+Wayland
+-------
+See `Wayland Color Management and HDR Design Goals`_.
+.. _Wayland Color Management and HDR Design Goals: https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable...
+ChromeOS Ozone
+--------------
+ToDo
+HW support
+==========
+ToDo, describe pipeline on a couple different HW platforms
+Further Reading
+===============
+* https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable...
+* http://downloads.bbc.co.uk/rd/pubs/whp/whp-pdf-files/WHP309.pdf
+* https://app.spectracal.com/Documents/White%20Papers/HDR_Demystified.pdf
+* https://www.jcline.org/blog/fedora/graphics/hdr/2021/05/07/hdr-in-linux-p1.h...
+* https://www.jcline.org/blog/fedora/graphics/hdr/2021/06/28/hdr-in-linux-p2.h...
diff --git a/Documentation/gpu/rfc/index.rst b/Documentation/gpu/rfc/index.rst
index 05670442ca1b..8d8430cfdde1 100644
--- a/Documentation/gpu/rfc/index.rst
+++ b/Documentation/gpu/rfc/index.rst
@@ -19,3 +19,4 @@ host such documentation:
 .. toctree::
    i915_gem_lmem.rst
+   hdr-wide-gamut.rst
On Mon, 20 Sep 2021 20:14:50 -0400 Harry Wentland harry.wentland@amd.com wrote:
On 2021-09-15 10:01, Pekka Paalanen wrote:
On Fri, 30 Jul 2021 16:41:29 -0400
Harry Wentland harry.wentland@amd.com wrote:
Use the new DRM RFC doc section to capture the RFC previously only described in the cover letter at https://patchwork.freedesktop.org/series/89506/
v3:
- Add sections on single-plane and multi-plane HDR
- Describe approach to define HW details vs approach to define SW intentions
- Link Jeremy Cline's excellent HDR summaries
- Outline intention behind overly verbose doc
- Describe FP16 use-case
- Clean up links
v2: create this doc
v1: n/a
Signed-off-by: Harry Wentland harry.wentland@amd.com
Hi Harry!
...
Documentation/gpu/rfc/color_intentions.drawio | 1 +
Documentation/gpu/rfc/color_intentions.svg | 3 +
Documentation/gpu/rfc/colorpipe | 1 +
Documentation/gpu/rfc/colorpipe.svg | 3 +
Documentation/gpu/rfc/hdr-wide-gamut.rst | 580 ++++++++++++++++++
Documentation/gpu/rfc/index.rst | 1 +
6 files changed, 589 insertions(+)
create mode 100644 Documentation/gpu/rfc/color_intentions.drawio
create mode 100644 Documentation/gpu/rfc/color_intentions.svg
create mode 100644 Documentation/gpu/rfc/colorpipe
create mode 100644 Documentation/gpu/rfc/colorpipe.svg
create mode 100644 Documentation/gpu/rfc/hdr-wide-gamut.rst
...
+Here are some examples of real-life objects and their approximate +luminance values:
+.. _PQ (perceptual quantizer) function: https://en.wikipedia.org/wiki/High-dynamic-range_video#Perceptual_Quantizer
+.. flat-table::
+   :header-rows: 1
+
+   * - Object
+     - Luminance in nits
+   * - Fluorescent light
+     - 10,000
+   * - Highlights
+     - 1,000 - sunlight
Did fluorescent and highlights get swapped here?
No, though at first glance it can look like that. This is pulled from an internal doc I didn't write, but I think the intention is to show that fluorescent lights can be up to 10,000 nits and highlights are usually 1,000+ nits.
I'll clarify this in v4.
A quick google search seems to show that there are even fluorescent lights with 46,000 nits. I guess these numbers provide a ballpark view more than anything.
Those seem quite extreme fluorescent lights, far beyond what one might find in offices I suppose?
I mean, I can totally stare straight at my office fluorescent lights without any discomfort.
Highlights OTOH of course depend on which highlights we're talking about, and your 1000 - sunlight range I can totally agree with.
If you look at a sea or a lake on a sunny day, the reflections of Sun on the water surface are much much brighter than anything else in nature aside from Sun itself. I happened to see this myself when playing with a camera: the rest of the image can be black while the water highlights still shoot way beyond the captured dynamic range.
+   * - White Objects
+     - 250 - 1,000
+   * - Typical Objects
+     - 1 - 250
+   * - Shadows
+     - 0.01 - 1
+   * - Ultra Blacks
+     - 0 - 0.0005
+Transfer functions
+------------------
+Traditionally we used the terms gamma and de-gamma to describe the +encoding of a pixel's luminance value and the operation to transfer from +a linear luminance space to the non-linear space used to encode the +pixels. Since some newer encodings don't use a gamma curve I suggest +we refer to non-linear encodings using the terms `EOTF, and OETF`_, or +simply as transfer function in general.
Yeah, gamma could mean lots of things. If you have e.g. OETF gamma 1/2.2 and EOTF gamma 2.4, the result is OOTF gamma 1.09.
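That OOTF arithmetic can be checked quickly; a small sketch:

```python
# Quick check of the arithmetic above: a camera-side OETF of gamma
# 1/2.2 followed by a display-side EOTF of gamma 2.4 yields an
# end-to-end OOTF of roughly gamma 1.09, i.e. the system is
# deliberately not unity.

def oetf(v):  # scene light -> signal
    return v ** (1 / 2.2)

def eotf(v):  # signal -> display light
    return v ** 2.4

print(2.4 / 2.2)          # ~1.0909: the effective OOTF exponent
print(eotf(oetf(0.5)))    # ~0.47, not 0.5: end-to-end is not identity
```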
OETF, EOTF and OOTF are not unambiguous either, since there is always the question of whose function is it.
Yeah, I think both gamma and EO/OE/OO/EETF are all somewhat problematic.
We can use them, but we have to explain which functions we are referring to. In particular, if you have a specific EOTF, then the inverse of it should be called EOTF^-1 and not OETF, to follow what I have understood of specs like BT.2100.
Personally I'd take things further and talk about encoding and decoding functions when the intent is to translate between pixel values and light-linear color values rather than characterising a piece of equipment.
I tend to think about these more in terms of input and output transfer functions but then you have the ambiguity about what your input and output mean. I see the input TF between framebuffer and blender, and the output TF between blender and display.
Indeed, those are good explanations.
You also have the challenge that input and output transfer functions fulfill multiple roles, e.g. an output transfer as defined above might do linear-to-PQ conversion but could also fill the role of tone mapping in the case where the input content spans a larger range than the display space.
I would like to avoid such conflation or use different terms. That is indeed the confusion often had I think.
I would say that encoding/decoding function does not do any kind of tone-mapping. It's purely for numerical encoding to save bits on transmission or taps in a LUT. Although, for taps in a LUT optimization, it is called "shaper" instead. A shaper function (or 1D LUT) does not need to equal an encoding function.
We're going to need glossary.
Two different EOTFs are of interest in composition for display:
- the display EOTF (since display signal is electrical)
- the content EOTF (since content is stored in electrical encoding)
+The EOTF (Electro-Optical Transfer Function) describes how to transfer +from an electrical signal to an optical signal. This was traditionally +done by the de-gamma function.
+The OETF (Opto-Electronic Transfer Function) describes how to transfer
+from an optical signal to an electronic signal. This was traditionally
+done by the gamma function.
+More generally we can name the transfer function describing the transform +between scanout and blending space as the **input transfer function**, and
"scanout space" makes me think of cable/signal values, not framebuffer values. Or, I'm not sure. I'd recommend replacing the term "scanout space" with something less ambiguous like framebuffer values.
Framebuffer space/values is much better than scanout space.
I'd go with values. Does "space" include encoding or not? Depends on context. Thinking about:
- light-linear RGB values in BT.709 color space
- sRGB encoded RGB values in BT.709 color space
- sRGB encoded YCbCr values in BT.709 color space
Are these difference spaces, or the same space but with different encodings and color models?
I have been gravitating towards "color space" being the same in all of the above: BT.709 color space. OTOH, saying "color space, encoding and model" gets awkward really fast, so sometimes it's just "color space".
Framebuffer or pixel values could be, say, 10-bit integer, while (non-linear) color values would be that converted to the [0.0, 1.0] range for example.
+the transfer function describing the transform from blending space to the +output space as **output transfer function**.
You're talking about "spaces" here, but what you are actually talking about are value encodings, not (color) spaces. An EOTF or OETF is not meant to modify the color space.
When talking about blending, what you're actually interested in is linear vs. non-linear color value encoding. This matches your talk about EOTF and OETF, although you need to be careful to specify which EOTF and OETF you mean. For blending, color values need to be linear in light intensity, and the inverse of the E-to-O mapping before blending is exactly the same as the O-to-E mapping after blending. Otherwise you would alter even opaque pixels.
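That opaque-pixel invariant can be demonstrated with a small sketch (a pure 2.2 gamma is assumed for both directions, purely for illustration):

```python
# Sketch: if the decode before blending and the encode after blending
# are exact inverses, fully opaque pixels survive the blend untouched.
# If they were not (e.g. decode with gamma 2.2 but encode with a
# different curve), even alpha = 1.0 pixels would change.

def decode(v, gamma=2.2):   # electrical -> linear light
    return v ** gamma

def encode(v, gamma=2.2):   # linear light -> electrical
    return v ** (1.0 / gamma)

def blend(fg, fg_alpha, bg):
    """Alpha-blend in linear light, then re-encode."""
    linear = fg_alpha * decode(fg) + (1 - fg_alpha) * decode(bg)
    return encode(linear)

# Opaque pixel: decode and encode cancel, the value is preserved.
out = blend(0.25, 1.0, 0.9)
print(abs(out - 0.25) < 1e-12)  # True
```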
I struggle a bit with finding the right term to talk about color value encoding in general. Concrete examples can be PQ-encoded, Gamma 2.2, or linearly encoded spaces but I was grasping for a more general term; something that could potentially include TFs that also tone-map.
I would very much prefer to keep tone-mapping as a separate conceptual object, but I think I see where you are coming from: the API has a single slot for the combined coding/tone-mapping function.
Is "combined coding/tone-mapping function" too long to type? :-)
Interestingly, the Canvas API changes presented by Christopher Cameron also seem to use the new colorSpace property to deal with both color space, as well as EOTF.
That may be practical from API point of view, but conceptually I find it confusing. I think it is easier to think through the theory with completely independent color space and encoding concepts, and then it will be easy to understand that in an API you just pick specific pairs of them since those are enough for most use cases.
If you start from the API concepts, try to work towards the theory, and then you are presented a display whose EOTF is measured and does not match any of the standard ones present in the API, I think you would struggle to make that display work until you realise that color space and encoding can be decoupled.
A bit like how YCbCr is not a color space but a color model you can apply to any RGB color space, and you can even pick the encoding function separately if you want to.
Also mind that tone mapping is completely separate to all the above. The above describe what colors pixels represent on one device (or in an image). Tone mapping is an operation that adapts an image from one device to another device. Gamut mapping is as well.
So describing a color space, color model, and encoding is one thing. Adapting (converting) an image from one such to another is a whole different thing. However, when you have hardware pixel pipeline, you tend to program the total transformation from source to destination, where all those different unrelated or orthogonal concepts have been combined and baked in, usually in such a way that you cannot separate them anymore.
Our plans for Weston internals follow the same: you have descriptions of source and destination pixels, you have your rendering intent that affects how things like gamut mapping and tone mapping work, and then you compute the two transformations from all those: the transformation from source to blending space, and from blending space to output (monitor cable values). In the Weston design the renderer KMS framebuffer will hold either blending space values or cable values.
Btw. another thing is color space conversion vs. gamut and tone mapping. These are also separate concepts. You can start with BT.2020 color space color values, and convert those to sRGB color values. A pure color space conversion can result in color values outside of the sRGB value range, because BT.2020 is a bigger color space. If you clip those out-of-range values into range, then you are doing gamut (and tone?) mapping in my opinion.
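A quick numerical illustration of that last point. The 3x3 coefficients are the linear BT.2020-to-BT.709 conversion matrix as given in BT.2087 (rounded to four decimals); this is a pure color space conversion with no gamut mapping.

```python
# Linear BT.2020 RGB -> linear BT.709 RGB (coefficients per BT.2087).
M = [
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
]

def bt2020_to_bt709(rgb):
    return [sum(m * c for m, c in zip(row, rgb)) for row in M]

# A pure BT.2020 red has no in-gamut BT.709 representation: the result
# has r > 1 and negative g, b. Clipping those into [0, 1] is already a
# (crude) form of gamut mapping, not part of the conversion itself.
r, g, b = bt2020_to_bt709([1.0, 0.0, 0.0])
```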
...
+Displays and Tonemapping
+------------------------
+External displays are able to do their own tone and color mapping, based
+on the mastering luminance, color primaries, and white point defined in
+the HDR metadata.
HLG does things differently wrt. metadata and tone-mapping than PQ.
As mentioned above I had some time to watch the HLG presentation and that indeed has interesting implications. With HLG we also have relative luminance HDR content. One challenge is how to tone-map HLG content alongside SDR (sRGB) content and PQ content.
I think ultimately this means that we can't rely on display tonemapping when we are dealing with mixed content on the screen. In that case we would probably want to output to the display in the EDID-referred space and tone-map all incoming buffers to the EDID-referred space.
That's exactly the plan with Weston.
The display signal space has three options according to EDID/HDMI:
- HDR with traditional gamma (which I suppose means the relative [0.0, 1.0] range with either sRGB or 2.2 gamma encoding and using the monitor's native gamut)
- BT.2020 PQ
- HLG (BT.2020?)
These are what the monitor cable must carry, so these are what the CRTC must produce. I suppose one could pick the blending space to be something else, but in Weston the plan is to use cable signal as the blending space, just linearised for light and limited by the monitor's gamut and dynamic range. That keeps the post-blend operations as simple as possible, meaning we are likely to be able to offload that to KMS and do not need another renderer pass for that.
One thing I realised yesterday is that HLG displays are much better defined than PQ displays, because HLG defines what OOTF the display must implement. In a PQ system, the signal carries the full 10k nits range, and then the monitor must do vendor magic to display it. That's for tone mapping, not sure if HLG has an advantage in gamut mapping as well.
For a PQ display, all we can do is hope that if we tell the monitor via HDR static metadata that our content will never exceed monitor capabilities then the monitor doesn't mangle our images too bad.
I think the doc needs a lot more pictures. I wonder if I can do that without polluting git with large files.
...
+Multi-plane
+-----------
+In multi-plane configurations we need to solve the problem of blending
+HDR and SDR content. This blending should be done in linear space and
+therefore requires framebuffer data that is presented in linear space
+or a way to convert non-linear data to linear space. Additionally
+we need a way to define the luminance of any SDR content in relation
+to the HDR content.
+In order to present framebuffer data in linear space without losing a
+lot of precision it needs to be presented using 16 bpc precision.
Integer or floating-point?
Floating point. Fixed in v4.
I doubt integer would work since we'd lose too much precision in the dark areas. Though, maybe 16-bit would let us map those well enough? I don't know for sure. Either way, I think anybody doing linear is using FP16.
That's a safe assumption. Integer precision in the dark end also depends on how high the bright end goes. With floating point that seems like a non-issue.
What I think is "common knowledge" by now is that 8 bits is not enough for a linear channel. However, 10 bits integer might be enough for a linear channel in SDR.
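A quick sketch of how starved the darks get with integer linear encodings: count how many 8-bit codes land at or below 1% of peak luminance for linear vs sRGB-encoded storage (8 bits chosen purely to keep the numbers small; the same effect is what pushes linear buffers toward FP16).

```python
def srgb_eotf(e):
    """sRGB electrical value -> linear light, both in [0, 1]."""
    return e / 12.92 if e <= 0.04045 else ((e + 0.055) / 1.055) ** 2.4

# Codes whose luminance is at or below 1% of peak:
linear_codes = sum(1 for v in range(256) if v / 255 <= 0.01)           # 3
srgb_codes = sum(1 for v in range(256) if srgb_eotf(v / 255) <= 0.01)  # 26
```

Only 3 of 256 linear codes cover a luminance range the non-linear encoding covers with 26, which is why linearly encoded buffers need far more bits per channel.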
+Defining HW Details
+-------------------
+One way to take full advantage of modern HW's color pipelines is by
+defining a "generic" pipeline that matches all capable HW. Something
+like this, which I took `from Uma Shankar`_ and expanded on:
+.. _from Uma Shankar: https://patchwork.freedesktop.org/series/90826/
+.. kernel-figure:: colorpipe.svg
Btw. there will be interesting issues with alpha-premult, filtering, and linearisation if your planes have alpha channels. That's before HDR is even considered.
Could you expand on this a bit?
First you might want to read http://ssp.impulsetrain.com/gamma-premult.html and then ask which way software and hardware do and expect alpha premultiplication. I don't actually know. I have always assumed the intuitive way for compositing in non-linear values before I understood what light-linear means, which means I have always assumed the *wrong* way of doing premult.
The next topic is, when you do filtering to sample from a texture that has an alpha channel, what should the values be from which you compute the weighted average or convolution? If I remember right, the answer is that they must be light-linear *and* premultiplied.
So there is exactly one way that is correct, and all other orders of operations are more or less incorrect.
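A sketch of the texture-filtering point: averaging an opaque white texel with a fully transparent neighbour, once the correct way (light-linear and premultiplied) and once a common wrong way (straight alpha on encoded values). All numbers are illustrative.

```python
def srgb_eotf(e):
    """sRGB electrical value -> linear light."""
    return e / 12.92 if e <= 0.04045 else ((e + 0.055) / 1.055) ** 2.4

# Texels as (encoded color, alpha): opaque white next to transparent black.
white = (1.0, 1.0)
clear = (0.0, 0.0)

def filter_correct(a, b):
    """Linearize, premultiply, average, then un-premultiply."""
    pa = (srgb_eotf(a[0]) * a[1], a[1])
    pb = (srgb_eotf(b[0]) * b[1], b[1])
    color = (pa[0] + pb[0]) / 2
    alpha = (pa[1] + pb[1]) / 2
    return color / alpha if alpha else 0.0   # linear light contribution

def filter_naive(a, b):
    """Average straight-alpha encoded values, then linearize."""
    return srgb_eotf((a[0] + b[0]) / 2)

# Correct filtering keeps the edge white (linear 1.0 at half coverage);
# the naive version produces a darker fringe (linear ~0.214).
```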
+I intentionally put de-Gamma and Gamma in parentheses in my graph
+as they describe the intention of the block but not necessarily a
+strict definition of how a userspace implementation is required to
+use them.
+De-Gamma and Gamma blocks are named LUT, but they could be non-programmable
+LUTs in some HW implementations with no programmable LUT available. See
+the definitions for AMD's `latest dGPU generation`_ as an example.
+.. _latest dGPU generation: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/driver...
+I renamed the "Plane Gamma LUT" and "CRTC De-Gamma LUT" to "Tonemapping"
+as we generally don't want to re-apply gamma before blending, or do
+de-gamma post blending. These blocks are generally intended for
+tonemapping purposes.
Right.
+Tonemapping in this case could be a simple nits value or `EDR`_ to describe
+how to scale the :ref:`SDR luminance`.
I do wonder how that will turn out in the end... but on Friday there will be HDR Compositing and Tone-mapping live Q&A session: https://www.w3.org/Graphics/Color/Workshop/talks.html#compos
I didn't manage to join the compositing and tone-mapping live Q&A. Did anything interesting emerge from that?
I guess for me it wasn't mind blowing really, since I've been struggling to understand things for a good while now, and apparently I've actually learnt something. :-)
It was good (or bad?) to hear that much of the compositing challenges were still unsolved, and we're definitely not alone trying to find answers.
A much more interesting Q&A session was yesterday on Color creation and manipulation, where the topics were even more to our scope, perhaps surprisingly.
I got a grasp of how mindbogglingly complex the ICCmax specification is. It is so complex, that just recently they have started publishing a series of specifications that tell which parts of ICCmax one should implement or support for specific common use cases. Hopefully the emergence of those "Interoperability Conformance Specifications" gives rise to at least partial FOSS implementations.
If you want to do gamut reduction, OKLab color space seems like the best place to do it. It's not a specific gamut reduction algorithm, but it's a good space to work in, whatever you want to do.
The Krita presentation opened up practical issues with HDR and interoperability, and there I was able to ask about PQ and HLG differences and learn that HLG displays are better defined.
EDR was also talked about briefly.
As for take-aways... sorry, my mind hasn't returned to me yet. We will have to wait for the Q&A session transcripts to be published. Yes, there are supposed to be transcripts!
I didn't manage to ask how EDR is handling differences in black levels. EDR obviously caters for the peak whites, but I don't know about low blacks. They did give us a link: https://developer.apple.com/videos/play/wwdc2021/10161/
I haven't watched it yet.
I've watched Timo Kunkel's talk and it's been very eye opening. He does a great job of highlighting the challenges of compositing HDR content.
+Tonemapping could also include the ability to use a 3D LUT which might be
+accompanied by a 1D shaper LUT. The shaper LUT is required in order to
+ensure a 3D LUT with limited entries (e.g. 9x9x9, or 17x17x17) operates
+in perceptual (non-linear) space, so as to spread the limited entries
+evenly across the perceived space.
+.. _EDR: https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable...
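A 1D toy version of the shaper idea. The 1/2.4 power as a perceptual-ish shaper is an arbitrary stand-in here; real pipelines would use 3D LUTs and HW-specific shaper curves.

```python
TAPS = 17

def shaper(x):
    """Map linear input to the LUT's tap coordinate space (illustrative)."""
    return x ** (1 / 2.4)

def apply_lut(lut, x):
    """Index the LUT through the shaper, linearly interpolating taps."""
    t = shaper(x) * (len(lut) - 1)
    i = min(int(t), len(lut) - 2)
    f = t - i
    return lut[i] * (1 - f) + lut[i + 1] * f

# An identity LUT stores the inverse shaper at each evenly spaced tap,
# so taps end up evenly spread in the shaped (perceptual) domain and
# the darks get as many taps as the brights.
identity = [(k / (TAPS - 1)) ** 2.4 for k in range(TAPS)]
```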
+Creating a model that is flexible enough to define color pipelines for
+a wide variety of HW is challenging, though not impossible. Implementing
+support for such a flexible definition in userspace, though, amounts
+to essentially writing color pipeline drivers for each HW.
My thinking right now is that userspace has its own pipeline model with the elements it must have. Then it attempts to map that pipeline to what elements the KMS pipeline happens to expose. If there is a mapping, good. If not, fall back to shaders on GPU. To help that succeed more often, I'm using the current KMS abstract pipeline as a guide in designing the Weston internal color pipeline.
I feel I should know, but is this pipeline documented? Is it merely the plane > crtc > connector model, or does it go beyond that?
The KMS pixel pipeline model right now is just a bunch of properties in the CRTC. These properties allude to the degamma LUT -> CTM -> gamma LUT pipeline model, post-blending.
In Weston, we take a very similar approach. A color transformation (which maps to a single rendering pass, or the CRTC KMS properties, or the future per-plane KMS properties) is:
color model change -> pre-curve -> color mapping -> post-curve
- Color model change is more or less for YCbCr->RGB conversion.
- Pre- and post-curves are essentially per-channel 1D LUTs or enumerated functions.
- Color mapping is a 3D LUT, a matrix, or whatever else is needed.
You can see a similar structure to the KMS degamma->CTM->gamma, but with options to plug in other defined operations in the slots so that at least the GL-renderer can be flexible enough for everything, even if it doesn't match KMS capabilities. Each of the slots can also be identity (which even gets compiled out of the GL shader).
Weston has one color transformation per window to go from content to blending space, and another color transformation to go from blending to output (cable) space.
It's not really documented, as half of that code (and more, really) is still waiting for review or to be written. Oh, I did have some plans written down here: https://gitlab.freedesktop.org/wayland/weston/-/issues/467#note_864054
Pre-curve for instance could be a combination of decoding to linear light and a shaper for the 3D LUT coming next. That's why we don't call them gamma or EOTF, that would be too limiting.
(Using a shaper may help to keep the 3D LUT size reasonable - I suppose very much like those multi-segmented LUTs.)
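The four-slot transformation above could be sketched like this (the slot names are Weston's concepts, not actual Weston code):

```python
def make_color_transformation(model_change=None, pre_curve=None,
                              color_mapping=None, post_curve=None):
    """Compose the four slots; None means identity and is skipped,
    much like identity stages being compiled out of the GL shader."""
    stages = [s for s in (model_change, pre_curve,
                          color_mapping, post_curve) if s is not None]

    def transform(pixel):
        for stage in stages:
            pixel = stage(pixel)
        return pixel

    return transform

# Example: a content-to-blending transform with only a pre-curve set,
# decoding 2.2-gamma content to linear light (illustrative numbers).
to_blending = make_color_transformation(
    pre_curve=lambda rgb: tuple(c ** 2.2 for c in rgb))
```

A window would get one such transform into blending space, and the output gets another one from blending space to cable values.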
...
Now, someone might say that the Wayland protocol design for HDR aims to be descriptive and not prescriptive, so why should KMS UAPI be different? The reason is explained above: *some* KMS clients may switch frame by frame between KMS and shaders, but Wayland clients pick one path and stick to it. Wayland clients have no reason that I can imagine to switch arbitrarily in flight.
I'm a bit confused about this paragraph. Wouldn't the Wayland compositor decide whether to use a KMS plane or shader and not the client?
What I meant is, Wayland clients will not randomly switch between doing color transformations themselves and letting the compositor do it. They should be able to just pick one path and stick to it as long as the window is up.
+We would like to solicit feedback and encourage discussion around the
+merits and weaknesses of these approaches. This question is at the core
+of defining a good API and we'd like to get it right.
+Input and Output Transfer functions
+-----------------------------------
+We define an input transfer function on drm_plane to describe the
+transform from framebuffer to blending space.
+We define an output transfer function on drm_crtc to describe the
+transform from blending space to display space.
Here is again the terminology problem between transfer function and (color) space.
Color value encoding? Or luminance space? Or maybe there's a different term altogether to describe this?
The problem in the statement is that it implies a transfer function can do color space conversions or color space mapping.
In Weston we call it "color transformation" in an attempt to include everything.
The input function must include the possibility for color space mapping because you may have different planes with different content color spaces, and blending requires converting them all into one common color space.
Depending on what you choose as your blending space, the output function could be just the display EOTF or something more complicated.
...
It's worth noting that while PQ is absolute in luminance (providing cd/m² values), everything else here is relative for both SDR and HDR. You cannot blend PQ content together with content in anything else until you practically define the absolute luminance for all non-PQ content, or vice versa.
A further complication is that you could have different relative-luminance transfer functions, meaning that the (absolute) luminance they are relative to varies. The obvious case is blending SDR content with HDR content when both have relative-luminance transfer function.
Good points. It sounds like we would need something akin to EDR (or max-SDR nits) for any relative-luminance TF, i.e. a way to arbitrarily scale the luminance of the respective plane.
Right. However, in the past few days, I've heard statements that scaling luminance linearly will look not so good. What you need to do is to follow the human visual system (HVS) characteristic and use a gamma function. (This is not about non-linear encoding, just that the function happens to be similar - which is not totally a coincidence, since also non-linear encoding is meant to follow the HVS[*].) HLG OOTF does exactly this IIUC. Naturally, these statements came from Andrew Cotton as I recall.
* Or actually, the non-linear encoding was meant to follow cathode-ray tube characteristic, which by pure coincidence happens to roughly agree with HVS.
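For reference, the HLG OOTF applies a gamma to luminance rather than a linear scale, and BT.2100 gives the system gamma as a function of nominal display peak luminance. The sketch below just shows the shape of that behaviour.

```python
import math

def hlg_system_gamma(peak_nits):
    """BT.2100 HLG system gamma for a display with the given nominal
    peak luminance; 1.2 at the 1000 cd/m2 reference display."""
    return 1.2 + 0.42 * math.log10(peak_nits / 1000.0)

# Brighter displays get a larger system gamma, dimmer ones a smaller one.
```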
Then you have HLG which is more like scene-referred than display-referred, but that might be solved with the parameter I mentioned, I'm not quite sure.
PQ is said to be display-referred, but it's usually referred to someone else's display than yours, which means it needs the HDR metadata to be able to tone-map suitably to your display. This seems to be a similar problem as with signal gamut vs. device gamut.
The traditional relative-luminance transfer functions, well, the content implied by them, is display-referred when it arrived at KMS or compositor level. There the question of "whose display" doesn't matter much because it's SDR and narrow gamut, and we probably don't even notice when we see an image wrong. With HDR the mismatch might be noticeable.
+Describing SDR Luminance
+------------------------
+Since many displays do not correctly advertise the HDR white level we
+propose to define the SDR white level in nits.
This means that even if you had no content using PQ, you still need to define the absolute luminance for all the (HDR) relative-luminance transfer functions.
There probably needs to be something to relate everything to a single, relative or absolute, luminance range. That is necessary for any composition (KMS and software) since the output is a single image.
Is it better to go with relative or absolute metrics? Right now I would tend to say relative, because relative is unitless. Absolute values are numerically equivalent, but they might not have anything to do with actual physical measurements, making them actually relative. This happens when your monitor does not support PQ mode or does tone-mapping to your image, for instance.
It sounds like PQ is the outlier here in defining luminance in absolute units. Though it's also currently the most commonly used TF for HDR content.
Yes. "A completely new way", I recall reading somewhere advocating PQ. :-)
You can't switch from PQ to HLG by only replacing the TF, mind. Or so they say... I suppose converting from one to the other requires making decisions on the way. At least you need to know what display dynamic range you are targeting I think.
Wouldn't you use the absolute luminance definition for PQ if you relate everything to a relative range?
Would it make sense to relate everything to a common output luminance range? If that output is PQ then an input PQ buffer is still output as PQ and relative-luminance buffers can be scaled.
Would that scaling (EDR or similar) be different for SDR (sRGB) content vs other HDR relative-luminance content?
I think we need to know the target display, especially the dynamic range of it. Then we know what HLG OOTF it should use. From PQ we need at least the HDR static metadata to know the actual range, as assuming the full 10k nit range being meaningful could seriously lose highlights or something I guess.
Everything is relative to the target display I believe, even PQ since displaying PQ as-is only works on the mastering display.
Since PQ content comes with some metadata, we need PQ-to-PQ conversions for PQ display, assuming we don't just pass through the metadata to the display. Maybe the HLG OOTF could be used for the tone mapping of PQ-to-PQ...
I think both PQ and HLG have different standards written for how to map SDR to them. I don't remember which ITU-R or SMPTE spec those might be, but I suppose BT.2100 could be a starting point searching for them.
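To make the relative-vs-absolute discussion concrete, here is a sketch of anchoring relative SDR values to absolute luminance and re-encoding them as PQ. The ST 2084 constants are standard; the 203 nit reference white follows BT.2408, and the `sdr_white_nits` parameter merely stands in for whatever property ends up defining it.

```python
# ST 2084 (PQ) inverse EOTF constants.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_inv_eotf(nits):
    """Absolute luminance in cd/m2 -> PQ-encoded signal in [0, 1]."""
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

def sdr_to_pq(linear_relative, sdr_white_nits=203.0):
    """Scale a relative-luminance linear SDR value by the assumed SDR
    white level, then PQ-encode it for a PQ output."""
    return pq_inv_eotf(linear_relative * sdr_white_nits)
```

100 cd/m² lands at roughly 0.51 PQ and 203 cd/m² at roughly 0.58, matching the commonly quoted reference points.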
...
Initially I was hoping to find a quick way to allow pushing video straight from decoder through a KMS plane to the output. Increasingly I'm realizing that this is probably not going to work well for a general desktop compositor, hence the statement here to pretty much say the Wayland plan is the correct plan for this: single-plane HDR (with shader composition) first, then KMS offloading for power saving.
On some level I'm still interested in the direct decoder-to-KMS-to-display path but am afraid we won't get the API right if we don't deal with the general desktop compositor use-case first.
I am very happy to hear that. :-)
Apologies, again, if some of my response is a bit incoherent. I've been writing the responses over Friday and today.
It wasn't at all!
Thanks, pq
On 2021-09-21 09:31, Pekka Paalanen wrote:
On Mon, 20 Sep 2021 20:14:50 -0400 Harry Wentland harry.wentland@amd.com wrote:
On 2021-09-15 10:01, Pekka Paalanen wrote:
On Fri, 30 Jul 2021 16:41:29 -0400
Harry Wentland harry.wentland@amd.com wrote:
Use the new DRM RFC doc section to capture the RFC previously only described in the cover letter at https://patchwork.freedesktop.org/series/89506/
v3:
- Add sections on single-plane and multi-plane HDR
- Describe approach to define HW details vs approach to define SW intentions
- Link Jeremy Cline's excellent HDR summaries
- Outline intention behind overly verbose doc
- Describe FP16 use-case
- Clean up links
v2: create this doc
v1: n/a
Signed-off-by: Harry Wentland harry.wentland@amd.com
Hi Harry!
...
Documentation/gpu/rfc/color_intentions.drawio |   1 +
Documentation/gpu/rfc/color_intentions.svg    |   3 +
Documentation/gpu/rfc/colorpipe               |   1 +
Documentation/gpu/rfc/colorpipe.svg           |   3 +
Documentation/gpu/rfc/hdr-wide-gamut.rst      | 580 ++++++++++++++++++
Documentation/gpu/rfc/index.rst               |   1 +
6 files changed, 589 insertions(+)
create mode 100644 Documentation/gpu/rfc/color_intentions.drawio
create mode 100644 Documentation/gpu/rfc/color_intentions.svg
create mode 100644 Documentation/gpu/rfc/colorpipe
create mode 100644 Documentation/gpu/rfc/colorpipe.svg
create mode 100644 Documentation/gpu/rfc/hdr-wide-gamut.rst
...
+Here are some examples of real-life objects and their approximate
+luminance values:
+.. _PQ (perceptual quantizer) function: https://en.wikipedia.org/wiki/High-dynamic-range_video#Perceptual_Quantizer
+.. flat-table::
+   :header-rows: 1
+
+   * - Object
+     - Luminance in nits
+   * - Fluorescent light
+     - 10,000
+   * - Highlights
+     - 1,000 - sunlight
Did fluorescent and highlights get swapped here?
No, though at first glance it can look like that. This is pulled from an internal doc I didn't write, but I think the intention is to show that fluorescent lights can be up to 10,000 nits and highlights are usually 1,000+ nits.
I'll clarify this in v4.
A quick google search seems to show that there are even fluorescent lights with 46,000 nits. I guess these numbers provide a ballpark view more than anything.
Those seem quite extreme fluorescent lights, far beyond what one might find in offices I suppose?
I mean, I can totally stare straight at my office fluorescent lights without any discomfort.
Highlights OTOH of course depend on which highlights we're talking about, and your 1000 - sunlight range I can totally agree with.
If you look at a sea or a lake on a sunny day, the reflections of Sun on the water surface are much much brighter than anything else in nature aside from Sun itself. I happened to see this myself when playing with a camera: the rest of the image can be black while the water highlights still shoot way beyond the captured dynamic range.
+   * - White Objects
+     - 250 - 1,000
+   * - Typical Objects
+     - 1 - 250
+   * - Shadows
+     - 0.01 - 1
+   * - Ultra Blacks
+     - 0 - 0.0005
+Transfer functions
+------------------
+Traditionally we used the terms gamma and de-gamma to describe the
+encoding of a pixel's luminance value and the operation to transfer from
+a linear luminance space to the non-linear space used to encode the
+pixels. Since some newer encodings don't use a gamma curve I suggest
+we refer to non-linear encodings using the terms `EOTF, and OETF`_, or
+simply as transfer function in general.
Yeah, gamma could mean lots of things. If you have e.g. OETF gamma 1/2.2 and EOTF gamma 2.4, the result is OOTF gamma 1.09.
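As a quick arithmetic check of that 1.09 figure: encoding with gamma 1/2.2 and decoding with gamma 2.4 composes into a single power function.

```python
# (x ** (1/2.2)) ** 2.4 == x ** (2.4/2.2), so the net OOTF gamma is:
ootf_gamma = 2.4 / 2.2          # ~1.09
x = 0.42                        # any value in (0, 1)
composed = (x ** (1 / 2.2)) ** 2.4
```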
OETF, EOTF and OOTF are not unambiguous either, since there is always the question of whose function is it.
Yeah, I think both gamma and EO/OE/OO/EETF are all somewhat problematic.
We can use them, but we have to explain which functions we are referring to. In particular, if you have a specific EOTF, then the inverse of it should be called EOTF^-1 and not OETF, to follow what I have understood of specs like BT.2100.
I should probably add a paragraph about OOTF. The Apple talk you linked below uses OOTF to refer to tone-mapping.
Personally I'd take things further and talk about encoding and decoding functions when the intent is to translate between pixel values and light-linear color values rather than characterising a piece of equipment.
I tend to think about these more in terms of input and output transfer functions but then you have the ambiguity about what your input and output mean. I see the input TF between framebuffer and blender, and the output TF between blender and display.
Indeed, those are good explanations.
You also have the challenge that input and output transfer functions fulfill multiple roles, e.g. an output transfer as defined above might do linear-to-PQ conversion but could also fill the role of tone mapping in the case where the input content spans a larger range than the display space.
I would like to avoid such conflation or use different terms. That is indeed the confusion often had I think.
I would say that encoding/decoding function does not do any kind of tone-mapping. It's purely for numerical encoding to save bits on transmission or taps in a LUT. Although, for taps in a LUT optimization, it is called "shaper" instead. A shaper function (or 1D LUT) does not need to equal an encoding function.
We're going to need glossary.
Ack
Two different EOTFs are of interest in composition for display:
- the display EOTF (since display signal is electrical)
- the content EOTF (since content is stored in electrical encoding)
+The EOTF (Electro-Optical Transfer Function) describes how to transfer
+from an electrical signal to an optical signal. This was traditionally
+done by the de-gamma function.
+The OETF (Opto Electronic Transfer Function) describes how to transfer
+from an optical signal to an electronic signal. This was traditionally
+done by the gamma function.
+More generally we can name the transfer function describing the transform
+between scanout and blending space as the **input transfer function**, and
"scanout space" makes me think of cable/signal values, not framebuffer values. Or, I'm not sure. I'd recommend replacing the term "scanout space" with something less ambiguous like framebuffer values.
Framebuffer or pixel values could be, say, 10-bit integer, while (non-linear) color values would be that converted to the [0.0, 1.0] range for example.
I think we need to talk about what 1.0 means. Apple's EDR defines 1.0 as "reference white" or in other words the max SDR white.
That definition might change depending on the content type.
+the transfer function describing the transform from blending space to the
+output space as **output transfer function**.
You're talking about "spaces" here, but what you are actually talking about are value encodings, not (color) spaces. An EOTF or OETF is not meant to modify the color space.
When talking about blending, what you're actually interested in is linear vs. non-linear color value encoding. This matches your talk about EOTF and OETF, although you need to be careful to specify which EOTF and OETF you mean. For blending, color values need to be linear in light intensity, and the inverse of the E-to-O mapping before blending is exactly the same as the O-to-E mapping after blending. Otherwise you would alter even opaque pixels.
I struggle a bit with finding the right term to talk about color value encoding in general. Concrete examples can be PQ-encoded, Gamma 2.2, or linearly encoded spaces but I was grasping for a more general term; something that could potentially include TFs that also tone-map.
I would very much prefer to keep tone-mapping as a separate conceptual object, but I think I see where you are coming from: the API has a single slot for the combined coding/tone-mapping function.
Is "combined coding/tone-mapping function" too long to type? :-)
Interestingly, the Canvas API changes presented by Christopher Cameron also seem to use the new colorSpace property to deal with both color space, as well as EOTF.
That may be practical from API point of view, but conceptually I find it confusing. I think it is easier to think through the theory with completely independent color space and encoding concepts, and then it will be easy to understand that in an API you just pick specific pairs of them since those are enough for most use cases.
If you start from the API concepts, try to work towards the theory, and then you are presented a display whose EOTF is measured and does not match any of the standard ones present in the API, I think you would struggle to make that display work until you realise that color space and encoding can be decoupled.
A bit like how YCbCr is not a color space but a color model you can apply to any RGB color space, and you can even pick the encoding function separately if you want to.
Also mind that tone mapping is completely separate to all the above. The above describe what colors pixels represent on one device (or in an image). Tone mapping is an operation that adapts an image from one device to another device. Gamut mapping is as well.
So describing a color space, color model, and encoding is one thing. Adapting (converting) an image from one such to another is a whole different thing. However, when you have hardware pixel pipeline, you tend to program the total transformation from source to destination, where all those different unrelated or orthogonal concepts have been combined and baked in, usually in such a way that you cannot separate them anymore.
Our plans for Weston internals follow the same: you have descriptions of source and destination pixels, you have your rendering intent that affects how things like gamut mapping and tone mapping work, and then you compute the two transformations from all those: the transformation from source to blending space, and from blending space to output (monitor cable values). In the Weston design the renderer KMS framebuffer will hold either blending space values or cable values.
Btw. another thing is color space conversion vs. gamut and tone mapping. These are also separate concepts. You can start with BT.2020 color space color values, and convert those to sRGB color values. A pure color space conversion can result in color values outside of the sRGB value range, because BT.2020 is a bigger color space. If you clip those out-of-range values into range, then you are doing gamut (and tone?) mapping in my opinion.
...
+Displays and Tonemapping +------------------------
+External displays are able to do their own tone and color mapping, based +on the mastering luminance, color primaries, and white space defined in +the HDR metadata.
HLG does things differently wrt. metadata and tone-mapping than PQ.
As mentioned above I had some time to watch the HLG presentation and that indeed has interesting implications. With HLG we also have relative luminance HDR content. One challenge is how to tone-map HLG content alongside SDR (sRGB) content and PQ content.
I think ultimately this means that we can't rely on display tonemapping when we are dealing with mixed content on the screen. In that case we would probably want to output to the display in the EDID-referred space and tone-map all incoming buffers to the EDID-referred space.
That's exactly the plan with Weston.
The display signal space has three options according to EDID/HDMI:
HDR with traditional gamma (which I suppose means the relative [0.0, 1.0] range with either sRGB or 2.2 gamma encoding and using the monitor's native gamut)
BT.2020 PQ
HLG (BT.2020?)
These are what the monitor cable must carry, so these are what the CRTC must produce. I suppose one could pick the blending space to be something else, but in Weston the plan is to use cable signal as the blending space, just linearised for light and limited by the monitor's gamut and dynamic range. That keeps the post-blend operations as simple as possible, meaning we are likely to be able to offload that to KMS and do not need another renderer pass for that.
One thing I realised yesterday is that HLG displays are much better defined than PQ displays, because HLG defines what OOTF the display must implement. In a PQ system, the signal carries the full 10k nits range, and then the monitor must do vendor magic to display it. That's for tone mapping, not sure if HLG has an advantage in gamut mapping as well.
Doesn't the metadata describe the max content white? So even if the signal carries the full 10k nits the actual max luminance of the content should be encoded as part of the metadata.
For a PQ display, all we can do is hope that if we tell the monitor via HDR static metadata that our content will never exceed monitor capabilities then the monitor doesn't mangle our images too bad.
I think the doc needs a lot more pictures. I wonder if I can do that without polluting git with large files.
...
+Multi-plane
+-----------
+In multi-plane configurations we need to solve the problem of blending
+HDR and SDR content. This blending should be done in linear space and
+therefore requires framebuffer data that is presented in linear space
+or a way to convert non-linear data to linear space. Additionally
+we need a way to define the luminance of any SDR content in relation
+to the HDR content.
+In order to present framebuffer data in linear space without losing a
+lot of precision it needs to be presented using 16 bpc precision.
Integer or floating-point?
Floating point. Fixed in v4.
I doubt integer would work since we'd lose too much precision in the dark areas. Though, maybe 16-bit would let us map those well enough? I don't know for sure. Either way, I think anybody doing linear is using FP16.
That's a safe assumption. Integer precision in the dark end also depends on how high the bright end goes. With floating point that seems like a non-issue.
What I think is "common knowledge" by now is that 8 bits is not enough for a linear channel. However, 10 bits integer might be enough for a linear channel in SDR.
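A quick numeric sketch of that "common knowledge" (my illustration, comparing the first quantization step above black for 8-bit linear coding versus 8-bit sRGB-encoded coding):

```python
# Why 8-bit integer channels are too coarse for linear light: the smallest
# nonzero 8-bit linear code is a huge step in the dark end, while non-linear
# (sRGB) encoding spends its codes where the eye needs them.

def srgb_eotf(v):
    # sRGB electro-optical transfer function (IEC 61966-2-1).
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

step_linear_8bit = 1 / 255            # first step above black, linear coding
step_srgb_8bit = srgb_eotf(1 / 255)   # first step above black, sRGB coding

# The sRGB-coded step is ~13x finer near black:
# 1/255/12.92 = ~0.000304 vs 1/255 = ~0.00392
```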
+Defining HW Details
+-------------------
+One way to take full advantage of modern HW's color pipelines is by
+defining a "generic" pipeline that matches all capable HW. Something
+like this, which I took `from Uma Shankar`_ and expanded on:
+.. _from Uma Shankar: https://patchwork.freedesktop.org/series/90826/
+.. kernel-figure:: colorpipe.svg
Btw. there will be interesting issues with alpha-premult, filtering, and linearisation if your planes have alpha channels. That's before HDR is even considered.
Could you expand on this a bit?
First you might want to read http://ssp.impulsetrain.com/gamma-premult.html and then ask, which way does software and hardware do and expect alpha premultiplication. I don't actually know. I have always assumed the intuitive way for compositing in non-linear values before I understood what light-linear means, which means I have always assumed the *wrong* way of doing premult.
The next topic is, when you do filtering to sample from a texture that has an alpha channel, what should the values be from which you compute the weighted average or convolution? If I remember right, the answer is that they must be light-linear *and* premultiplied.
So there is exactly one way that is correct, and all other orders of operations are more or less incorrect.
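To illustrate that ordering issue with a toy example (mine, using a plain gamma 2.2 stand-in for the real encoding), consider filtering across the edge between an opaque white texel and a fully transparent one:

```python
# To average (filter) two texels with alpha, the values should be
# light-linear *and* premultiplied first. Gamma 2.2 stands in for the
# real non-linear encoding here.

def to_linear(v): return v ** 2.2

# Two texels: opaque white next to fully transparent black.
texels = [(1.0, 1.0), (0.0, 0.0)]   # (non-linear color, alpha)

# Correct: linearize, premultiply, then average.
correct = sum(to_linear(c) * a for c, a in texels) / len(texels)  # 0.5 linear

# Incorrect: average non-linear premultiplied values, then linearize.
wrong = to_linear(sum(c * a for c, a in texels) / len(texels))    # 0.5**2.2

# The incorrect path darkens the edge noticeably (~0.22 vs 0.5 linear light).
```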
+I intentionally put de-Gamma and Gamma in parentheses in my graph
+as they describe the intention of the block but not necessarily a
+strict definition of how a userspace implementation is required to
+use them.
+De-Gamma and Gamma blocks are named LUT, but they could be non-programmable
+LUTs in some HW implementations with no programmable LUT available. See
+the definitions for AMD's `latest dGPU generation`_ as an example.
+.. _latest dGPU generation: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/driver...
+I renamed the "Plane Gamma LUT" and "CRTC De-Gamma LUT" to "Tonemapping"
+as we generally don't want to re-apply gamma before blending, or do
+de-gamma post blending. These blocks are generally intended for
+tonemapping purposes.
Right.
+Tonemapping in this case could be a simple nits value or `EDR`_ to describe
+how to scale the :ref:`SDR luminance`.
I do wonder how that will turn out in the end... but on Friday there will be HDR Compositing and Tone-mapping live Q&A session: https://www.w3.org/Graphics/Color/Workshop/talks.html#compos
I didn't manage to join the compositing and tone-mapping live Q&A. Did anything interesting emerge from that?
I guess for me it wasn't mind blowing really, since I've been struggling to understand things for a good while now, and apparently I've actually learnt something. :-)
It was good (or bad?) to hear that much of the compositing challenges were still unsolved, and we're definitely not alone trying to find answers.
A much more interesting Q&A session was yesterday on Color creation and manipulation, where the topics were even more to our scope, perhaps surprisingly.
I got a grasp of how mindbogglingly complex the ICCmax specification is. It is so complex, that just recently they have started publishing a series of specifications that tell which parts of ICCmax one should implement or support for specific common use cases. Hopefully the emergence of those "Interoperability Conformance Specifications" gives rise to at least partial FOSS implementations.
If you want to do gamut reduction, OKLab color space seems like the best place to do it. It's not a specific gamut reduction algorithm, but it's a good space to work in, whatever you want to do.
The Krita presentation opened up practical issues with HDR and interoperability, and there I was able to ask about PQ and HLG differences and learn that HLG displays are better defined.
Even EDR was also talked about briefly.
As for take-aways... sorry, my mind hasn't returned to me yet. We will have to wait for the Q&A session transcripts to be published. Yes, there are supposed to be transcripts!
I didn't manage to ask how EDR is handling differences in black levels. EDR obviously caters for the peak whites, but I don't know about low blacks. They did give us a link: https://developer.apple.com/videos/play/wwdc2021/10161/
I haven't watched it yet.
I just went through it. It's a worthwhile watch, though contains a bunch of corporate spin.
It sounds like EDR describes not just the mapping of SDR content to HDR outputs but goes beyond that and is the term used to describe the whole technology that allows rendering of content with different color spaces and in different pixel value representations. It looks like Apple has the composition of temporally & spatially mixed media figured out.
They don't seem to do proper tone-mapping in most cases, though. They talk about clipping highlights and seem to allude to the fact that tone-mapping (or soft-clipping) is an application's responsibility.
Their color value representation represents SDR as values between 0.0 and 1.0. Any value above 1.0 is an "HDR" value and can get clipped.
There are some good bits in the "best practices" section of the talk, like a mechanism of converting PQ content to EDR.
I've watched Timo Kunkel's talk and it's been very eye opening. He does a great job of highlighting the challenges of compositing HDR content.
+Tonemapping could also include the ability to use a 3D LUT which might be
+accompanied by a 1D shaper LUT. The shaper LUT is required in order to
+ensure a 3D LUT with limited entries (e.g. 9x9x9, or 17x17x17) operates
+in perceptual (non-linear) space, so as to spread the limited entries
+evenly across the perceived space.
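A minimal sketch of the shaper-plus-3D-LUT idea (my illustration; a gamma curve stands in for the 1D shaper LUT, an identity grid quantizer stands in for the 3D LUT, and nearest-neighbour lookup replaces the trilinear/tetrahedral interpolation real hardware does):

```python
# A 1D shaper LUT in front of a 17^3 3D LUT: the shaper moves the 3D LUT
# input into perceptual space so its sparse grid points land where the eye
# is most sensitive.
N = 17

def shaper(v):
    # stand-in perceptual encoding (gamma 1/2.4); real HW uses a 1D LUT
    return v ** (1 / 2.4)

def lut3d_identity(r, g, b):
    # stand-in identity 3D LUT: quantize each channel to the 17-tap grid
    def q(v): return round(v * (N - 1)) / (N - 1)
    return q(r), q(g), q(b)

def pipeline(r, g, b):
    return lut3d_identity(shaper(r), shaper(g), shaper(b))

# Fed raw linear values, the grid collapses dark inputs like 0.001 and 0.01
# onto the same (zero) grid point; after the shaper they stay distinct.
dark_a = pipeline(0.001, 0.001, 0.001)
dark_b = pipeline(0.01, 0.01, 0.01)
```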
+.. _EDR: https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable...
+Creating a model that is flexible enough to define color pipelines for
+a wide variety of HW is challenging, though not impossible. Implementing
+support for such a flexible definition in userspace, though, amounts
+to essentially writing color pipeline drivers for each HW.
My thinking right now is that userspace has its own pipeline model with the elements it must have. Then it attempts to map that pipeline to what elements the KMS pipeline happens to expose. If there is a mapping, good. If not, fall back to shaders on GPU. To help that succeed more often, I'm using the current KMS abstract pipeline as a guide in designing the Weston internal color pipeline.
I feel I should know, but is this pipeline documented? Is it merely the plane > crtc > connector model, or does it go beyond that?
The KMS pixel pipeline model right now is just a bunch of properties in the CRTC. These properties allude to the degamma LUT -> CTM -> gamma LUT pipeline model, post-blending.
In Weston, we take a very similar approach. A color transformation (which maps to a single rendering pass, or the CRTC KMS properties, or the future per-plane KMS properties) is:
color model change -> pre-curve -> color mapping -> post-curve
Color model change is more or less for YCbCr->RGB conversion.
Pre- and post-curves are essentially per-channel 1D LUTs or enumerated functions.
Color mapping is a 3D LUT, a matrix, or whatever else is needed.
You can see a similar structure to the KMS degamma->CTM->gamma, but with options to plug in other defined operations in the slots so that at least the GL-renderer can be flexible enough for everything, even if it doesn't match KMS capabilities. Each of the slots can also be identity (which even gets compiled out of the GL shader).
Weston has one color transformation per window to go from content to blending space, and another color transformation to go from blending to output (cable) space.
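The slot structure described above could be sketched as composed functions, with identity as the default for each slot. This is my illustration only; the names are not Weston's actual types or API:

```python
# Sketch of: color model change -> pre-curve -> color mapping -> post-curve
# Each slot defaults to identity (which a GL implementation could compile out).

def identity(x): return x

def make_transform(model_change=identity, pre_curve=identity,
                   color_mapping=identity, post_curve=identity):
    def transform(px):
        return post_curve(color_mapping(pre_curve(model_change(px))))
    return transform

# Example: a transform whose only job is linearising with gamma 2.2 in the
# pre-curve slot; every other slot stays identity.
linearise = make_transform(pre_curve=lambda c: tuple(v ** 2.2 for v in c))
```

One such transform would take content to blending space, and a second one blending space to output (cable) space, matching the two-transformation split described for Weston.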
It's not really documented, as half of that code, and more really, is still waiting for review or to be written. Oh, I did have some plans written down here: https://gitlab.freedesktop.org/wayland/weston/-/issues/467#note_864054
Right, I need to digest this again.
Did anybody start any CM doc patches in Weston or Wayland yet?
Pre-curve for instance could be a combination of decoding to linear light and a shaper for the 3D LUT coming next. That's why we don't call them gamma or EOTF, that would be too limiting.
(Using a shaper may help to keep the 3D LUT size reasonable - I suppose very much like those multi-segmented LUTs.)
AFAIU 3D LUTs will need a shaper as they don't have enough precision. But that's going deeper into color theory than I understand. Vitaly would know better all the details around 3D LUT usage.
...
Now, someone might say that the Wayland protocol design for HDR aims to be descriptive and not prescriptive, so why should KMS UAPI be different? The reason is explained above: *some* KMS clients may switch frame by frame between KMS and shaders, but Wayland clients pick one path and stick to it. Wayland clients have no reason that I can imagine to switch arbitrarily in flight.
I'm a bit confused about this paragraph. Wouldn't the Wayland compositor decide whether to use a KMS plane or shader and not the client?
What I meant is, Wayland clients will not randomly switch between doing color transformations themselves and letting the compositor do it. They should be able to just pick one path and stick to it as long as the window is up.
Makes sense.
+We would like to solicit feedback and encourage discussion around the
+merits and weaknesses of these approaches. This question is at the core
+of defining a good API and we'd like to get it right.
+Input and Output Transfer functions
+-----------------------------------
+We define an input transfer function on drm_plane to describe the
+transform from framebuffer to blending space.
+We define an output transfer function on drm_crtc to describe the
+transform from blending space to display space.
Here is again the terminology problem between transfer function and (color) space.
Color value encoding? Or luminance space? Or maybe there's a different term altogether to describe this?
The problem in the statement is that it implies a transfer function can do color space conversions or color space mapping.
In Weston we call it "color transformation" in an attempt to include everything.
The input function must include the possibility for color space mapping because you may have different planes with different content color spaces, and blending requires converting them all into one common color space.
Depending on what you choose as your blending space, the output function could be just the display EOTF or something more complicated.
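As a minimal sketch of that scheme (my illustration; a plain gamma 2.2 stands in for the real per-plane and display transfer functions, and alpha blending for the real blend equation):

```python
# Linearise each input, alpha-blend in light-linear values, then re-encode
# for the cable with the display's inverse EOTF.

def eotf(v): return v ** 2.2            # encoded value -> linear light
def inv_eotf(v): return v ** (1 / 2.2)  # linear light -> cable value

def blend(top, top_alpha, bottom):
    lin = eotf(top) * top_alpha + eotf(bottom) * (1 - top_alpha)
    return inv_eotf(lin)

# 50% white over black lands at 0.5 in *linear light*, which re-encodes to
# a cable value of ~0.73 -- not 0.5, which is what blending in non-linear
# values would have produced.
out = blend(1.0, 0.5, 0.0)
```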
...
It's worth noting that while PQ is absolute in luminance (providing cd/m² values), everything else here is relative for both SDR and HDR. You cannot blend content in PQ with content in something else together, until you practically define the absolute luminance for all non-PQ content or vice versa.
A further complication is that you could have different relative-luminance transfer functions, meaning that the (absolute) luminance they are relative to varies. The obvious case is blending SDR content with HDR content when both have relative-luminance transfer function.
Good points. It sounds like we would need something akin to EDR (or max-SDR nits) for any relative-luminance TF, i.e. a way to arbitrarily scale the luminance of the respective plane.
Right. However, in the past few days, I've heard statements that scaling luminance linearly will look not so good. What you need to do is to follow the human visual system (HVS) characteristic and use a gamma function. (This is not about non-linear encoding, just that the function happens to be similar - which is not totally a coincidence, since also non-linear encoding is meant to follow the HVS[*].) HLG OOTF does exactly this IIUC. Naturally, these statements came from Andrew Cotton as I recall.
Interesting comment about scaling luminance.
- Or actually, the non-linear encoding was meant to follow cathode-ray tube characteristic, which by pure coincidence happens to roughly agree with HVS.
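A sketch contrasting the two scaling approaches (my illustration; the system gamma formula is the HLG OOTF approximation from ITU-R BT.2100, roughly gamma = 1.2 + 0.42 * log10(Lw / 1000) for a display of peak luminance Lw):

```python
import math

# Plain linear luminance scaling vs. an HLG-OOTF-style system gamma.

def system_gamma(peak_nits):
    # BT.2100 HLG system gamma approximation; 1.2 at a 1000-nit display.
    return 1.2 + 0.42 * math.log10(peak_nits / 1000.0)

def scale_linear(y, peak_nits):
    return y * peak_nits                                 # naive multiply

def scale_ootf(y, peak_nits):
    return peak_nits * (y ** system_gamma(peak_nits))    # HVS-following

# For a relative mid-tone y = 0.5 on a 1000-nit display (gamma = 1.2), the
# OOTF path renders mid-tones darker than naive linear scaling:
lin = scale_linear(0.5, 1000)   # 500 nits
ootf = scale_ootf(0.5, 1000)    # ~435 nits
```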
Then you have HLG which is more like scene-referred than display-referred, but that might be solved with the parameter I mentioned, I'm not quite sure.
PQ is said to be display-referred, but it's usually referred to someone else's display than yours, which means it needs the HDR metadata to be able to tone-map suitably to your display. This seems to be a similar problem as with signal gamut vs. device gamut.
The traditional relative-luminance transfer functions, well, the content implied by them, is display-referred when it arrived at KMS or compositor level. There the question of "whose display" doesn't matter much because it's SDR and narrow gamut, and we probably don't even notice when we see an image wrong. With HDR the mismatch might be noticeable.
+Describing SDR Luminance
+------------------------
+Since many displays do not correctly advertise the HDR white level we
+propose to define the SDR white level in nits.
This means that even if you had no content using PQ, you still need to define the absolute luminance for all the (HDR) relative-luminance transfer functions.
There probably needs to be something to relate everything to a single, relative or absolute, luminance range. That is necessary for any composition (KMS and software) since the output is a single image.
Is it better to go with relative or absolute metrics? Right now I would tend to say relative, because relative is unitless. Absolute values are numerically equivalent, but they might not have anything to do with actual physical measurements, making them actually relative. This happens when your monitor does not support PQ mode or does tone-mapping to your image, for instance.
It sounds like PQ is the outlier here in defining luminance in absolute units. Though it's also currently the most commonly used TF for HDR content.
Yes. "A completely new way", I recall reading somewhere advocating PQ. :-)
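For reference, the absolute mapping that makes PQ the outlier is its EOTF as defined in SMPTE ST 2084 / BT.2100; a direct transcription (mine, for illustration):

```python
# SMPTE ST 2084 (PQ) EOTF: maps an encoded value in [0, 1] directly to
# absolute luminance in cd/m2 (nits), up to 10000.

m1 = 2610 / 16384
m2 = 2523 / 4096 * 128
c1 = 3424 / 4096
c2 = 2413 / 4096 * 32
c3 = 2392 / 4096 * 32

def pq_eotf(e):
    p = e ** (1 / m2)
    return 10000.0 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)

# pq_eotf(0.0) == 0, pq_eotf(1.0) == 10000, pq_eotf(0.5) is ~92 nits.
# The code value alone pins the luminance, which is why PQ content needs
# metadata (and tone mapping) on any display that cannot reach that range.
```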
You can't switch from PQ to HLG by only replacing the TF, mind. Or so they say... I suppose converting from one to the other requires making decisions on the way. At least you need to know what display dynamic range you are targeting I think.
Wouldn't you use the absolute luminance definition for PQ if you relate everything to a relative range?
Would it make sense to relate everything to a common output luminance range? If that output is PQ then an input PQ buffer is still output as PQ and relative-luminance buffers can be scaled.
Would that scaling (EDR or similar) be different for SDR (sRGB) content vs other HDR relative-luminance content?
I think we need to know the target display, especially the dynamic range of it. Then we know what HLG OOTF it should use. From PQ we need at least the HDR static metadata to know the actual range, as assuming the full 10k nit range being meaningful could seriously lose highlights or something I guess.
Everything is relative to the target display I believe, even PQ since displaying PQ as-is only works on the mastering display.
Since PQ content comes with some metadata, we need PQ-to-PQ conversions for PQ display, assuming we don't just pass through the metadata to the display. Maybe the HLG OOTF could be used for the tone mapping of PQ-to-PQ...
I think both PQ and HLG have different standards written for how to map SDR to them. I don't remember which ITU-R or SMPTE spec those might be, but I suppose BT.2100 could be a starting point searching for them.
I wonder if an intermediate representation of color values, like the EDR representation, would help with the conversions.
Thanks, Harry
...
Initially I was hoping to find a quick way to allow pushing video straight from decoder through a KMS plane to the output. Increasingly I'm realizing that this is probably not going to work well for a general desktop compositor, hence the statement here to pretty much say the Wayland plan is the correct plan for this: single-plane HDR (with shader composition) first, then KMS offloading for power saving.
On some level I'm still interested in the direct decoder-to-KMS-to-display path but am afraid we won't get the API right if we don't deal with the general desktop compositor use-case first.
I am very happy to hear that. :-)
Apologies, again, if some of my response is a bit incoherent. I've been writing the responses over Friday and today.
It wasn't at all!
Thanks, pq
On Tue, 21 Sep 2021 14:05:05 -0400 Harry Wentland harry.wentland@amd.com wrote:
On 2021-09-21 09:31, Pekka Paalanen wrote:
On Mon, 20 Sep 2021 20:14:50 -0400 Harry Wentland harry.wentland@amd.com wrote:
On 2021-09-15 10:01, Pekka Paalanen wrote:
On Fri, 30 Jul 2021 16:41:29 -0400
Harry Wentland harry.wentland@amd.com wrote:
Use the new DRM RFC doc section to capture the RFC previously only described in the cover letter at https://patchwork.freedesktop.org/series/89506/
v3:
- Add sections on single-plane and multi-plane HDR
- Describe approach to define HW details vs approach to define SW intentions
- Link Jeremy Cline's excellent HDR summaries
- Outline intention behind overly verbose doc
- Describe FP16 use-case
- Clean up links
v2: create this doc
v1: n/a
Signed-off-by: Harry Wentland harry.wentland@amd.com
Hi Harry!
...
Documentation/gpu/rfc/color_intentions.drawio |   1 +
Documentation/gpu/rfc/color_intentions.svg    |   3 +
Documentation/gpu/rfc/colorpipe               |   1 +
Documentation/gpu/rfc/colorpipe.svg           |   3 +
Documentation/gpu/rfc/hdr-wide-gamut.rst      | 580 ++++++++++++++++++
Documentation/gpu/rfc/index.rst               |   1 +
6 files changed, 589 insertions(+)
create mode 100644 Documentation/gpu/rfc/color_intentions.drawio
create mode 100644 Documentation/gpu/rfc/color_intentions.svg
create mode 100644 Documentation/gpu/rfc/colorpipe
create mode 100644 Documentation/gpu/rfc/colorpipe.svg
create mode 100644 Documentation/gpu/rfc/hdr-wide-gamut.rst
...
I think we need to talk about what 1.0 means. Apple's EDR defines 1.0 as "reference white" or in other words the max SDR white.
That definition might change depending on the content type.
Yes, the definition of 1.0 depends on the... *cough* encoding. Semantic encoding? Sometimes it just means max signal value (like everywhere until now), sometimes it maps to something else. It might be relative (other than PQ system) or absolute (PQ system) luminance, with a fixed scale after non-linear encoding.
The definition of 0.0, or { 0.0, 0.0, 0.0 } more like, is pretty much always the darkest possible black - or is it? The darkest possible black is not usually 0 cd/m², but something above that depending on both the device and the viewing environment. A display necessarily reflects some light from the environment which sets the black level of the image, even if the display itself was capable of exactly 0 cd/m². Maybe VR goggles are an exception.
As a side note: if the viewing environment sets the display black level, then the environment also sets the display black's white point, and that may be different from the display's own white point. Also HVS has rods for low light vision, while color management concentrates wholly on the cones that provide color vision. So dark shades might be in the rod range where color cannot be perceived. I digress though.
Then there is the whole issue of HVS adaptation which basically sets the observable dynamic range bracket (and what one considers as white I think). Minimum observable color and luminance difference depends on that bracket and the color position inside the bracket.
Trying to look at a monitor in bright daylight is a painful example of these. ;-)
Btw. it was an awesome experience many years ago to spend 15-30 minutes in a room lit with a pale green light only, and then walking outside. I have never ever seen so vivid and saturated reds, yellows, violets, browns(!), etc. than just after coming out of that room. That was the real world, not a display. :-)
...
One thing I realised yesterday is that HLG displays are much better defined than PQ displays, because HLG defines what OOTF the display must implement. In a PQ system, the signal carries the full 10k nits range, and then the monitor must do vendor magic to display it. That's for tone mapping, not sure if HLG has an advantage in gamut mapping as well.
Doesn't the metadata describe the max content white? So even if the signal carries the full 10k nits the actual max luminance of the content should be encoded as part of the metadata.
It is in the HDR static metadata, yes, if present. There is also dynamic metadata version.
However, the static metadata describes the presentation on the (professional) mastering display, more or less. Almost certainly the display an end user has is not a mastering display capable device, so arbitrary magic still needs to happen to squeeze the signal down to what the display can do.
Or, I suppose, if the signal (image) does not need squeezing for people who bought the average HDR display, then people who bought high-end HDR displays will be unimpressed by the image on their display. Thinking of buying a new fancy TV and then the image looks exactly the same as in the old one. Ironically, that is exactly what color management might do to SDR content.
One could expand a narrow range to a wider range, and I'm sure displays do that too for more sales, but I guess you would have the usual problems of upscaling. It's hard to invent detail where there was none recorded.
...
Did anybody start any CM doc patches in Weston or Wayland yet?
There is the https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable... we started a long time ago, and have not really touched it for a while. Since we last touched it, at least my understanding has developed somewhat.
It is linked from the overview in https://gitlab.freedesktop.org/wayland/wayland-protocols/-/merge_requests/14 and if you want to propose changes, the way to do it is file a MR in https://gitlab.freedesktop.org/swick/wayland-protocols/-/merge_requests against the 'color' branch. Patches very much welcome, that doc does not need to limit itself to Wayland. :-)
We also have issues tracked at https://gitlab.freedesktop.org/swick/wayland-protocols/-/issues?scope=all&am...
Pre-curve for instance could be a combination of decoding to linear light and a shaper for the 3D LUT coming next. That's why we don't call them gamma or EOTF, that would be too limiting.
(Using a shaper may help to keep the 3D LUT size reasonable - I suppose very much like those multi-segmented LUTs.)
AFAIU 3D LUTs will need a shaper as they don't have enough precision. But that's going deeper into color theory than I understand. Vitaly would know better all the details around 3D LUT usage.
There is a very practical problem: the sheer number of elements in a 3D LUT grows to the power of three. So you can't have very many taps per channel without storage requirements blowing up. Each element needs to be a 3-channel value, too. And then 8 bits is not enough.
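A quick sanity check of that cubic growth (my arithmetic, assuming 16-bit components as discussed earlier in the thread):

```python
# A 3D LUT with t taps per channel stores t**3 entries of 3 components each.

def lut3d_bytes(taps, bits_per_component=16):
    return taps ** 3 * 3 * (bits_per_component // 8)

small = lut3d_bytes(17)   # 17^3 * 3 * 2 bytes = ~29 KiB
big = lut3d_bytes(65)     # 65^3 * 3 * 2 bytes = ~1.6 MiB
```

Going from 17 to 65 taps per channel multiplies storage by roughly 56x, which is why the shaper-LUT trick to keep tap counts low matters for hardware.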
I'm really happy that Vitaly is working with us on Weston and Wayland. :-) He's a huge help, and I feel like I'm currently the one slowing things down by being backlogged in reviews.
Thanks, pq
On 2021-09-22 04:31, Pekka Paalanen wrote:
On Tue, 21 Sep 2021 14:05:05 -0400 Harry Wentland harry.wentland@amd.com wrote:
On 2021-09-21 09:31, Pekka Paalanen wrote:
On Mon, 20 Sep 2021 20:14:50 -0400 Harry Wentland harry.wentland@amd.com wrote:
...
Did anybody start any CM doc patches in Weston or Wayland yet?
There is the https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable... we started a long time ago, and have not really touched it for a while. Since we last touched it, at least my understanding has developed somewhat.
It is linked from the overview in https://gitlab.freedesktop.org/wayland/wayland-protocols/-/merge_requests/14 and if you want to propose changes, the way to do it is file a MR in https://gitlab.freedesktop.org/swick/wayland-protocols/-/merge_requests against the 'color' branch. Patches very much welcome, that doc does not need to limit itself to Wayland. :-)
Right, I've read all that a while back.
It might be a good place to consolidate most of the Linux CM/HDR discussion, since gitlab is good with allowing discussions, we can track changes, and it's more formatting and diagram friendly than text-only email.
We also have issues tracked at https://gitlab.freedesktop.org/swick/wayland-protocols/-/issues?scope=all&am...
Pre-curve for instance could be a combination of decoding to linear light and a shaper for the 3D LUT coming next. That's why we don't call them gamma or EOTF, that would be too limiting.
(Using a shaper may help to keep the 3D LUT size reasonable - I suppose very much like those multi-segmented LUTs.)
AFAIU 3D LUTs will need a shaper as they don't have enough precision. But that's going deeper into color theory than I understand. Vitaly would know better all the details around 3D LUT usage.
There is a very practical problem: the sheer number of elements in a 3D LUT grows to the power of three. So you can't have very many taps per channel without storage requirements blowing up. Each element needs to be a 3-channel value, too. And then 8 bits is not enough.
And those storage requirements would have a direct impact on silicon real estate and therefore the price and power usage of the HW.
Harry
I'm really happy that Vitaly is working with us on Weston and Wayland. :-) He's a huge help, and I feel like I'm currently the one slowing things down by being backlogged in reviews.
Thanks, pq
On Wed, 22 Sep 2021 11:28:37 -0400 Harry Wentland harry.wentland@amd.com wrote:
On 2021-09-22 04:31, Pekka Paalanen wrote:
On Tue, 21 Sep 2021 14:05:05 -0400 Harry Wentland harry.wentland@amd.com wrote:
On 2021-09-21 09:31, Pekka Paalanen wrote:
On Mon, 20 Sep 2021 20:14:50 -0400 Harry Wentland harry.wentland@amd.com wrote:
...
Did anybody start any CM doc patches in Weston or Wayland yet?
There is the https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable... we started a long time ago, and have not really touched it for a while. Since we last touched it, at least my understanding has developed somewhat.
It is linked from the overview in https://gitlab.freedesktop.org/wayland/wayland-protocols/-/merge_requests/14 and if you want to propose changes, the way to do it is file a MR in https://gitlab.freedesktop.org/swick/wayland-protocols/-/merge_requests against the 'color' branch. Patches very much welcome, that doc does not need to limit itself to Wayland. :-)
Right, I've read all that a while back.
It might be a good place to consolidate most of the Linux CM/HDR discussion, since gitlab is good with allowing discussions, we can track changes, and it's more formatting and diagram friendly than text-only email.
Fine by me, but the way things are right now, we'd be hijacking Sebastian's personal repository for these things. That's not ideal.
We can't merge the protocol XML into wayland-protocols until it has the accepted implementations required by the governance rules, but I wonder if we could land color.rst ahead of time, then work on that in wayland-protocols upstream repo.
It's hard to pick a good place for a cross-project document. Any other ideas?
We also have issues tracked at https://gitlab.freedesktop.org/swick/wayland-protocols/-/issues?scope=all&am...
Thanks, pq
On Thu, 23 Sep 2021 10:43:54 +0300 Pekka Paalanen ppaalanen@gmail.com wrote:
On Wed, 22 Sep 2021 11:28:37 -0400 Harry Wentland harry.wentland@amd.com wrote:
On 2021-09-22 04:31, Pekka Paalanen wrote:
On Tue, 21 Sep 2021 14:05:05 -0400 Harry Wentland harry.wentland@amd.com wrote:
On 2021-09-21 09:31, Pekka Paalanen wrote:
On Mon, 20 Sep 2021 20:14:50 -0400 Harry Wentland harry.wentland@amd.com wrote:
...
Did anybody start any CM doc patches in Weston or Wayland yet?
There is the https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable... we started a long time ago, and have not really touched it for a while. Since we last touched it, at least my understanding has developed somewhat.
It is linked from the overview in https://gitlab.freedesktop.org/wayland/wayland-protocols/-/merge_requests/14 and if you want to propose changes, the way to do it is file a MR in https://gitlab.freedesktop.org/swick/wayland-protocols/-/merge_requests against the 'color' branch. Patches very much welcome, that doc does not need to limit itself to Wayland. :-)
Right, I've read all that a while back.
It might be a good place to consolidate most of the Linux CM/HDR discussion, since gitlab is good with allowing discussions, we can track changes, and it's more formatting and diagram friendly than text-only email.
Fine by me, but the way things are right now, we'd be hijacking Sebastian's personal repository for these things. That's not ideal.
We can't merge the protocol XML into wayland-protocols until it has the accepted implementations required by the governance rules, but I wonder if we could land color.rst ahead of time, then work on that in wayland-protocols upstream repo.
It's hard to pick a good place for a cross-project document. Any other ideas?
We also have issues tracked at https://gitlab.freedesktop.org/swick/wayland-protocols/-/issues?scope=all&am...
Hi all,
we discussed things in https://gitlab.freedesktop.org/swick/wayland-protocols/-/issues/6
and we have a new home for the color related WIP documentation we can use across Wayland, Mesa, DRM, and even X11 if people want to:
https://gitlab.freedesktop.org/pq/color-and-hdr
Yes, it's still someone's personal repository, but we avoid entangling it with wayland-protocols which also means we can keep the full git history. If this gets enough traction, the repository can be moved from under my personal group to somewhere more communal, and if that is still inside gitlab.fd.o then all merge requests and issues will move with it.
The README notes that we will deal out merge permissions as well.
This is not meant to supersede the documentation of individual APIs, but to host additional documentation that would be too verbose, too big, or out of scope to host within respective API docs.
Feel free to join the effort or just to discuss.
Thanks, pq
On 2021-09-20 20:14, Harry Wentland wrote:
On 2021-09-15 10:01, Pekka Paalanen wrote:
On Fri, 30 Jul 2021 16:41:29 -0400
Harry Wentland harry.wentland@amd.com wrote:
<snip>
+If a display's maximum HDR white level is correctly reported it is trivial +to convert between all of the above representations of SDR white level. If +it is not, defining SDR luminance as a nits value, or a ratio vs a fixed +nits value is preferred, assuming we are blending in linear space.
+It is our experience that many HDR displays do not report maximum white +level correctly
Which value do you refer to as "maximum white", and how did you measure it?
Good question. I haven't played with those displays myself but I'll try to find out a bit more background behind this statement.
Some TVs report the EOTF but not the luminance values. For an example, here is an edid-decode capture of my eDP HDR panel:
HDR Static Metadata Data Block:
  Electro optical transfer functions:
    Traditional gamma - SDR luminance range
    SMPTE ST2084
  Supported static metadata descriptors:
    Static metadata type 1
  Desired content max luminance: 115 (603.666 cd/m^2)
  Desired content max frame-average luminance: 109 (530.095 cd/m^2)
  Desired content min luminance: 7 (0.005 cd/m^2)
I suspect on those TVs it looks like this:
HDR Static Metadata Data Block:
  Electro optical transfer functions:
    Traditional gamma - SDR luminance range
    SMPTE ST2084
  Supported static metadata descriptors:
    Static metadata type 1
Windows has some defaults in this case and our Windows driver also has some defaults.
Using defaults in the 1000-2000 nits range would yield much better tone-mapping results than assuming the monitor can support a full 10k nits.
As an aside, recently we've come across displays where the max average luminance is higher than the max peak luminance. This is not a mistake but due to how the display's dimming zones work.
Not sure what impact this might have on tone-mapping, other than to keep in mind that we can assume that max_avg < max_peak.
Harry
On Wed, 22 Sep 2021 11:06:53 -0400 Harry Wentland harry.wentland@amd.com wrote:
On 2021-09-20 20:14, Harry Wentland wrote:
On 2021-09-15 10:01, Pekka Paalanen wrote:
On Fri, 30 Jul 2021 16:41:29 -0400
Harry Wentland harry.wentland@amd.com wrote:
<snip>
+If a display's maximum HDR white level is correctly reported it is trivial +to convert between all of the above representations of SDR white level. If +it is not, defining SDR luminance as a nits value, or a ratio vs a fixed +nits value is preferred, assuming we are blending in linear space.
+It is our experience that many HDR displays do not report maximum white +level correctly
Which value do you refer to as "maximum white", and how did you measure it?
Good question. I haven't played with those displays myself but I'll try to find out a bit more background behind this statement.
Some TVs report the EOTF but not the luminance values. For an example, here is an edid-decode capture of my eDP HDR panel:
HDR Static Metadata Data Block:
  Electro optical transfer functions:
    Traditional gamma - SDR luminance range
    SMPTE ST2084
  Supported static metadata descriptors:
    Static metadata type 1
  Desired content max luminance: 115 (603.666 cd/m^2)
  Desired content max frame-average luminance: 109 (530.095 cd/m^2)
  Desired content min luminance: 7 (0.005 cd/m^2)
I forget where I heard (you, Vitaly, someone?) that integrated panels may not have the magic gamut and tone mapping hardware, which means that software (or display engine) must do the full correct thing.
That's another reason to not rely on magic display functionality, which suits my plans perfectly.
I suspect on those TVs it looks like this:
HDR Static Metadata Data Block:
  Electro optical transfer functions:
    Traditional gamma - SDR luminance range
    SMPTE ST2084
  Supported static metadata descriptors:
    Static metadata type 1
Windows has some defaults in this case and our Windows driver also has some defaults.
Oh, missing information. Yay.
Using defaults in the 1000-2000 nits range would yield much better tone-mapping results than assuming the monitor can support a full 10k nits.
Obviously.
As an aside, recently we've come across displays where the max average luminance is higher than the max peak luminance. This is not a mistake but due to how the display's dimming zones work.
IOW, the actual max peak luminance in absolute units depends on the current image average luminance. Wonderful, but what am I (the content producer, the display server) supposed to do with that information...
Not sure what impact this might have on tone-mapping, other than to keep in mind that we can assume that max_avg < max_peak.
*cannot
Seems like it would lead to a very different tone mapping algorithm which needs to compute the image average luminance before it can account for max peak luminance (which I wouldn't know how to infer). So either a two-pass algorithm, or taking the average from the previous frame.
I imagine that is going to be fun considering one needs to composite different types of input images together, and the final tone mapping might need to differ for each. Strictly speaking, that might lead to an iterative optimisation algorithm which would be quite intractable in practice to complete for a single frame at a time.
Thanks, pq
On 2021-09-23 04:01, Pekka Paalanen wrote:
On Wed, 22 Sep 2021 11:06:53 -0400 Harry Wentland harry.wentland@amd.com wrote:
On 2021-09-20 20:14, Harry Wentland wrote:
On 2021-09-15 10:01, Pekka Paalanen wrote:
On Fri, 30 Jul 2021 16:41:29 -0400
Harry Wentland harry.wentland@amd.com wrote:
<snip>
+If a display's maximum HDR white level is correctly reported it is trivial +to convert between all of the above representations of SDR white level. If +it is not, defining SDR luminance as a nits value, or a ratio vs a fixed +nits value is preferred, assuming we are blending in linear space.
+It is our experience that many HDR displays do not report maximum white +level correctly
Which value do you refer to as "maximum white", and how did you measure it?
Good question. I haven't played with those displays myself but I'll try to find out a bit more background behind this statement.
Some TVs report the EOTF but not the luminance values. For an example, here is an edid-decode capture of my eDP HDR panel:
HDR Static Metadata Data Block:
  Electro optical transfer functions:
    Traditional gamma - SDR luminance range
    SMPTE ST2084
  Supported static metadata descriptors:
    Static metadata type 1
  Desired content max luminance: 115 (603.666 cd/m^2)
  Desired content max frame-average luminance: 109 (530.095 cd/m^2)
  Desired content min luminance: 7 (0.005 cd/m^2)
I forget where I heard (you, Vitaly, someone?) that integrated panels may not have the magic gamut and tone mapping hardware, which means that software (or display engine) must do the full correct thing.
That's another reason to not rely on magic display functionality, which suits my plans perfectly.
I've mentioned it before but there aren't really a lot of integrated HDR panels yet. I think we've only seen one or two without tone-mapping ability.
Either way we probably need at least the ability to tone-map the output on the transmitter side (SW, GPU, or display HW).
I suspect on those TVs it looks like this:
HDR Static Metadata Data Block:
  Electro optical transfer functions:
    Traditional gamma - SDR luminance range
    SMPTE ST2084
  Supported static metadata descriptors:
    Static metadata type 1
Windows has some defaults in this case and our Windows driver also has some defaults.
Oh, missing information. Yay.
Using defaults in the 1000-2000 nits range would yield much better tone-mapping results than assuming the monitor can support a full 10k nits.
Obviously.
As an aside, recently we've come across displays where the max average luminance is higher than the max peak luminance. This is not a mistake but due to how the display's dimming zones work.
IOW, the actual max peak luminance in absolute units depends on the current image average luminance. Wonderful, but what am I (the content producer, the display server) supposed to do with that information...
Not sure what impact this might have on tone-mapping, other than to keep in mind that we can assume that max_avg < max_peak.
*cannot
Right
Seems like it would lead to a very different tone mapping algorithm which needs to compute the image average luminance before it can account for max peak luminance (which I wouldn't know how to infer). So either a two-pass algorithm, or taking the average from the previous frame.
I imagine that is going to be fun considering one needs to composite different types of input images together, and the final tone mapping might need to differ for each. Strictly speaking, that might lead to an iterative optimisation algorithm which would be quite intractable in practice to complete for a single frame at a time.
Maybe a good approach for this would be to just consider MaxAvg = MaxPeak in this case. At least until one would want to consider dynamic tone-mapping, i.e. tone-mapping that changes frame-by-frame based on content. Dynamic tone-mapping might be challenging to do in SW but could be a possibility with specialized HW, though I'm not sure exactly what that HW would look like. Maybe something like a histogram engine like Laurent mentions in https://lists.freedesktop.org/archives/dri-devel/2021-June/311689.html.
Harry
Thanks, pq
On 2021-09-23 9:40 a.m., Harry Wentland wrote:
On 2021-09-23 04:01, Pekka Paalanen wrote:
On Wed, 22 Sep 2021 11:06:53 -0400 Harry Wentland harry.wentland@amd.com wrote:
On 2021-09-20 20:14, Harry Wentland wrote:
On 2021-09-15 10:01, Pekka Paalanen wrote:
On Fri, 30 Jul 2021 16:41:29 -0400
Harry Wentland harry.wentland@amd.com wrote:
<snip>
+If a display's maximum HDR white level is correctly reported it is trivial +to convert between all of the above representations of SDR white level. If +it is not, defining SDR luminance as a nits value, or a ratio vs a fixed +nits value is preferred, assuming we are blending in linear space.
+It is our experience that many HDR displays do not report maximum white +level correctly
Which value do you refer to as "maximum white", and how did you measure it?
Good question. I haven't played with those displays myself but I'll try to find out a bit more background behind this statement.
Some TVs report the EOTF but not the luminance values. For an example, here is an edid-decode capture of my eDP HDR panel:
HDR Static Metadata Data Block:
  Electro optical transfer functions:
    Traditional gamma - SDR luminance range
    SMPTE ST2084
  Supported static metadata descriptors:
    Static metadata type 1
  Desired content max luminance: 115 (603.666 cd/m^2)
  Desired content max frame-average luminance: 109 (530.095 cd/m^2)
  Desired content min luminance: 7 (0.005 cd/m^2)
I forget where I heard (you, Vitaly, someone?) that integrated panels may not have the magic gamut and tone mapping hardware, which means that software (or display engine) must do the full correct thing.
That's another reason to not rely on magic display functionality, which suits my plans perfectly.
I've mentioned it before but there aren't really a lot of integrated HDR panels yet. I think we've only seen one or two without tone-mapping ability.
Either way we probably need at least the ability to tone-map the output on the transmitter side (SW, GPU, or display HW).
It is really interesting to evaluate the quality of a panel's tone-mapping (TM) algorithm by specifying different metadata and checking how severe the loss of detail is; severe loss could mean no TM at all, a 1D LUT used to soften the clipping, or a 3D LUT (which has wider possibilities for TM).
To facilitate this development we could use LCMS proofing capabilities to simulate on a high-end (wide-gamut) display how the image may look on a low-end display (narrow-gamut or integrated panels).
I suspect on those TVs it looks like this:
HDR Static Metadata Data Block:
  Electro optical transfer functions:
    Traditional gamma - SDR luminance range
    SMPTE ST2084
  Supported static metadata descriptors:
    Static metadata type 1
Windows has some defaults in this case and our Windows driver also has some defaults.
Oh, missing information. Yay.
Using defaults in the 1000-2000 nits range would yield much better tone-mapping results than assuming the monitor can support a full 10k nits.
Obviously.
As an aside, recently we've come across displays where the max average luminance is higher than the max peak luminance. This is not a mistake but due to how the display's dimming zones work.
IOW, the actual max peak luminance in absolute units depends on the current image average luminance. Wonderful, but what am I (the content producer, the display server) supposed to do with that information...
Not sure what impact this might have on tone-mapping, other than to keep in mind that we can assume that max_avg < max_peak.
*cannot
Right
Seems like it would lead to a very different tone mapping algorithm which needs to compute the image average luminance before it can account for max peak luminance (which I wouldn't know how to infer). So either a two-pass algorithm, or taking the average from the previous frame.
I imagine that is going to be fun considering one needs to composite different types of input images together, and the final tone mapping might need to differ for each. Strictly speaking, that might lead to an iterative optimisation algorithm which would be quite intractable in practice to complete for a single frame at a time.
Maybe a good approach for this would be to just consider MaxAvg = MaxPeak in this case. At least until one would want to consider dynamic tone-mapping, i.e. tone-mapping that changes frame-by-frame based on content. Dynamic tone-mapping might be challenging to do in SW but could be a possibility with specialized HW, though I'm not sure exactly what that HW would look like. Maybe something like a histogram engine like Laurent mentions in https://lists.freedesktop.org/archives/dri-devel/2021-June/311689.html.
Harry
Thanks, pq
From: Bhawanpreet Lakha Bhawanpreet.Lakha@amd.com
Due to the way displays and human vision work it is most effective to encode luminance information in a non-linear space.
For SDR this non-linear mapping is assumed to roughly use a gamma 2.2 curve. This was due to the way CRTs worked and was fine for SDR content with a low luminance range.
The large luminance range (0-10,000 nits) of HDR exposes some shortcomings of a simple gamma curve that have been addressed through various Electro-Optical Transfer Functions (EOTFs).
Rather than assuming how framebuffer content is encoded we want to make sure userspace presenting HDR content is explicit about the EOTF of the content, so a driver can decide whether the content can be supported or not.
This patch adds common transfer functions for SDR/HDR. These can be used to communicate with the driver regarding the transformation to use for a given plane.
Enums added:
DRM_TF_UNDEFINED: the legacy case where the TF in/out of blending space is undefined
DRM_TF_SRGB: roughly 2.4 gamma with an initial linear section
DRM_TF_BT709: similar to gamma 2.2-2.8
DRM_TF_PQ2084: the most common TF used for HDR video (HDR10/Dolby); can support up to 10,000 nits
The usage is similar to color_encoding and color_range where the driver can specify the default and supported tfs and pass it into drm_plane_create_color_properties().
v2:
- drop "color" from transfer function name (Harry)
- add DRM_TF_UNDEFINED enum as legacy default (Harry)
Signed-off-by: Bhawanpreet Lakha Bhawanpreet.Lakha@amd.com Signed-off-by: Harry Wentland harry.wentland@amd.com --- .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 4 +- .../gpu/drm/arm/display/komeda/komeda_plane.c | 4 +- drivers/gpu/drm/arm/malidp_planes.c | 4 +- drivers/gpu/drm/armada/armada_overlay.c | 4 +- drivers/gpu/drm/drm_atomic_uapi.c | 4 ++ drivers/gpu/drm/drm_color_mgmt.c | 64 +++++++++++++++++-- drivers/gpu/drm/i915/display/intel_sprite.c | 4 +- .../drm/i915/display/skl_universal_plane.c | 4 +- drivers/gpu/drm/nouveau/dispnv04/overlay.c | 4 +- drivers/gpu/drm/omapdrm/omap_plane.c | 4 +- drivers/gpu/drm/sun4i/sun8i_vi_layer.c | 4 +- drivers/gpu/drm/tidss/tidss_plane.c | 6 +- include/drm/drm_color_mgmt.h | 18 +++++- include/drm/drm_plane.h | 16 +++++ 14 files changed, 128 insertions(+), 16 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index b5b5ccf0ed71..63ddae9c5abe 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -7276,7 +7276,9 @@ static int amdgpu_dm_plane_init(struct amdgpu_display_manager *dm, BIT(DRM_COLOR_YCBCR_BT2020), BIT(DRM_COLOR_YCBCR_LIMITED_RANGE) | BIT(DRM_COLOR_YCBCR_FULL_RANGE), - DRM_COLOR_YCBCR_BT709, DRM_COLOR_YCBCR_LIMITED_RANGE); + BIT(DRM_TF_SRGB), + DRM_COLOR_YCBCR_BT709, DRM_COLOR_YCBCR_LIMITED_RANGE, + DRM_TF_SRGB); }
supported_rotations = diff --git a/drivers/gpu/drm/arm/display/komeda/komeda_plane.c b/drivers/gpu/drm/arm/display/komeda/komeda_plane.c index d63d83800a8a..811f79ab6d32 100644 --- a/drivers/gpu/drm/arm/display/komeda/komeda_plane.c +++ b/drivers/gpu/drm/arm/display/komeda/komeda_plane.c @@ -302,8 +302,10 @@ static int komeda_plane_add(struct komeda_kms_dev *kms, BIT(DRM_COLOR_YCBCR_BT2020), BIT(DRM_COLOR_YCBCR_LIMITED_RANGE) | BIT(DRM_COLOR_YCBCR_FULL_RANGE), + BIT(DRM_TF_UNDEFINED), DRM_COLOR_YCBCR_BT601, - DRM_COLOR_YCBCR_LIMITED_RANGE); + DRM_COLOR_YCBCR_LIMITED_RANGE, + DRM_TF_UNDEFINED); if (err) goto cleanup;
diff --git a/drivers/gpu/drm/arm/malidp_planes.c b/drivers/gpu/drm/arm/malidp_planes.c index 8c2ab3d653b7..98d308262880 100644 --- a/drivers/gpu/drm/arm/malidp_planes.c +++ b/drivers/gpu/drm/arm/malidp_planes.c @@ -1030,7 +1030,9 @@ int malidp_de_planes_init(struct drm_device *drm) BIT(DRM_COLOR_YCBCR_BT2020), BIT(DRM_COLOR_YCBCR_LIMITED_RANGE) | \ BIT(DRM_COLOR_YCBCR_FULL_RANGE), - enc, range); + BIT(DRM_TF_UNDEFINED), + enc, range, + DRM_TF_UNDEFINED); if (!ret) /* program the HW registers */ malidp_de_set_color_encoding(plane, enc, range); diff --git a/drivers/gpu/drm/armada/armada_overlay.c b/drivers/gpu/drm/armada/armada_overlay.c index d3e3e5fdc390..f7792444cb73 100644 --- a/drivers/gpu/drm/armada/armada_overlay.c +++ b/drivers/gpu/drm/armada/armada_overlay.c @@ -596,8 +596,10 @@ int armada_overlay_plane_create(struct drm_device *dev, unsigned long crtcs) BIT(DRM_COLOR_YCBCR_BT601) | BIT(DRM_COLOR_YCBCR_BT709), BIT(DRM_COLOR_YCBCR_LIMITED_RANGE), + BIT(DRM_TF_UNDEFINED), DEFAULT_ENCODING, - DRM_COLOR_YCBCR_LIMITED_RANGE); + DRM_COLOR_YCBCR_LIMITED_RANGE, + DRM_TF_UNDEFINED);
return ret; } diff --git a/drivers/gpu/drm/drm_atomic_uapi.c b/drivers/gpu/drm/drm_atomic_uapi.c index 7e48d40600ff..9582515dd12e 100644 --- a/drivers/gpu/drm/drm_atomic_uapi.c +++ b/drivers/gpu/drm/drm_atomic_uapi.c @@ -596,6 +596,8 @@ static int drm_atomic_plane_set_property(struct drm_plane *plane, state->color_encoding = val; } else if (property == plane->color_range_property) { state->color_range = val; + } else if (property == plane->transfer_function_property) { + state->transfer_function = val; } else if (property == config->prop_fb_damage_clips) { ret = drm_atomic_replace_property_blob_from_id(dev, &state->fb_damage_clips, @@ -662,6 +664,8 @@ drm_atomic_plane_get_property(struct drm_plane *plane, *val = state->color_encoding; } else if (property == plane->color_range_property) { *val = state->color_range; + } else if (property == plane->transfer_function_property) { + *val = state->transfer_function; } else if (property == config->prop_fb_damage_clips) { *val = (state->fb_damage_clips) ? state->fb_damage_clips->base.id : 0; diff --git a/drivers/gpu/drm/drm_color_mgmt.c b/drivers/gpu/drm/drm_color_mgmt.c index bb14f488c8f6..daf62fb090a6 100644 --- a/drivers/gpu/drm/drm_color_mgmt.c +++ b/drivers/gpu/drm/drm_color_mgmt.c @@ -106,6 +106,11 @@ * Optional plane enum property to support different non RGB * color parameter ranges. The driver can provide a subset of * standard enum values supported by the DRM plane. + * + * "COLOR_TRANFER_FUNCTION": + * Optional plane enum property to support different + * color luminance mappings. The driver can provide a subset of + * standard enum values supported by the DRM plane. */
/** @@ -476,6 +481,11 @@ static const char * const color_range_name[] = { [DRM_COLOR_YCBCR_LIMITED_RANGE] = "YCbCr limited range", };
+static const char * const tf_name[] = { + [DRM_TF_UNDEFINED] = "undefined", + [DRM_TF_SRGB] = "sRGB", + [DRM_TF_PQ2084] = "PQ2084", +}; /** * drm_get_color_encoding_name - return a string for color encoding * @encoding: color encoding to compute name of @@ -506,30 +516,49 @@ const char *drm_get_color_range_name(enum drm_color_range range) return color_range_name[range]; }
+/** + * drm_get_transfer_function_name - return a string for transfer function + * @tf: transfer function to compute name of + * + * In contrast to the other drm_get_*_name functions this one here returns a + * const pointer and hence is threadsafe. + */ +const char *drm_get_transfer_function_name(enum drm_transfer_function tf) +{ + if (WARN_ON(tf >= ARRAY_SIZE(tf_name))) + return "unknown"; + + return tf_name[tf]; +} /** * drm_plane_create_color_properties - color encoding related plane properties * @plane: plane object * @supported_encodings: bitfield indicating supported color encodings * @supported_ranges: bitfileld indicating supported color ranges + * @supported_tfs: bitfield indicating supported transfer functions * @default_encoding: default color encoding * @default_range: default color range + * @default_tf: default color transfer function * - * Create and attach plane specific COLOR_ENCODING and COLOR_RANGE - * properties to @plane. The supported encodings and ranges should - * be provided in supported_encodings and supported_ranges bitmasks. + * Create and attach plane specific COLOR_ENCODING, COLOR_RANGE and TRANSFER_FUNCTION + * properties to @plane. The supported encodings, ranges and tfs should + * be provided in supported_encodings, supported_ranges and supported_tfs bitmasks. * Each bit set in the bitmask indicates that its number as enum * value is supported. */ int drm_plane_create_color_properties(struct drm_plane *plane, u32 supported_encodings, u32 supported_ranges, + u32 supported_tfs, enum drm_color_encoding default_encoding, - enum drm_color_range default_range) + enum drm_color_range default_range, + enum drm_transfer_function default_tf) { struct drm_device *dev = plane->dev; struct drm_property *prop; struct drm_prop_enum_list enum_list[max_t(int, DRM_COLOR_ENCODING_MAX, - DRM_COLOR_RANGE_MAX)]; + max_t(int, DRM_COLOR_RANGE_MAX, + DRM_TF_MAX))]; int i, len;
if (WARN_ON(supported_encodings == 0 || @@ -542,6 +571,11 @@ int drm_plane_create_color_properties(struct drm_plane *plane, (supported_ranges & BIT(default_range)) == 0)) return -EINVAL;
+ if (WARN_ON(supported_tfs == 0 || + (supported_tfs & -BIT(DRM_TF_MAX)) != 0 || + (supported_tfs & BIT(default_tf)) == 0)) + return -EINVAL; + len = 0; for (i = 0; i < DRM_COLOR_ENCODING_MAX; i++) { if ((supported_encodings & BIT(i)) == 0) @@ -580,6 +614,26 @@ int drm_plane_create_color_properties(struct drm_plane *plane, if (plane->state) plane->state->color_range = default_range;
+ + len = 0; + for (i = 0; i < DRM_TF_MAX; i++) { + if ((supported_tfs & BIT(i)) == 0) + continue; + + enum_list[len].type = i; + enum_list[len].name = tf_name[i]; + len++; + } + + prop = drm_property_create_enum(dev, 0, "TRANSFER_FUNCTION", + enum_list, len); + if (!prop) + return -ENOMEM; + plane->transfer_function_property = prop; + drm_object_attach_property(&plane->base, prop, default_tf); + if (plane->state) + plane->state->transfer_function = default_tf; + return 0; } EXPORT_SYMBOL(drm_plane_create_color_properties); diff --git a/drivers/gpu/drm/i915/display/intel_sprite.c b/drivers/gpu/drm/i915/display/intel_sprite.c index 4ae9a7455b23..b3f7aca3795b 100644 --- a/drivers/gpu/drm/i915/display/intel_sprite.c +++ b/drivers/gpu/drm/i915/display/intel_sprite.c @@ -1850,8 +1850,10 @@ intel_sprite_plane_create(struct drm_i915_private *dev_priv, BIT(DRM_COLOR_YCBCR_BT709), BIT(DRM_COLOR_YCBCR_LIMITED_RANGE) | BIT(DRM_COLOR_YCBCR_FULL_RANGE), + BIT(DRM_TF_UNDEFINED), DRM_COLOR_YCBCR_BT709, - DRM_COLOR_YCBCR_LIMITED_RANGE); + DRM_COLOR_YCBCR_LIMITED_RANGE, + DRM_TF_UNDEFINED);
zpos = sprite + 1; drm_plane_create_zpos_immutable_property(&plane->base, zpos); diff --git a/drivers/gpu/drm/i915/display/skl_universal_plane.c b/drivers/gpu/drm/i915/display/skl_universal_plane.c index 92a4fd508e92..df596431151d 100644 --- a/drivers/gpu/drm/i915/display/skl_universal_plane.c +++ b/drivers/gpu/drm/i915/display/skl_universal_plane.c @@ -2160,8 +2160,10 @@ skl_universal_plane_create(struct drm_i915_private *dev_priv, supported_csc, BIT(DRM_COLOR_YCBCR_LIMITED_RANGE) | BIT(DRM_COLOR_YCBCR_FULL_RANGE), + BIT(DRM_TF_UNDEFINED), DRM_COLOR_YCBCR_BT709, - DRM_COLOR_YCBCR_LIMITED_RANGE); + DRM_COLOR_YCBCR_LIMITED_RANGE, + DRM_TF_UNDEFINED);
drm_plane_create_alpha_property(&plane->base); drm_plane_create_blend_mode_property(&plane->base, diff --git a/drivers/gpu/drm/nouveau/dispnv04/overlay.c b/drivers/gpu/drm/nouveau/dispnv04/overlay.c index 37e63e98cd08..64e1793212b4 100644 --- a/drivers/gpu/drm/nouveau/dispnv04/overlay.c +++ b/drivers/gpu/drm/nouveau/dispnv04/overlay.c @@ -345,8 +345,10 @@ nv10_overlay_init(struct drm_device *device) BIT(DRM_COLOR_YCBCR_BT601) | BIT(DRM_COLOR_YCBCR_BT709), BIT(DRM_COLOR_YCBCR_LIMITED_RANGE), + BIT(DRM_TF_UNDEFINED), DRM_COLOR_YCBCR_BT601, - DRM_COLOR_YCBCR_LIMITED_RANGE); + DRM_COLOR_YCBCR_LIMITED_RANGE, + DRM_TF_UNDEFINED);
plane->set_params = nv10_set_params; nv10_set_params(plane); diff --git a/drivers/gpu/drm/omapdrm/omap_plane.c b/drivers/gpu/drm/omapdrm/omap_plane.c index 801da917507d..ca7559824dcd 100644 --- a/drivers/gpu/drm/omapdrm/omap_plane.c +++ b/drivers/gpu/drm/omapdrm/omap_plane.c @@ -325,8 +325,10 @@ struct drm_plane *omap_plane_init(struct drm_device *dev, BIT(DRM_COLOR_YCBCR_BT709), BIT(DRM_COLOR_YCBCR_FULL_RANGE) | BIT(DRM_COLOR_YCBCR_LIMITED_RANGE), + BIT(DRM_TF_UNDEFINED), DRM_COLOR_YCBCR_BT601, - DRM_COLOR_YCBCR_FULL_RANGE); + DRM_COLOR_YCBCR_FULL_RANGE, + DRM_TF_UNDEFINED);
return plane;
diff --git a/drivers/gpu/drm/sun4i/sun8i_vi_layer.c b/drivers/gpu/drm/sun4i/sun8i_vi_layer.c index 1c86c2dd0bbf..eda8f51bafd7 100644 --- a/drivers/gpu/drm/sun4i/sun8i_vi_layer.c +++ b/drivers/gpu/drm/sun4i/sun8i_vi_layer.c @@ -600,8 +600,10 @@ struct sun8i_vi_layer *sun8i_vi_layer_init_one(struct drm_device *drm, ret = drm_plane_create_color_properties(&layer->plane, supported_encodings, supported_ranges, + BIT(DRM_TF_UNDEFINED), DRM_COLOR_YCBCR_BT709, - DRM_COLOR_YCBCR_LIMITED_RANGE); + DRM_COLOR_YCBCR_LIMITED_RANGE, + DRM_TF_UNDEFINED); if (ret) { dev_err(drm->dev, "Couldn't add encoding and range properties!\n"); return ERR_PTR(ret); diff --git a/drivers/gpu/drm/tidss/tidss_plane.c b/drivers/gpu/drm/tidss/tidss_plane.c index 1acd15aa4193..a1336ecd5fd5 100644 --- a/drivers/gpu/drm/tidss/tidss_plane.c +++ b/drivers/gpu/drm/tidss/tidss_plane.c @@ -186,8 +186,10 @@ struct tidss_plane *tidss_plane_create(struct tidss_device *tidss, BIT(DRM_COLOR_YCBCR_BT709)); u32 color_ranges = (BIT(DRM_COLOR_YCBCR_FULL_RANGE) | BIT(DRM_COLOR_YCBCR_LIMITED_RANGE)); + u32 transfer_functions = BIT(DRM_TF_UNDEFINED); u32 default_encoding = DRM_COLOR_YCBCR_BT601; u32 default_range = DRM_COLOR_YCBCR_FULL_RANGE; + u32 default_tf = DRM_TF_UNDEFINED; u32 blend_modes = (BIT(DRM_MODE_BLEND_PREMULTI) | BIT(DRM_MODE_BLEND_COVERAGE)); int ret; @@ -217,8 +219,10 @@ struct tidss_plane *tidss_plane_create(struct tidss_device *tidss, ret = drm_plane_create_color_properties(&tplane->plane, color_encodings, color_ranges, + transfer_functions, default_encoding, - default_range); + default_range, + default_tf); if (ret) goto err;
diff --git a/include/drm/drm_color_mgmt.h b/include/drm/drm_color_mgmt.h index 81c298488b0c..370bbc55b744 100644 --- a/include/drm/drm_color_mgmt.h +++ b/include/drm/drm_color_mgmt.h @@ -87,11 +87,27 @@ enum drm_color_range { DRM_COLOR_RANGE_MAX, };
+/** + * enum drm_transfer_function - common transfer function used for SDR/HDR formats + * + * @DRM_TF_UNDEFINED: The legacy case where a TF in and out of the blending + * space is undefined + * @DRM_TF_SRGB: Based on the sRGB gamma curve; used for printers/monitors/web + * @DRM_TF_PQ2084: Used for HDR; allows for up to 10,000 nit support. + */ +enum drm_transfer_function { + DRM_TF_UNDEFINED, + DRM_TF_SRGB, + DRM_TF_PQ2084, + DRM_TF_MAX, +}; int drm_plane_create_color_properties(struct drm_plane *plane, u32 supported_encodings, u32 supported_ranges, + u32 supported_tfs, enum drm_color_encoding default_encoding, - enum drm_color_range default_range); + enum drm_color_range default_range, + enum drm_transfer_function default_tf);
/** * enum drm_color_lut_tests - hw-specific LUT tests to perform diff --git a/include/drm/drm_plane.h b/include/drm/drm_plane.h index 1294610e84f4..cff56994513f 100644 --- a/include/drm/drm_plane.h +++ b/include/drm/drm_plane.h @@ -179,6 +179,14 @@ struct drm_plane_state { */ enum drm_color_range color_range;
+ /** + * @transfer_function: + * + * Transfer function for HDR color/luminance mapping. This will allow the + * driver to know what transfer function should be used to for the current + * format for a proper HDR color/luminance output. + */ + enum drm_transfer_function transfer_function; /** * @fb_damage_clips: * @@ -741,6 +749,14 @@ struct drm_plane { * See drm_plane_create_color_properties(). */ struct drm_property *color_range_property; + /** + * @transfer_function_property: + * + * Optional "TRANSFER_FUNCTION" enum property for specifying + * color transfer function for non RGB formats, mostly used for HDR. + * See drm_plane_create_color_properties(). + */ + struct drm_property *transfer_function_property;
/** * @scaling_filter_property: property to apply a particular filter while
We currently have 1D LUTs to define the output transfer function, but a 1D LUT is not always the best way to define a transfer function for HW that has ROMs for certain transfer functions, or for HW that uses complex PWL definitions for accurate LUTs.
For this reason we're introducing named transfer functions. The original LUT behavior is preserved with the default "1D LUT" transfer function.
Signed-off-by: Harry Wentland harry.wentland@amd.com --- .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 11 +++- .../gpu/drm/arm/display/komeda/komeda_crtc.c | 7 ++- drivers/gpu/drm/arm/malidp_crtc.c | 7 ++- drivers/gpu/drm/armada/armada_crtc.c | 5 +- .../gpu/drm/atmel-hlcdc/atmel_hlcdc_crtc.c | 7 ++- drivers/gpu/drm/drm_color_mgmt.c | 54 ++++++++++++++++--- drivers/gpu/drm/i915/display/intel_color.c | 11 ++-- drivers/gpu/drm/i915/display/intel_color.h | 2 +- drivers/gpu/drm/i915/display/intel_crtc.c | 4 +- drivers/gpu/drm/ingenic/ingenic-drm-drv.c | 9 +++- drivers/gpu/drm/mediatek/mtk_drm_crtc.c | 8 ++- drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c | 9 +++- drivers/gpu/drm/nouveau/dispnv50/head.c | 13 +++-- drivers/gpu/drm/omapdrm/omap_crtc.c | 10 +++- drivers/gpu/drm/rcar-du/rcar_du_crtc.c | 7 ++- drivers/gpu/drm/rockchip/rockchip_drm_vop.c | 5 +- drivers/gpu/drm/stm/ltdc.c | 8 ++- drivers/gpu/drm/tidss/tidss_crtc.c | 9 +++- drivers/gpu/drm/vc4/vc4_crtc.c | 16 +++++- include/drm/drm_color_mgmt.h | 37 +++++++------ include/drm/drm_crtc.h | 20 +++++++ 21 files changed, 208 insertions(+), 51 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index 63ddae9c5abe..b6d072211bf9 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -7343,8 +7343,15 @@ static int amdgpu_dm_crtc_init(struct amdgpu_display_manager *dm, acrtc->otg_inst = -1;
dm->adev->mode_info.crtcs[crtc_index] = acrtc; - drm_crtc_enable_color_mgmt(&acrtc->base, MAX_COLOR_LUT_ENTRIES, - true, MAX_COLOR_LUT_ENTRIES); + + res = drm_crtc_enable_color_mgmt(&acrtc->base, MAX_COLOR_LUT_ENTRIES, + true, MAX_COLOR_LUT_ENTRIES, + BIT(DRM_TF_1D_LUT), DRM_TF_1D_LUT); + if (res) { + drm_crtc_cleanup(&acrtc->base); + goto fail; + } + drm_mode_crtc_set_gamma_size(&acrtc->base, MAX_COLOR_LEGACY_LUT_ENTRIES);
return 0; diff --git a/drivers/gpu/drm/arm/display/komeda/komeda_crtc.c b/drivers/gpu/drm/arm/display/komeda/komeda_crtc.c index 59172acb9738..f364d37232b5 100644 --- a/drivers/gpu/drm/arm/display/komeda/komeda_crtc.c +++ b/drivers/gpu/drm/arm/display/komeda/komeda_crtc.c @@ -626,7 +626,12 @@ static int komeda_crtc_add(struct komeda_kms_dev *kms,
crtc->port = kcrtc->master->of_output_port;
- drm_crtc_enable_color_mgmt(crtc, 0, true, KOMEDA_COLOR_LUT_SIZE); + err = drm_crtc_enable_color_mgmt(crtc, 0, true, KOMEDA_COLOR_LUT_SIZE, + BIT(DRM_TF_1D_LUT), DRM_TF_1D_LUT); + if (err) { + drm_crtc_cleanup(crtc); + return err; + }
return err; } diff --git a/drivers/gpu/drm/arm/malidp_crtc.c b/drivers/gpu/drm/arm/malidp_crtc.c index 494075ddbef6..7af87002c375 100644 --- a/drivers/gpu/drm/arm/malidp_crtc.c +++ b/drivers/gpu/drm/arm/malidp_crtc.c @@ -552,7 +552,12 @@ int malidp_crtc_init(struct drm_device *drm) drm_crtc_helper_add(&malidp->crtc, &malidp_crtc_helper_funcs); drm_mode_crtc_set_gamma_size(&malidp->crtc, MALIDP_GAMMA_LUT_SIZE); /* No inverse-gamma: it is per-plane. */ - drm_crtc_enable_color_mgmt(&malidp->crtc, 0, true, MALIDP_GAMMA_LUT_SIZE); + ret = drm_crtc_enable_color_mgmt(&malidp->crtc, 0, true, MALIDP_GAMMA_LUT_SIZE, + BIT(DRM_TF_1D_LUT), DRM_TF_1D_LUT); + if (ret) { + drm_crtc_cleanup(&malidp->crtc); + return ret; + }
malidp_se_set_enh_coeffs(malidp->dev);
diff --git a/drivers/gpu/drm/armada/armada_crtc.c b/drivers/gpu/drm/armada/armada_crtc.c index b7bb90ae787f..d44a1d4fa475 100644 --- a/drivers/gpu/drm/armada/armada_crtc.c +++ b/drivers/gpu/drm/armada/armada_crtc.c @@ -992,7 +992,10 @@ static int armada_drm_crtc_create(struct drm_device *drm, struct device *dev, if (ret) return ret;
- drm_crtc_enable_color_mgmt(&dcrtc->crtc, 0, false, 256); + ret = drm_crtc_enable_color_mgmt(&dcrtc->crtc, 0, false, 256, + BIT(DRM_TF_1D_LUT), DRM_TF_1D_LUT); + if (ret) + return ret;
return armada_overlay_plane_create(drm, 1 << dcrtc->num);
diff --git a/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_crtc.c b/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_crtc.c index 05ad75d155e8..e5911826d002 100644 --- a/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_crtc.c +++ b/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_crtc.c @@ -528,8 +528,11 @@ int atmel_hlcdc_crtc_create(struct drm_device *dev) drm_crtc_helper_add(&crtc->base, &lcdc_crtc_helper_funcs);
drm_mode_crtc_set_gamma_size(&crtc->base, ATMEL_HLCDC_CLUT_SIZE); - drm_crtc_enable_color_mgmt(&crtc->base, 0, false, - ATMEL_HLCDC_CLUT_SIZE); + ret = drm_crtc_enable_color_mgmt(&crtc->base, 0, false, + ATMEL_HLCDC_CLUT_SIZE, + BIT(DRM_TF_1D_LUT), DRM_TF_1D_LUT); + if (ret) + goto fail;
dc->crtc = &crtc->base;
diff --git a/drivers/gpu/drm/drm_color_mgmt.c b/drivers/gpu/drm/drm_color_mgmt.c index daf62fb090a6..196544951ab7 100644 --- a/drivers/gpu/drm/drm_color_mgmt.c +++ b/drivers/gpu/drm/drm_color_mgmt.c @@ -147,12 +147,21 @@ u64 drm_color_ctm_s31_32_to_qm_n(u64 user_input, u32 m, u32 n) } EXPORT_SYMBOL(drm_color_ctm_s31_32_to_qm_n);
+static const char * const tf_name[] = { + [DRM_TF_UNDEFINED] = "undefined", + [DRM_TF_SRGB] = "sRGB", + [DRM_TF_PQ2084] = "PQ2084", + [DRM_TF_1D_LUT] = "1D LUT", +}; + /** * drm_crtc_enable_color_mgmt - enable color management properties * @crtc: DRM CRTC * @degamma_lut_size: the size of the degamma lut (before CSC) * @has_ctm: whether to attach ctm_property for CSC matrix * @gamma_lut_size: the size of the gamma lut (after CSC) + * @supported_tfs: bitfield indicating supported transfer functions + * @default_tf: default output transfer function * * This function lets the driver enable the color correction * properties on a CRTC. This includes 3 degamma, csc and gamma @@ -162,13 +171,27 @@ EXPORT_SYMBOL(drm_color_ctm_s31_32_to_qm_n); * their size is not 0 and ctm_property is only attached if has_ctm is * true. */ -void drm_crtc_enable_color_mgmt(struct drm_crtc *crtc, +int drm_crtc_enable_color_mgmt(struct drm_crtc *crtc, uint degamma_lut_size, bool has_ctm, - uint gamma_lut_size) + uint gamma_lut_size, + u32 supported_tfs, + enum drm_transfer_function default_tf) { struct drm_device *dev = crtc->dev; struct drm_mode_config *config = &dev->mode_config; + struct drm_property *prop; + struct drm_prop_enum_list enum_list[DRM_TF_MAX]; + int i, len; + + if (WARN_ON(supported_tfs == 0 || + (supported_tfs & -BIT(DRM_TF_MAX)) != 0 || + (supported_tfs & BIT(default_tf)) == 0)) + return -EINVAL; + + if (!!(supported_tfs & BIT(DRM_TF_1D_LUT)) != + !!(degamma_lut_size || gamma_lut_size)) + return -EINVAL;
if (degamma_lut_size) { drm_object_attach_property(&crtc->base, @@ -189,6 +212,28 @@ void drm_crtc_enable_color_mgmt(struct drm_crtc *crtc, config->gamma_lut_size_property, gamma_lut_size); } + + len = 0; + for (i = 0; i < DRM_TF_MAX; i++) { + if ((supported_tfs & BIT(i)) == 0) + continue; + + enum_list[len].type = i; + enum_list[len].name = tf_name[i]; + len++; + } + + prop = drm_property_create_enum(dev, 0, "OUT TRANSFER_FUNCTION", + enum_list, len); + if (!prop) + return -ENOMEM; + crtc->out_transfer_function_property = prop; + drm_object_attach_property(&crtc->base, prop, default_tf); + if (crtc->state) + crtc->state->out_transfer_function = default_tf; + + return 0; } EXPORT_SYMBOL(drm_crtc_enable_color_mgmt);
@@ -481,11 +526,6 @@ static const char * const color_range_name[] = { [DRM_COLOR_YCBCR_LIMITED_RANGE] = "YCbCr limited range", };
-static const char * const tf_name[] = { - [DRM_TF_UNDEFINED] = "undefined", - [DRM_TF_SRGB] = "sRGB", - [DRM_TF_PQ2084] = "PQ2084", -}; /** * drm_get_color_encoding_name - return a string for color encoding * @encoding: color encoding to compute name of diff --git a/drivers/gpu/drm/i915/display/intel_color.c b/drivers/gpu/drm/i915/display/intel_color.c index dab892d2251b..a9332080efe5 100644 --- a/drivers/gpu/drm/i915/display/intel_color.c +++ b/drivers/gpu/drm/i915/display/intel_color.c @@ -2093,7 +2093,7 @@ static void icl_read_luts(struct intel_crtc_state *crtc_state) } }
-void intel_color_init(struct intel_crtc *crtc) +int intel_color_init(struct intel_crtc *crtc) { struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); bool has_ctm = INTEL_INFO(dev_priv)->color.degamma_lut_size != 0; @@ -2150,8 +2150,9 @@ void intel_color_init(struct intel_crtc *crtc) } }
- drm_crtc_enable_color_mgmt(&crtc->base, - INTEL_INFO(dev_priv)->color.degamma_lut_size, - has_ctm, - INTEL_INFO(dev_priv)->color.gamma_lut_size); + return drm_crtc_enable_color_mgmt(&crtc->base, + INTEL_INFO(dev_priv)->color.degamma_lut_size, + has_ctm, + INTEL_INFO(dev_priv)->color.gamma_lut_size, + BIT(DRM_TF_1D_LUT), DRM_TF_1D_LUT); } diff --git a/drivers/gpu/drm/i915/display/intel_color.h b/drivers/gpu/drm/i915/display/intel_color.h index 173727aaa24d..a8e015acc60c 100644 --- a/drivers/gpu/drm/i915/display/intel_color.h +++ b/drivers/gpu/drm/i915/display/intel_color.h @@ -12,7 +12,7 @@ struct intel_crtc_state; struct intel_crtc; struct drm_property_blob;
-void intel_color_init(struct intel_crtc *crtc); +int intel_color_init(struct intel_crtc *crtc); int intel_color_check(struct intel_crtc_state *crtc_state); void intel_color_commit(const struct intel_crtc_state *crtc_state); void intel_color_load_luts(const struct intel_crtc_state *crtc_state); diff --git a/drivers/gpu/drm/i915/display/intel_crtc.c b/drivers/gpu/drm/i915/display/intel_crtc.c index 95ff1707b4bd..0846fb4ef14e 100644 --- a/drivers/gpu/drm/i915/display/intel_crtc.c +++ b/drivers/gpu/drm/i915/display/intel_crtc.c @@ -340,7 +340,9 @@ int intel_crtc_init(struct drm_i915_private *dev_priv, enum pipe pipe) BIT(DRM_SCALING_FILTER_DEFAULT) | BIT(DRM_SCALING_FILTER_NEAREST_NEIGHBOR));
- intel_color_init(crtc); + ret = intel_color_init(crtc); + if (ret) + goto fail;
intel_crtc_crc_init(crtc);
diff --git a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c index 5244f4763477..f21fdd7e5f2a 100644 --- a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c +++ b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c @@ -1017,8 +1017,13 @@ static int ingenic_drm_bind(struct device *dev, bool has_components) return ret; }
- drm_crtc_enable_color_mgmt(&priv->crtc, 0, false, - ARRAY_SIZE(priv->dma_hwdescs->palette)); + ret = drm_crtc_enable_color_mgmt(&priv->crtc, 0, false, + ARRAY_SIZE(priv->dma_hwdescs->palette), + BIT(DRM_TF_1D_LUT), DRM_TF_1D_LUT); + if (ret) { + dev_err(dev, "Failed to init color management: %i\n", ret); + return ret; + }
if (soc_info->has_osd) { drm_plane_helper_add(&priv->f0, diff --git a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c index 474efb844249..d2496ad16931 100644 --- a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c +++ b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c @@ -827,7 +827,13 @@ int mtk_drm_crtc_create(struct drm_device *drm_dev,
if (gamma_lut_size) drm_mode_crtc_set_gamma_size(&mtk_crtc->base, gamma_lut_size); - drm_crtc_enable_color_mgmt(&mtk_crtc->base, 0, has_ctm, gamma_lut_size); + ret = drm_crtc_enable_color_mgmt(&mtk_crtc->base, 0, has_ctm, gamma_lut_size, + BIT(DRM_TF_1D_LUT), DRM_TF_1D_LUT); + if (ret) { + drm_crtc_cleanup(&mtk_crtc->base); + kfree(mtk_crtc); + return ret; + } + priv->num_pipes++; mutex_init(&mtk_crtc->hw_lock);
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c index 9a5c70c87cc8..9b7e947e8c8b 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c @@ -1337,6 +1337,7 @@ struct drm_crtc *dpu_crtc_init(struct drm_device *dev, struct drm_plane *plane, struct drm_crtc *crtc = NULL; struct dpu_crtc *dpu_crtc = NULL; int i; + int ret = 0;
dpu_crtc = kzalloc(sizeof(*dpu_crtc), GFP_KERNEL); if (!dpu_crtc) @@ -1365,7 +1366,13 @@ struct drm_crtc *dpu_crtc_init(struct drm_device *dev, struct drm_plane *plane,
drm_crtc_helper_add(crtc, &dpu_crtc_helper_funcs);
- drm_crtc_enable_color_mgmt(crtc, 0, true, 0); + ret = drm_crtc_enable_color_mgmt(crtc, 0, true, 0, + BIT(DRM_TF_UNDEFINED), DRM_TF_UNDEFINED); + if (ret) { + drm_crtc_cleanup(crtc); + kfree(dpu_crtc); + return ERR_PTR(ret); + }
/* save user friendly CRTC name for later */ snprintf(dpu_crtc->name, DPU_CRTC_NAME_SIZE, "crtc%u", crtc->base.id); diff --git a/drivers/gpu/drm/nouveau/dispnv50/head.c b/drivers/gpu/drm/nouveau/dispnv50/head.c index ec361d17e900..f97b3f70152b 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/head.c +++ b/drivers/gpu/drm/nouveau/dispnv50/head.c @@ -589,9 +589,16 @@ nv50_head_create(struct drm_device *dev, int index) drm_crtc_helper_add(crtc, &nv50_head_help); /* Keep the legacy gamma size at 256 to avoid compatibility issues */ drm_mode_crtc_set_gamma_size(crtc, 256); - drm_crtc_enable_color_mgmt(crtc, base->func->ilut_size, - disp->disp->object.oclass >= GF110_DISP, - head->func->olut_size); + ret = drm_crtc_enable_color_mgmt(crtc, base->func->ilut_size, + disp->disp->object.oclass >= GF110_DISP, + head->func->olut_size, + BIT(DRM_TF_1D_LUT), DRM_TF_1D_LUT); + if (ret) { + drm_crtc_cleanup(crtc); + kfree(head); + return ERR_PTR(ret); + } +
if (head->func->olut_set) { ret = nv50_lut_init(disp, &drm->client.mmu, &head->olut); diff --git a/drivers/gpu/drm/omapdrm/omap_crtc.c b/drivers/gpu/drm/omapdrm/omap_crtc.c index 06a719c104f4..a618b3338c38 100644 --- a/drivers/gpu/drm/omapdrm/omap_crtc.c +++ b/drivers/gpu/drm/omapdrm/omap_crtc.c @@ -839,7 +839,15 @@ struct drm_crtc *omap_crtc_init(struct drm_device *dev, if (dispc_mgr_gamma_size(priv->dispc, channel)) { unsigned int gamma_lut_size = 256;
- drm_crtc_enable_color_mgmt(crtc, gamma_lut_size, true, 0); + ret = drm_crtc_enable_color_mgmt(crtc, gamma_lut_size, true, 0, + BIT(DRM_TF_1D_LUT), DRM_TF_1D_LUT); + if (ret) { + dev_err(dev->dev, "%s(): could not init color management for: %s\n", + __func__, pipe->output->name); + drm_crtc_cleanup(crtc); + kfree(omap_crtc); + return ERR_PTR(ret); + } drm_mode_crtc_set_gamma_size(crtc, gamma_lut_size); }
diff --git a/drivers/gpu/drm/rcar-du/rcar_du_crtc.c b/drivers/gpu/drm/rcar-du/rcar_du_crtc.c index ea7e39d03545..02d8737e6603 100644 --- a/drivers/gpu/drm/rcar-du/rcar_du_crtc.c +++ b/drivers/gpu/drm/rcar-du/rcar_du_crtc.c @@ -1263,7 +1263,12 @@ int rcar_du_crtc_create(struct rcar_du_group *rgrp, unsigned int swindex, rgrp->cmms_mask |= BIT(hwindex % 2);
drm_mode_crtc_set_gamma_size(crtc, CM2_LUT_SIZE); - drm_crtc_enable_color_mgmt(crtc, 0, false, CM2_LUT_SIZE); + ret = drm_crtc_enable_color_mgmt(crtc, 0, false, CM2_LUT_SIZE, + BIT(DRM_TF_1D_LUT), DRM_TF_1D_LUT); + if (ret) { + drm_crtc_cleanup(crtc); + return ret; + } }
drm_crtc_helper_add(crtc, &crtc_helper_funcs); diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c index f5b9028a16a3..68d3a7b1f041 100644 --- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c +++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c @@ -1817,7 +1817,10 @@ static int vop_create_crtc(struct vop *vop) drm_crtc_helper_add(crtc, &vop_crtc_helper_funcs); if (vop->lut_regs) { drm_mode_crtc_set_gamma_size(crtc, vop_data->lut_size); - drm_crtc_enable_color_mgmt(crtc, 0, false, vop_data->lut_size); + ret = drm_crtc_enable_color_mgmt(crtc, 0, false, vop_data->lut_size, + BIT(DRM_TF_1D_LUT), DRM_TF_1D_LUT); + if (ret) + goto err_cleanup_crtc; }
/* diff --git a/drivers/gpu/drm/stm/ltdc.c b/drivers/gpu/drm/stm/ltdc.c index 08b71248044d..ffdf7114f50a 100644 --- a/drivers/gpu/drm/stm/ltdc.c +++ b/drivers/gpu/drm/stm/ltdc.c @@ -1035,7 +1035,13 @@ static int ltdc_crtc_init(struct drm_device *ddev, struct drm_crtc *crtc) drm_crtc_helper_add(crtc, <dc_crtc_helper_funcs);
drm_mode_crtc_set_gamma_size(crtc, CLUT_SIZE); - drm_crtc_enable_color_mgmt(crtc, 0, false, CLUT_SIZE); + ret = drm_crtc_enable_color_mgmt(crtc, 0, false, CLUT_SIZE, + BIT(DRM_TF_1D_LUT), DRM_TF_1D_LUT); + if (ret) { + DRM_ERROR("Cannot initialize color management\n"); + drm_crtc_cleanup(crtc); + goto cleanup; + }
DRM_DEBUG_DRIVER("CRTC:%d created\n", crtc->base.id);
diff --git a/drivers/gpu/drm/tidss/tidss_crtc.c b/drivers/gpu/drm/tidss/tidss_crtc.c index 2218da3b3ca3..34ed098887bc 100644 --- a/drivers/gpu/drm/tidss/tidss_crtc.c +++ b/drivers/gpu/drm/tidss/tidss_crtc.c @@ -439,7 +439,14 @@ struct tidss_crtc *tidss_crtc_create(struct tidss_device *tidss, if (tidss->feat->vp_feat.color.gamma_size) gamma_lut_size = 256;
- drm_crtc_enable_color_mgmt(crtc, 0, has_ctm, gamma_lut_size); + ret = drm_crtc_enable_color_mgmt(crtc, 0, has_ctm, gamma_lut_size, + BIT(DRM_TF_1D_LUT), DRM_TF_1D_LUT); + if (ret) { + drm_crtc_cleanup(crtc); + kfree(tcrtc); + return ERR_PTR(ret); + } + if (gamma_lut_size) drm_mode_crtc_set_gamma_size(crtc, gamma_lut_size);
diff --git a/drivers/gpu/drm/vc4/vc4_crtc.c b/drivers/gpu/drm/vc4/vc4_crtc.c index 18f5009ce90e..3bb2c0dba09a 100644 --- a/drivers/gpu/drm/vc4/vc4_crtc.c +++ b/drivers/gpu/drm/vc4/vc4_crtc.c @@ -1118,12 +1118,24 @@ int vc4_crtc_init(struct drm_device *drm, struct vc4_crtc *vc4_crtc, if (!vc4->hvs->hvs5) { drm_mode_crtc_set_gamma_size(crtc, ARRAY_SIZE(vc4_crtc->lut_r));
- drm_crtc_enable_color_mgmt(crtc, 0, false, crtc->gamma_size); + ret = drm_crtc_enable_color_mgmt(crtc, 0, false, crtc->gamma_size, + BIT(DRM_TF_1D_LUT), DRM_TF_1D_LUT); + if (ret) { + dev_err(drm->dev, "failed to enable color management\n"); + drm_crtc_cleanup(crtc); + return ret; + }
/* We support CTM, but only for one CRTC at a time. It's therefore * implemented as private driver state in vc4_kms, not here. */ - drm_crtc_enable_color_mgmt(crtc, 0, true, crtc->gamma_size); + ret = drm_crtc_enable_color_mgmt(crtc, 0, true, crtc->gamma_size, + BIT(DRM_TF_1D_LUT), DRM_TF_1D_LUT); + if (ret) { + dev_err(drm->dev, "failed to enable color management\n"); + drm_crtc_cleanup(crtc); + return ret; + } }
for (i = 0; i < crtc->gamma_size; i++) { diff --git a/include/drm/drm_color_mgmt.h b/include/drm/drm_color_mgmt.h index 370bbc55b744..408561acdb3d 100644 --- a/include/drm/drm_color_mgmt.h +++ b/include/drm/drm_color_mgmt.h @@ -54,10 +54,29 @@ static inline u32 drm_color_lut_extract(u32 user_input, int bit_precision)
u64 drm_color_ctm_s31_32_to_qm_n(u64 user_input, u32 m, u32 n);
-void drm_crtc_enable_color_mgmt(struct drm_crtc *crtc, +/** + * enum drm_transfer_function - common transfer functions used for SDR/HDR formats + * + * @DRM_TF_UNDEFINED: The legacy case where a TF in and out of the blending + * space is undefined + * @DRM_TF_SRGB: Based on a gamma curve; used for printers/monitors/web + * @DRM_TF_PQ2084: Used for HDR; allows for up to 10,000 nit support + * @DRM_TF_1D_LUT: Use 1D gamma/degamma LUTs (currently only defined on crtc) + */ +enum drm_transfer_function { + DRM_TF_UNDEFINED, + DRM_TF_SRGB, + DRM_TF_PQ2084, + DRM_TF_1D_LUT, + DRM_TF_MAX, +}; + +int drm_crtc_enable_color_mgmt(struct drm_crtc *crtc, uint degamma_lut_size, bool has_ctm, - uint gamma_lut_size); + uint gamma_lut_size, + u32 supported_tfs, + enum drm_transfer_function default_tf);
int drm_mode_crtc_set_gamma_size(struct drm_crtc *crtc, int gamma_size); @@ -87,20 +106,6 @@ enum drm_color_range { DRM_COLOR_RANGE_MAX, };
-/** - * enum drm_transfer_function - common transfer function used for sdr/hdr formats - * - * DRM_TF_UNDEFINED - The legacy case where a TF in and out of the blending - * space is undefined - * DRM_TF_SRGB - Based on gamma curve and is used for printer/monitors/web - * DRM_TF_PQ2084 - Used for HDR and allows for up to 10,000 nit support. -*/ -enum drm_transfer_function { - DRM_TF_UNDEFINED, - DRM_TF_SRGB, - DRM_TF_PQ2084, - DRM_TF_MAX, -}; int drm_plane_create_color_properties(struct drm_plane *plane, u32 supported_encodings, u32 supported_ranges, diff --git a/include/drm/drm_crtc.h b/include/drm/drm_crtc.h index 13eeba2a750a..35580dd36294 100644 --- a/include/drm/drm_crtc.h +++ b/include/drm/drm_crtc.h @@ -288,6 +288,15 @@ struct drm_crtc_state { */ struct drm_property_blob *gamma_lut;
+ /** + * @out_transfer_function: + * + * Transfer function for conversion from blending space to + * display space. DRM_TF_1D_LUT can be specified to use the + * gamma/degamma LUTs from mode_config instead. + */ + enum drm_transfer_function out_transfer_function; + /** * @target_vblank: * @@ -1096,6 +1105,17 @@ struct drm_crtc { */ struct drm_property *scaling_filter_property;
+ /** + * @out_transfer_function_property: + * + * Optional "OUT TRANSFER_FUNCTION" enum property for specifying + * an output transfer function, i.e. a TF to convert from + * blending space to luminance space. Use DRM_TF_1D_LUT to + * indicate using the 1D gamma/degamma LUTs instead of a + * named transfer function. + */ + struct drm_property *out_transfer_function_property; + /** * @state: *
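As a sanity check on the bitmask validation used by drm_crtc_enable_color_mgmt() above, the same expression can be exercised standalone. The enum values mirror this series; the helper name and macro are illustrative assumptions for the sketch:

```c
#include <stdbool.h>
#include <stdint.h>

#define BIT(n) (1u << (n))

/* Mirrors enum drm_transfer_function from this series. */
enum { TF_UNDEFINED, TF_SRGB, TF_PQ2084, TF_1D_LUT, TF_MAX };

/* Same checks as the WARN_ON() in drm_crtc_enable_color_mgmt():
 * in unsigned arithmetic -BIT(TF_MAX) is a mask of every bit at or
 * above TF_MAX, so the middle term rejects any out-of-range transfer
 * function, and the last term requires the default to be supported. */
static bool tf_args_valid(uint32_t supported_tfs, int default_tf)
{
	return supported_tfs != 0 &&
	       (supported_tfs & -BIT(TF_MAX)) == 0 &&
	       (supported_tfs & BIT(default_tf)) != 0;
}
```

This is why drivers must always pass at least one supported TF and a default drawn from that set.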
From: Bhawanpreet Lakha Bhawanpreet.Lakha@amd.com
SDR is typically mastered at 200 nits, while HDR is mastered at up to 10,000 nits. Due to this luminance range difference, if we blend an SDR and an HDR plane together we can run into problems where the HDR plane is too bright or the SDR plane is too dim.
A common solution to this problem is to boost the SDR plane so that it is not too dim.
This patch introduces an "sdr_white_level" property that userspace can use to boost the luminance of SDR content. The boost value is an explicit luminance value in nits, which lets userspace set the maximum white level for the SDR plane.
v2: - fix typo in description
Signed-off-by: Bhawanpreet Lakha Bhawanpreet.Lakha@amd.com Signed-off-by: Harry Wentland harry.wentland@amd.com --- drivers/gpu/drm/drm_atomic_uapi.c | 4 ++++ drivers/gpu/drm/drm_color_mgmt.c | 17 +++++++++++++++++ include/drm/drm_color_mgmt.h | 6 ++++++ include/drm/drm_plane.h | 15 ++++++++++++++- 4 files changed, 41 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/drm_atomic_uapi.c b/drivers/gpu/drm/drm_atomic_uapi.c index 9582515dd12e..e5a193657f7d 100644 --- a/drivers/gpu/drm/drm_atomic_uapi.c +++ b/drivers/gpu/drm/drm_atomic_uapi.c @@ -598,6 +598,8 @@ static int drm_atomic_plane_set_property(struct drm_plane *plane, state->color_range = val; } else if (property == plane->transfer_function_property) { state->transfer_function = val; + } else if (property == plane->sdr_white_level_property) { + state->sdr_white_level = val; } else if (property == config->prop_fb_damage_clips) { ret = drm_atomic_replace_property_blob_from_id(dev, &state->fb_damage_clips, @@ -666,6 +668,8 @@ drm_atomic_plane_get_property(struct drm_plane *plane, *val = state->color_range; } else if (property == plane->transfer_function_property) { *val = state->transfer_function; + } else if (property == plane->sdr_white_level_property) { + *val = state->sdr_white_level; } else if (property == config->prop_fb_damage_clips) { *val = (state->fb_damage_clips) ? state->fb_damage_clips->base.id : 0; diff --git a/drivers/gpu/drm/drm_color_mgmt.c b/drivers/gpu/drm/drm_color_mgmt.c index 196544951ab7..44842ba0454d 100644 --- a/drivers/gpu/drm/drm_color_mgmt.c +++ b/drivers/gpu/drm/drm_color_mgmt.c @@ -556,6 +556,23 @@ const char *drm_get_color_range_name(enum drm_color_range range) return color_range_name[range]; }
+int drm_plane_create_sdr_white_level_property(struct drm_plane *plane) +{ + struct drm_property *prop; + + prop = drm_property_create_range(plane->dev, 0, "SDR_WHITE_LEVEL", 0, UINT_MAX); + if (!prop) + return -ENOMEM; + + plane->sdr_white_level_property = prop; + drm_object_attach_property(&plane->base, prop, DRM_DEFAULT_SDR_WHITE_LEVEL); + + if (plane->state) + plane->state->sdr_white_level = DRM_DEFAULT_SDR_WHITE_LEVEL; + + return 0; +} /** * drm_get_transfer_function - return a string for transfer function * @tf: transfer function to compute name of diff --git a/include/drm/drm_color_mgmt.h b/include/drm/drm_color_mgmt.h index 408561acdb3d..2a356a9601df 100644 --- a/include/drm/drm_color_mgmt.h +++ b/include/drm/drm_color_mgmt.h @@ -26,6 +26,12 @@ #include <linux/ctype.h> #include <drm/drm_property.h>
+/** + * Default SDR white level in nits. Although there is no standard SDR nit level, 200 + * is chosen as the default since that is the generally accepted value. + */ +#define DRM_DEFAULT_SDR_WHITE_LEVEL 200 + struct drm_crtc; struct drm_plane;
diff --git a/include/drm/drm_plane.h b/include/drm/drm_plane.h index cff56994513f..93ee308a46af 100644 --- a/include/drm/drm_plane.h +++ b/include/drm/drm_plane.h @@ -187,6 +187,11 @@ struct drm_plane_state { * format for a proper HDR color/luminance output. */ enum drm_transfer_function transfer_function; + /** + * @sdr_white_level: maximum white level of the SDR plane in nits, + * used to boost SDR content in HDR+SDR multi-plane use cases + */ + unsigned int sdr_white_level; /** * @fb_damage_clips: * @@ -757,7 +762,15 @@ struct drm_plane { * See drm_plane_create_color_properties(). */ struct drm_property *transfer_function_property; - + /** + * @sdr_white_level_property: + * + * Optional "SDR_WHITE_LEVEL" property. When HDR and SDR planes are + * combined in multi-plane overlay cases, the SDR plane will appear + * very dim. This property allows the driver to boost the SDR plane's + * white level. The value is the maximum white level in nits. + */ + struct drm_property *sdr_white_level_property; /** * @scaling_filter_property: property to apply a particular filter while * scaling.
From: Bhawanpreet Lakha Bhawanpreet.Lakha@amd.com
Add color space definitions for BT601, BT709, BT2020, and DCI-P3.
Default to BT709, which uses the same primaries and white point as sRGB.
Signed-off-by: Bhawanpreet Lakha Bhawanpreet.Lakha@amd.com Signed-off-by: Harry Wentland harry.wentland@amd.com --- .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 2 + .../gpu/drm/arm/display/komeda/komeda_plane.c | 2 + drivers/gpu/drm/arm/malidp_planes.c | 4 +- drivers/gpu/drm/armada/armada_overlay.c | 2 + drivers/gpu/drm/drm_color_mgmt.c | 58 ++++++++++++++++++- drivers/gpu/drm/i915/display/intel_sprite.c | 2 + .../drm/i915/display/skl_universal_plane.c | 2 + drivers/gpu/drm/nouveau/dispnv04/overlay.c | 2 + drivers/gpu/drm/omapdrm/omap_plane.c | 2 + drivers/gpu/drm/sun4i/sun8i_vi_layer.c | 6 +- drivers/gpu/drm/tidss/tidss_plane.c | 4 ++ include/drm/drm_color_mgmt.h | 16 +++++ include/drm/drm_plane.h | 16 +++++ 13 files changed, 113 insertions(+), 5 deletions(-)
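Why the color space matters for decoding can be seen in the YCbCr-to-RGB conversion the last patch reformats: BT.601 and BT.709 use different luma coefficients, so decoding with the wrong matrix shifts colors. A minimal full-range sketch follows; the coefficients are from the respective ITU-R recommendations, and the helper itself is illustrative, not code from this series:

```c
/* Full-range YCbCr -> RGB for one pixel. kr/kb are the luma
 * coefficients: BT.601 uses kr=0.299, kb=0.114; BT.709 uses
 * kr=0.2126, kb=0.0722. Y is in [0, 1], Cb/Cr in [-0.5, 0.5]. */
static void ycbcr_to_rgb(double y, double cb, double cr,
			 double kr, double kb,
			 double *r, double *g, double *b)
{
	double kg = 1.0 - kr - kb;

	*r = y + 2.0 * (1.0 - kr) * cr;
	*b = y + 2.0 * (1.0 - kb) * cb;
	/* Green is recovered from the luma definition Y = kr*R + kg*G + kb*B. */
	*g = (y - kr * *r - kb * *b) / kg;
}
```

Neutral gray (Cb = Cr = 0) decodes identically under both coefficient sets, but any chroma offset lands at different RGB values, which is why the plane needs an explicit COLOR_SPACE property rather than a guess.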
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index b6d072211bf9..a6dbf1f26787 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -7276,8 +7276,10 @@ static int amdgpu_dm_plane_init(struct amdgpu_display_manager *dm, BIT(DRM_COLOR_YCBCR_BT2020), BIT(DRM_COLOR_YCBCR_LIMITED_RANGE) | BIT(DRM_COLOR_YCBCR_FULL_RANGE), + BIT(DRM_COLOR_SPACE_BT709), BIT(DRM_TF_SRGB), DRM_COLOR_YCBCR_BT709, DRM_COLOR_YCBCR_LIMITED_RANGE, + DRM_COLOR_SPACE_BT709, DRM_TF_SRGB); }
diff --git a/drivers/gpu/drm/arm/display/komeda/komeda_plane.c b/drivers/gpu/drm/arm/display/komeda/komeda_plane.c index 811f79ab6d32..4fb3ad4c691e 100644 --- a/drivers/gpu/drm/arm/display/komeda/komeda_plane.c +++ b/drivers/gpu/drm/arm/display/komeda/komeda_plane.c @@ -302,9 +302,11 @@ static int komeda_plane_add(struct komeda_kms_dev *kms, BIT(DRM_COLOR_YCBCR_BT2020), BIT(DRM_COLOR_YCBCR_LIMITED_RANGE) | BIT(DRM_COLOR_YCBCR_FULL_RANGE), + BIT(DRM_COLOR_SPACE_BT709), BIT(DRM_TF_UNDEFINED), DRM_COLOR_YCBCR_BT601, DRM_COLOR_YCBCR_LIMITED_RANGE, + DRM_COLOR_SPACE_BT709, DRM_TF_UNDEFINED); if (err) goto cleanup; diff --git a/drivers/gpu/drm/arm/malidp_planes.c b/drivers/gpu/drm/arm/malidp_planes.c index 98d308262880..0bc809435555 100644 --- a/drivers/gpu/drm/arm/malidp_planes.c +++ b/drivers/gpu/drm/arm/malidp_planes.c @@ -1023,6 +1023,7 @@ int malidp_de_planes_init(struct drm_device *drm) /* default encoding for YUV->RGB is BT601 NARROW */ enum drm_color_encoding enc = DRM_COLOR_YCBCR_BT601; enum drm_color_range range = DRM_COLOR_YCBCR_LIMITED_RANGE; + enum drm_color_space space = DRM_COLOR_SPACE_BT709;
ret = drm_plane_create_color_properties(&plane->base, BIT(DRM_COLOR_YCBCR_BT601) | \ @@ -1030,8 +1031,9 @@ int malidp_de_planes_init(struct drm_device *drm) BIT(DRM_COLOR_YCBCR_BT2020), BIT(DRM_COLOR_YCBCR_LIMITED_RANGE) | \ BIT(DRM_COLOR_YCBCR_FULL_RANGE), + BIT(DRM_COLOR_SPACE_BT709), BIT(DRM_TF_UNDEFINED), - enc, range, + enc, range, space, DRM_TF_UNDEFINED); if (!ret) /* program the HW registers */ diff --git a/drivers/gpu/drm/armada/armada_overlay.c b/drivers/gpu/drm/armada/armada_overlay.c index f7792444cb73..e66f2fa72830 100644 --- a/drivers/gpu/drm/armada/armada_overlay.c +++ b/drivers/gpu/drm/armada/armada_overlay.c @@ -596,9 +596,11 @@ int armada_overlay_plane_create(struct drm_device *dev, unsigned long crtcs) BIT(DRM_COLOR_YCBCR_BT601) | BIT(DRM_COLOR_YCBCR_BT709), BIT(DRM_COLOR_YCBCR_LIMITED_RANGE), + BIT(DRM_COLOR_SPACE_BT709), BIT(DRM_TF_UNDEFINED), DEFAULT_ENCODING, DRM_COLOR_YCBCR_LIMITED_RANGE, + DRM_COLOR_SPACE_BT709, DRM_TF_UNDEFINED);
return ret; diff --git a/drivers/gpu/drm/drm_color_mgmt.c b/drivers/gpu/drm/drm_color_mgmt.c index 44842ba0454d..75e6dbbd0081 100644 --- a/drivers/gpu/drm/drm_color_mgmt.c +++ b/drivers/gpu/drm/drm_color_mgmt.c @@ -526,6 +526,13 @@ static const char * const color_range_name[] = { [DRM_COLOR_YCBCR_LIMITED_RANGE] = "YCbCr limited range", };
+static const char * const color_space_name[] = { + [DRM_COLOR_SPACE_BT601] = "ITU-R BT.601 RGB", + [DRM_COLOR_SPACE_BT709] = "ITU-R BT.709 RGB", + [DRM_COLOR_SPACE_BT2020] = "ITU-R BT.2020 RGB", + [DRM_COLOR_SPACE_P3] = "DCI-P3", +}; + /** * drm_get_color_encoding_name - return a string for color encoding * @encoding: color encoding to compute name of @@ -556,6 +563,21 @@ const char *drm_get_color_range_name(enum drm_color_range range) return color_range_name[range]; }
+/** + * drm_get_color_space_name - return a string for color space + * @space: color space to compute name of + * + * In contrast to the other drm_get_*_name functions this one here returns a + * const pointer and hence is threadsafe. + */ +const char *drm_get_color_space_name(enum drm_color_space space) +{ + if (WARN_ON(space >= ARRAY_SIZE(color_space_name))) + return "unknown"; + + return color_space_name[space]; +} + int drm_plane_create_sdr_white_level_property(struct drm_plane *plane){
struct drm_property *prop; @@ -592,23 +614,28 @@ const char *drm_get_transfer_function_name(enum drm_transfer_function tf) * @plane: plane object * @supported_encodings: bitfield indicating supported color encodings * @supported_ranges: bitfileld indicating supported color ranges + * @supported_spaces: bitfield indicating supported color spaces * @supported_tfs: bitfield indicating supported transfer functions * @default_encoding: default color encoding * @default_range: default color range + * @default_space: default color space * @default_tf: default color transfer function * - * Create and attach plane specific COLOR_ENCODING, COLOR_RANGE and TRANSFER_FUNCTION - * properties to @plane. The supported encodings, ranges and tfs should - * be provided in supported_encodings, supported_ranges and supported_tfs bitmasks. + * Create and attach plane specific COLOR_ENCODING, COLOR_RANGE, COLOR_SPACE, + * and TRANSFER_FUNCTION properties to @plane. The supported encodings, ranges, + * spaces, and tfs should be provided in supported_encodings, supported_ranges, + * supported_spaces, and supported_tfs bitmasks. * Each bit set in the bitmask indicates that its number as enum * value is supported. */ int drm_plane_create_color_properties(struct drm_plane *plane, u32 supported_encodings, u32 supported_ranges, + u32 supported_spaces, u32 supported_tfs, enum drm_color_encoding default_encoding, enum drm_color_range default_range, + enum drm_color_space default_space, enum drm_transfer_function default_tf) { struct drm_device *dev = plane->dev; @@ -628,6 +655,11 @@ int drm_plane_create_color_properties(struct drm_plane *plane, (supported_ranges & BIT(default_range)) == 0)) return -EINVAL;
+ if (WARN_ON(supported_spaces == 0 || + (supported_spaces & -BIT(DRM_COLOR_SPACE_MAX)) != 0 || + (supported_spaces & BIT(default_space)) == 0)) + return -EINVAL; + if (WARN_ON(supported_tfs == 0 || (supported_tfs & -BIT(DRM_TF_MAX)) != 0 || (supported_tfs & BIT(default_tf)) == 0)) @@ -672,6 +704,26 @@ int drm_plane_create_color_properties(struct drm_plane *plane, plane->state->color_range = default_range;
+ len = 0; + for (i = 0; i < DRM_COLOR_SPACE_MAX; i++) { + if ((supported_spaces & BIT(i)) == 0) + continue; + + enum_list[len].type = i; + enum_list[len].name = color_space_name[i]; + len++; + } + + prop = drm_property_create_enum(dev, 0, "COLOR_SPACE", + enum_list, len); + if (!prop) + return -ENOMEM; + plane->color_space_property = prop; + drm_object_attach_property(&plane->base, prop, default_space); + if (plane->state) + plane->state->color_space = default_space; + + len = 0; for (i = 0; i < DRM_TF_MAX; i++) { if ((supported_tfs & BIT(i)) == 0) diff --git a/drivers/gpu/drm/i915/display/intel_sprite.c b/drivers/gpu/drm/i915/display/intel_sprite.c index b3f7aca3795b..76637b1aa5dc 100644 --- a/drivers/gpu/drm/i915/display/intel_sprite.c +++ b/drivers/gpu/drm/i915/display/intel_sprite.c @@ -1850,9 +1850,11 @@ intel_sprite_plane_create(struct drm_i915_private *dev_priv, BIT(DRM_COLOR_YCBCR_BT709), BIT(DRM_COLOR_YCBCR_LIMITED_RANGE) | BIT(DRM_COLOR_YCBCR_FULL_RANGE), + BIT(DRM_COLOR_SPACE_BT709), BIT(DRM_TF_UNDEFINED), DRM_COLOR_YCBCR_BT709, DRM_COLOR_YCBCR_LIMITED_RANGE, + DRM_COLOR_SPACE_BT709, DRM_TF_UNDEFINED);
zpos = sprite + 1; diff --git a/drivers/gpu/drm/i915/display/skl_universal_plane.c b/drivers/gpu/drm/i915/display/skl_universal_plane.c index df596431151d..3ed65e527dde 100644 --- a/drivers/gpu/drm/i915/display/skl_universal_plane.c +++ b/drivers/gpu/drm/i915/display/skl_universal_plane.c @@ -2160,9 +2160,11 @@ skl_universal_plane_create(struct drm_i915_private *dev_priv, supported_csc, BIT(DRM_COLOR_YCBCR_LIMITED_RANGE) | BIT(DRM_COLOR_YCBCR_FULL_RANGE), + BIT(DRM_COLOR_SPACE_BT709), BIT(DRM_TF_UNDEFINED), DRM_COLOR_YCBCR_BT709, DRM_COLOR_YCBCR_LIMITED_RANGE, + DRM_COLOR_SPACE_BT709, DRM_TF_UNDEFINED);
drm_plane_create_alpha_property(&plane->base); diff --git a/drivers/gpu/drm/nouveau/dispnv04/overlay.c b/drivers/gpu/drm/nouveau/dispnv04/overlay.c index 64e1793212b4..dc350245c98b 100644 --- a/drivers/gpu/drm/nouveau/dispnv04/overlay.c +++ b/drivers/gpu/drm/nouveau/dispnv04/overlay.c @@ -345,9 +345,11 @@ nv10_overlay_init(struct drm_device *device) BIT(DRM_COLOR_YCBCR_BT601) | BIT(DRM_COLOR_YCBCR_BT709), BIT(DRM_COLOR_YCBCR_LIMITED_RANGE), + BIT(DRM_COLOR_SPACE_BT709), BIT(DRM_TF_UNDEFINED), DRM_COLOR_YCBCR_BT601, DRM_COLOR_YCBCR_LIMITED_RANGE, + DRM_COLOR_SPACE_BT709, DRM_TF_UNDEFINED);
plane->set_params = nv10_set_params; diff --git a/drivers/gpu/drm/omapdrm/omap_plane.c b/drivers/gpu/drm/omapdrm/omap_plane.c index ca7559824dcd..3eb52e78e08d 100644 --- a/drivers/gpu/drm/omapdrm/omap_plane.c +++ b/drivers/gpu/drm/omapdrm/omap_plane.c @@ -325,9 +325,11 @@ struct drm_plane *omap_plane_init(struct drm_device *dev, BIT(DRM_COLOR_YCBCR_BT709), BIT(DRM_COLOR_YCBCR_FULL_RANGE) | BIT(DRM_COLOR_YCBCR_LIMITED_RANGE), + BIT(DRM_COLOR_SPACE_BT709), BIT(DRM_TF_UNDEFINED), DRM_COLOR_YCBCR_BT601, DRM_COLOR_YCBCR_FULL_RANGE, + DRM_COLOR_SPACE_BT709, DRM_TF_UNDEFINED);
return plane; diff --git a/drivers/gpu/drm/sun4i/sun8i_vi_layer.c b/drivers/gpu/drm/sun4i/sun8i_vi_layer.c index eda8f51bafd7..c0115783c6a6 100644 --- a/drivers/gpu/drm/sun4i/sun8i_vi_layer.c +++ b/drivers/gpu/drm/sun4i/sun8i_vi_layer.c @@ -543,7 +543,7 @@ struct sun8i_vi_layer *sun8i_vi_layer_init_one(struct drm_device *drm, struct sun8i_mixer *mixer, int index) { - u32 supported_encodings, supported_ranges; + u32 supported_encodings, supported_ranges, supported_spaces; unsigned int plane_cnt, format_count; struct sun8i_vi_layer *layer; const u32 *formats; @@ -597,12 +597,16 @@ struct sun8i_vi_layer *sun8i_vi_layer_init_one(struct drm_device *drm, supported_ranges = BIT(DRM_COLOR_YCBCR_LIMITED_RANGE) | BIT(DRM_COLOR_YCBCR_FULL_RANGE);
+ supported_spaces = BIT(DRM_COLOR_SPACE_BT709); + ret = drm_plane_create_color_properties(&layer->plane, supported_encodings, supported_ranges, + supported_spaces, BIT(DRM_TF_UNDEFINED), DRM_COLOR_YCBCR_BT709, DRM_COLOR_YCBCR_LIMITED_RANGE, + DRM_COLOR_SPACE_BT709, DRM_TF_UNDEFINED); if (ret) { dev_err(drm->dev, "Couldn't add encoding and range properties!\n"); diff --git a/drivers/gpu/drm/tidss/tidss_plane.c b/drivers/gpu/drm/tidss/tidss_plane.c index a1336ecd5fd5..367a14616756 100644 --- a/drivers/gpu/drm/tidss/tidss_plane.c +++ b/drivers/gpu/drm/tidss/tidss_plane.c @@ -186,9 +186,11 @@ struct tidss_plane *tidss_plane_create(struct tidss_device *tidss, BIT(DRM_COLOR_YCBCR_BT709)); u32 color_ranges = (BIT(DRM_COLOR_YCBCR_FULL_RANGE) | BIT(DRM_COLOR_YCBCR_LIMITED_RANGE)); + u32 color_spaces = BIT(DRM_COLOR_SPACE_BT709); u32 transfer_functions = BIT(DRM_TF_UNDEFINED); u32 default_encoding = DRM_COLOR_YCBCR_BT601; u32 default_range = DRM_COLOR_YCBCR_FULL_RANGE; + u32 default_space = DRM_COLOR_SPACE_BT709; u32 default_tf = DRM_TF_UNDEFINED; u32 blend_modes = (BIT(DRM_MODE_BLEND_PREMULTI) | BIT(DRM_MODE_BLEND_COVERAGE)); @@ -219,9 +221,11 @@ struct tidss_plane *tidss_plane_create(struct tidss_device *tidss, ret = drm_plane_create_color_properties(&tplane->plane, color_encodings, color_ranges, + color_spaces, transfer_functions, default_encoding, default_range, + default_space, default_tf); if (ret) goto err; diff --git a/include/drm/drm_color_mgmt.h b/include/drm/drm_color_mgmt.h index 2a356a9601df..575427650542 100644 --- a/include/drm/drm_color_mgmt.h +++ b/include/drm/drm_color_mgmt.h @@ -99,6 +99,9 @@ static inline int drm_color_lut_size(const struct drm_property_blob *blob) return blob->length / sizeof(struct drm_color_lut); }
+/** + * enum drm_color_encoding - describes the coefficients used for YCbCr-RGB conversion + */ enum drm_color_encoding { DRM_COLOR_YCBCR_BT601, DRM_COLOR_YCBCR_BT709, @@ -112,12 +115,25 @@ enum drm_color_range { DRM_COLOR_RANGE_MAX, };
+/** + * enum drm_color_space - describes the color space (primaries & white point) + */ +enum drm_color_space { + DRM_COLOR_SPACE_BT601, + DRM_COLOR_SPACE_BT709, + DRM_COLOR_SPACE_BT2020, + DRM_COLOR_SPACE_P3, + DRM_COLOR_SPACE_MAX, +}; + int drm_plane_create_color_properties(struct drm_plane *plane, u32 supported_encodings, u32 supported_ranges, + u32 supported_spaces, u32 supported_tfs, enum drm_color_encoding default_encoding, enum drm_color_range default_range, + enum drm_color_space default_space, enum drm_transfer_function default_tf);
/** diff --git a/include/drm/drm_plane.h b/include/drm/drm_plane.h index 93ee308a46af..8c9fe6350ead 100644 --- a/include/drm/drm_plane.h +++ b/include/drm/drm_plane.h @@ -179,6 +179,13 @@ struct drm_plane_state { */ enum drm_color_range color_range;
+ /** + * @color_space: + * + * Color space (primaries & white point) of the plane + */ + enum drm_color_space color_space; + /** * @transfer_function: * @@ -754,6 +761,15 @@ struct drm_plane { * See drm_plane_create_color_properties(). */ struct drm_property *color_range_property; + /** + * @color_space_property: + * + * Optional "COLOR_SPACE" enum property for specifying + * the color space (i.e. primaries and white point) of + * the plane. + * See drm_plane_create_color_properties(). + */ + struct drm_property *color_space_property; /** * @transfer_function_property: *
Show the CSC matrices in a 3x4 layout, with each row's four entries on one line.
Signed-off-by: Harry Wentland harry.wentland@amd.com --- drivers/gpu/drm/amd/display/dc/inc/hw/dpp.h | 28 +++++++++++++-------- 1 file changed, 18 insertions(+), 10 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/dpp.h b/drivers/gpu/drm/amd/display/dc/inc/hw/dpp.h index 00fc81431b43..345b3956425a 100644 --- a/drivers/gpu/drm/amd/display/dc/inc/hw/dpp.h +++ b/drivers/gpu/drm/amd/display/dc/inc/hw/dpp.h @@ -55,22 +55,30 @@ struct dpp_input_csc_matrix {
static const struct dpp_input_csc_matrix __maybe_unused dpp_input_csc_matrix[] = { {COLOR_SPACE_SRGB, - {0x2000, 0, 0, 0, 0, 0x2000, 0, 0, 0, 0, 0x2000, 0} }, + {0x2000, 0, 0, 0, + 0, 0x2000, 0, 0, + 0, 0, 0x2000, 0} }, {COLOR_SPACE_SRGB_LIMITED, - {0x2000, 0, 0, 0, 0, 0x2000, 0, 0, 0, 0, 0x2000, 0} }, + {0x2000, 0, 0, 0, + 0, 0x2000, 0, 0, + 0, 0, 0x2000, 0} }, {COLOR_SPACE_YCBCR601, - {0x2cdd, 0x2000, 0, 0xe991, 0xe926, 0x2000, 0xf4fd, 0x10ef, - 0, 0x2000, 0x38b4, 0xe3a6} }, + {0x2cdd, 0x2000, 0, 0xe991, + 0xe926, 0x2000, 0xf4fd, 0x10ef, + 0, 0x2000, 0x38b4, 0xe3a6} }, {COLOR_SPACE_YCBCR601_LIMITED, - {0x3353, 0x2568, 0, 0xe400, 0xe5dc, 0x2568, 0xf367, 0x1108, - 0, 0x2568, 0x40de, 0xdd3a} }, + {0x3353, 0x2568, 0, 0xe400, + 0xe5dc, 0x2568, 0xf367, 0x1108, + 0, 0x2568, 0x40de, 0xdd3a} }, {COLOR_SPACE_YCBCR709, - {0x3265, 0x2000, 0, 0xe6ce, 0xf105, 0x2000, 0xfa01, 0xa7d, 0, - 0x2000, 0x3b61, 0xe24f} }, + {0x3265, 0x2000, 0, 0xe6ce, + 0xf105, 0x2000, 0xfa01, 0xa7d, + 0, 0x2000, 0x3b61, 0xe24f} },
{COLOR_SPACE_YCBCR709_LIMITED, - {0x39a6, 0x2568, 0, 0xe0d6, 0xeedd, 0x2568, 0xf925, 0x9a8, 0, - 0x2568, 0x43ee, 0xdbb2} } + {0x39a6, 0x2568, 0, 0xe0d6, + 0xeedd, 0x2568, 0xf925, 0x9a8, + 0, 0x2568, 0x43ee, 0xdbb2} } };
struct dpp_grph_csc_adjustment {
dri-devel@lists.freedesktop.org