Inline.
On Mon, Jun 1, 2020 at 2:19 PM Pekka Paalanen ppaalanen@gmail.com wrote:
On Mon, 1 Jun 2020 09:22:27 +0530 Yogish Kulkarni yogishkulkarni@gmail.com wrote:
Hi,
On letting DRM clients select the output encoding: a sink can support certain display timings with high output bit-depths using multiple output encodings, e.g. a sink can support a particular timing with RGB 10-bit, YCbCr422 10-bit and YCbCr420 10-bit. So a DRM client may want to select YCbCr422 10-bit over RGB 10-bit output to reduce the link bandwidth (and in turn reduce power/voltage). If the DRM driver automatically selects the output encoding, then we are restricting DRM clients from making the appropriate choice.
Hi,
right, that seems to be another reason.
On selectable output color range: certain applications (typically graphics) usually render in full range, while some applications (typically video) have limited-range content. Since the content can change dynamically, the DRM driver does not have enough information to choose the correct quantization. Only the DRM client can correctly select which quantization to set (to preserve the artist's intent).
Now this is an interesting topic for me. As far as I know, there is no window system protocol to tell the display server whether the application-provided content is using full or limited range. This means that the display server cannot tell DRM about full vs. limited range either. It also means that when not fullscreen, the display server cannot show the limited-range video content correctly, because it would have to be converted to full range (or vice versa).
Right, but there could be a DRM client which doesn't use a window system (e.g. a GStreamer video sink) and wants to select between full/limited color range. I agree that there is no window system protocol yet, but maybe a Wayland protocol could be added/extended for this purpose once we finalize the things that need to be done in DRM.
But why would an application produce limited range pixels anyway? Is it
common that hardware video decoders are unable to produce full-range pixels?
The primary reason why content producers master video/gfx content as limited range is compatibility with sinks which only support limited range, not because video decoders are incapable of decoding full-range content. Also, certain cinema-related content (e.g., movies) may be better suited for limited-range encoding due to the level of detail that it needs to present/hide (see the "Why does limited RGB even exist?" section in https://www.benq.com/en-us/knowledge-center/knowledge/full-rgb-vs-limited-rg... ).
I am asking, because I have a request to add limited vs. full range
information to Wayland.
What about video sinks, including monitors? Are there devices that accept limited-range only, full-range only, or switchable?
Yes, there are sinks which support a selectable quantization range and there are sinks which don't. If the quantization range is not selectable, then in general sources should output full range for IT timings and limited range for CE timings. At a high level, IT timings are part of a standard developed by VESA for computer-monitor-like displays; CE (Consumer Electronics) timings are a separate standard for timings more applicable to sinks like consumer TVs. A sketch of that default rule is below.
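As an illustration, a minimal sketch of that defaulting convention (which, if I recall correctly, is roughly what the kernel's drm_default_rgb_quant_range() helper implements): CE (CTA-861) formats default to limited range with 640x480p (VIC 1) as the full-range exception, while IT timings default to full range. The mode struct, the vic field and is_cea_mode() are hypothetical stand-ins, not an existing API:

enum quant_range { QUANT_FULL, QUANT_LIMITED };

/* mode_info, is_cea_mode() and the vic field are hypothetical stand-ins
 * for whatever mode classification an implementation already has. */
static enum quant_range default_rgb_quant_range(const struct mode_info *mode)
{
        /* CE (CTA-861) video formats default to limited range,
         * except 640x480p (VIC 1), which is defined as full range. */
        if (is_cea_mode(mode))
                return mode->vic == 1 ? QUANT_FULL : QUANT_LIMITED;

        /* IT (VESA) timings default to full range. */
        return QUANT_FULL;
}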
Why not just always use full-range everywhere?
Or if a sink supports only limited range, have the display chip automatically convert from full range, so that applications don't have to convert in software.
I think it is OK to convert from limited range to full range in the display HW pipeline. If by "automatically" you mean that the display HW or DRM driver should look at the content to figure out whether it is limited range and then program the display pipeline to do the conversion, I don't think that is a good idea, since we would need to inspect each pixel. Also, there may be some post-processing done to full-range content that happens to cause the pixel component values to fall within the limited quantization range. How about adding a new DRM KMS plane property to let the client convey the input content range to the driver? More details on this below.
If you actually have a DRM KMS property for the range, does it mean that:
- the sink is configured to accept that range, and the pixels in the framebuffer need to comply, or
- the display chip converts to that range while framebuffer remains in full-range?
I would imagine this as:
(1) Add a new read-only DRM KMS connector property which the DRM client will read to know whether the sink supports a selectable quantization range.
(2) Add a new read/write DRM KMS connector property which the DRM client will write to set the output quantization range and read to know the current output quantization range.
(3) Add a new read/write DRM KMS plane property which the DRM client will write to set the input quantization range and read to know the current input quantization range.
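For illustration, a minimal client-side sketch of discovering property (1); the libdrm calls are real, but the "quant range selectable" property name is made up, since nothing like it exists yet:

#include <stdbool.h>
#include <string.h>
#include <xf86drmMode.h>

static bool sink_has_selectable_quant_range(int fd, uint32_t connector_id)
{
        bool selectable = false;
        drmModeObjectProperties *props =
                drmModeObjectGetProperties(fd, connector_id,
                                           DRM_MODE_OBJECT_CONNECTOR);
        if (!props)
                return false;

        for (uint32_t i = 0; i < props->count_props; i++) {
                drmModePropertyRes *prop =
                        drmModeGetProperty(fd, props->props[i]);
                if (!prop)
                        continue;
                /* Hypothetical immutable property from step (1) above. */
                if (strcmp(prop->name, "quant range selectable") == 0)
                        selectable = props->prop_values[i] != 0;
                drmModeFreeProperty(prop);
        }
        drmModeFreeObjectProperties(props);
        return selectable;
}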
Now let's say a client has limited-range content that it wants to display using limited range; it will perform the steps below:
(A) Set the plane's input range property to LIMITED.
(B) Set the connector's output range property to LIMITED.
(C) Read the connector property to know whether the sink supports a selectable quantization range:
(i) If no, validate HW timing + output range (LIMITED) using an atomic test commit; if validation doesn't pass, the client should choose another HW timing and revalidate.
(ii) If yes, it is not necessary to validate HW timing + output range.
Now let's say a client has limited-range content that it wants to display using full range; it will perform the steps below:
(A) Set the plane's input range property to LIMITED.
(B) Set the connector's output range property to FULL.
(C) Read the connector property to know whether the sink supports a selectable quantization range:
(i) If no, validate HW timing + output range (FULL) using an atomic test commit; if validation doesn't pass, the client should choose another HW timing and revalidate.
(ii) If yes, it is not necessary to validate HW timing + output range.
In this example the DRM driver will automatically set up the display pipeline to do the limited-to-full-range conversion.
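To make the flow concrete, a minimal client-side sketch of the limited-in, full-out case; the atomic API calls are real, but the plane "input quantization range" and connector "output quantization range" property IDs are hypothetical (assumed looked up already), with quant_limited/quant_full standing in for their enum values:

#include <stdbool.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

static int try_limited_to_full(int fd, uint32_t plane_id, uint32_t conn_id,
                               uint32_t in_range_prop, uint32_t out_range_prop,
                               uint64_t quant_limited, uint64_t quant_full,
                               bool sink_selectable)
{
        drmModeAtomicReq *req = drmModeAtomicAlloc();
        int ret;

        /* (A) The framebuffer content is limited range. */
        drmModeAtomicAddProperty(req, plane_id, in_range_prop, quant_limited);
        /* (B) Ask for full range on the wire; the driver would set up the
         * limited-to-full conversion in the pipeline. */
        drmModeAtomicAddProperty(req, conn_id, out_range_prop, quant_full);
        /* The rest of the state (MODE_ID, CRTC_ID, FB_ID, ...) is assumed
         * to be part of the same request; omitted here for brevity. */

        if (!sink_selectable) {
                /* (C)(i) Validate HW timing + output range first. */
                ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_TEST_ONLY,
                                          NULL);
                if (ret) {
                        drmModeAtomicFree(req);
                        return ret; /* pick another HW timing and retry */
                }
        }
        /* (C)(ii) Selectable sink, or validation passed: real commit. */
        ret = drmModeAtomicCommit(fd, req, 0, NULL);
        drmModeAtomicFree(req);
        return ret;
}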
Out of the three new properties mentioned above, there is another choice for property (1): instead of expecting the client to read whether the sink supports a selectable quantization range and perform the validations mentioned above when it is not selectable, how about adding new flags to drmModeModeInfo->flags and letting the DRM driver inform the client through these flags whether a given HW timing is supported with full range, limited range or both? This would avoid the validation step mentioned in (C)(i).
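If the flag approach were taken, the client-side check could be as simple as the sketch below; note that these DRM_MODE_FLAG_QUANT_* bits are entirely hypothetical and do not exist in today's UAPI:

#include <stdbool.h>
#include <xf86drmMode.h>

/* Hypothetical flag bits; not part of the current UAPI. */
#define DRM_MODE_FLAG_QUANT_FULL    (1u << 29)
#define DRM_MODE_FLAG_QUANT_LIMITED (1u << 30)

static bool mode_supports_range(const drmModeModeInfo *mode,
                                uint32_t range_flag)
{
        /* The driver would fill these in, so no TEST_ONLY probing needed. */
        return mode->flags & range_flag;
}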
Let me know what you think about the overall proposal mentioned above. If there is no strong disagreement about adding new DRM KMS properties for output quantization range (and output encoding), I'll plan to start working on the changes.
Thanks, -Yogish
If we look at I915 driver's "Broadcast RGB" property, it seems to say to me that the framebuffer is always primarily assumed to be in full-range, and the conversion to limited-range happens in the scanout circuitry. So that property would not help with video content that is already in limited-range.
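For reference, a sketch of how userspace can drive that existing "Broadcast RGB" property today; the libdrm calls and the property/enum names ("Automatic", "Full", "Limited 16:235") are real on i915, error handling is abbreviated:

#include <string.h>
#include <xf86drmMode.h>

static int set_broadcast_rgb(int fd, uint32_t connector_id,
                             const char *mode_name)
{
        drmModeObjectProperties *props =
                drmModeObjectGetProperties(fd, connector_id,
                                           DRM_MODE_OBJECT_CONNECTOR);
        int ret = -1;

        for (uint32_t i = 0; props && i < props->count_props; i++) {
                drmModePropertyRes *prop =
                        drmModeGetProperty(fd, props->props[i]);
                if (prop && strcmp(prop->name, "Broadcast RGB") == 0) {
                        /* Enum names: "Automatic", "Full", "Limited 16:235". */
                        for (int j = 0; j < prop->count_enums; j++)
                                if (strcmp(prop->enums[j].name, mode_name) == 0)
                                        ret = drmModeObjectSetProperty(
                                                fd, connector_id,
                                                DRM_MODE_OBJECT_CONNECTOR,
                                                prop->prop_id,
                                                prop->enums[j].value);
                }
                drmModeFreeProperty(prop);
        }
        drmModeFreeObjectProperties(props);
        return ret;
}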
To recap, there are two orthogonal things: the application content or framebuffer range, and the video sink / monitor range. The display server between the two, at least if it is a Wayland compositor, would be able to convert as necessary.
For how to use selectable output encoding with Weston: I was thinking that DRM should have a separate property to list the encodings supported by the sink, and Weston will present this list to its clients.
Not client. A configuration tool perhaps, but not generically to all Wayland clients, not as a directly settable knob at least.
Your idea to validate encodings using a TEST_ONLY commit and present a list of timings along with the encodings supported by each timing seems better. Instead of validating all possible encodings, does it make sense to validate only those supported by the sink? Irrespective of this, we would
Yes, having a list of what the sink actually supports would be nice.
As for Wayland clients, there is an extension brewing at https://gitlab.freedesktop.org/wayland/wayland-protocols/-/merge_requests/8 that would allow suggesting the optimal encoding (pixel format and modifier really) in flight.
That said, we are talking about the two different things here: framebuffer format vs. encoding on the wire. Whether making them match has benefits is another matter.
anyway need some mechanism which will allow the user to select a particular encoding for a particular mode. I was thinking to allow this using a new DRM property, "Encoding". Do you have anything better in mind?
I think that is a reasonable and useful goal and idea. Just remember to document it when proposing, even if it seems obvious. The details on how to formulate that into UAPI is up for debate.
As said, changing KMS properties after they have been exposed to userspace won't really work from either the kernel or the userspace point of view. So you'd probably need to expose one blob-type property listing the encodings that may work as an array, and another property for setting the one to use. The IN_FORMATS property is somewhat similar, although more complicated because it is the combination of format and modifier.
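As an illustration of the blob idea, a sketch of how a client might read such a property, assuming (purely for illustration) that the blob holds a flat array of u32 encoding identifiers; IN_FORMATS, by contrast, needs a more elaborate header-plus-arrays layout:

#include <stdint.h>
#include <stdio.h>
#include <xf86drmMode.h>

static void dump_supported_encodings(int fd, uint32_t blob_id)
{
        drmModePropertyBlobRes *blob = drmModeGetPropertyBlob(fd, blob_id);
        if (!blob)
                return;

        /* Hypothetical layout: a flat array of u32 encoding identifiers. */
        const uint32_t *enc = blob->data;
        for (uint32_t i = 0; i < blob->length / sizeof(*enc); i++)
                printf("supported encoding: %u\n", enc[i]);

        drmModeFreePropertyBlob(blob);
}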
(Since I am using my Gmail Id, I feel I should mention that I work at Nvidia)
Nice to know the source of interest. :-)
Thanks, pq
Thanks, -Yogish
On Thu, May 28, 2020 at 6:18 PM Pekka Paalanen ppaalanen@gmail.com wrote:
On Thu, 28 May 2020 17:38:59 +0530 Yogish Kulkarni yogishkulkarni@gmail.com wrote:
I am trying to find a way through Weston which will allow setting a specific encoding at the display output.
Hi,
why do *you* want to control that?
Why not let the driver always choose the highest possible encoding given the video mode and hardware capability?
I can understand userspace wanting to know what it got, but why should userspace be able to control it?
Would people want to pick the encoding first, and then go for the highest possible video mode?
Could you please elaborate on why it is best to let the DRM driver automatically configure which encoding to choose rather than making it selectable by the DRM client? I am not able to find a reference to past discussion about this. I was only able to find a proposed change, https://lists.freedesktop.org/archives/intel-gfx/2017-April/125451.html , but am not able to find why it got rejected.
Alternatively, is there an existing way through which DRM clients can specify a preference for the output encoding? Or is it currently all up to the DRM driver to choose which output encoding to use?
There must be some reason why userspace needs to be able to control it. I'm also asking as a Weston maintainer, since I'm interested in how this affects e.g. color reproduction or HDR support.
One thing that comes to my mind is using atomic TEST_ONLY commits to probe all the possible video modes × encodings for presenting a list to the user to choose from, if you have a display configuration GUI. E.g. with some TV use cases, maybe the user wants to avoid sub-sampling, use the native resolution, but limit the refresh rate to what's actually possible. Or any other combination of the three.
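A sketch of that probing loop, using the hypothetical connector "Encoding" property discussed in this thread; the CRTC/connector/property lookups and the rest of the atomic state (CRTC_ID, ACTIVE, plane setup) are assumed to be handled elsewhere:

#include <stdio.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

static void probe_modes_and_encodings(int fd, drmModeConnector *conn,
                                      uint32_t crtc_id, uint32_t mode_id_prop,
                                      uint32_t encoding_prop,
                                      const uint64_t *encodings, int n_enc)
{
        for (int m = 0; m < conn->count_modes; m++) {
                uint32_t blob_id;

                if (drmModeCreatePropertyBlob(fd, &conn->modes[m],
                                              sizeof(conn->modes[m]), &blob_id))
                        continue;

                for (int e = 0; e < n_enc; e++) {
                        drmModeAtomicReq *req = drmModeAtomicAlloc();

                        drmModeAtomicAddProperty(req, crtc_id, mode_id_prop,
                                                 blob_id);
                        drmModeAtomicAddProperty(req, conn->connector_id,
                                                 encoding_prop, encodings[e]);
                        /* TEST_ONLY: ask the driver what would work without
                         * changing anything on screen. */
                        if (drmModeAtomicCommit(fd, req,
                                                DRM_MODE_ATOMIC_TEST_ONLY |
                                                DRM_MODE_ATOMIC_ALLOW_MODESET,
                                                NULL) == 0)
                                printf("%s works with encoding %llu\n",
                                       conn->modes[m].name,
                                       (unsigned long long)encodings[e]);
                        drmModeAtomicFree(req);
                }
                drmModeDestroyPropertyBlob(fd, blob_id);
        }
}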
Thanks, pq
Thanks, -Yogish
On Thu, May 28, 2020 at 1:54 PM Daniel Vetter daniel@ffwll.ch wrote:
On Thu, May 28, 2020 at 12:29:43PM +0530, Yogish Kulkarni wrote:
For creating a new source property, is it good to follow drm_mode_create_hdmi_colorspace_property() as an example? It seems that currently there is no standard DRM property which allows a DRM client to set a specific output encoding (like YUV420, YUV422, etc.). Also, there is no standard property for letting the client select the YUV/RGB color range. I see there are two ways to introduce new properties: 1. do something like drm_mode_create_hdmi_colorspace_property, 2. create a custom property similar to "Broadcast RGB". Is there an opinion on which is the preferable way to expose the encoding and color range selection properties?
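Following the drm_mode_create_hdmi_colorspace_property() pattern, a kernel-side sketch could look like the following; the helpers used are the real kernel APIs, but the property name "Encoding" and its value list are made up for illustration, not an agreed UAPI:

#include <drm/drm_connector.h>
#include <drm/drm_property.h>

/* Hypothetical value list; names/values for illustration only. */
static const struct drm_prop_enum_list output_encoding_list[] = {
        { 0, "RGB" },
        { 1, "YCbCr444" },
        { 2, "YCbCr422" },
        { 3, "YCbCr420" },
};

int drm_mode_create_output_encoding_property(struct drm_connector *connector)
{
        struct drm_property *prop;

        prop = drm_property_create_enum(connector->dev, 0, "Encoding",
                                        output_encoding_list,
                                        ARRAY_SIZE(output_encoding_list));
        if (!prop)
                return -ENOMEM;

        drm_object_attach_property(&connector->base, prop, 0);
        return 0;
}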
I guess the first question is "why?" Thus far we've gone with the opinion that automatically configuring output stuff as much as possible is best. What's the use case where the driver can't select this? -Daniel