On Mon, 1 Jun 2020 09:22:27 +0530 Yogish Kulkarni yogishkulkarni@gmail.com wrote:
Hi,
For letting DRM clients select the output encoding: a sink can support certain display timings with high output bit-depths using multiple output encodings, e.g. the sink can support a particular timing with RGB 10-bit, YCbCr422 10-bit and YCbCr420 10-bit. So a DRM client may want to select YCbCr422 10-bit over RGB 10-bit output to reduce the link bandwidth (and in turn reduce power/voltage). If the DRM driver automatically selects the output encoding then we are restricting DRM clients from making the appropriate choice.
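(As a rough sketch of the bandwidth arithmetic in C: the 594 MHz pixel clock assumed below is the standard one for 4k@60, and link-level encoding overhead is ignored.)

#include <stdio.h>

/* Approximate raw link rate in Mbit/s for a given pixel clock and
 * encoding. Components per pixel: RGB/YCbCr444 carry 3 full-resolution
 * components, YCbCr422 effectively 2, YCbCr420 effectively 1.5. */
static double link_mbps(double pixel_clock_mhz, double bits_per_component,
                        double components_per_pixel)
{
        return pixel_clock_mhz * bits_per_component * components_per_pixel;
}

int main(void)
{
        double pclk = 594.0; /* 4k@60 pixel clock in MHz */

        printf("RGB 10-bit:      %.0f Mbit/s\n", link_mbps(pclk, 10, 3.0));
        printf("YCbCr422 10-bit: %.0f Mbit/s\n", link_mbps(pclk, 10, 2.0));
        printf("YCbCr420 10-bit: %.0f Mbit/s\n", link_mbps(pclk, 10, 1.5));
        return 0;
}

So at the same bit depth, YCbCr422 cuts the data rate by a third relative to RGB, which is the saving referred to above.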
Hi,
right, that seems to be another reason.
For selectable output color range: certain applications (typically graphics) are usually rendered in full range, while some applications (typically video) have limited-range content. Since the content can change dynamically, the DRM driver does not have enough information to choose the correct quantization. Only the DRM client can correctly select which quantization to set (to preserve the artist's intent).
Now this is an interesting topic for me. As far as I know, there is no window system protocol to tell the display server whether application-provided content is using full or limited range. This means that the display server cannot tell DRM about full vs. limited range either. It also means that when not fullscreen, the display server cannot show the limited-range video content correctly, because it would have to be converted to full range (or vice versa).
But why would an application produce limited range pixels anyway? Is it common that hardware video decoders are unable to produce full-range pixels?
I am asking, because I have a request to add limited vs. full range information to Wayland.
What about video sinks, including monitors? Are there devices that accept limited-range only, full-range only, or switchable?
Why not just always use full-range everywhere?
Or if a sink supports only limited-range, have the display chip automatically convert from full-range, so that software doesn't have to convert in software.
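(For reference, a minimal sketch of the 8-bit range mapping such a conversion would apply; the rounding is my choice.)

#include <stdint.h>

/* BT.601/BT.709 style quantization: map full range [0,255] to
 * limited range [16,235] for luma; chroma would map to [16,240]. */
static uint8_t full_to_limited_luma(uint8_t v)
{
        return (uint8_t)(16 + (v * 219 + 127) / 255);
}

/* full_to_limited_luma(0) == 16, full_to_limited_luma(255) == 235 */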
If you actually have a DRM KMS property for the range, does it mean that:
- the sink is configured to accept that range, and the pixels in the framebuffer need to comply, or
- the display chip converts to that range while the framebuffer remains in full-range?
If we look at the i915 driver's "Broadcast RGB" property, it seems to say to me that the framebuffer is always primarily assumed to be in full range, and the conversion to limited range happens in the scanout circuitry. So that property would not help with video content that is already in limited range.
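(For illustration, a minimal libdrm sketch of how userspace could flip that property; "Broadcast RGB" and its enum value names, "Automatic", "Full", "Limited 16:235", are as exposed by i915, and error handling is trimmed.)

#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Look up the connector's "Broadcast RGB" property and set it to the
 * enum value whose name matches 'name'. */
static int set_broadcast_rgb(int fd, uint32_t connector_id, const char *name)
{
        drmModeObjectProperties *props =
                drmModeObjectGetProperties(fd, connector_id,
                                           DRM_MODE_OBJECT_CONNECTOR);
        int ret = -1;

        for (uint32_t i = 0; props && i < props->count_props; i++) {
                drmModePropertyRes *prop =
                        drmModeGetProperty(fd, props->props[i]);
                if (!prop)
                        continue;
                if (strcmp(prop->name, "Broadcast RGB") == 0) {
                        for (int j = 0; j < prop->count_enums; j++) {
                                if (strcmp(prop->enums[j].name, name) == 0)
                                        ret = drmModeObjectSetProperty(fd,
                                                connector_id,
                                                DRM_MODE_OBJECT_CONNECTOR,
                                                prop->prop_id,
                                                prop->enums[j].value);
                        }
                }
                drmModeFreeProperty(prop);
        }
        drmModeFreeObjectProperties(props);
        return ret;
}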
To recap, there are two orthogonal things: the application content or framebuffer range, and the video sink / monitor range. The display server sits between the two and, at least if it is a Wayland compositor, would be able to convert as necessary.
For how to use selectable output encoding with Weston: I was thinking that DRM should have a separate property to list the encodings supported by the sink, and Weston will present this list to its clients.
Not client. A configuration tool perhaps, but not generically to all Wayland clients, not as a directly settable knob at least.
Your idea to validate encodings using a TEST_ONLY commit and present a list of timings along with the encodings supported by each timing seems better. Instead of validating all possible encodings, does it make sense to validate only those supported by the sink?
Yes, having a list of what the sink actually supports would be nice.
As for Wayland clients, there is an extension brewing at https://gitlab.freedesktop.org/wayland/wayland-protocols/-/merge_requests/8 that would allow suggesting the optimal encoding (pixel format and modifier really) in flight.
That said, we are talking about the two different things here: framebuffer format vs. encoding on the wire. Whether making them match has benefits is another matter.
Irrespective of this, we would anyway need some mechanism which allows the user to select a particular encoding for a particular mode. I was thinking to allow this using a new DRM property, "Encoding". Do you have anything better in mind?
I think that is a reasonable and useful goal and idea. Just remember to document it when proposing, even if it seems obvious. The details on how to formulate that into UAPI is up for debate.
As said, changing KMS properties after they have been exposed to userspace won't really work from either the kernel or userspace point of view. So you'd probably need to expose one blob-type property listing the encodings that may work as an array, and another property for setting the one to use. The IN_FORMATS property is somewhat similar, although more complicated because it is the combination of format and modifier.
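(A sketch of what reading such a blob could look like in userspace; the array-of-u32 layout is hypothetical here, invented just for illustration.)

#include <stdint.h>
#include <stdio.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Dump a hypothetical read-only blob property that lists the
 * supported encodings as an array of u32 encoding ids; a separate
 * settable property would then pick one of them. */
static void dump_supported_encodings(int fd, uint32_t blob_id)
{
        drmModePropertyBlobRes *blob = drmModeGetPropertyBlob(fd, blob_id);
        if (!blob)
                return;

        const uint32_t *enc = blob->data;
        for (uint32_t i = 0; i < blob->length / sizeof(*enc); i++)
                printf("supported encoding id: %u\n", enc[i]);

        drmModeFreePropertyBlob(blob);
}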
(Since I am using my Gmail Id, I feel I should mention that I work at Nvidia)
Nice to know the source of interest. :-)
Thanks, pq
Thanks, -Yogish
On Thu, May 28, 2020 at 6:18 PM Pekka Paalanen ppaalanen@gmail.com wrote:
On Thu, 28 May 2020 17:38:59 +0530 Yogish Kulkarni yogishkulkarni@gmail.com wrote:
I am trying to find a way through Weston which will allow setting a specific encoding at the display output.
Hi,
why do *you* want to control that?
Why not let the driver always choose the highest possible encoding given the video mode and hardware capability?
I can understand userspace wanting to know what it got, but why should userspace be able to control it?
Would people want to pick the encoding first, and then go for the highest possible video mode?
Could you please elaborate on why it is best to let the DRM driver automatically configure which encoding to choose rather than making it selectable by the DRM client? I am not able to find a reference to past discussion about this. I was only able to find a proposed change, https://lists.freedesktop.org/archives/intel-gfx/2017-April/125451.html, but I am not able to find why it got rejected.
Alternatively, is there an existing way through which DRM clients can specify a preference for output encoding? Or is it currently all up to the DRM driver to choose which output encoding to use?
There must be some reason why userspace needs to be able to control it. I'm also asking as a Weston maintainer, since I'm interested in how this affects e.g. color reproduction or HDR support.
One thing that comes to my mind is using atomic TEST_ONLY commits to probe all the possible video modes × encodings, for presenting a list to the user to choose from, if you have a display configuration GUI. E.g. with some TV use cases, maybe the user wants to avoid sub-sampling, use the native resolution, but limit the refresh rate to what's actually possible. Or any other combination of the three.
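(A minimal sketch of one such probe with libdrm's atomic API; the "Encoding" connector property here is the hypothetical one discussed above, and error handling is trimmed.)

#include <stdbool.h>
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Test whether a mode + encoding combination would be accepted,
 * without touching the hardware. mode_blob_id is a blob created with
 * drmModeCreatePropertyBlob() from the drmModeModeInfo. */
static bool probe_mode_encoding(int fd, uint32_t crtc_id,
                                uint32_t mode_id_prop, uint32_t mode_blob_id,
                                uint32_t connector_id,
                                uint32_t encoding_prop, uint64_t encoding_val)
{
        drmModeAtomicReq *req = drmModeAtomicAlloc();
        int ret;

        drmModeAtomicAddProperty(req, crtc_id, mode_id_prop, mode_blob_id);
        drmModeAtomicAddProperty(req, connector_id, encoding_prop,
                                 encoding_val);

        ret = drmModeAtomicCommit(fd, req,
                                  DRM_MODE_ATOMIC_TEST_ONLY |
                                  DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);
        drmModeAtomicFree(req);
        return ret == 0;
}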
Thanks, pq
Thanks, -Yogish
On Thu, May 28, 2020 at 1:54 PM Daniel Vetter daniel@ffwll.ch wrote:
On Thu, May 28, 2020 at 12:29:43PM +0530, Yogish Kulkarni wrote:
For creating a new source property, is it good to follow "drm_mode_create_hdmi_colorspace_property()" as an example? It seems that currently there is no standard DRM property which allows a DRM client to set a specific output encoding (like YUV420, YUV422 etc.). Also, there is no standard property for letting the client select the YUV/RGB color range. I see there are two ways to introduce new properties: 1. do something like drm_mode_create_hdmi_colorspace_property, 2. create a custom property similar to "Broadcast RGB". Is there an opinion on which is the preferable way to expose the encoding and color range selection properties?
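(For illustration, a kernel-side sketch of option 2, a custom connector enum property in the style of "Broadcast RGB"; the name "Encoding" and its value list are hypothetical.)

#include <linux/errno.h>
#include <linux/kernel.h>
#include <drm/drm_connector.h>
#include <drm/drm_property.h>

static const struct drm_prop_enum_list encoding_list[] = {
        { 0, "RGB" },
        { 1, "YCbCr444" },
        { 2, "YCbCr422" },
        { 3, "YCbCr420" },
};

/* Create the enum property and attach it to the connector with a
 * default value of 0 (RGB). */
static int attach_encoding_property(struct drm_connector *connector)
{
        struct drm_property *prop;

        prop = drm_property_create_enum(connector->dev, 0, "Encoding",
                                        encoding_list,
                                        ARRAY_SIZE(encoding_list));
        if (!prop)
                return -ENOMEM;

        drm_object_attach_property(&connector->base, prop, 0);
        return 0;
}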
I guess the first question is "why?" Thus far we've gone with the opinion that automatically configuring output stuff as much as possible is best. What's the use-case where the driver can't select this?
-Daniel