On Thu, 17 Jun 2021 00:05:24 +0300 Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote:
On Tue, Jun 15, 2021 at 01:16:56PM +0300, Pekka Paalanen wrote:
On Tue, 15 Jun 2021 12:45:57 +0300 Laurent Pinchart wrote:
On Tue, Jun 15, 2021 at 07:15:18AM +0000, Simon Ser wrote:
On Tuesday, June 15th, 2021 at 09:03, Pekka Paalanen wrote:
Indeed it will, but what else could one do to test userspace KMS clients in generic CI, where all you can have is virtual hardware? Maybe in the long run VKMS needs to loop back to a userspace daemon that implements all the complex processing and returns the writeback result via VKMS again? That daemon would then need a single upstream, like the kernel, where it is maintained and its correctness verified.
The complex processing must be implemented even without write-back, because user-space can ask for CRCs of the CRTC.
Or an LD_PRELOAD shim that hijacks all KMS ioctls and implements the virtual stuff in userspace? Didn't someone already have something like that? It would need to be elevated into a required part of kernel UAPI submissions, much like IGT is nowadays.
FWIW, I have a mock libdrm [1] for libliftoff. It is nowhere near a full software implementation with writeback connectors, but it allows exposing virtual planes and checking atomic commits in CI.
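A mock at the libdrm boundary can stay tiny and still catch real client bugs. Here is a minimal sketch of the idea, in Python rather than C for brevity; `MockDevice` and its methods are hypothetical illustrations, not the libliftoff mock's actual interface:

```python
class MockDevice:
    """Toy stand-in for a mocked KMS device: it advertises a fixed
    number of virtual planes and rejects atomic commits that try to
    use more. (Hypothetical illustration, not the libliftoff mock's
    actual API.)"""

    def __init__(self, num_planes):
        self.num_planes = num_planes

    def test_commit(self, plane_ids):
        # A real mock would also validate formats, zpos, CRTC
        # assignment, etc.; plane count is enough to show the idea.
        return len(set(plane_ids)) <= self.num_planes


dev = MockDevice(num_planes=2)
assert dev.test_commit([1, 2])         # fits the virtual hardware
assert not dev.test_commit([1, 2, 3])  # over-commits, rejected
```

Even this much lets a compositor's plane-assignment logic run in CI with no hardware at all.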
For compositor developers like me, knowing the exact formulas would be a huge benefit, as it would allow me to use KMS to off-load precision-sensitive operations (e.g. professional color management). Otherwise, compositors probably need a switch: "high-quality color management? Then do not use KMS features."
I think alpha blending already has rounding issues that depend on the hardware. I wouldn't get my hopes up for any guarantee that all hardware uses the exact same formulae for color management stuff.
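To make the rounding concern concrete, here is a sketch (assumed behaviours, not any specific vendor's): two equally plausible fixed-point implementations of an 8-bit "source over" blend that disagree on many inputs, which is why a bit-exact CRC comparison across hardware cannot be expected to match:

```python
def blend_trunc(src, dst, alpha):
    # 8-bit blend, truncating the fixed-point result
    # (one plausible hardware behaviour)
    return (src * alpha + dst * (255 - alpha)) // 255

def blend_round(src, dst, alpha):
    # same blend, rounding to nearest
    # (another plausible hardware behaviour)
    return (src * alpha + dst * (255 - alpha) + 127) // 255

# Count how often the two strategies disagree at 50% alpha.
mismatches = sum(
    1
    for s in range(256)
    for d in range(256)
    if blend_trunc(s, d, 128) != blend_round(s, d, 128)
)
```

For example, blending src=1 over dst=0 at alpha=128 yields 0 with truncation but 1 with round-to-nearest; the difference is only one code value, but it breaks any bit-exact expectation.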
Good, because otherwise you would be very quickly disappointed :-)
For scaling we would also need to replicate the exact same filter taps, which are often not documented.
That is where the documented tolerances come into play.
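The comparison itself is trivial to express; the hard part is choosing the tolerance. A sketch with a hypothetical helper (not an IGT API), comparing flat per-channel buffers:

```python
def within_tolerance(expected, actual, tol):
    """Return True if every channel value in `actual` is within `tol`
    code values of the corresponding value in `expected`.
    (Hypothetical helper, not an IGT API.)"""
    return len(expected) == len(actual) and all(
        abs(e - a) <= tol for e, a in zip(expected, actual)
    )


# A tolerance of 1 absorbs benign rounding differences...
assert within_tolerance([10, 20, 30], [11, 19, 30], tol=1)
# ...but a large tolerance also absorbs real bugs: a 4-code-value
# error passes just as easily once tol is cranked up.
assert within_tolerance([10, 20, 30], [14, 20, 30], tol=5)
```

Every time the tolerance is raised to accommodate one more input image, the set of real defects the test can still detect shrinks.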
This is something I experimented with a while ago, when developing automated tests for the rcar-du driver. When playing with different input images we had to constantly increase the tolerances, up to the point where the tests started to miss real problems :-(
What should we infer from that? That the hardware is broken and exposing those KMS properties is a false promise?
If a driver on certain hardware cannot correctly implement a KMS property over the full domain of the input space, should that driver then simply not expose the KMS property at all?
But I would assume that the vendor still wants to expose the features in upstream kernels, yet they cannot use the standard KMS properties for that. Should the driver then expose vendor-specific properties with the disclaimer that the result is not always what one would expect, so that userspace written and tested explicitly for that hardware can still work?
That is, a sufficient justification for a vendor-specific KMS property would be that a standard property already exists, but the hardware is too buggy to make it work. IOW, give up trying to make sense.
I would like to move towards a direction where *hardware* design and testing is eventually guided by Linux KMS property definitions and their tests. If we could have a rule that if a driver cannot correctly implement a property then it must not expose the property, maybe in the long term that might start having an effect?
My underlying assumption is that generic userspace will not use vendor-specific properties.
Or, since we have atomic commits with TEST_ONLY, should it be the driver's responsibility to carefully inspect the full state and reject the commit if the hardware is incapable of implementing it correctly? Vendor-specific userspace would know to avoid failing configurations to begin with. I suppose that might impose an endless whack-a-mole game on drivers, though.
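The userspace side of that contract is the usual probe-then-fallback pattern: try the state with a TEST_ONLY commit, and compose on the GPU if the driver rejects it. A sketch in which all three callables are hypothetical stand-ins for the real libdrm atomic-commit calls (the real flag is DRM_MODE_ATOMIC_TEST_ONLY):

```python
def apply_or_fallback(test_commit, commit, compose_with_gpu):
    """Probe the configuration with a TEST_ONLY commit first; if the
    driver rejects it, render the frame ourselves instead. All three
    callables are hypothetical stand-ins, not libdrm API."""
    if test_commit():            # atomic commit with the TEST_ONLY flag
        return commit()          # driver claims it can do this correctly
    return compose_with_gpu()    # fall back to GPU composition


# If the probe succeeds, the KMS path is used:
assert apply_or_fallback(lambda: True, lambda: "kms", lambda: "gpu") == "kms"
# If the driver rejects the state, we fall back:
assert apply_or_fallback(lambda: False, lambda: "kms", lambda: "gpu") == "gpu"
```

The whack-a-mole risk is entirely on the `test_commit` side: the pattern only helps if drivers reliably say no to states they cannot implement correctly.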
Thanks, pq