Hi
On Fri, Dec 4, 2015 at 9:07 AM, Daniel Vetter daniel@ffwll.ch wrote:
On Thu, Dec 03, 2015 at 07:26:31PM +0200, Martin Peres wrote:
You are right, Ilia, this is indeed what Jaakko and I had in mind, though they did not reuse the fuse/cuse framework to serialize the ioctls.
Not sure what we can do to keep proprietary drivers from using this feature though :s To be fair, nothing prevents any vendor from writing such a shim themselves, and nvidia definitely did: they called directly into their closed-source driver.
Any suggestions on how to handle this case? I guess we could limit it to displays only, with no rendering. That would keep any serious GPU manufacturer from using this code, even though no sane person would ever write a driver in userspace anyway...
Hm, for virtual devices like this I figured there's no point in exporting the full KMS API to userspace; instead we'd just need a simple KMS driver with just 1 crtc and 1 connector per drm_device. Plus a special device node (v4l is probably inappropriate since it doesn't do damage) where the miracast userspace can receive events with just the following information:
- virtual screen size
- fd to the underlying shmem node for the current fb. Or maybe a dma-buf (but then we'd need the dma-buf mmap stuff to land first).
- damage tracking
If we want to get fancy, we could allow userspace to reply (through an ioctl) when it's done reading the previous image, which the kernel could then forward as vblank complete events.
Connector configuration could be done by forcing the outputs (we send out uevents for that nowadays), so the only thing we need is some configfs to instantiate new copies of this.
At least for miracast (as opposed to full-blown hw drivers in userspace) I don't think we need to export everything.
I looked into all this when working on WFD, but I cannot recommend going down that road. First of all, you still need heavy modifications to gnome-shell, kwin, and friends, as none of them supports seamless drm-device hotplugging; providing more devices than the main GPU just confuses them. Secondly, you really don't win much by reusing DRM for all of this. On the contrary, you get very heavy overhead, have to feed everything through limited ioctl interfaces, and have to fake DRM crtcs/encoders/connectors, when all you really have is an mpeg stream.
I wouldn't mind if anyone writes a virtual DRM interface, it'd be really great for automated testing. However, if you want your wifi-display (or whatever else) integrated into desktop environments, then I recommend teaching those environments to accept gstreamer sinks as outputs.
Thanks,
David