Hello,
We're developing Miracast (essentially HDMI over a wireless connection). The current state is that it 'works' in userspace, but it has no integration with X/Wayland and can only mirror the current desktop using gstreamer.
We're looking into extending the implementation so that remote screens can be used just like any other connected screen, but we're not quite sure where to implement this.
The DRM interface seems like the perfect fit since we wouldn't need to patch every compositor.
Right now, gstreamer is the equivalent of the crtc/encoder in the DRM model. Screens/crtcs are discovered using the Wi-Fi P2P protocol, which means that screens need to be hotpluggable. Since we cannot change the number of crtcs of a driver on the fly, we propose adding and removing GPUs, each with one crtc attached and no rendering capabilities.
Compositors and X currently use udev to list GPUs and to get run-time events for GPU hot-plugging (see the work from Dave Airlie on USB GPUs, using the modesetting X driver). We did not find a way to tell udev that we have a new device; the only way to get it to pick up our driver seems to be a uevent, which can only be generated from the kernel.
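For reference, the compositor side of this is essentially a libudev monitor along the following lines (a minimal sketch, error handling omitted):

#include <libudev.h>
#include <poll.h>
#include <stdio.h>

int main(void)
{
    struct udev *udev = udev_new();
    struct udev_monitor *mon = udev_monitor_new_from_netlink(udev, "udev");

    /* the filter compositors install to see DRM device hotplug events */
    udev_monitor_filter_add_match_subsystem_devtype(mon, "drm", NULL);
    udev_monitor_enable_receiving(mon);

    struct pollfd pfd = { .fd = udev_monitor_get_fd(mon), .events = POLLIN };
    while (poll(&pfd, 1, -1) > 0) {
        struct udev_device *dev = udev_monitor_receive_device(mon);
        if (!dev)
            continue;
        printf("%s: %s\n", udev_device_get_action(dev),
               udev_device_get_sysname(dev));
        udev_device_unref(dev);
    }
    return 0;
}

A new /dev/dri/cardX would show up here as an "add" event; the catch, as said above, is that only the kernel can generate that uevent.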
Since we have so many userspace components, it doesn't make sense to implement the entire driver in the kernel.
We would thus need some form of communication from kernel space to userspace, at least to send the flip commands to the fake crtc. And since we need that anyway, why not implement everything in userspace and simply redirect the ioctls to the userspace driver?
This is exactly what fuse/cuse [1] does, with the minor catch that it creates devices in /sys/class/cuse instead of drm. This prevents Wayland compositors and X from picking it up as a normal drm driver...
We would thus need the drm subsystem to create the device nodes for us when userspace wants to create a new GPU. We could create a node named /dev/dri/cuse_card that, when opened, would allocate a node (/dev/dri/cardX) and use cuse/fuse to redirect its ioctls to the process that opened /dev/dri/cuse_card.
That process would then be responsible for decoding the ioctls and implementing the drm API.
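To make the proposed flow concrete, the userspace driver's main loop could look roughly like this. Everything here is hypothetical: /dev/dri/cuse_card does not exist, and the forwarding header is invented purely for illustration:

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/* Hypothetical wire format: the kernel would forward every ioctl issued
 * on the allocated /dev/dri/cardX to whoever holds cuse_card open. */
struct fwd_ioctl_hdr {
    uint32_t cmd;      /* e.g. DRM_IOCTL_MODE_GETRESOURCES */
    uint32_t payload;  /* size of the serialized argument that follows */
};

int main(void)
{
    /* opening this (hypothetical) node would make the kernel register a
     * new /dev/dri/cardX and emit the uevent compositors wait for */
    int fd = open("/dev/dri/cuse_card", O_RDWR);
    if (fd < 0)
        return 1;

    struct fwd_ioctl_hdr hdr;
    while (read(fd, &hdr, sizeof(hdr)) == sizeof(hdr)) {
        /* read hdr.payload bytes, decode hdr.cmd, implement the DRM
         * call in userspace (gstreamer does the actual scanout), then
         * write the reply back through fd */
    }

    close(fd);
    return 0;
}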
Since this is a major change, one that would also allow proprietary drivers to be implemented in userspace, and since we may have missed something obvious, we would like to start a discussion on this. What are your thoughts?
Hi Jaakko,
On Thursday 03 December 2015 14:42:51 Hannikainen, Jaakko wrote:
[...]
Since this is a major change, one that would also allow proprietary drivers to be implemented in userspace, and since we may have missed something obvious, we would like to start a discussion on this. What are your thoughts?
As you raise the issue, how would you prevent proprietary userspace drivers from being implemented? Anything that would allow vendors to destroy the Linux graphics ecosystem would receive a big NACK from me.
On Thu, Dec 3, 2015 at 10:34 AM, Laurent Pinchart laurent.pinchart@ideasonboard.com wrote:
Hi Jaakko,
[...]
As you raise the issue, how would you prevent proprietary userspace drivers from being implemented? Anything that would allow vendors to destroy the Linux graphics ecosystem would receive a big NACK from me.
AFAIK the displaylink people already have precisely such a driver -- an (open-source) kernel module that allows their (closed-source) userspace blob to present a drm node and pass through modesetting/etc ioctls.
-ilia
On Thursday 03 December 2015 10:42:50 Ilia Mirkin wrote:
On Thu, Dec 3, 2015 at 10:34 AM, Laurent Pinchart
laurent.pinchart@ideasonboard.com wrote:
[...]
As you raise the issue, how would you prevent proprietary userspace drivers from being implemented? Anything that would allow vendors to destroy the Linux graphics ecosystem would receive a big NACK from me.
AFAIK the displaylink people already have precisely such a driver -- an (open-source) kernel module that allows their (closed-source) userspace blob to present a drm node and pass through modesetting/etc ioctls.
Are you talking about the drivers/gpu/drm/udl/ driver? I might be wrong, but I'm not aware of that kernel driver requiring a closed-source userspace blob.
On Thu, Dec 3, 2015 at 10:53 AM, Laurent Pinchart laurent.pinchart@ideasonboard.com wrote:
On Thursday 03 December 2015 10:42:50 Ilia Mirkin wrote:
[...]
As you raise the issue, how would you prevent proprietary userspace drivers from being implemented? Anything that would allow vendors to destroy the Linux graphics ecosystem would receive a big NACK from me.
AFAIK the displaylink people already have precisely such a driver -- an (open-source) kernel module that allows their (closed-source) userspace blob to present a drm node and pass through modesetting/etc ioctls.
Are you talking about the drivers/gpu/drm/udl/ driver? I might be wrong, but I'm not aware of that kernel driver requiring a closed-source userspace blob.
Nope. That driver only works for their USB2 parts. This is what I mean:
https://github.com/DisplayLink/evdi
http://support.displaylink.com/knowledgebase/articles/679060
http://support.displaylink.com/knowledgebase/articles/615714#ubuntu
-ilia
Hi Ilia,
On Thursday 03 December 2015 11:03:28 Ilia Mirkin wrote:
On Thu, Dec 3, 2015 at 10:53 AM, Laurent Pinchart wrote:
[...]
AFAIK the displaylink people already have precisely such a driver -- an (open-source) kernel module that allows their (closed-source) userspace blob to present a drm node and pass through modesetting/etc ioctls.
Are you talking about the drivers/gpu/drm/udl/ driver? I might be wrong, but I'm not aware of that kernel driver requiring a closed-source userspace blob.
Nope. That driver only works for their USB2 parts. This is what I mean:
https://github.com/DisplayLink/evdi
http://support.displaylink.com/knowledgebase/articles/679060
http://support.displaylink.com/knowledgebase/articles/615714#ubuntu
Right. That's out-of-tree; people are free to screw up on their own there ;-)
On Thu, Dec 3, 2015 at 11:10 AM, Laurent Pinchart laurent.pinchart@ideasonboard.com wrote:
Hi Ilia,
On Thursday 03 December 2015 11:03:28 Ilia Mirkin wrote:
[...]
AFAIK the displaylink people already have precisely such a driver -- an (open-source) kernel module that allows their (closed-source) userspace blob to present a drm node and pass through modesetting/etc ioctls.
Are you talking about the drivers/gpu/drm/udl/ driver? I might be wrong, but I'm not aware of that kernel driver requiring a closed-source userspace blob.
Nope. That driver only works for their USB2 parts. This is what I mean:
https://github.com/DisplayLink/evdi
http://support.displaylink.com/knowledgebase/articles/679060
http://support.displaylink.com/knowledgebase/articles/615714#ubuntu
Right. That's out-of-tree; people are free to screw up on their own there ;-)
Sure, but it's identical to Jaakko's proposal from what I can (quickly) tell. And it's an example of someone taking an interface like that and writing a proprietary driver on top.
-ilia
On 03/12/15 18:38, Ilia Mirkin wrote:
On Thu, Dec 3, 2015 at 11:10 AM, Laurent Pinchart laurent.pinchart@ideasonboard.com wrote:
[...]
AFAIK the displaylink people already have precisely such a driver -- an (open-source) kernel module that allows their (closed-source) userspace blob to present a drm node and pass through modesetting/etc ioctls.
Are you talking about the drivers/gpu/drm/udl/ driver? I might be wrong, but I'm not aware of that kernel driver requiring a closed-source userspace blob.
Nope. That driver only works for their USB2 parts. This is what I mean:
https://github.com/DisplayLink/evdi
http://support.displaylink.com/knowledgebase/articles/679060
http://support.displaylink.com/knowledgebase/articles/615714#ubuntu
Right. That's out-of-tree; people are free to screw up on their own there ;-)
Sure, but it's identical to Jaakko's proposal from what I can (quickly) tell. And it's an example of someone taking an interface like that and writing a proprietary driver on top.
-ilia
You are right Ilia, this is indeed what Jaakko and I had in mind, except that they did not re-use the fuse/cuse framework for serializing the ioctls.
Not sure what we can do to prevent proprietary drivers from using this feature though :s To be fair, nothing prevents a vendor from writing such a shim themselves, and nvidia definitely did it, calling directly into their closed-source driver.
Any suggestions on how to handle this case? I guess we could limit it to screens only, with no rendering. That would block any serious GPU manufacturer from using this code, even though no sane person would ever write a full GPU driver in userspace...
On Thu, Dec 03, 2015 at 07:26:31PM +0200, Martin Peres wrote:
[...]
You are right Ilia, this is indeed what Jaakko and I had in mind, except that they did not re-use the fuse/cuse framework for serializing the ioctls.
Not sure what we can do to prevent proprietary drivers from using this feature though :s To be fair, nothing prevents a vendor from writing such a shim themselves, and nvidia definitely did it, calling directly into their closed-source driver.
Any suggestions on how to handle this case? I guess we could limit it to screens only, with no rendering. That would block any serious GPU manufacturer from using this code, even though no sane person would ever write a full GPU driver in userspace...
Hm, for virtual devices like this I figured there's no point exporting the full kms api to userspace, but instead we'd just need a simple kms driver with just 1 crtc and 1 connector per drm_device. Plus a special device node (v4l is probably inappropriate since it doesn't do damage) where the miracast userspace can receive events with just the following information:
- virtual screen size
- fd to the underlying shmem node for the current fb. Or maybe a dma-buf (but then we'd need the dma-buf mmap stuff to land first).
- damage tracking
If we want fancy, we could allow userspace to reply (through an ioctl) when it's done reading the previous image, which the kernel could then forward as vblank complete events.
Connector configuration could be done by forcing the outputs (we send out uevents for that nowadays), so the only thing we need is some configfs to instantiate new copies of this.
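To make the idea concrete, the uapi for such a node could look roughly like this. Every name and ioctl number below is invented for illustration; nothing like it exists:

#include <linux/ioctl.h>
#include <linux/types.h>

/* Hypothetical per-output virtual node: the miracast process polls it
 * and fetches frame metadata with an ioctl. */
struct vout_frame {
    __u32 width, height;   /* virtual screen size */
    __s32 fb_fd;           /* fd to the shmem node (or dma-buf)
                              backing the current fb */
    __u32 num_damage;      /* damage tracking */
    struct {
        __u32 x1, y1, x2, y2;
    } damage[16];
};

/* fetch metadata for the frame currently being scanned out */
#define VOUT_IOC_GET_FRAME   _IOR('V', 0x00, struct vout_frame)
/* tell the kernel we're done reading, so it can forward a
 * vblank-complete event to the compositor side */
#define VOUT_IOC_FRAME_DONE  _IO('V', 0x01)

A new instance would then be created through configfs, e.g. with a mkdir somewhere under /sys/kernel/config/ (again, purely illustrative).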
At least for miracast (as opposed to full-blown hw drivers in userspace) I don't think we need to export everything.

Cheers,
Daniel
Sorry if this is completely off-topic, but could this be useful for high-performance screen recording as well? That seems to be all the rage on Windows these days, with software like OBS (+AMD VCE support), Nvidia ShadowPlay (also HW-accelerated encoding), Xsplit, Twitch, etc.
Regards //Ernst
2015-12-04 9:07 GMT+01:00 Daniel Vetter daniel@ffwll.ch:
[...]
On 04/12/15 10:07, Daniel Vetter wrote:
Hm for virtual devices like this I figured there's no point exporting the full kms api to userspace, but instead we'd just need a simple kms driver with just 1 crtc and 1 connector per drm_device.
Yes, we do not need anything more. But don't forget the requirement that we should be able to hotplug new GPUs when new screens become available (there may be more than one). We thus need a node that, when opened, creates a "screen" node that X and Wayland compositors will see as a normal GPU (cardX?). One userspace process would likely control all the miracast screens.
Plus a special device node (v4l is probably inappropriate since it doesn't do damage) where the miracast userspace can receive events with just the following information:
Not sure it is a good idea, as it would force compositors to learn about miracast, which should not be necessary.
- virtual screen size
- fd to the underlying shmem node for the current fb. Or maybe a dma-buf (but then we'd need the dma-buf mmap stuff to land first).
Darn it, I was sure this had already landed. I guess it is OK as long as we expose a GEM interface that allows us to import the dma-buf into a GEM buffer, which we would then mmap through the usual API. Buffer allocation is not necessary though.
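As an aside, with the dma-buf mmap work mentioned above, the consumer side could be as simple as mapping the fd directly and bracketing CPU reads with the proposed sync ioctl. A sketch, assuming that interface lands in its proposed form:

#include <linux/dma-buf.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <stddef.h>

static int read_frame(int buf_fd, size_t size)
{
    void *map = mmap(NULL, size, PROT_READ, MAP_SHARED, buf_fd, 0);
    if (map == MAP_FAILED)
        return -1;

    /* bracket CPU access so the exporter can flush/invalidate caches */
    struct dma_buf_sync sync = {
        .flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_READ,
    };
    ioctl(buf_fd, DMA_BUF_IOCTL_SYNC, &sync);

    /* ... hand the pixels in 'map' to the encoder here ... */

    sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_READ;
    ioctl(buf_fd, DMA_BUF_IOCTL_SYNC, &sync);

    munmap(map, size);
    return 0;
}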
- damage tracking
If we want fancy, we could allow userspace to reply (through an ioctl) when it's done reading the previous image, which the kernel could then forward as vblank complete events.
Sounds good :)
Connector configuration could be done by forcing the outputs (we send out uevents for that nowadays), so the only thing we need is some configfs to instantiate new copies of this.
Are you suggesting hotplugging connectors instead of GPUs? Not sure if compositors will like that :s
At least for miracast (as opposed to full-blown hw drivers in userspace) I don't think we need to export everything.
We indeed do not need to export anything related to rendering!
Thanks for your feedback Daniel!
Martin
Hi
On Fri, Dec 4, 2015 at 9:07 AM, Daniel Vetter daniel@ffwll.ch wrote:
[...]
I looked into all this when working on WFD, but I cannot recommend going down that road. First of all, you still need heavy modifications for gnome-shell, kwin, and friends, as none of them supports seamless drm-device hotplugging. Hence, providing more devices than the main GPU just confuses them. Secondly, you really don't win much by re-using DRM for all that. On the contrary, you get very heavy overhead, need to feed all of it through limited ioctl interfaces, and fake DRM crtcs/encoders/connectors, when all you really have is an MPEG stream.
I wouldn't mind if anyone wrote a virtual DRM interface; it'd be really great for automated testing. However, if you want your wifi-display (or whatever else) integrated into desktop environments, then I recommend teaching those environments to accept gstreamer sinks as outputs.
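To make that suggestion concrete: the sink could be an ordinary gstreamer pipeline that the compositor feeds rendered frames into. A rough sketch (the pipeline string, address, and port are illustrative only, not a working Miracast sender):

#include <gst/gst.h>

int main(int argc, char **argv)
{
    gst_init(&argc, &argv);

    /* compositor-owned pipeline: raw frames in, RTP/MPEG-TS out */
    GstElement *pipe = gst_parse_launch(
        "appsrc name=comp is-live=true format=time ! videoconvert ! "
        "x264enc tune=zerolatency ! mpegtsmux ! rtpmp2tpay ! "
        "udpsink host=192.168.173.1 port=19000", NULL);
    GstElement *src = gst_bin_get_by_name(GST_BIN(pipe), "comp");

    gst_element_set_state(pipe, GST_STATE_PLAYING);

    /* for each rendered frame, the compositor would wrap its pixels in
     * a GstBuffer and call gst_app_src_push_buffer() on 'src'; a real
     * sender would also run a GMainLoop here */

    gst_object_unref(src);
    return 0;
}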
Thanks David
On 08/12/15 13:59, David Herrmann wrote:
[...]
I looked into all this when working on WFD, but I cannot recommend going down that road. First of all, you still need heavy modifications for gnome-shell, kwin, and friends, as none of them supports seamless drm-device hotplugging.
That would still be needed for USB GPUs though. It seems metacity had no problems with it back in 2011, but I have no idea how heavily patched it was: https://www.youtube.com/watch?v=g54y80blzRU
Airlied?
Hence, providing more devices than the main GPU just confuses them. Secondly, you really don't win much by re-using DRM for all that. On the contrary, you get very heavy overhead, need to feed all of it through limited ioctl interfaces, and fake DRM crtcs/encoders/connectors, when all you really have is an MPEG stream.
The overhead is only at init time; is that really relevant? The only added cost would then be the page-flip ioctl, which again hardly matters since it happens at most 60 times per second on typical monitors.
I wouldn't mind if anyone wrote a virtual DRM interface; it'd be really great for automated testing. However, if you want your wifi-display (or whatever else) integrated into desktop environments, then I recommend teaching those environments to accept gstreamer sinks as outputs.
That is a fair proposal, but it requires a lot more work from compositors than waiting for drm udev events and reusing all the existing DRM infrastructure to drive the new type of display.
I guess there are benefits to being able to output to a gstreamer backend, but the drm driver we propose could do just that without requiring a lot of new code, especially since that code is already needed for handling USB GPUs. Moreover, the gstreamer backend would not be registered as a screen by X, which means games might not be able to go fullscreen on that screen alone.
I am open to the idea of having compositors render to a gstreamer backend, but I have never worked with gstreamer myself, so I have no idea how well suited it is to output management (resolution, refresh rate), and there is the added difficulty that the X model does not work well with this approach. We will have a look at it though.
Martin
Hi
On Tue, Dec 8, 2015 at 5:39 PM, Martin Peres martin.peres@free.fr wrote:
On 08/12/15 13:59, David Herrmann wrote:
I looked into all this when working on WFD, but I cannot recommend going down that road. First of all, you still need heavy modifications for gnome-shell, kwin, and friends, as none of them supports seamless drm-device hotplugging.
That would still be needed for USB GPUs though. It seems metacity had no problems with it back in 2011, but I have no idea how heavily patched it was: https://www.youtube.com/watch?v=g54y80blzRU
Airlied?
Yes, Xorg has offload-sinks. But if you target Xorg, then you can just as well implement userspace sinks in Xorg, and you're done. Given that you talk about "compositors" here, I assume you're targeting Wayland compositors; otherwise, there is really nothing to implement in Gnome and friends to make external displays work, and supporting it in Xorg would be enough.
Long story short: offload-sinks like UDL only work properly if you use Xorg (if my information is outdated, please correct me, but I haven't seen any multi-display-controller support in clutter or kwin or even weston).
Hence, providing more devices than the main GPU just confuses them. Secondly, you really don't win much by re-using DRM for all that. On the contrary, you get very heavy overhead, need to feed all of it through limited ioctl interfaces, and fake DRM crtcs/encoders/connectors, when all you really have is an MPEG stream.
The overhead is only at init time; is that really relevant? The only added cost would then be the page-flip ioctl, which again hardly matters since it happens at most 60 times per second on typical monitors.
This is not so much about overhead as about API constraints. Putting this stuff into the kernel just places arbitrary constraints on your implementation, with no real gain.
I wouldn't mind if anyone wrote a virtual DRM interface; it'd be really great for automated testing. However, if you want your wifi-display (or whatever else) integrated into desktop environments, then I recommend teaching those environments to accept gstreamer sinks as outputs.
That is a fair proposal, but it requires a lot more work from compositors than waiting for drm udev events and reusing all the existing DRM infrastructure to drive the new type of display.
This is not true. Again, I haven't seen multi-display support in any major compositor but Xorg (and even for Xorg I'm not entirely sure they support _fully_ independent display drivers, but airlied should know more).
I guess there are benefits to being able to output to a gstreamer backend, but the drm driver we propose could do just that without requiring a lot of new code, especially since that code is already needed for handling USB GPUs. Moreover, the gstreamer backend would not be registered as a screen by X, which means games might not be able to go fullscreen on that screen alone.
I am open to the idea of having compositors render to a gstreamer backend, but I have never worked with gstreamer myself, so I have no idea how well suited it is to output management (resolution, refresh rate), and there is the added difficulty that the X model does not work well with this approach. We will have a look at it though.
As I said earlier, I'm not opposed to a virtual DRM driver; I'm just saying that you should not expect it to work out of the box in any major compositor. I spent a significant amount of time hacking on this, and my recommendation is to do it all in userspace instead. It'll be less work.
Thanks David
On 08/12/15 19:24, David Herrmann wrote:
Hi
[...]
Yes, Xorg has offload-sinks. But if you target Xorg, then you can just as well implement userspace sinks in Xorg, and you're done. Given that you talk about "compositors" here, I assume you're targeting Wayland compositors; otherwise, there is really nothing to implement in Gnome and friends to make external displays work, and supporting it in Xorg would be enough.
We would like a solution that works for as many display systems as possible; X and Wayland are of course the main goals. SurfaceFlinger support would be nice too, but I have no idea how it works.
So, we tested the following case: 3 GPUs (Intel, Nouveau, Nouveau), 3 screens, each connected to a different GPU, with screen 0 using Intel. Then we ran:

xrandr --setprovideroutputsource 1 Intel
xrandr --setprovideroutputsource 2 Intel
And we got all 3 screens exposed by screen 0. xrandr --auto then did exactly what it was supposed to do. So, for the X case, there is nothing more to do than run the setprovideroutputsource xrandr command after creating the node to add the new miracast screen.
Now that I think of it, we did not try this with the modesetting driver, but we could always add support for it.
Long story short: offload-sinks like UDL only work properly if you use Xorg (if my information is outdated, please correct me, but I haven't seen any multi-display-controller support in clutter or kwin or even weston).
They will have to be fixed at some point anyway if they want to support USB GPUs and Optimus (and miracast?). So why require them to add miracast-specific code?
Dave, what did you do to make it work automatically on metacity?
That is a fair proposal, but it requires a lot more work from compositors than waiting for drm udev events and reusing all the existing DRM infrastructure to drive the new type of display.
This is not true. Again, I haven't seen multi-display support in any major compositor but Xorg (and even for Xorg I'm not entirely sure they support _fully_ independent display drivers, but airlied should know more).
Sounds about right, but as we said before, there are other important cases, Optimus being the most important one, that require this support anyway. So why not ride on that for the less-than-usual case which is miracast?
I guess there are benefits to being able to output to a gstreamer backend, but the drm driver we propose could do just that without requiring a lot of new code, especially since that code is already needed for handling USB GPUs. Moreover, the gstreamer backend would not be registered as a screen by X, which means games might not be able to go fullscreen on that screen alone.
I am open to the idea of having compositors render to a gstreamer backend, but I have never worked with gstreamer myself, so I have no idea how well suited it is to output management (resolution, refresh rate), and there is the added difficulty that the X model does not work well with this approach. We will have a look at it though.
As I said earlier, I'm not opposed to a virtual DRM driver; I'm just saying that you should not expect it to work out of the box in any major compositor. I spent a significant amount of time hacking on this, and my recommendation is to do it all in userspace instead. It'll be less work.
Yes, you are right, it will require changes for the non-X case.
Since you spent a lot of time on it, could you share with us some of the issues you found? We still think that using the DRM interface may be more work, but at least it would improve the state of the graphics stack.
Thanks,
Jaakko and Martin
Hi
On Thu, Dec 10, 2015 at 2:28 PM, Martin Peres martin.peres@linux.intel.com wrote:
Yes, you are right, it will require changes for the non-X case.
Since you spent a lot of time on it, could you share with us some of the issues you found? We still think that using the DRM interface may be more work, but at least it would improve the state of the graphics stack.
The biggest issue is that most compositors are built around the assumption that one/_the_ DRM card is always accessible and usable. That is, they hard-code a fixed path to /dev/dri/cardX and use it. They cannot deal with hotplugging of DRM cards, they cannot deal with no card being present and they cannot rate cards and evaluate whether a card is something they want to use or not. That infrastructure is just not available in any compositor I have seen, and it is non-trivial to write.
Apart from that, the biggest problem I see is that multi-GPU systems are usually non-standard. You *have* to know the exact setup beforehand to make it work properly. It is hard to write a heuristic that properly detects which cards should be used in what way.
Thanks David