Hi all,
Currently I am trying to develop a DRM driver that will use the Xilinx VDMA to transfer video data to an HDMI TX PHY, and I am having difficulty understanding the DRM DMA engine. I have looked at several sources and at the DRM core source, but the flow of creating and interfacing with the DMA controller is still not clear to me.
The DRI web page mentions the X server. Does that mean the channel creation and handling is done by the X server? If so, what is the DRM driver then responsible for, and what exactly does the DRM core do? As I am using the Xilinx VDMA, do you foresee any special implementation details?
Just for reference, here is the description of the Xilinx VDMA: "The Advanced eXtensible Interface Video Direct Memory Access (AXI VDMA) core is a soft Xilinx Intellectual Property (IP) core providing high-bandwidth direct memory access between memory and AXI4-Stream video type target peripherals including peripherals which support AXI4-Stream Video Protocol." The driver is available at "drivers/dma/xilinx/xilinx_vdma.c".
Another important point: I am using PCI Express connected to an FPGA which has all the necessary components (Xilinx VDMA, I2S, ...) and the HDMI TX PHY.
Looking forward to your help.
Best regards, Jose Miguel Abreu
On Wed, May 25, 2016 at 04:46:15PM +0100, Jose Abreu wrote:
If your DMA engine is just for HDMI display, forget all the stuff you find about DRI and the X server on the various wikis. That's for OpenGL rendering.
The only thing you need is a kernel modesetting (KMS) driver, and nowadays those are written using the atomic modeset framework. There are plenty of introductory talks and material all over the web (I suggest the latest version of Laurent Pinchart's talk as a good starting point). -Daniel
Best regards, Jose Miguel Abreu

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
Hi Daniel,
Thanks for your answer.
On 26-05-2016 09:06, Daniel Vetter wrote:
I watched Laurent's talk and I already have a simple KMS driver with an encoder (the dw-hdmi bridge), a connector, and a CRTC. My question now is how to set up the video path so that video samples are sent through the Xilinx VDMA to our HDMI PHY.
Sorry if I am making some mistake (I am quite new to DRM and DMA), but here are my thoughts:
- A DMA channel or some kind of mapping must be set up so that the DRM driver knows where to send samples;
- The Xilinx VDMA driver must be instantiated (which I am already doing);
- Some kind of association between the DRM DMA engine and the Xilinx VDMA must be made;
- A callback should exist that is called on each frame and updates the data sent to the Xilinx VDMA.
Does this look okay to you, or am I missing something? I still haven't figured out how I should associate the VDMA with the DRM DMA engine, and how I should map the DMA to the DRM driver.
Can you give me some help or refer me to someone who can? Also, is there a DRM driver that uses a similar architecture?
Best regards, Jose Miguel Abreu
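For what it's worth, the "association" described in the list above is normally done through the generic dmaengine client API rather than anything DRM-specific. A minimal sketch: the channel name "vdma0" and the foo_ prefix are assumptions, not real identifiers, but dma_request_chan() and xilinx_vdma_channel_set_config() are the actual kernel interfaces.

```c
#include <linux/dmaengine.h>
#include <linux/dma/xilinx_dma.h>

static struct dma_chan *foo_request_vdma(struct device *dev)
{
	struct xilinx_vdma_config cfg = { 0 };
	struct dma_chan *chan;

	/* "vdma0" is an assumption: it must match whatever the device
	 * tree (or PCI board code) names the channel. */
	chan = dma_request_chan(dev, "vdma0");
	if (IS_ERR(chan))
		return chan;

	/* Park mode keeps the VDMA circling on one frame buffer, which
	 * fits DRM's continuous-scanout model. */
	cfg.park = 1;
	xilinx_vdma_channel_set_config(chan, &cfg);

	return chan;
}
```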
++ Daniel
On 30-05-2016 09:44, Jose Abreu wrote:
On Mon, May 30, 2016 at 10:00:56AM +0100, Jose Abreu wrote:
I assume that the Xilinx VDMA is the only way to feed pixel data into your display pipeline. Under that assumption:
drm_plane should map to the Xilinx VDMA, and the drm_plane->drm_crtc link would represent the DMA channel. With atomic you can subclass the drm_plane/crtc_state structures to store all the runtime configuration in there.
The actual buffer itself would be represented by a drm_framebuffer, which wraps either a shmem GEM or a CMA GEM object.
If you want to know about the callbacks used by the atomic helpers to push out plane updates, look at the hooks drm_atomic_helper_commit_planes() (and the related functions, see kerneldoc) calls.
I hope this helps a bit more. -Daniel
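As a concrete illustration of the hook Daniel points at, a plane's atomic_update callback (which drm_atomic_helper_commit_planes() ends up calling) might look roughly like this when the CMA helpers are used. This is a sketch: foo_vdma_queue_frame() is a hypothetical driver function, while the helper calls are real kernel APIs of that era.

```c
#include <drm/drm_atomic_helper.h>
#include <drm/drm_fb_cma_helper.h>
#include <drm/drm_gem_cma_helper.h>

static void foo_plane_atomic_update(struct drm_plane *plane,
				    struct drm_plane_state *old_state)
{
	struct drm_plane_state *state = plane->state;
	struct drm_gem_cma_object *gem;

	if (!state->fb || !state->crtc)
		return;

	/* The CMA helpers expose the physical address of the scanout
	 * buffer directly, which is exactly what the VDMA needs. */
	gem = drm_fb_cma_get_gem_obj(state->fb, 0);

	/* Hypothetical driver hook: hand the new address to the VDMA. */
	foo_vdma_queue_frame(plane, gem->paddr);
}

static const struct drm_plane_helper_funcs foo_plane_helper_funcs = {
	.atomic_update = foo_plane_atomic_update,
};
```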
Hi Daniel,
On 30-05-2016 10:36, Daniel Vetter wrote:
Thanks a lot! With your help I was able to implement all the needed logic. Sorry to bother you, but I have one more question. Right now I can initialize and configure the VDMA correctly, but I can only send one frame. I guess that when the DMA completes a transmission I need to ask DRM for a new frame, right? Because the commit function starts the VDMA correctly, but then the DMA halts waiting for a new descriptor.
Best regards, Jose Miguel Abreu
On Tue, Jun 14, 2016 at 1:19 PM, Jose Abreu Jose.Abreu@synopsys.com wrote:
DRM has a continuous scanout model, i.e. when userspace doesn't give you a new frame you're supposed to keep scanning out the current one. So you need to rearm your upload code with the same drm_framebuffer, if userspace hasn't supplied a new one, before the vblank period starts.
This is different from V4L, where userspace has to supply each frame (and the kernel gets angry when there aren't enough frames and signals an underrun of the queue). This is because DRM is geared toward desktops, where it's perfectly normal to show the exact same frame for a long time. -Daniel
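The rearm Daniel describes can be driven from the dmaengine completion callback: when a frame finishes, resubmit whatever framebuffer is currently committed. A sketch under the assumption that an interleaved template describing one frame was allocated and filled in at modeset time; the foo_* names are hypothetical, the dmaengine calls are real.

```c
#include <linux/dmaengine.h>

struct foo_crtc {
	struct dma_chan *chan;
	struct dma_interleaved_template *xt;	/* one frame, set at modeset */
	dma_addr_t scanout_paddr;		/* currently committed fb */
};

static void foo_vdma_queue_frame(struct foo_crtc *fcrtc, dma_addr_t paddr);

/* Completion callback: DRM keeps scanning out the last framebuffer
 * until userspace commits a new one, so simply requeue it. */
static void foo_vdma_complete(void *data)
{
	struct foo_crtc *fcrtc = data;

	foo_vdma_queue_frame(fcrtc, fcrtc->scanout_paddr);
}

static void foo_vdma_queue_frame(struct foo_crtc *fcrtc, dma_addr_t paddr)
{
	struct dma_async_tx_descriptor *desc;

	fcrtc->xt->src_start = paddr;
	desc = dmaengine_prep_interleaved_dma(fcrtc->chan, fcrtc->xt,
					      DMA_PREP_INTERRUPT);
	if (!desc)
		return;

	desc->callback = foo_vdma_complete;	/* rearm on completion */
	desc->callback_param = fcrtc;
	dmaengine_submit(desc);
	dma_async_issue_pending(fcrtc->chan);
}
```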
Hi Daniel,
On 15-06-2016 09:52, Daniel Vetter wrote:
Thanks, I was thinking this was similar to V4L. I am now able to send multiple frames, so it is finally working! I have one little implementation detail: the controller I am using supports deep color mode, but I am using the FB CMA helpers to create the framebuffer and I've seen that the supported bpp in these helpers only goes up to 32, right? Does this mean that with these helpers I can't use deep color? Can I implement this deep color mode (48bpp) using a custom fb, or do I also need custom GEM allocation functions (right now I am using the GEM CMA helpers)?
Best regards, Jose Miguel Abreu
On Wed, Jun 15, 2016 at 11:48 AM, Jose Abreu Jose.Abreu@synopsys.com wrote:
Surprising that the CMA helpers don't take pixel_format into account. If this really doesn't work, please fix up the CMA helpers rather than rolling your own copypasta ;-)
Note that the fbdev emulation itself (maybe that's what threw you off) only supports legacy RGB formats up to 32 bits. But native KMS can support anything; we just might need to add the DRM_FOURCC codes for that. -Daniel
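To illustrate what "adding the DRM_FOURCC codes" amounts to: the formats a plane accepts are just an array of fourcc values advertised at plane init time. A sketch below; the 48bpp entry is hypothetical (no such fourcc existed at the time, one would first have to be defined in include/uapi/drm/drm_fourcc.h), and the exact drm_universal_plane_init() argument list varies slightly between kernel versions.

```c
#include <drm/drm_fourcc.h>
#include <drm/drm_plane_helper.h>

static const u32 foo_plane_formats[] = {
	DRM_FORMAT_XRGB8888,	/* works with the CMA/fbdev helpers */
	DRM_FORMAT_RGB565,
	/* A 16-bits-per-component deep-colour format would be listed
	 * here once a DRM_FORMAT_* code exists for it. */
};

/* The array is handed over at plane creation, roughly:
 *
 *	drm_universal_plane_init(drm, plane, 1, &foo_plane_funcs,
 *				 foo_plane_formats,
 *				 ARRAY_SIZE(foo_plane_formats),
 *				 DRM_PLANE_TYPE_PRIMARY);
 */
```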
Hi Daniel,
Sorry to bother you again. I promise this is the last time :)
On 15-06-2016 11:15, Daniel Vetter wrote:
So, I ended up using 32 bits and everything is working fine! I tested using [1] and [2], but now I have kind of a dumb question: I want to use the new driver I created as a secondary output for my desktop so that I can play videos using mplayer, but I am not able to do this. If I check my Linux display settings only one display is detected, although both video cards are present in /dev/dri (the native one and the one I added). Does the driver need to do something additional, or is it only my X configuration? I tried editing the configuration but it still doesn't work. I believe that, because my driver is not being probed at runtime, the display is not being created by X. Is this correct?
[1] https://dri.freedesktop.org/libdrm/ [2] https://github.com/dvdhrm/docs/blob/master/drm-howto/modeset.c
Thanks!
Best regards, Jose Miguel Abreu
On Thu, Jun 16, 2016 at 01:09:34PM +0100, Jose Abreu wrote:
X with multiple drivers is a bit much. I think it should work somewhat if you treat the 2nd driver as an offload engine. Afaik you can change that through xrandr, but I'm not sure; I didn't implement this. -Daniel
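For reference, the xrandr provider mechanism Daniel alludes to looks like this; the provider numbers are illustrative and depend on the machine.

```shell
# List the render/display providers X knows about
xrandr --listproviders

# Make provider 1 (the new KMS driver) scan out content rendered by
# provider 0 (the main desktop GPU); its outputs then appear in xrandr
xrandr --setprovideroutputsource 1 0
xrandr
```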
On Thu, Jun 16, 2016 at 8:09 AM, Jose Abreu Jose.Abreu@synopsys.com wrote:
Hi Daniel,
Sorry to bother you again. I promise this is the last time :)
On 15-06-2016 11:15, Daniel Vetter wrote:
On Wed, Jun 15, 2016 at 11:48 AM, Jose Abreu Jose.Abreu@synopsys.com wrote:
On 15-06-2016 09:52, Daniel Vetter wrote:
On Tue, Jun 14, 2016 at 1:19 PM, Jose Abreu Jose.Abreu@synopsys.com wrote:
I assume that xilinx VDMA is the only way to feed pixel data into your display pipeline. Under that assumption:
drm_plane should map to Xilinx VDMA, and the drm_plane->drm_crtc link would represent the dma channel. With atomic you can subclass drm_plane/crtc_state structures to store all the runtime configuration in there.
The actual buffer itself would be represented by a drm_framebuffer, which either wraps a shmem gem or a cma gem object.
If you want to know about the callbacks used by the atomic helpers to push out plane updates, look at the hooks drm_atomic_helper_commit_planes() (and the related functions, see kerneldoc) calls.
I hope this helps a bit more. -Daniel
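To make that mapping concrete, here is a rough, untested sketch of what the plane's atomic_update hook could look like when pixel data is pushed through the dmaengine interleaved API that drivers/dma/xilinx/xilinx_vdma.c implements. Names such as struct my_plane and the minimal error handling are illustrative only, and it assumes a CMA-backed framebuffer:

```c
/* Sketch only: wiring a drm_plane to a Xilinx VDMA dmaengine channel. */
#include <linux/dmaengine.h>
#include <linux/slab.h>
#include <drm/drm_fb_cma_helper.h>
#include <drm/drm_gem_cma_helper.h>
#include <drm/drm_plane_helper.h>

struct my_plane {
	struct drm_plane base;
	struct dma_chan *vdma;		/* channel from dma_request_chan() */
};

static void my_plane_atomic_update(struct drm_plane *plane,
				   struct drm_plane_state *old_state)
{
	struct my_plane *p = container_of(plane, struct my_plane, base);
	struct drm_framebuffer *fb = plane->state->fb;
	struct drm_gem_cma_object *gem;
	struct dma_interleaved_template *xt;
	struct dma_async_tx_descriptor *desc;

	if (!fb)
		return;

	gem = drm_fb_cma_get_gem_obj(fb, 0);

	/* One video frame: numf lines of crtc_w pixels, lines spaced
	 * fb->pitches[0] bytes apart in memory. */
	xt = kzalloc(sizeof(*xt) + sizeof(xt->sgl[0]), GFP_KERNEL);
	if (!xt)
		return;

	xt->dir = DMA_MEM_TO_DEV;
	xt->src_start = gem->paddr + fb->offsets[0];
	xt->src_inc = true;
	xt->src_sgl = true;
	xt->numf = plane->state->crtc_h;
	xt->frame_size = 1;
	xt->sgl[0].size = plane->state->crtc_w * (fb->bits_per_pixel / 8);
	xt->sgl[0].icg = fb->pitches[0] - xt->sgl[0].size;

	desc = dmaengine_prep_interleaved_dma(p->vdma, xt,
					      DMA_CTRL_ACK | DMA_PREP_INTERRUPT);
	kfree(xt);
	if (!desc)
		return;

	dmaengine_submit(desc);
	dma_async_issue_pending(p->vdma);
}
```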
Thanks a lot! With your help I was able to implement all the needed logic. Sorry to bother you but I have one more question. Right now I can initialize and configure the vdma correctly but I can only send one frame. I guess when the dma completes transmission I need to ask drm for a new frame, right? Because the commit function starts the vdma correctly but then the dma halts waiting for a new descriptor.
DRM has a continuous scanout model, i.e. when userspace doesn't give you a new frame you're supposed to keep scanning out the current one. So you need to rearm your upload code with the same drm_framebuffer if userspace hasn't supplied a new one since the last time before the vblank period starts.
This is different to v4l, where userspace has to supply each frame (and the kernel gets angry when there's not enough frames and signals an underrun of the queue). This is because drm is geared at desktops, and there it's perfectly normal to show the exact same frame for a long time. -Daniel
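A hedged sketch of that rearming with a dmaengine channel follows; my_queue_frame(), my_prep_frame() and struct my_plane are hypothetical names, not part of any existing API:

```c
/* Sketch only: keep scanning out the last committed framebuffer by
 * resubmitting it from the dmaengine completion callback. */
static void my_queue_frame(struct my_plane *p, struct drm_framebuffer *fb);

static void my_frame_done(void *param)
{
	struct my_plane *p = param;

	/* plane->state->fb still points at the last committed buffer,
	 * so this resubmits the same frame until userspace commits a
	 * new one. */
	my_queue_frame(p, p->base.state->fb);
}

static void my_queue_frame(struct my_plane *p, struct drm_framebuffer *fb)
{
	struct dma_async_tx_descriptor *desc;

	desc = my_prep_frame(p, fb);	/* hypothetical prep helper */
	if (!desc)
		return;

	/* Rearm: when this frame finishes, queue the next one. */
	desc->callback = my_frame_done;
	desc->callback_param = p;
	dmaengine_submit(desc);
	dma_async_issue_pending(p->vdma);
}
```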
Have a look at
https://nouveau.freedesktop.org/wiki/Optimus/
specifically the section titled "Using outputs on discrete GPU". If you're still having trouble, please provide an Xorg log.
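For reference, the output-offload setup that page describes is driven from xrandr; the provider numbers below are placeholders and differ per machine, so check the listing first:

```shell
# List the render/output providers known to the running X server
xrandr --listproviders

# Make provider 1 (the secondary KMS device) expose its outputs
# through provider 0 (the primary GPU)
xrandr --setprovideroutputsource 1 0
xrandr --auto
```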
Hi Ilia,
Thanks for your answer.
On 16-06-2016 13:39, Ilia Mirkin wrote:
Have a look at
https://nouveau.freedesktop.org/wiki/Optimus/
specifically the section titled "Using outputs on discrete GPU". If you're still having trouble, please provide an Xorg log.
I found another solution: with mpv I can specify which DRM device to use (in my case /dev/dri/card1), although only from the command line. If I start the driver from graphical mode the link between crtc and encoder disappears, and if I then try to run mpv it fails to find the crtc for the given encoder; if I start the driver and run mpv from the command line everything works fine. Also, when I do 'startx' from the command line the X server fails, giving this log:
[snip]
[ 4815.102] (II) xfree86: Adding drm device (/dev/dri/card1)
[ 4815.127] (EE)
[ 4815.132] (EE) Backtrace:
[ 4815.157] (EE) 0: /usr/bin/X (xorg_backtrace+0x56) [0x7f9de4f255a6]
[ 4815.162] (EE) 1: /usr/bin/X (0x7f9de4d72000+0x1b7709) [0x7f9de4f29709]
[ 4815.167] (EE) 2: /lib/x86_64-linux-gnu/libc.so.6 (0x7f9de2a35000+0x352f0) [0x7f9de2a6a2f0]
[ 4815.172] (EE) 3: /usr/bin/X (0x7f9de4d72000+0xb9899) [0x7f9de4e2b899]
[ 4815.177] (EE) 4: /usr/bin/X (xf86BusProbe+0x9) [0x7f9de4dfe249]
[ 4815.181] (EE) 5: /usr/bin/X (InitOutput+0x734) [0x7f9de4e0cd54]
[ 4815.183] (EE) 6: /usr/bin/X (0x7f9de4d72000+0x5c0ba) [0x7f9de4dce0ba]
[ 4815.184] (EE) 7: /lib/x86_64-linux-gnu/libc.so.6 (__libc_start_main+0xf0) [0x7f9de2a55a40]
[ 4815.185] (EE) 8: /usr/bin/X (_start+0x29) [0x7f9de4db8639]
[ 4815.186] (EE)
[ 4815.187] (EE) Segmentation fault at address 0x0
[ 4815.188] (EE) Fatal server error:
[ 4815.191] (EE) Caught signal 11 (Segmentation fault). Server aborting
[ 4815.192] (EE)
[ 4815.193] (EE) Please consult the The X.Org Foundation support at http://wiki.x.org for help.
[ 4815.197] (EE) Please also check the log file at "/var/log/Xorg.1.log" for additional information.
[ 4815.198] (EE)
Do you know of any reason why this might be happening? Note that from the command line everything works fine.
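For the record, the mpv invocation mentioned above was along these lines; the connector name is an example and the option spelling may vary between mpv versions, so check mpv --list-options:

```shell
# Play on the second DRM device's HDMI connector via the drm video output
mpv --vo=drm --drm-connector=1.HDMI-A-1 video.mp4
```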
Best regards, Jose Miguel Abreu
dri-devel@lists.freedesktop.org