Hi,
I'm currently working on a new UAPI for Host1x/TegraDRM (see the first draft in the thread "[RFC] Host1x/TegraDRM UAPI"[1]). One question that has come up concerns the buffer allocation mechanism. Traditionally, DRM drivers provide custom GEM allocation IOCTLs. However, we now have DMA Heaps, which would be sufficient for TegraDRM's needs, so we could skip implementing any GEM IOCTLs in the TegraDRM UAPI and rely on importing DMA-BUFs instead. This would mean less code on TegraDRM's side.
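For reference, the flow I have in mind would look roughly like the untested sketch below (the heap name, fd flags and error handling are simplified placeholders):

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/dma-heap.h>
#include <xf86drm.h>

/* Allocate a buffer from a DMA heap and import it into the DRM driver. */
static int alloc_and_import(int drm_fd, uint64_t size, uint32_t *handle)
{
	struct dma_heap_allocation_data alloc = {
		.len = size,
		.fd_flags = O_RDWR | O_CLOEXEC,
	};
	int heap_fd, ret;

	heap_fd = open("/dev/dma_heap/system", O_RDONLY | O_CLOEXEC);
	if (heap_fd < 0)
		return -1;

	/* Every allocation produces a new dma-buf fd. */
	ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &alloc);
	close(heap_fd);
	if (ret < 0)
		return -1;

	/* Import the dma-buf to get a GEM handle usable in job submissions. */
	ret = drmPrimeFDToHandle(drm_fd, alloc.fd, handle);
	/* alloc.fd stays open here unless we close it -- this is the FD cost. */
	return ret;
}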
However, one complication with using DMA Heaps is that they only provide DMA-BUF FDs, so a user application could run out of free file descriptors if it does not raise its soft FD limit. This would especially be a problem for existing applications that worked fine with the traditional GEM model and never needed to adjust their FD limits, but would then fail in some situations due to the increased FD usage of DMA-BUF FDs.
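(For completeness, an application can raise its soft limit up to the hard limit with something like the snippet below, but existing applications written against the GEM model typically don't do this:)

#include <sys/resource.h>

static int raise_fd_limit(void)
{
	struct rlimit rl;

	if (getrlimit(RLIMIT_NOFILE, &rl) < 0)
		return -1;
	rl.rlim_cur = rl.rlim_max;	/* raise soft limit to the hard limit */
	return setrlimit(RLIMIT_NOFILE, &rl);
}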
My question is then: what is the role of DMA Heaps? If they are to be used as a central allocator, should the FD issue be left to the application, or addressed somehow? Should they be considered a potential alternative to GEM allocations?
Thanks, Mikko
[1] https://www.spinics.net/lists/dri-devel/msg262021.html
On Fri, Aug 14, 2020 at 1:34 PM Mikko Perttunen cyndis@kapsi.fi wrote:
> Hi,
> I'm currently working on a new UAPI for Host1x/TegraDRM (see the first draft in the thread "[RFC] Host1x/TegraDRM UAPI"[1]). One question that has come up concerns the buffer allocation mechanism. Traditionally, DRM drivers provide custom GEM allocation IOCTLs. However, we now have DMA Heaps, which would be sufficient for TegraDRM's needs, so we could skip implementing any GEM IOCTLs in the TegraDRM UAPI and rely on importing DMA-BUFs instead. This would mean less code on TegraDRM's side.
> However, one complication with using DMA Heaps is that they only provide DMA-BUF FDs, so a user application could run out of free file descriptors if it does not raise its soft FD limit. This would especially be a problem for existing applications that worked fine with the traditional GEM model and never needed to adjust their FD limits, but would then fail in some situations due to the increased FD usage of DMA-BUF FDs.
> My question is then: what is the role of DMA Heaps? If they are to be used as a central allocator, should the FD issue be left to the application, or addressed somehow? Should they be considered a potential alternative to GEM allocations?
Atm no one knows. What's for sure is that dma-buf fds are meant to establish sharing, and then once imported everywhere, closed again. dma-buf heaps might or might not be useful for solving the cross-device memory allocator problem (the rough idea is that in sysfs every device lists all the heaps it can use, and then you pick the common one that works for all devices). But that's for shared buffers only.
For an acceleration driver you want drm gem ids, because yes, fd limits. Also, constantly having to reimport the dma-buf for every cs ioctl doesn't sound like a bright idea either; there's a reason we have the drm_prime cache and all that stuff.
I also have no idea why you wouldn't want to use the existing drm infrastructure; it's all there.
Cheers, Daniel
> Thanks, Mikko
> [1] https://www.spinics.net/lists/dri-devel/msg262021.html
On 8/14/20 3:12 PM, Daniel Vetter wrote:
> On Fri, Aug 14, 2020 at 1:34 PM Mikko Perttunen cyndis@kapsi.fi wrote:
>> Hi,
>> I'm currently working on a new UAPI for Host1x/TegraDRM (see the first draft in the thread "[RFC] Host1x/TegraDRM UAPI"[1]). One question that has come up concerns the buffer allocation mechanism. Traditionally, DRM drivers provide custom GEM allocation IOCTLs. However, we now have DMA Heaps, which would be sufficient for TegraDRM's needs, so we could skip implementing any GEM IOCTLs in the TegraDRM UAPI and rely on importing DMA-BUFs instead. This would mean less code on TegraDRM's side.
>> However, one complication with using DMA Heaps is that they only provide DMA-BUF FDs, so a user application could run out of free file descriptors if it does not raise its soft FD limit. This would especially be a problem for existing applications that worked fine with the traditional GEM model and never needed to adjust their FD limits, but would then fail in some situations due to the increased FD usage of DMA-BUF FDs.
>> My question is then: what is the role of DMA Heaps? If they are to be used as a central allocator, should the FD issue be left to the application, or addressed somehow? Should they be considered a potential alternative to GEM allocations?
> Atm no one knows. What's for sure is that dma-buf fds are meant to establish sharing, and then once imported everywhere, closed again. dma-buf heaps might or might not be useful for solving the cross-device memory allocator problem (the rough idea is that in sysfs every device lists all the heaps it can use, and then you pick the common one that works for all devices). But that's for shared buffers only.
> For an acceleration driver you want drm gem ids, because yes, fd limits. Also, constantly having to reimport the dma-buf for every cs ioctl doesn't sound like a bright idea either; there's a reason we have the drm_prime cache and all that stuff.
Couldn't we just import once, and then use the GEM handle afterwards?
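I.e. something like the sketch below, where the dma-buf is imported a single time and only the 32-bit GEM handle is passed to each submission (the submit struct and ioctl number are made-up placeholders, not part of any real UAPI):

#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <xf86drm.h>

/* Hypothetical submission descriptor, standing in for the real UAPI. */
struct example_submit {
	uint64_t bo_handles;	/* userspace pointer to an array of GEM handles */
	uint32_t num_bos;
	uint32_t pad;
};

static void submit_jobs(int drm_fd, int dmabuf_fd, unsigned long submit_ioctl)
{
	struct example_submit submit = { 0 };
	uint32_t handle;
	int i;

	/* Import once; drm_prime caches this, so importing the same dma-buf
	 * again would return the same handle instead of a new object. */
	drmPrimeFDToHandle(drm_fd, dmabuf_fd, &handle);
	close(dmabuf_fd);	/* the fd is no longer needed after import */

	submit.bo_handles = (uintptr_t)&handle;
	submit.num_bos = 1;

	/* Each submission references the buffer by handle, not by fd. */
	for (i = 0; i < 100; i++)
		ioctl(drm_fd, submit_ioctl, &submit);
}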
> I also have no idea why you wouldn't want to use the existing drm infrastructure; it's all there.
Sure; I think I'll add the normal GEM IOCTLs, since as you said, it's quite easy to do and standard. I think it was more of a question about the philosophy of DMA-BUF Heaps. In the future there may be other issues like allocation from certain carveouts, where it'd be better not to duplicate the allocation logic in multiple drivers, though there should be multiple ways to address that, too.
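(For reference, allocation through a driver GEM IOCTL then looks roughly like the existing drm_tegra_gem_create usage below; this is only an illustrative wrapper with error handling simplified:)

#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <tegra_drm.h>	/* libdrm copy of include/uapi/drm/tegra_drm.h */

/* Allocate a buffer via the driver GEM create ioctl; no fd is involved,
 * only a 32-bit handle local to this DRM file descriptor. */
static int gem_create(int drm_fd, uint64_t size, uint32_t flags, uint32_t *handle)
{
	struct drm_tegra_gem_create args;

	memset(&args, 0, sizeof(args));
	args.size = size;
	args.flags = flags;

	if (drmIoctl(drm_fd, DRM_IOCTL_TEGRA_GEM_CREATE, &args) < 0)
		return -errno;

	*handle = args.handle;
	return 0;
}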
> Cheers, Daniel
Thanks! Mikko
On Fri, Aug 14, 2020 at 3:59 AM Mikko Perttunen cyndis@kapsi.fi wrote:
> I'm currently working on a new UAPI for Host1x/TegraDRM (see the first draft in the thread "[RFC] Host1x/TegraDRM UAPI"[1]). One question that has come up concerns the buffer allocation mechanism. Traditionally, DRM drivers provide custom GEM allocation IOCTLs. However, we now have DMA Heaps, which would be sufficient for TegraDRM's needs, so we could skip implementing any GEM IOCTLs in the TegraDRM UAPI and rely on importing DMA-BUFs instead. This would mean less code on TegraDRM's side.
> However, one complication with using DMA Heaps is that they only provide DMA-BUF FDs, so a user application could run out of free file descriptors if it does not raise its soft FD limit. This would especially be a problem for existing applications that worked fine with the traditional GEM model and never needed to adjust their FD limits, but would then fail in some situations due to the increased FD usage of DMA-BUF FDs.
I'm not sure exactly if this would help, but I am working on some exploratory tweaks to DMA BUF Heaps so that there could be an in-kernel accessor that returns a struct dma_buf instead of an fd.
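Very roughly, the kind of interface I'm poking at would look something like the sketch below. Note that nothing like this exists upstream yet, and all of the function names (dma_heap_find, dma_heap_buffer_alloc, dma_heap_put) are placeholders for the exploratory work:

#include <linux/dma-buf.h>
#include <linux/dma-heap.h>
#include <linux/err.h>
#include <linux/fcntl.h>

/* Hypothetical in-kernel allocation helper: returns a struct dma_buf
 * directly instead of installing an fd in the calling process. */
static struct dma_buf *example_heap_alloc(const char *heap_name, size_t len)
{
	struct dma_heap *heap;
	struct dma_buf *buf;

	heap = dma_heap_find(heap_name);	/* placeholder lookup by name */
	if (!heap)
		return ERR_PTR(-ENOENT);

	/* placeholder: allocate from the heap without going through an fd */
	buf = dma_heap_buffer_alloc(heap, len, O_RDWR | O_CLOEXEC, 0);

	dma_heap_put(heap);	/* drop the lookup reference */
	return buf;
}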
This is motivated by some folks wanting to use DMA BUF Heaps in a similar fashion (if I understand your approach correctly), where the driver wants to generate a DMA BUF but doesn't want to create its own DMA BUF exporter that would duplicate one of the DMA BUF Heaps.
I'm a little mixed on this, as part of the reason DMA BUF Heaps exist as a userland interface is that it's userland which knows the path a buffer will take, so userland is in the best position to understand what type of buffer it needs to allocate. It seems to me that drivers should instead import a buffer provided to them from userland to fill, rather than allocating a buffer from a heap they choose themselves (which may constrain later use of that buffer). But I also grant that drivers implementing their own DMA BUF exporters that duplicate existing code is silly, so having in-kernel allocation interfaces may be reasonable.
However, the efforts are also somewhat blocked on having a public in-kernel user of such an interface, so they are basically only exploratory at the moment. So if you have an in-kernel user interested in something like this, I'd be glad to get further input.
This probably doesn't help answer your question wrt GEM, and I'd defer to Daniel there. :)
thanks -john
On 8/14/20 11:53 PM, John Stultz wrote:
> On Fri, Aug 14, 2020 at 3:59 AM Mikko Perttunen cyndis@kapsi.fi wrote:
>> I'm currently working on a new UAPI for Host1x/TegraDRM (see the first draft in the thread "[RFC] Host1x/TegraDRM UAPI"[1]). One question that has come up concerns the buffer allocation mechanism. Traditionally, DRM drivers provide custom GEM allocation IOCTLs. However, we now have DMA Heaps, which would be sufficient for TegraDRM's needs, so we could skip implementing any GEM IOCTLs in the TegraDRM UAPI and rely on importing DMA-BUFs instead. This would mean less code on TegraDRM's side.
>> However, one complication with using DMA Heaps is that they only provide DMA-BUF FDs, so a user application could run out of free file descriptors if it does not raise its soft FD limit. This would especially be a problem for existing applications that worked fine with the traditional GEM model and never needed to adjust their FD limits, but would then fail in some situations due to the increased FD usage of DMA-BUF FDs.
> I'm not sure exactly if this would help, but I am working on some exploratory tweaks to DMA BUF Heaps so that there could be an in-kernel accessor that returns a struct dma_buf instead of an fd.
> This is motivated by some folks wanting to use DMA BUF Heaps in a similar fashion (if I understand your approach correctly), where the driver wants to generate a DMA BUF but doesn't want to create its own DMA BUF exporter that would duplicate one of the DMA BUF Heaps.
> I'm a little mixed on this, as part of the reason DMA BUF Heaps exist as a userland interface is that it's userland which knows the path a buffer will take, so userland is in the best position to understand what type of buffer it needs to allocate. It seems to me that drivers should instead import a buffer provided to them from userland to fill, rather than allocating a buffer from a heap they choose themselves (which may constrain later use of that buffer). But I also grant that drivers implementing their own DMA BUF exporters that duplicate existing code is silly, so having in-kernel allocation interfaces may be reasonable.
> However, the efforts are also somewhat blocked on having a public in-kernel user of such an interface, so they are basically only exploratory at the moment. So if you have an in-kernel user interested in something like this, I'd be glad to get further input.
> This probably doesn't help answer your question wrt GEM, and I'd defer to Daniel there. :)
I think TegraDRM doesn't have a particular need for that (at least in its current state), since it already needs GEMs, and has the GEM infrastructure from DRM to lean on.
Thanks for the information, anyway :)
> thanks -john
Mikko