Hi all,
I started this a while ago, with the idea of moving the shmem helpers over to dma_resv_lock. The big prep work for that was untangling the layering between functions called by drivers and functions used to implement drm_gem_object_funcs.
I never got to the locking part, but I think the cleanups here are worth it on their own.
Comments, review and testing very much welcome.
Cheers, Daniel
Daniel Vetter (9):
  drm/msm: Don't call dma_buf_vunmap without _vmap
  drm/gem: WARN if drm_gem_get_pages is called on a private obj
  drm/doc: Some polish for shmem helpers
  drm/virtio: Call the right shmem helpers
  drm/udl: Don't call get/put_pages on imported dma-buf
  drm/shmem-helpers: Don't call get/put_pages on imported dma-buf in vmap
  drm/shmem-helpers: Redirect mmap for imported dma-buf
  drm/shmem-helpers: Ensure get_pages is not called on imported dma-buf
  drm/shmem-helpers: Simplify dma-buf importing
 Documentation/gpu/drm-kms-helpers.rst   |  12 ---
 Documentation/gpu/drm-mm.rst            |  12 +++
 drivers/gpu/drm/drm_gem.c               |   8 ++
 drivers/gpu/drm/drm_gem_shmem_helper.c  | 128 ++++++++++++++----------
 drivers/gpu/drm/msm/msm_gem.c           |   3 +-
 drivers/gpu/drm/udl/udl_gem.c           |  22 ++--
 drivers/gpu/drm/virtio/virtgpu_object.c |   2 +-
 7 files changed, 111 insertions(+), 76 deletions(-)
I honestly don't exactly understand what's going on here, but the current code is wrong for sure: It calls dma_buf_vunmap without ever calling dma_buf_vmap.
What I'm not sure about is whether the WARN_ON is correct: - msm imports dma-buf using drm_prime_sg_to_page_addr_arrays. Which is a pretty neat layering violation, since you shouldn't peek behind the curtain of the dma-buf exporter, but par for the course. Note that none of the nice new helpers do this (and we should probably have a bit of a warning about this in the kerneldoc).
- but then get_vaddr() in msm_gem.c seems to happily wrap a vmap() around any object with ->pages set (so including imported dma-bufs)
- I'm not seeing any guarantees that userspace can't use an imported dma-buf for e.g. MSM_SUBMIT_CMD_BUF in a5xx_submit_in_rb, so no guarantees that an imported dma-buf won't end up with a ->vaddr set.
But even if that WARN_ON is wrong, cleaning up a vmap() done by msm by calling dma_buf_vunmap is the wrong thing to do.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Rob Clark <robdclark@gmail.com>
Cc: Sean Paul <sean@poorly.run>
Cc: linux-arm-msm@vger.kernel.org
Cc: freedreno@lists.freedesktop.org
---
 drivers/gpu/drm/msm/msm_gem.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 5a6a79fbc9d6..3305a457960e 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -907,8 +907,7 @@ static void free_object(struct msm_gem_object *msm_obj)
 	put_iova(obj);
 
 	if (obj->import_attach) {
-		if (msm_obj->vaddr)
-			dma_buf_vunmap(obj->import_attach->dmabuf, msm_obj->vaddr);
+		WARN_ON(msm_obj->vaddr);
 
 		/* Don't drop the pages for imported dmabuf, as they are not
 		 * ours, just free the array we allocated:
On Mon, May 11, 2020 at 2:36 AM Daniel Vetter daniel.vetter@ffwll.ch wrote:
I honestly don't exactly understand what's going on here, but the current code is wrong for sure: It calls dma_buf_vunmap without ever calling dma_buf_vmap.
What I'm not sure about is whether the WARN_ON is correct:
msm imports dma-buf using drm_prime_sg_to_page_addr_arrays. Which is a pretty neat layering violation, since you shouldn't peek behind the curtain of the dma-buf exporter, but par for the course. Note that none of the nice new helpers do this (and we should probably have a bit of a warning about this in the kerneldoc).
but then get_vaddr() in msm_gem.c seems to happily wrap a vmap() around any object with ->pages set (so including imported dma-bufs)
I'm not seeing any guarantees that userspace can't use an imported dma-buf for e.g. MSM_SUBMIT_CMD_BUF in a5xx_submit_in_rb, so no guarantees that an imported dma-buf won't end up with a ->vaddr set.
fwiw, a5xx_submit_in_rb() isn't a "normal" path (build-time disabled by default, and restricted to sudo).. it really only exists to simplify poking at fw.
There could be vmaps in the msm_gem_submit path, however. If we don't have any, we should probably just disallow using an imported dma-buf as cmdstream.. I don't think there is any sane reason to permit that. We should probably also disallow get_vaddr() on imported buffers.
BR, -R
But even if that WARN_ON is wrong, cleaning up a vmap() done by msm by calling dma_buf_vunmap is the wrong thing to do.
Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Rob Clark robdclark@gmail.com Cc: Sean Paul sean@poorly.run Cc: linux-arm-msm@vger.kernel.org Cc: freedreno@lists.freedesktop.org
drivers/gpu/drm/msm/msm_gem.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 5a6a79fbc9d6..3305a457960e 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -907,8 +907,7 @@ static void free_object(struct msm_gem_object *msm_obj)
 	put_iova(obj);
 
 	if (obj->import_attach) {
-		if (msm_obj->vaddr)
-			dma_buf_vunmap(obj->import_attach->dmabuf, msm_obj->vaddr);
+		WARN_ON(msm_obj->vaddr);
 
 		/* Don't drop the pages for imported dmabuf, as they are not
 		 * ours, just free the array we allocated:
-- 2.26.2
On Mon, May 11, 2020 at 5:24 PM Rob Clark robdclark@gmail.com wrote:
On Mon, May 11, 2020 at 2:36 AM Daniel Vetter daniel.vetter@ffwll.ch wrote:
I honestly don't exactly understand what's going on here, but the current code is wrong for sure: It calls dma_buf_vunmap without ever calling dma_buf_vmap.
What I'm not sure about is whether the WARN_ON is correct:
msm imports dma-buf using drm_prime_sg_to_page_addr_arrays. Which is a pretty neat layering violation, since you shouldn't peek behind the curtain of the dma-buf exporter, but par for the course. Note that none of the nice new helpers do this (and we should probably have a bit of a warning about this in the kerneldoc).
but then get_vaddr() in msm_gem.c seems to happily wrap a vmap() around any object with ->pages set (so including imported dma-bufs)
I'm not seeing any guarantees that userspace can't use an imported dma-buf for e.g. MSM_SUBMIT_CMD_BUF in a5xx_submit_in_rb, so no guarantees that an imported dma-buf won't end up with a ->vaddr set.
fwiw, a5xx_submit_in_rb() isn't a "normal" path (build-time disabled by default, and restricted to sudo).. it really only exists to simplify poking at fw.
There could be vmap's in the msm_gem_submit path, however. If we don't, we should probably just disallow using an imported dma-buf as cmdstream.. I don't think there is any sane reason to permit that. We should probably also disallow get_vaddr() on imported buffers.
Yeah if that's possible and won't blow up (I can't test) I think it'd be best. Something like

	if (bo->import_attach)
		return NULL;

should do the trick I think. Should I type that up as v2 of this? -Daniel
BR, -R
But even if that WARN_ON is wrong, cleaning up a vmap() done by msm by calling dma_buf_vunmap is the wrong thing to do.
Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Rob Clark robdclark@gmail.com Cc: Sean Paul sean@poorly.run Cc: linux-arm-msm@vger.kernel.org Cc: freedreno@lists.freedesktop.org
drivers/gpu/drm/msm/msm_gem.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 5a6a79fbc9d6..3305a457960e 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -907,8 +907,7 @@ static void free_object(struct msm_gem_object *msm_obj)
 	put_iova(obj);
 
 	if (obj->import_attach) {
-		if (msm_obj->vaddr)
-			dma_buf_vunmap(obj->import_attach->dmabuf, msm_obj->vaddr);
+		WARN_ON(msm_obj->vaddr);
 
 		/* Don't drop the pages for imported dmabuf, as they are not
 		 * ours, just free the array we allocated:
-- 2.26.2
On Mon, May 11, 2020 at 8:29 AM Daniel Vetter daniel.vetter@ffwll.ch wrote:
On Mon, May 11, 2020 at 5:24 PM Rob Clark robdclark@gmail.com wrote:
On Mon, May 11, 2020 at 2:36 AM Daniel Vetter daniel.vetter@ffwll.ch wrote:
I honestly don't exactly understand what's going on here, but the current code is wrong for sure: It calls dma_buf_vunmap without ever calling dma_buf_vmap.
What I'm not sure about is whether the WARN_ON is correct:
msm imports dma-buf using drm_prime_sg_to_page_addr_arrays. Which is a pretty neat layering violation, since you shouldn't peek behind the curtain of the dma-buf exporter, but par for the course. Note that none of the nice new helpers do this (and we should probably have a bit of a warning about this in the kerneldoc).
but then get_vaddr() in msm_gem.c seems to happily wrap a vmap() around any object with ->pages set (so including imported dma-bufs)
I'm not seeing any guarantees that userspace can't use an imported dma-buf for e.g. MSM_SUBMIT_CMD_BUF in a5xx_submit_in_rb, so no guarantees that an imported dma-buf won't end up with a ->vaddr set.
fwiw, a5xx_submit_in_rb() isn't a "normal" path (build-time disabled by default, and restricted to sudo).. it really only exists to simplify poking at fw.
There could be vmap's in the msm_gem_submit path, however. If we don't, we should probably just disallow using an imported dma-buf as cmdstream.. I don't think there is any sane reason to permit that. We should probably also disallow get_vaddr() on imported buffers.
Yeah if that's possible and won't blow up (I can't test) I think it'd be best. Something like if (bo->import_attach) return NULL; should do the trick I think. Should I type that up as v2 of this?
Sure. It should probably be something like
	if (obj->import_attach)
		return ERR_PTR(-ESOMETHING);
looks like the gem-submit path handles an IS_ERR() return
BR, -R
-Daniel
BR, -R
But even if that WARN_ON is wrong, cleaning up a vmap() done by msm by calling dma_buf_vunmap is the wrong thing to do.
Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Rob Clark robdclark@gmail.com Cc: Sean Paul sean@poorly.run Cc: linux-arm-msm@vger.kernel.org Cc: freedreno@lists.freedesktop.org
drivers/gpu/drm/msm/msm_gem.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 5a6a79fbc9d6..3305a457960e 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -907,8 +907,7 @@ static void free_object(struct msm_gem_object *msm_obj)
 	put_iova(obj);
 
 	if (obj->import_attach) {
-		if (msm_obj->vaddr)
-			dma_buf_vunmap(obj->import_attach->dmabuf, msm_obj->vaddr);
+		WARN_ON(msm_obj->vaddr);
 
 		/* Don't drop the pages for imported dmabuf, as they are not
 		 * ours, just free the array we allocated:
-- 2.26.2
-- Daniel Vetter Software Engineer, Intel Corporation +41 (0) 79 365 57 48 - http://blog.ffwll.ch
I honestly don't exactly understand what's going on here, but the current code is wrong for sure: It calls dma_buf_vunmap without ever calling dma_buf_vmap.
What I'm not sure about is whether the WARN_ON is correct: - msm imports dma-buf using drm_prime_sg_to_page_addr_arrays. Which is a pretty neat layering violation, since you shouldn't peek behind the curtain of the dma-buf exporter, but par for the course. Note that none of the nice new helpers do this (and we should probably have a bit of a warning about this in the kerneldoc).
- but then get_vaddr() in msm_gem.c seems to happily wrap a vmap() around any object with ->pages set (so including imported dma-bufs).
- I'm not seeing any guarantees that userspace can't use an imported dma-buf for e.g. MSM_SUBMIT_CMD_BUF in a5xx_submit_in_rb, so no guarantees that an imported dma-buf won't end up with a ->vaddr set.
But even if that WARN_ON is wrong, cleaning up a vmap() done by msm by calling dma_buf_vunmap is the wrong thing to do.
v2: Rob said in review that we do indeed have a gap in get_vaddr() that needs to be plugged. But the users I've found aren't legit users on imported dma-buf, so we can just reject that.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Rob Clark <robdclark@gmail.com>
Cc: Sean Paul <sean@poorly.run>
Cc: linux-arm-msm@vger.kernel.org
Cc: freedreno@lists.freedesktop.org
---
 drivers/gpu/drm/msm/msm_gem.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 5a6a79fbc9d6..e70abd1cde43 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -554,6 +554,9 @@ static void *get_vaddr(struct drm_gem_object *obj, unsigned madv)
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	int ret = 0;
 
+	if (obj->import_attach)
+		return ERR_PTR(-ENODEV);
+
 	mutex_lock(&msm_obj->lock);
 
 	if (WARN_ON(msm_obj->madv > madv)) {
@@ -907,8 +910,7 @@ static void free_object(struct msm_gem_object *msm_obj)
 	put_iova(obj);
 
 	if (obj->import_attach) {
-		if (msm_obj->vaddr)
-			dma_buf_vunmap(obj->import_attach->dmabuf, msm_obj->vaddr);
+		WARN_ON(msm_obj->vaddr);
 
 		/* Don't drop the pages for imported dmabuf, as they are not
 		 * ours, just free the array we allocated:
On Thu, May 14, 2020 at 10:11:17PM +0200, Daniel Vetter wrote:
I honestly don't exactly understand what's going on here, but the current code is wrong for sure: It calls dma_buf_vunmap without ever calling dma_buf_vmap.
What I'm not sure about is whether the WARN_ON is correct:
msm imports dma-buf using drm_prime_sg_to_page_addr_arrays. Which is a pretty neat layering violation, since you shouldn't peek behind the curtain of the dma-buf exporter, but par for the course. Note that none of the nice new helpers do this (and we should probably have a bit of a warning about this in the kerneldoc).
but then get_vaddr() in msm_gem.c seems to happily wrap a vmap() around any object with ->pages set (so including imported dma-bufs).
I'm not seeing any guarantees that userspace can't use an imported dma-buf for e.g. MSM_SUBMIT_CMD_BUF in a5xx_submit_in_rb, so no guarantees that an imported dma-buf won't end up with a ->vaddr set.
But even if that WARN_ON is wrong, cleaning up a vmap() done by msm by calling dma_buf_vunmap is the wrong thing to do.
v2: Rob said in review that we do indeed have a gap in get_vaddr() that needs to be plugged. But the users I've found aren't legit users on imported dma-buf, so we can just reject that.
Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Rob Clark robdclark@gmail.com Cc: Sean Paul sean@poorly.run Cc: linux-arm-msm@vger.kernel.org Cc: freedreno@lists.freedesktop.org
Ping for some review/ack so I can start landing this stuff, please?
Thanks, Daniel
drivers/gpu/drm/msm/msm_gem.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 5a6a79fbc9d6..e70abd1cde43 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -554,6 +554,9 @@ static void *get_vaddr(struct drm_gem_object *obj, unsigned madv)
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	int ret = 0;
 
+	if (obj->import_attach)
+		return ERR_PTR(-ENODEV);
+
 	mutex_lock(&msm_obj->lock);
 
 	if (WARN_ON(msm_obj->madv > madv)) {
@@ -907,8 +910,7 @@ static void free_object(struct msm_gem_object *msm_obj)
 	put_iova(obj);
 
 	if (obj->import_attach) {
-		if (msm_obj->vaddr)
-			dma_buf_vunmap(obj->import_attach->dmabuf, msm_obj->vaddr);
+		WARN_ON(msm_obj->vaddr);
 
 		/* Don't drop the pages for imported dmabuf, as they are not
 		 * ours, just free the array we allocated:
-- 2.26.2
On Thu, May 14, 2020 at 1:11 PM Daniel Vetter daniel.vetter@ffwll.ch wrote:
I honestly don't exactly understand what's going on here, but the current code is wrong for sure: It calls dma_buf_vunmap without ever calling dma_buf_vmap.
What I'm not sure about is whether the WARN_ON is correct:
msm imports dma-buf using drm_prime_sg_to_page_addr_arrays. Which is a pretty neat layering violation, since you shouldn't peek behind the curtain of the dma-buf exporter, but par for the course. Note that none of the nice new helpers do this (and we should probably have a bit of a warning about this in the kerneldoc).
but then get_vaddr() in msm_gem.c seems to happily wrap a vmap() around any object with ->pages set (so including imported dma-bufs).
I'm not seeing any guarantees that userspace can't use an imported dma-buf for e.g. MSM_SUBMIT_CMD_BUF in a5xx_submit_in_rb, so no guarantees that an imported dma-buf won't end up with a ->vaddr set.
But even if that WARN_ON is wrong, cleaning up a vmap() done by msm by calling dma_buf_vunmap is the wrong thing to do.
v2: Rob said in review that we do indeed have a gap in get_vaddr() that needs to be plugged. But the users I've found aren't legit users on imported dma-buf, so we can just reject that.
Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Rob Clark robdclark@gmail.com Cc: Sean Paul sean@poorly.run Cc: linux-arm-msm@vger.kernel.org Cc: freedreno@lists.freedesktop.org
Reviewed-by: Rob Clark robdclark@gmail.com
drivers/gpu/drm/msm/msm_gem.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 5a6a79fbc9d6..e70abd1cde43 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -554,6 +554,9 @@ static void *get_vaddr(struct drm_gem_object *obj, unsigned madv)
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	int ret = 0;
 
+	if (obj->import_attach)
+		return ERR_PTR(-ENODEV);
+
 	mutex_lock(&msm_obj->lock);
 
 	if (WARN_ON(msm_obj->madv > madv)) {
@@ -907,8 +910,7 @@ static void free_object(struct msm_gem_object *msm_obj)
 	put_iova(obj);
 
 	if (obj->import_attach) {
-		if (msm_obj->vaddr)
-			dma_buf_vunmap(obj->import_attach->dmabuf, msm_obj->vaddr);
+		WARN_ON(msm_obj->vaddr);
 
 		/* Don't drop the pages for imported dmabuf, as they are not
 		 * ours, just free the array we allocated:
-- 2.26.2
On Sun, May 31, 2020 at 09:02:11AM -0700, Rob Clark wrote:
On Thu, May 14, 2020 at 1:11 PM Daniel Vetter daniel.vetter@ffwll.ch wrote:
I honestly don't exactly understand what's going on here, but the current code is wrong for sure: It calls dma_buf_vunmap without ever calling dma_buf_vmap.
What I'm not sure about is whether the WARN_ON is correct:
msm imports dma-buf using drm_prime_sg_to_page_addr_arrays. Which is a pretty neat layering violation, since you shouldn't peek behind the curtain of the dma-buf exporter, but par for the course. Note that none of the nice new helpers do this (and we should probably have a bit of a warning about this in the kerneldoc).
but then get_vaddr() in msm_gem.c seems to happily wrap a vmap() around any object with ->pages set (so including imported dma-bufs).
I'm not seeing any guarantees that userspace can't use an imported dma-buf for e.g. MSM_SUBMIT_CMD_BUF in a5xx_submit_in_rb, so no guarantees that an imported dma-buf won't end up with a ->vaddr set.
But even if that WARN_ON is wrong, cleaning up a vmap() done by msm by calling dma_buf_vunmap is the wrong thing to do.
v2: Rob said in review that we do indeed have a gap in get_vaddr() that needs to be plugged. But the users I've found aren't legit users on imported dma-buf, so we can just reject that.
Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Rob Clark robdclark@gmail.com Cc: Sean Paul sean@poorly.run Cc: linux-arm-msm@vger.kernel.org Cc: freedreno@lists.freedesktop.org
Reviewed-by: Rob Clark robdclark@gmail.com
Queued in drm-misc-next for 5.9, thanks for your review. -Daniel
drivers/gpu/drm/msm/msm_gem.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 5a6a79fbc9d6..e70abd1cde43 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -554,6 +554,9 @@ static void *get_vaddr(struct drm_gem_object *obj, unsigned madv)
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	int ret = 0;
 
+	if (obj->import_attach)
+		return ERR_PTR(-ENODEV);
+
 	mutex_lock(&msm_obj->lock);
 
 	if (WARN_ON(msm_obj->madv > madv)) {
@@ -907,8 +910,7 @@ static void free_object(struct msm_gem_object *msm_obj)
 	put_iova(obj);
 
 	if (obj->import_attach) {
-		if (msm_obj->vaddr)
-			dma_buf_vunmap(obj->import_attach->dmabuf, msm_obj->vaddr);
+		WARN_ON(msm_obj->vaddr);
 
 		/* Don't drop the pages for imported dmabuf, as they are not
 		 * ours, just free the array we allocated:
-- 2.26.2
No real functional change, since this just converts an annoying Oops into a more harmless WARNING backtrace. It's still a driver bug.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
---
 drivers/gpu/drm/drm_gem.c | 8 ++++++++
 1 file changed, 8 insertions(+)
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 7bf628e13023..63bfd97e69d8 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -548,6 +548,10 @@ static void drm_gem_check_release_pagevec(struct pagevec *pvec)
  * set during initialization. If you have special zone constraints, set them
  * after drm_gem_object_init() via mapping_set_gfp_mask(). shmem-core takes care
  * to keep pages in the required zone during swap-in.
+ *
+ * This function is only valid on objects initialized with
+ * drm_gem_object_init(), but not for those initialized with
+ * drm_gem_private_object_init() only.
  */
 struct page **drm_gem_get_pages(struct drm_gem_object *obj)
 {
@@ -556,6 +560,10 @@ struct page **drm_gem_get_pages(struct drm_gem_object *obj)
 	struct pagevec pvec;
 	int i, npages;
 
+	if (WARN_ON(!obj->filp))
+		return ERR_PTR(-EINVAL);
+
 	/* This is the shared memory object that backs the GEM resource */
 	mapping = obj->filp->f_mapping;
Am 11.05.20 um 11:35 schrieb Daniel Vetter:
No real functional change, since this just converts an annoying Oops into a more harmless WARNING backtrace. It's still a driver bug.
Signed-off-by: Daniel Vetter daniel.vetter@intel.com
Acked-by: Thomas Zimmermann tzimmermann@suse.de
 drivers/gpu/drm/drm_gem.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 7bf628e13023..63bfd97e69d8 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -548,6 +548,10 @@ static void drm_gem_check_release_pagevec(struct pagevec *pvec)
  * set during initialization. If you have special zone constraints, set them
  * after drm_gem_object_init() via mapping_set_gfp_mask(). shmem-core takes care
  * to keep pages in the required zone during swap-in.
+ *
+ * This function is only valid on objects initialized with
+ * drm_gem_object_init(), but not for those initialized with
+ * drm_gem_private_object_init() only.
  */
 struct page **drm_gem_get_pages(struct drm_gem_object *obj)
 {
@@ -556,6 +560,10 @@ struct page **drm_gem_get_pages(struct drm_gem_object *obj)
 	struct pagevec pvec;
 	int i, npages;
 
+	if (WARN_ON(!obj->filp))
+		return ERR_PTR(-EINVAL);
+
 	/* This is the shared memory object that backs the GEM resource */
 	mapping = obj->filp->f_mapping;
- Move the shmem helper section to the drm-mm.rst file, next to the vram helpers. Makes a lot more sense there with the now wider scope. Also, that's where all the other backing storage stuff resides. It's just the framebuffer helpers that should be in the kms helper section.
- Try to clarify which functions are for implementing drm_gem_object_funcs, and which are for drivers to call directly. At least one driver screwed that up a bit.
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Noralf Trønnes <noralf@tronnes.org>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
---
 Documentation/gpu/drm-kms-helpers.rst  | 12 --------
 Documentation/gpu/drm-mm.rst           | 12 ++++++++
 drivers/gpu/drm/drm_gem_shmem_helper.c | 39 +++++++++++++++++++++-----
 3 files changed, 44 insertions(+), 19 deletions(-)
diff --git a/Documentation/gpu/drm-kms-helpers.rst b/Documentation/gpu/drm-kms-helpers.rst
index ee730457bf4e..b89ddd06dabb 100644
--- a/Documentation/gpu/drm-kms-helpers.rst
+++ b/Documentation/gpu/drm-kms-helpers.rst
@@ -411,15 +411,3 @@ Legacy CRTC/Modeset Helper Functions Reference
 
 .. kernel-doc:: drivers/gpu/drm/drm_crtc_helper.c
    :export:
-
-SHMEM GEM Helper Reference
-==========================
-
-.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
-   :doc: overview
-
-.. kernel-doc:: include/drm/drm_gem_shmem_helper.h
-   :internal:
-
-.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
-   :export:
diff --git a/Documentation/gpu/drm-mm.rst b/Documentation/gpu/drm-mm.rst
index 1839762044be..c01757b0ac25 100644
--- a/Documentation/gpu/drm-mm.rst
+++ b/Documentation/gpu/drm-mm.rst
@@ -373,6 +373,18 @@ GEM CMA Helper Functions Reference
 .. kernel-doc:: drivers/gpu/drm/drm_gem_cma_helper.c
    :export:
 
+GEM SHMEM Helper Function Reference
+-----------------------------------
+
+.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
+   :doc: overview
+
+.. kernel-doc:: include/drm/drm_gem_shmem_helper.h
+   :internal:
+
+.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
+   :export:
+
 GEM VRAM Helper Functions Reference
 -----------------------------------
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index df31e5782eed..2a70159d50ef 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -103,7 +103,8 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_create);
  * @obj: GEM object to free
  *
  * This function cleans up the GEM object state and frees the memory used to
- * store the object itself.
+ * store the object itself. It should be used to implement
+ * &drm_gem_object_funcs.free.
  */
 void drm_gem_shmem_free_object(struct drm_gem_object *obj)
 {
@@ -214,7 +215,8 @@ EXPORT_SYMBOL(drm_gem_shmem_put_pages);
  * @obj: GEM object
  *
  * This function makes sure the backing pages are pinned in memory while the
- * buffer is exported.
+ * buffer is exported. It should only be used to implement
+ * &drm_gem_object_funcs.pin.
  *
  * Returns:
  * 0 on success or a negative error code on failure.
@@ -232,7 +234,7 @@ EXPORT_SYMBOL(drm_gem_shmem_pin);
  * @obj: GEM object
  *
  * This function removes the requirement that the backing pages are pinned in
- * memory.
+ * memory. It should only be used to implement &drm_gem_object_funcs.unpin.
  */
 void drm_gem_shmem_unpin(struct drm_gem_object *obj)
 {
@@ -285,8 +287,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
  * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
  * @shmem: shmem GEM object
  *
- * This function makes sure that a virtual address exists for the buffer backing
- * the shmem GEM object.
+ * This function makes sure that a contiguous kernel virtual address mapping
+ * exists for the buffer backing the shmem GEM object.
+ *
+ * This function can be used to implement &drm_gem_object_funcs.vmap. But it can
+ * also be called by drivers directly, in which case it will hide the
+ * differences between dma-buf imported and natively allocated objects.
+ *
+ * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
  *
  * Returns:
  * 0 on success or a negative error code on failure.
@@ -330,7 +338,13 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
  * drm_gem_shmem_vunmap - Unmap a virtual mapping fo a shmem GEM object
  * @shmem: shmem GEM object
  *
- * This function removes the virtual address when use count drops to zero.
+ * This function cleans up a kernel virtual address mapping acquired by
+ * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops
+ * to zero.
+ *
+ * This function can be used to implement &drm_gem_object_funcs.vunmap. But it
+ * can also be called by drivers directly, in which case it will hide the
+ * differences between dma-buf imported and natively allocated objects.
  */
 void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr)
 {
@@ -559,6 +573,8 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_mmap);
  * @p: DRM printer
  * @indent: Tab indentation level
  * @obj: GEM object
+ *
+ * This implements the &drm_gem_object_funcs.print_info callback.
  */
 void drm_gem_shmem_print_info(struct drm_printer *p, unsigned int indent,
 			      const struct drm_gem_object *obj)
@@ -577,7 +593,12 @@ EXPORT_SYMBOL(drm_gem_shmem_print_info);
  * @obj: GEM object
  *
  * This function exports a scatter/gather table suitable for PRIME usage by
- * calling the standard DMA mapping API.
+ * calling the standard DMA mapping API. Drivers should not call this function
+ * directly; instead it should only be used as an implementation for
+ * &drm_gem_object_funcs.get_sg_table.
+ *
+ * Drivers who need to acquire a scatter/gather table for objects need to call
+ * drm_gem_shmem_get_pages_sgt() instead.
  *
  * Returns:
  * A pointer to the scatter/gather table of pinned pages or NULL on failure.
@@ -599,6 +620,10 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table);
  * the sg table doesn't exist, the pages are pinned, dma-mapped, and a sg
  * table created.
  *
+ * This is the main function for drivers to get at backing storage, and it
+ * hides any difference between dma-buf imported and natively allocated
+ * objects. drm_gem_shmem_get_sg_table() should not be directly called by
+ * drivers.
+ *
 * Returns:
 * A pointer to the scatter/gather table of pinned pages or errno on failure.
 */
Am 11.05.20 um 11:35 schrieb Daniel Vetter:
Move the shmem helper section to the drm-mm.rst file, next to the vram helpers. Makes a lot more sense there with the now wider scope. Also, that's where the all the other backing storage stuff resides. It's just the framebuffer helpers that should be in the kms helper section.
Try to clarify which functions are for implementing drm_gem_object_funcs, and which are for drivers to call directly. At least one driver screwed that up a bit.
Cc: Gerd Hoffmann kraxel@redhat.com Cc: Rob Herring robh@kernel.org Cc: Noralf Trønnes noralf@tronnes.org Signed-off-by: Daniel Vetter daniel.vetter@intel.com
Reviewed-by: Thomas Zimmermann tzimmermann@suse.de
See below for a suggestion on the help text.
Documentation/gpu/drm-kms-helpers.rst | 12 -------- Documentation/gpu/drm-mm.rst | 12 ++++++++ drivers/gpu/drm/drm_gem_shmem_helper.c | 39 +++++++++++++++++++++----- 3 files changed, 44 insertions(+), 19 deletions(-)
diff --git a/Documentation/gpu/drm-kms-helpers.rst b/Documentation/gpu/drm-kms-helpers.rst
index ee730457bf4e..b89ddd06dabb 100644
--- a/Documentation/gpu/drm-kms-helpers.rst
+++ b/Documentation/gpu/drm-kms-helpers.rst
@@ -411,15 +411,3 @@ Legacy CRTC/Modeset Helper Functions Reference
 
 .. kernel-doc:: drivers/gpu/drm/drm_crtc_helper.c
    :export:
-
-SHMEM GEM Helper Reference
-==========================
-
-.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
-   :doc: overview
-
-.. kernel-doc:: include/drm/drm_gem_shmem_helper.h
-   :internal:
-
-.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
-   :export:
diff --git a/Documentation/gpu/drm-mm.rst b/Documentation/gpu/drm-mm.rst
index 1839762044be..c01757b0ac25 100644
--- a/Documentation/gpu/drm-mm.rst
+++ b/Documentation/gpu/drm-mm.rst
@@ -373,6 +373,18 @@ GEM CMA Helper Functions Reference
 .. kernel-doc:: drivers/gpu/drm/drm_gem_cma_helper.c
    :export:
 
+GEM SHMEM Helper Function Reference
+-----------------------------------
+
+.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
+   :doc: overview
+
+.. kernel-doc:: include/drm/drm_gem_shmem_helper.h
+   :internal:
+
+.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
+   :export:
+
 GEM VRAM Helper Functions Reference
 -----------------------------------
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index df31e5782eed..2a70159d50ef 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -103,7 +103,8 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_create);
  * @obj: GEM object to free
  *
  * This function cleans up the GEM object state and frees the memory used to
- * store the object itself.
+ * store the object itself. It should be used to implement
+ * &drm_gem_object_funcs.free.
It should 'only' be used? Or maybe you can say that it should be used by drivers that don't implement struct drm_driver.gem_create_object.
*/ void drm_gem_shmem_free_object(struct drm_gem_object *obj) { @@ -214,7 +215,8 @@ EXPORT_SYMBOL(drm_gem_shmem_put_pages);
- @obj: GEM object
- This function makes sure the backing pages are pinned in memory while the
- buffer is exported.
- buffer is exported. It should only be used to implement
- &drm_gem_object_funcs.pin.
- Returns:
- 0 on success or a negative error code on failure.
@@ -232,7 +234,7 @@ EXPORT_SYMBOL(drm_gem_shmem_pin);
- @obj: GEM object
- This function removes the requirement that the backing pages are pinned in
- memory.
- memory. It should only be used to implement &drm_gem_object_funcs.unpin.
 */
void drm_gem_shmem_unpin(struct drm_gem_object *obj) { @@ -285,8 +287,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
- drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
- @shmem: shmem GEM object
- This function makes sure that a virtual address exists for the buffer backing
- the shmem GEM object.
- This function makes sure that a contiguous kernel virtual address mapping
- exists for the buffer backing the shmem GEM object.
- This function can be used to implement &drm_gem_object_funcs.vmap. But it can
- also be called by drivers directly, in which case it will hide the
- differences between dma-buf imported and natively allocated objects.
- Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
- Returns:
- 0 on success or a negative error code on failure.
@@ -330,7 +338,13 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
- drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
- @shmem: shmem GEM object
- This function removes the virtual address when use count drops to zero.
- This function cleans up a kernel virtual address mapping acquired by
- drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
- zero.
- This function can be used to implement &drm_gem_object_funcs.vunmap. But it can
- also be called by drivers directly, in which case it will hide the
- differences between dma-buf imported and natively allocated objects.
 */
void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr) { @@ -559,6 +573,8 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_mmap);
- @p: DRM printer
- @indent: Tab indentation level
- @obj: GEM object
- This implements the &drm_gem_object_funcs.info callback.
 */
void drm_gem_shmem_print_info(struct drm_printer *p, unsigned int indent, const struct drm_gem_object *obj) @@ -577,7 +593,12 @@ EXPORT_SYMBOL(drm_gem_shmem_print_info);
- @obj: GEM object
- This function exports a scatter/gather table suitable for PRIME usage by
- calling the standard DMA mapping API.
- calling the standard DMA mapping API. Drivers should not call this function
- directly, instead it should only be used as an implementation for
- &drm_gem_object_funcs.get_sg_table.
- Drivers that need to acquire a scatter/gather table for objects should call
- drm_gem_shmem_get_pages_sgt() instead.
- Returns:
- A pointer to the scatter/gather table of pinned pages or NULL on failure.
@@ -599,6 +620,10 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table);
- the sg table doesn't exist, the pages are pinned, dma-mapped, and a sg
- table created.
- This is the main function for drivers to get at backing storage, and it hides
- any difference between dma-buf imported and natively allocated objects.
- drm_gem_shmem_get_sg_table() should not be directly called by drivers.
- Returns:
- A pointer to the scatter/gather table of pinned pages or errno on failure.
 */
On Mon, May 11, 2020 at 01:12:49PM +0200, Thomas Zimmermann wrote:
Am 11.05.20 um 11:35 schrieb Daniel Vetter:
Move the shmem helper section to the drm-mm.rst file, next to the vram helpers. Makes a lot more sense there with the now wider scope. Also, that's where all the other backing storage stuff resides. It's just the framebuffer helpers that should be in the kms helper section.
Try to clarify which functions are for implementing drm_gem_object_funcs, and which are for drivers to call directly. At least one driver screwed that up a bit.
Cc: Gerd Hoffmann kraxel@redhat.com Cc: Rob Herring robh@kernel.org Cc: Noralf Trønnes noralf@tronnes.org Signed-off-by: Daniel Vetter daniel.vetter@intel.com
Reviewed-by: Thomas Zimmermann tzimmermann@suse.de
See below for a suggestion on the help text.
Documentation/gpu/drm-kms-helpers.rst | 12 -------- Documentation/gpu/drm-mm.rst | 12 ++++++++ drivers/gpu/drm/drm_gem_shmem_helper.c | 39 +++++++++++++++++++++----- 3 files changed, 44 insertions(+), 19 deletions(-)
diff --git a/Documentation/gpu/drm-kms-helpers.rst b/Documentation/gpu/drm-kms-helpers.rst index ee730457bf4e..b89ddd06dabb 100644 --- a/Documentation/gpu/drm-kms-helpers.rst +++ b/Documentation/gpu/drm-kms-helpers.rst @@ -411,15 +411,3 @@ Legacy CRTC/Modeset Helper Functions Reference
.. kernel-doc:: drivers/gpu/drm/drm_crtc_helper.c :export:
-SHMEM GEM Helper Reference
-.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
- :doc: overview
-.. kernel-doc:: include/drm/drm_gem_shmem_helper.h
- :internal:
-.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
- :export:
diff --git a/Documentation/gpu/drm-mm.rst b/Documentation/gpu/drm-mm.rst index 1839762044be..c01757b0ac25 100644 --- a/Documentation/gpu/drm-mm.rst +++ b/Documentation/gpu/drm-mm.rst @@ -373,6 +373,18 @@ GEM CMA Helper Functions Reference .. kernel-doc:: drivers/gpu/drm/drm_gem_cma_helper.c :export:
+GEM SHMEM Helper Function Reference +-----------------------------------
+.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
+   :doc: overview
+.. kernel-doc:: include/drm/drm_gem_shmem_helper.h
+   :internal:
+.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
+   :export:
GEM VRAM Helper Functions Reference
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index df31e5782eed..2a70159d50ef 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -103,7 +103,8 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_create);
- @obj: GEM object to free
- This function cleans up the GEM object state and frees the memory used to
- store the object itself.
- store the object itself. It should be used to implement
- &drm_gem_object_funcs.free.
It should 'only' be used? Or maybe you can say that it should be used by drivers that don't implement struct drm_driver.gem_create_object.
Just looked at this, and I'm not clear what you're aiming for. There doesn't seem to be any misuse for this for other places than the free hook. And I can't really come up with ideas where that would even work.
What kind of confusion are you trying to clarify with your suggestion? Maybe I can then reword that into something that also makes sense for me.
Thanks, Daniel
*/ void drm_gem_shmem_free_object(struct drm_gem_object *obj) { @@ -214,7 +215,8 @@ EXPORT_SYMBOL(drm_gem_shmem_put_pages);
- @obj: GEM object
- This function makes sure the backing pages are pinned in memory while the
- buffer is exported.
- buffer is exported. It should only be used to implement
- &drm_gem_object_funcs.pin.
- Returns:
- 0 on success or a negative error code on failure.
@@ -232,7 +234,7 @@ EXPORT_SYMBOL(drm_gem_shmem_pin);
- @obj: GEM object
- This function removes the requirement that the backing pages are pinned in
- memory.
- memory. It should only be used to implement &drm_gem_object_funcs.unpin.
 */
void drm_gem_shmem_unpin(struct drm_gem_object *obj) { @@ -285,8 +287,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
- drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
- @shmem: shmem GEM object
- This function makes sure that a virtual address exists for the buffer backing
- the shmem GEM object.
- This function makes sure that a contiguous kernel virtual address mapping
- exists for the buffer backing the shmem GEM object.
- This function can be used to implement &drm_gem_object_funcs.vmap. But it can
- also be called by drivers directly, in which case it will hide the
- differences between dma-buf imported and natively allocated objects.
- Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
- Returns:
- 0 on success or a negative error code on failure.
@@ -330,7 +338,13 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
- drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
- @shmem: shmem GEM object
- This function removes the virtual address when use count drops to zero.
- This function cleans up a kernel virtual address mapping acquired by
- drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
- zero.
- This function can be used to implement &drm_gem_object_funcs.vunmap. But it can
- also be called by drivers directly, in which case it will hide the
- differences between dma-buf imported and natively allocated objects.
 */
void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr) { @@ -559,6 +573,8 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_mmap);
- @p: DRM printer
- @indent: Tab indentation level
- @obj: GEM object
- This implements the &drm_gem_object_funcs.info callback.
 */
void drm_gem_shmem_print_info(struct drm_printer *p, unsigned int indent, const struct drm_gem_object *obj) @@ -577,7 +593,12 @@ EXPORT_SYMBOL(drm_gem_shmem_print_info);
- @obj: GEM object
- This function exports a scatter/gather table suitable for PRIME usage by
- calling the standard DMA mapping API.
- calling the standard DMA mapping API. Drivers should not call this function
- directly, instead it should only be used as an implementation for
- &drm_gem_object_funcs.get_sg_table.
- Drivers that need to acquire a scatter/gather table for objects should call
- drm_gem_shmem_get_pages_sgt() instead.
- Returns:
- A pointer to the scatter/gather table of pinned pages or NULL on failure.
@@ -599,6 +620,10 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table);
- the sg table doesn't exist, the pages are pinned, dma-mapped, and a sg
- table created.
- This is the main function for drivers to get at backing storage, and it hides
- any difference between dma-buf imported and natively allocated objects.
- drm_gem_shmem_get_sg_table() should not be directly called by drivers.
- Returns:
- A pointer to the scatter/gather table of pinned pages or errno on failure.
 */
-- Thomas Zimmermann Graphics Driver Developer SUSE Software Solutions Germany GmbH Maxfeldstr. 5, 90409 Nürnberg, Germany (HRB 36809, AG Nürnberg) Geschäftsführer: Felix Imendörffer
Hi
Am 14.05.20 um 22:05 schrieb Daniel Vetter:
On Mon, May 11, 2020 at 01:12:49PM +0200, Thomas Zimmermann wrote:
Am 11.05.20 um 11:35 schrieb Daniel Vetter:
Move the shmem helper section to the drm-mm.rst file, next to the vram helpers. Makes a lot more sense there with the now wider scope. Also, that's where all the other backing storage stuff resides. It's just the framebuffer helpers that should be in the kms helper section.
Try to clarify which functions are for implementing drm_gem_object_funcs, and which are for drivers to call directly. At least one driver screwed that up a bit.
Cc: Gerd Hoffmann kraxel@redhat.com Cc: Rob Herring robh@kernel.org Cc: Noralf Trønnes noralf@tronnes.org Signed-off-by: Daniel Vetter daniel.vetter@intel.com
Reviewed-by: Thomas Zimmermann tzimmermann@suse.de
See below for a suggestion on the help text.
Documentation/gpu/drm-kms-helpers.rst | 12 -------- Documentation/gpu/drm-mm.rst | 12 ++++++++ drivers/gpu/drm/drm_gem_shmem_helper.c | 39 +++++++++++++++++++++----- 3 files changed, 44 insertions(+), 19 deletions(-)
diff --git a/Documentation/gpu/drm-kms-helpers.rst b/Documentation/gpu/drm-kms-helpers.rst index ee730457bf4e..b89ddd06dabb 100644 --- a/Documentation/gpu/drm-kms-helpers.rst +++ b/Documentation/gpu/drm-kms-helpers.rst @@ -411,15 +411,3 @@ Legacy CRTC/Modeset Helper Functions Reference
.. kernel-doc:: drivers/gpu/drm/drm_crtc_helper.c :export:
-SHMEM GEM Helper Reference
-.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
- :doc: overview
-.. kernel-doc:: include/drm/drm_gem_shmem_helper.h
- :internal:
-.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
- :export:
diff --git a/Documentation/gpu/drm-mm.rst b/Documentation/gpu/drm-mm.rst index 1839762044be..c01757b0ac25 100644 --- a/Documentation/gpu/drm-mm.rst +++ b/Documentation/gpu/drm-mm.rst @@ -373,6 +373,18 @@ GEM CMA Helper Functions Reference .. kernel-doc:: drivers/gpu/drm/drm_gem_cma_helper.c :export:
+GEM SHMEM Helper Function Reference +-----------------------------------
+.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
+   :doc: overview
+.. kernel-doc:: include/drm/drm_gem_shmem_helper.h
+   :internal:
+.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
+   :export:
GEM VRAM Helper Functions Reference
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index df31e5782eed..2a70159d50ef 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -103,7 +103,8 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_create);
- @obj: GEM object to free
- This function cleans up the GEM object state and frees the memory used to
- store the object itself.
- store the object itself. It should be used to implement
- &drm_gem_object_funcs.free.
It should 'only' be used? Or maybe you can say that it should be used by drivers that don't implement struct drm_driver.gem_create_object.
Just looked at this, and I'm not clear what you're aiming for. There doesn't seem to be any misuse for this for other places than the free hook. And I can't really come up with ideas where that would even work.
What kind of confusion are you trying to clarify with your suggestion? Maybe I can then reword that into something that also makes sense for me.
It was just nit picking.
I just worried that drivers might try to call this function for cleaning up an embedded instance of the structure; although the function's name clearly indicates otherwise.
Kind of related, I think we should be more strict about using the abstract GEM interfaces. For example, several drivers call drm_gem_shmem_vmap() instead of drm_gem_vmap(). For this free function, we should require drivers to call drm_gem_object_free() instead of the shmem function.
Best regards Thomas
Thanks, Daniel
*/ void drm_gem_shmem_free_object(struct drm_gem_object *obj) { @@ -214,7 +215,8 @@ EXPORT_SYMBOL(drm_gem_shmem_put_pages);
- @obj: GEM object
- This function makes sure the backing pages are pinned in memory while the
- buffer is exported.
- buffer is exported. It should only be used to implement
- &drm_gem_object_funcs.pin.
- Returns:
- 0 on success or a negative error code on failure.
@@ -232,7 +234,7 @@ EXPORT_SYMBOL(drm_gem_shmem_pin);
- @obj: GEM object
- This function removes the requirement that the backing pages are pinned in
- memory.
- memory. It should only be used to implement &drm_gem_object_funcs.unpin.
 */
void drm_gem_shmem_unpin(struct drm_gem_object *obj) { @@ -285,8 +287,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
- drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
- @shmem: shmem GEM object
- This function makes sure that a virtual address exists for the buffer backing
- the shmem GEM object.
- This function makes sure that a contiguous kernel virtual address mapping
- exists for the buffer backing the shmem GEM object.
- This function can be used to implement &drm_gem_object_funcs.vmap. But it can
- also be called by drivers directly, in which case it will hide the
- differences between dma-buf imported and natively allocated objects.
- Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
- Returns:
- 0 on success or a negative error code on failure.
@@ -330,7 +338,13 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
- drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
- @shmem: shmem GEM object
- This function removes the virtual address when use count drops to zero.
- This function cleans up a kernel virtual address mapping acquired by
- drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
- zero.
- This function can be used to implement &drm_gem_object_funcs.vunmap. But it can
- also be called by drivers directly, in which case it will hide the
- differences between dma-buf imported and natively allocated objects.
 */
void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr) { @@ -559,6 +573,8 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_mmap);
- @p: DRM printer
- @indent: Tab indentation level
- @obj: GEM object
- This implements the &drm_gem_object_funcs.info callback.
 */
void drm_gem_shmem_print_info(struct drm_printer *p, unsigned int indent, const struct drm_gem_object *obj) @@ -577,7 +593,12 @@ EXPORT_SYMBOL(drm_gem_shmem_print_info);
- @obj: GEM object
- This function exports a scatter/gather table suitable for PRIME usage by
- calling the standard DMA mapping API.
- calling the standard DMA mapping API. Drivers should not call this function
- directly, instead it should only be used as an implementation for
- &drm_gem_object_funcs.get_sg_table.
- Drivers that need to acquire a scatter/gather table for objects should call
- drm_gem_shmem_get_pages_sgt() instead.
- Returns:
- A pointer to the scatter/gather table of pinned pages or NULL on failure.
@@ -599,6 +620,10 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table);
- the sg table doesn't exist, the pages are pinned, dma-mapped, and a sg
- table created.
- This is the main function for drivers to get at backing storage, and it hides
- any difference between dma-buf imported and natively allocated objects.
- drm_gem_shmem_get_sg_table() should not be directly called by drivers.
- Returns:
- A pointer to the scatter/gather table of pinned pages or errno on failure.
 */
drm_gem_shmem_get_sg_table is meant to implement obj->funcs->get_sg_table, for prime exporting. The one we want is drm_gem_shmem_get_pages_sgt, which also handles imported dma-buf, not just native objects.
v2: Rebase, this stuff moved around in
commit 2f2aa13724d56829d910b2fa8e80c502d388f106 Author: Gerd Hoffmann kraxel@redhat.com Date: Fri Feb 7 08:46:38 2020 +0100
drm/virtio: move virtio_gpu_mem_entry initialization to new function
Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: David Airlie airlied@linux.ie Cc: Gerd Hoffmann kraxel@redhat.com Cc: virtualization@lists.linux-foundation.org --- drivers/gpu/drm/virtio/virtgpu_object.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c index 6ccbd01cd888..346cef5ce251 100644 --- a/drivers/gpu/drm/virtio/virtgpu_object.c +++ b/drivers/gpu/drm/virtio/virtgpu_object.c @@ -150,7 +150,7 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev, if (ret < 0) return -EINVAL;
- shmem->pages = drm_gem_shmem_get_sg_table(&bo->base.base); + shmem->pages = drm_gem_shmem_get_pages_sgt(&bo->base.base); if (!shmem->pages) { drm_gem_shmem_unpin(&bo->base.base); return -EINVAL;
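The split the patch relies on — an exporter-only helper vs. a driver-facing one that also copes with imports — can be sketched as a userspace mock (illustrative names only; the real drm_gem_shmem_get_pages_sgt() pins pages, dma-maps them, and caches the table on the object):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct mock_sgt { int n_entries; };

struct mock_obj {
	bool imported;			/* stands in for obj->import_attach */
	struct mock_sgt *import_sgt;	/* table provided by the exporter */
	struct mock_sgt native_sgt;	/* built from our own shmem pages */
	bool pinned;
};

/* Exporter-facing helper: only valid for natively allocated objects,
 * i.e. only useful as the get_sg_table hook for PRIME export. */
struct mock_sgt *mock_get_sg_table(struct mock_obj *obj)
{
	if (obj->imported)
		return NULL;		/* callers must not use this path */

	obj->pinned = true;		/* pin + dma-map the shmem pages */
	obj->native_sgt.n_entries = 4;
	return &obj->native_sgt;
}

/* Driver-facing helper: hides the imported-vs-native difference. */
struct mock_sgt *mock_get_pages_sgt(struct mock_obj *obj)
{
	if (obj->imported)
		return obj->import_sgt;	/* reuse the exporter's table */
	return mock_get_sg_table(obj);
}
```

The virtio bug fixed above is exactly calling the first helper where the second is needed: on an imported dma-buf the exporter-facing path has no native pages to build a table from.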
Am 11.05.20 um 11:35 schrieb Daniel Vetter:
drm_gem_shmem_get_sg_table is meant to implement obj->funcs->get_sg_table, for prime exporting. The one we want is drm_gem_shmem_get_pages_sgt, which also handles imported dma-buf, not just native objects.
v2: Rebase, this stuff moved around in
commit 2f2aa13724d56829d910b2fa8e80c502d388f106 Author: Gerd Hoffmann kraxel@redhat.com Date: Fri Feb 7 08:46:38 2020 +0100
drm/virtio: move virtio_gpu_mem_entry initialization to new function
Signed-off-by: Daniel Vetter daniel.vetter@intel.com
Acked-by: Thomas Zimmermann tzimmermann@suse.de
Cc: David Airlie airlied@linux.ie Cc: Gerd Hoffmann kraxel@redhat.com Cc: virtualization@lists.linux-foundation.org
drivers/gpu/drm/virtio/virtgpu_object.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c index 6ccbd01cd888..346cef5ce251 100644 --- a/drivers/gpu/drm/virtio/virtgpu_object.c +++ b/drivers/gpu/drm/virtio/virtgpu_object.c @@ -150,7 +150,7 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev, if (ret < 0) return -EINVAL;
- shmem->pages = drm_gem_shmem_get_sg_table(&bo->base.base);
- shmem->pages = drm_gem_shmem_get_pages_sgt(&bo->base.base); if (!shmem->pages) { drm_gem_shmem_unpin(&bo->base.base); return -EINVAL;
There's no direct harm, because for the shmem helpers these are noops on imported buffers. The trouble is in the locks these take - I want to change dma_buf_vmap locking, and so need to make sure that we only ever take certain locks on one side of the dma-buf interface: Either for exporters, or for importers.
Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Dave Airlie airlied@redhat.com Cc: Sean Paul sean@poorly.run Cc: Gerd Hoffmann kraxel@redhat.com Cc: Thomas Zimmermann tzimmermann@suse.de Cc: Alex Deucher alexander.deucher@amd.com Cc: Daniel Vetter daniel.vetter@ffwll.ch Cc: Thomas Gleixner tglx@linutronix.de Cc: Sam Ravnborg sam@ravnborg.org --- drivers/gpu/drm/udl/udl_gem.c | 22 ++++++++++++---------- 1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/drivers/gpu/drm/udl/udl_gem.c b/drivers/gpu/drm/udl/udl_gem.c index b6e26f98aa0a..c68d3e265329 100644 --- a/drivers/gpu/drm/udl/udl_gem.c +++ b/drivers/gpu/drm/udl/udl_gem.c @@ -46,29 +46,31 @@ static void *udl_gem_object_vmap(struct drm_gem_object *obj) if (shmem->vmap_use_count++ > 0) goto out;
- ret = drm_gem_shmem_get_pages(shmem); - if (ret) - goto err_zero_use; - - if (obj->import_attach) + if (obj->import_attach) { shmem->vaddr = dma_buf_vmap(obj->import_attach->dmabuf); - else + } else { + ret = drm_gem_shmem_get_pages(shmem); + if (ret) + goto err; + shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT, VM_MAP, PAGE_KERNEL);
+ if (!shmem->vaddr) + drm_gem_shmem_put_pages(shmem); + } + if (!shmem->vaddr) { DRM_DEBUG_KMS("Failed to vmap pages\n"); ret = -ENOMEM; - goto err_put_pages; + goto err; }
out: mutex_unlock(&shmem->vmap_lock); return shmem->vaddr;
-err_put_pages: - drm_gem_shmem_put_pages(shmem); -err_zero_use: +err: shmem->vmap_use_count = 0; mutex_unlock(&shmem->vmap_lock); return ERR_PTR(ret);
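The control flow the udl patch ends up with — branch on import_attach first, and only the native path touches get/put_pages — can be mocked in userspace like so (names are illustrative; the real code holds shmem->vmap_lock around all of this):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct mock_obj {
	bool imported;		/* stands in for obj->import_attach */
	bool pages_held;
	void *vaddr;
	int use_count;
};

static char backing[16];

/* The import path takes the exporter's mapping via dma_buf_vmap();
 * the native path pins pages first and must drop them again if the
 * vmap step fails, so that the single err label only has to reset
 * the use count. */
void *mock_vmap(struct mock_obj *obj, bool fail_vmap)
{
	if (obj->use_count++ > 0)
		return obj->vaddr;

	if (obj->imported) {
		obj->vaddr = backing;		/* dma_buf_vmap() stand-in */
	} else {
		obj->pages_held = true;		/* get_pages() stand-in */
		obj->vaddr = fail_vmap ? NULL : backing;
		if (!obj->vaddr)
			obj->pages_held = false; /* put_pages() on failure */
	}
	if (!obj->vaddr) {
		obj->use_count = 0;		/* single "err" label */
		return NULL;
	}
	return obj->vaddr;
}
```

This mirrors why the patch collapses err_put_pages/err_zero_use into one label: put_pages is undone right where it was taken, so the error exit no longer depends on which path failed.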
Hi
Am 11.05.20 um 11:35 schrieb Daniel Vetter:
There's no direct harm, because for the shmem helpers these are noops on imported buffers. The trouble is in the locks these take - I want to change dma_buf_vmap locking, and so need to make sure that we only ever take certain locks on one side of the dma-buf interface: Either for exporters, or for importers.
Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Dave Airlie airlied@redhat.com Cc: Sean Paul sean@poorly.run Cc: Gerd Hoffmann kraxel@redhat.com Cc: Thomas Zimmermann tzimmermann@suse.de Cc: Alex Deucher alexander.deucher@amd.com Cc: Daniel Vetter daniel.vetter@ffwll.ch Cc: Thomas Gleixner tglx@linutronix.de Cc: Sam Ravnborg sam@ravnborg.org
drivers/gpu/drm/udl/udl_gem.c | 22 ++++++++++++---------- 1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/drivers/gpu/drm/udl/udl_gem.c b/drivers/gpu/drm/udl/udl_gem.c index b6e26f98aa0a..c68d3e265329 100644 --- a/drivers/gpu/drm/udl/udl_gem.c +++ b/drivers/gpu/drm/udl/udl_gem.c @@ -46,29 +46,31 @@ static void *udl_gem_object_vmap(struct drm_gem_object *obj)
It's still not clear to me why this function exists in the first place. It's the same code as the default implementation, except that it doesn't support cached mappings.
I don't see why udl is special. I'd suggest to try to use the original shmem function and remove this one. Same for the mmap code.
Best regards Thomas
if (shmem->vmap_use_count++ > 0) goto out;
- ret = drm_gem_shmem_get_pages(shmem);
- if (ret)
goto err_zero_use;
- if (obj->import_attach)
- if (obj->import_attach) { shmem->vaddr = dma_buf_vmap(obj->import_attach->dmabuf);
- else
} else {
ret = drm_gem_shmem_get_pages(shmem);
if (ret)
goto err;
shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT, VM_MAP, PAGE_KERNEL);
if (!shmem->vaddr)
drm_gem_shmem_put_pages(shmem);
}
if (!shmem->vaddr) { DRM_DEBUG_KMS("Failed to vmap pages\n"); ret = -ENOMEM;
- goto err_put_pages;
- goto err;
}
out: mutex_unlock(&shmem->vmap_lock); return shmem->vaddr;
-err_put_pages:
- drm_gem_shmem_put_pages(shmem);
-err_zero_use: +err: shmem->vmap_use_count = 0; mutex_unlock(&shmem->vmap_lock); return ERR_PTR(ret);
On Mon, May 11, 2020 at 01:23:38PM +0200, Thomas Zimmermann wrote:
Hi
Am 11.05.20 um 11:35 schrieb Daniel Vetter:
There's no direct harm, because for the shmem helpers these are noops on imported buffers. The trouble is in the locks these take - I want to change dma_buf_vmap locking, and so need to make sure that we only ever take certain locks on one side of the dma-buf interface: Either for exporters, or for importers.
Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Dave Airlie airlied@redhat.com Cc: Sean Paul sean@poorly.run Cc: Gerd Hoffmann kraxel@redhat.com Cc: Thomas Zimmermann tzimmermann@suse.de Cc: Alex Deucher alexander.deucher@amd.com Cc: Daniel Vetter daniel.vetter@ffwll.ch Cc: Thomas Gleixner tglx@linutronix.de Cc: Sam Ravnborg sam@ravnborg.org
drivers/gpu/drm/udl/udl_gem.c | 22 ++++++++++++---------- 1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/drivers/gpu/drm/udl/udl_gem.c b/drivers/gpu/drm/udl/udl_gem.c index b6e26f98aa0a..c68d3e265329 100644 --- a/drivers/gpu/drm/udl/udl_gem.c +++ b/drivers/gpu/drm/udl/udl_gem.c @@ -46,29 +46,31 @@ static void *udl_gem_object_vmap(struct drm_gem_object *obj)
It's still not clear to me why this function exists in the first place. It's the same code as the default implementation, except that it doesn't support cached mappings.
I don't see why udl is special. I'd suggest to try to use the original shmem function and remove this one. Same for the mmap code.
tbh no idea, could be that the usb code is then a bit too inefficient at uploading stuff if it needs to cache flush.
But then on x86 at least everything (except real gpus) is coherent, so cached mappings should be faster.
No idea, but also don't have the hw so not going to touch udl that much. -Daniel
Best regards Thomas
if (shmem->vmap_use_count++ > 0) goto out;
- ret = drm_gem_shmem_get_pages(shmem);
- if (ret)
goto err_zero_use;
- if (obj->import_attach)
- if (obj->import_attach) { shmem->vaddr = dma_buf_vmap(obj->import_attach->dmabuf);
- else
} else {
ret = drm_gem_shmem_get_pages(shmem);
if (ret)
goto err;
shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT, VM_MAP, PAGE_KERNEL);
if (!shmem->vaddr)
drm_gem_shmem_put_pages(shmem);
}
if (!shmem->vaddr) { DRM_DEBUG_KMS("Failed to vmap pages\n"); ret = -ENOMEM;
- goto err_put_pages;
- goto err;
}
out: mutex_unlock(&shmem->vmap_lock); return shmem->vaddr;
-err_put_pages:
- drm_gem_shmem_put_pages(shmem);
-err_zero_use: +err: shmem->vmap_use_count = 0; mutex_unlock(&shmem->vmap_lock); return ERR_PTR(ret);
Hi
Am 11.05.20 um 13:37 schrieb Daniel Vetter:
On Mon, May 11, 2020 at 01:23:38PM +0200, Thomas Zimmermann wrote:
Hi
Am 11.05.20 um 11:35 schrieb Daniel Vetter:
There's no direct harm, because for the shmem helpers these are noops on imported buffers. The trouble is in the locks these take - I want to change dma_buf_vmap locking, and so need to make sure that we only ever take certain locks on one side of the dma-buf interface: Either for exporters, or for importers.
Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Dave Airlie airlied@redhat.com Cc: Sean Paul sean@poorly.run Cc: Gerd Hoffmann kraxel@redhat.com Cc: Thomas Zimmermann tzimmermann@suse.de Cc: Alex Deucher alexander.deucher@amd.com Cc: Daniel Vetter daniel.vetter@ffwll.ch Cc: Thomas Gleixner tglx@linutronix.de Cc: Sam Ravnborg sam@ravnborg.org
drivers/gpu/drm/udl/udl_gem.c | 22 ++++++++++++---------- 1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/drivers/gpu/drm/udl/udl_gem.c b/drivers/gpu/drm/udl/udl_gem.c index b6e26f98aa0a..c68d3e265329 100644 --- a/drivers/gpu/drm/udl/udl_gem.c +++ b/drivers/gpu/drm/udl/udl_gem.c @@ -46,29 +46,31 @@ static void *udl_gem_object_vmap(struct drm_gem_object *obj)
It's still not clear to me why this function exists in the first place. It's the same code as the default implementation, except that it doesn't support cached mappings.
I don't see why udl is special. I'd suggest to try to use the original shmem function and remove this one. Same for the mmap code.
tbh no idea, could be that the usb code is then a bit too inefficient at uploading stuff if it needs to cache flush.
IIRC I was told that some other component (userspace, dma-buf provider) might not work well with cached mappings. But that problem should affect all other shmem-based drivers as well. I'm not aware of any problems here.
The upload code is in udl_render_hline. It's an elaborate format-conversion helper that packs the framebuffer into USB URBs and sends them out. Again, I don't see much of a conceptual difference to other drivers that do similar things on device memory.
But then on x86 at least everything (except real gpus) is coherent, so cached mappings should be faster.
No idea, but also don't have the hw so not going to touch udl that much.
I can help with testing.
Best regards Thomas
-Daniel
Best regards Thomas
if (shmem->vmap_use_count++ > 0) goto out;
- ret = drm_gem_shmem_get_pages(shmem);
- if (ret)
goto err_zero_use;
- if (obj->import_attach)
- if (obj->import_attach) { shmem->vaddr = dma_buf_vmap(obj->import_attach->dmabuf);
- else
} else {
ret = drm_gem_shmem_get_pages(shmem);
if (ret)
goto err;
shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT, VM_MAP, PAGE_KERNEL);
if (!shmem->vaddr)
drm_gem_shmem_put_pages(shmem);
}
if (!shmem->vaddr) { DRM_DEBUG_KMS("Failed to vmap pages\n"); ret = -ENOMEM;
- goto err_put_pages;
- goto err;
}
out: mutex_unlock(&shmem->vmap_lock); return shmem->vaddr;
-err_put_pages:
- drm_gem_shmem_put_pages(shmem);
-err_zero_use: +err: shmem->vmap_use_count = 0; mutex_unlock(&shmem->vmap_lock); return ERR_PTR(ret);
Hi,
given the upcoming removal of this file, I don't know if you really want to merge this patch. If so, please see my comment on patch 6 and
Acked-by: Thomas Zimmermann tzimmermann@suse.de
Am 11.05.20 um 11:35 schrieb Daniel Vetter:
There's no direct harm, because for the shmem helpers these are noops on imported buffers. The trouble is in the locks these take - I want to change dma_buf_vmap locking, and so need to make sure that we only ever take certain locks on one side of the dma-buf interface: Either for exporters, or for importers.
Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Dave Airlie airlied@redhat.com Cc: Sean Paul sean@poorly.run Cc: Gerd Hoffmann kraxel@redhat.com Cc: Thomas Zimmermann tzimmermann@suse.de Cc: Alex Deucher alexander.deucher@amd.com Cc: Daniel Vetter daniel.vetter@ffwll.ch Cc: Thomas Gleixner tglx@linutronix.de Cc: Sam Ravnborg sam@ravnborg.org
drivers/gpu/drm/udl/udl_gem.c | 22 ++++++++++++---------- 1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/drivers/gpu/drm/udl/udl_gem.c b/drivers/gpu/drm/udl/udl_gem.c
index b6e26f98aa0a..c68d3e265329 100644
--- a/drivers/gpu/drm/udl/udl_gem.c
+++ b/drivers/gpu/drm/udl/udl_gem.c
@@ -46,29 +46,31 @@ static void *udl_gem_object_vmap(struct drm_gem_object *obj)
 	if (shmem->vmap_use_count++ > 0)
 		goto out;

-	ret = drm_gem_shmem_get_pages(shmem);
-	if (ret)
-		goto err_zero_use;
-
-	if (obj->import_attach)
+	if (obj->import_attach) {
 		shmem->vaddr = dma_buf_vmap(obj->import_attach->dmabuf);
-	else
+	} else {
+		ret = drm_gem_shmem_get_pages(shmem);
+		if (ret)
+			goto err;
+
 		shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
 				    VM_MAP, PAGE_KERNEL);
+		if (!shmem->vaddr)
+			drm_gem_shmem_put_pages(shmem);
+	}

 	if (!shmem->vaddr) {
 		DRM_DEBUG_KMS("Failed to vmap pages\n");
 		ret = -ENOMEM;
-		goto err_put_pages;
+		goto err;
 	}

 out:
 	mutex_unlock(&shmem->vmap_lock);
 	return shmem->vaddr;

-err_put_pages:
-	drm_gem_shmem_put_pages(shmem);
-err_zero_use:
+err:
 	shmem->vmap_use_count = 0;
 	mutex_unlock(&shmem->vmap_lock);
 	return ERR_PTR(ret);
On Thu, May 14, 2020 at 09:25:18AM +0200, Thomas Zimmermann wrote:
Hi,
given the upcoming removal of this file, I don't know if you really want to merge this patch. If so, please see my comment on patch 6 and
Yeah I can wait for your patch to land, I just looked at that series. I'm kinda just keeping this around as a reminder locally. -Daniel
On Thu, May 14, 2020 at 02:47:57PM +0200, Daniel Vetter wrote:
On Thu, May 14, 2020 at 09:25:18AM +0200, Thomas Zimmermann wrote:
Hi,
given the upcoming removal of this file, I don't know if you really want to merge this patch. If so, please see my comment on patch 6 and
Yeah I can wait for your patch to land, I just looked at that series. I'm kinda just keeping this around as a reminder locally.
Still applied cleanly to drm-misc-next, so I applied it. -Daniel
-- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch
There's no direct harm, because for the shmem helpers these are noops on imported buffers. The trouble is in the locks these take - I want to change dma_buf_vmap locking, and so need to make sure that we only ever take certain locks on one side of the dma-buf interface: Either for exporters, or for importers.
Cc: Gerd Hoffmann kraxel@redhat.com Cc: Rob Herring robh@kernel.org Cc: Noralf Trønnes noralf@tronnes.org Signed-off-by: Daniel Vetter daniel.vetter@intel.com --- drivers/gpu/drm/drm_gem_shmem_helper.c | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 2a70159d50ef..b9cba5cc61c3 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -252,32 +252,33 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 	if (shmem->vmap_use_count++ > 0)
 		return shmem->vaddr;

-	ret = drm_gem_shmem_get_pages(shmem);
-	if (ret)
-		goto err_zero_use;
-
 	if (obj->import_attach) {
 		shmem->vaddr = dma_buf_vmap(obj->import_attach->dmabuf);
 	} else {
 		pgprot_t prot = PAGE_KERNEL;

+		ret = drm_gem_shmem_get_pages(shmem);
+		if (ret)
+			goto err;
+
 		if (!shmem->map_cached)
 			prot = pgprot_writecombine(prot);
 		shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
 				    VM_MAP, prot);
+
+		if (!shmem->vaddr)
+			drm_gem_shmem_put_pages(shmem);
 	}

 	if (!shmem->vaddr) {
 		DRM_DEBUG_KMS("Failed to vmap pages\n");
 		ret = -ENOMEM;
-		goto err_put_pages;
+		goto err;
 	}

 	return shmem->vaddr;

-err_put_pages:
-	drm_gem_shmem_put_pages(shmem);
-err_zero_use:
+err:
 	shmem->vmap_use_count = 0;

 	return ERR_PTR(ret);
Hi
Am 11.05.20 um 11:35 schrieb Daniel Vetter:
There's no direct harm, because for the shmem helpers these are noops on imported buffers. The trouble is in the locks these take - I want to change dma_buf_vmap locking, and so need to make sure that we only ever take certain locks on one side of the dma-buf interface: Either for exporters, or for importers.
Cc: Gerd Hoffmann kraxel@redhat.com Cc: Rob Herring robh@kernel.org Cc: Noralf Trønnes noralf@tronnes.org Signed-off-by: Daniel Vetter daniel.vetter@intel.com
drivers/gpu/drm/drm_gem_shmem_helper.c | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 2a70159d50ef..b9cba5cc61c3 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -252,32 +252,33 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 	if (shmem->vmap_use_count++ > 0)
 		return shmem->vaddr;

-	ret = drm_gem_shmem_get_pages(shmem);
-	if (ret)
-		goto err_zero_use;
-
 	if (obj->import_attach) {
 		shmem->vaddr = dma_buf_vmap(obj->import_attach->dmabuf);
 	} else {
 		pgprot_t prot = PAGE_KERNEL;

+		ret = drm_gem_shmem_get_pages(shmem);
+		if (ret)
+			goto err;
+
 		if (!shmem->map_cached)
 			prot = pgprot_writecombine(prot);
 		shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
 				    VM_MAP, prot);
+
+		if (!shmem->vaddr)
+			drm_gem_shmem_put_pages(shmem);
 	}

 	if (!shmem->vaddr) {
 		DRM_DEBUG_KMS("Failed to vmap pages\n");
 		ret = -ENOMEM;
-		goto err_put_pages;
+		goto err;
 	}

 	return shmem->vaddr;

-err_put_pages:
-	drm_gem_shmem_put_pages(shmem);
I found the new code to be less readable. Maybe keep the error rollback as-is and protect _put_pages() with if (!import_attach).
In any case
Acked-by: Thomas Zimmermann tzimmermann@suse.de
-err_zero_use:
+err:
 	shmem->vmap_use_count = 0;

 	return ERR_PTR(ret);
On Thu, May 14, 2020 at 09:16:54AM +0200, Thomas Zimmermann wrote:
Hi
Am 11.05.20 um 11:35 schrieb Daniel Vetter:
There's no direct harm, because for the shmem helpers these are noops on imported buffers. The trouble is in the locks these take - I want to change dma_buf_vmap locking, and so need to make sure that we only ever take certain locks on one side of the dma-buf interface: Either for exporters, or for importers.
Cc: Gerd Hoffmann kraxel@redhat.com Cc: Rob Herring robh@kernel.org Cc: Noralf Trønnes noralf@tronnes.org Signed-off-by: Daniel Vetter daniel.vetter@intel.com
drivers/gpu/drm/drm_gem_shmem_helper.c | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-)
I found the new code to be less readable. Maybe keep the error rollback as-is and protect _put_pages() with if (!import_attach).
Hm yeah I guess I can leave this as-is mostly, makes at least the diff smaller. Imo it all looks a bit awkward, but what I've done isn't clearly better than just leaving stuff mostly where it was. -Daniel
In any case
Acked-by: Thomas Zimmermann tzimmermann@suse.de
There's no direct harm, because for the shmem helpers these are noops on imported buffers. The trouble is in the locks these take - I want to change dma_buf_vmap locking, and so need to make sure that we only ever take certain locks on one side of the dma-buf interface: Either for exporters, or for importers.
v2: Change the control flow less compared to what's there (Thomas)
Cc: Thomas Zimmermann tzimmermann@suse.de Cc: Gerd Hoffmann kraxel@redhat.com Cc: Rob Herring robh@kernel.org Cc: Noralf Trønnes noralf@tronnes.org Acked-by: Thomas Zimmermann tzimmermann@suse.de Signed-off-by: Daniel Vetter daniel.vetter@intel.com --- drivers/gpu/drm/drm_gem_shmem_helper.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 2a70159d50ef..9540478e0434 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -252,15 +252,15 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 	if (shmem->vmap_use_count++ > 0)
 		return shmem->vaddr;

-	ret = drm_gem_shmem_get_pages(shmem);
-	if (ret)
-		goto err_zero_use;
-
 	if (obj->import_attach) {
 		shmem->vaddr = dma_buf_vmap(obj->import_attach->dmabuf);
 	} else {
 		pgprot_t prot = PAGE_KERNEL;

+		ret = drm_gem_shmem_get_pages(shmem);
+		if (ret)
+			goto err_zero_use;
+
 		if (!shmem->map_cached)
 			prot = pgprot_writecombine(prot);
 		shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
@@ -276,7 +276,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 	return shmem->vaddr;

 err_put_pages:
-	drm_gem_shmem_put_pages(shmem);
+	if (!obj->import_attach)
+		drm_gem_shmem_put_pages(shmem);
 err_zero_use:
 	shmem->vmap_use_count = 0;
Currently this seems to work by converting the sgt into a pages array, and then treating it like a native object. Do the right thing and redirect mmap to the exporter.
With this nothing is calling get_pages anymore on imported dma-buf, and we can start to remove the use of the ->pages array for that case.
v2: Rebase
Cc: Gerd Hoffmann kraxel@redhat.com Cc: Rob Herring robh@kernel.org Cc: Noralf Trønnes noralf@tronnes.org Signed-off-by: Daniel Vetter daniel.vetter@intel.com --- drivers/gpu/drm/drm_gem_shmem_helper.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index b9cba5cc61c3..117a7841e284 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -551,6 +551,9 @@ int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 	/* Remove the fake offset */
 	vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);

+	if (obj->import_attach)
+		return dma_buf_mmap(obj->dma_buf, vma, 0);
+
 	shmem = to_drm_gem_shmem_obj(obj);

 	ret = drm_gem_shmem_get_pages(shmem);
Hi
Am 11.05.20 um 11:35 schrieb Daniel Vetter:
Currently this seems to work by converting the sgt into a pages array, and then treating it like a native object. Do the right thing and redirect mmap to the exporter.
With this nothing is calling get_pages anymore on imported dma-buf, and we can start to remove the use of the ->pages array for that case.
v2: Rebase
Cc: Gerd Hoffmann kraxel@redhat.com Cc: Rob Herring robh@kernel.org Cc: Noralf Trønnes noralf@tronnes.org Signed-off-by: Daniel Vetter daniel.vetter@intel.com
drivers/gpu/drm/drm_gem_shmem_helper.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index b9cba5cc61c3..117a7841e284 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -551,6 +551,9 @@ int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 	/* Remove the fake offset */
 	vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);

+	if (obj->import_attach)
+		return dma_buf_mmap(obj->dma_buf, vma, 0);
Just a question: how does it work with the fake value in vm_pgoffset? The offset is a DRM-specific thing and the dma-buf exporter expects the real offset?
With this question clarified:
Acked-by: Thomas Zimmermann tzimmermann@suse.de
shmem = to_drm_gem_shmem_obj(obj);
ret = drm_gem_shmem_get_pages(shmem);
On Thu, May 14, 2020 at 09:23:37AM +0200, Thomas Zimmermann wrote:
Hi
Am 11.05.20 um 11:35 schrieb Daniel Vetter:
Currently this seems to work by converting the sgt into a pages array, and then treating it like a native object. Do the right thing and redirect mmap to the exporter.
With this nothing is calling get_pages anymore on imported dma-buf, and we can start to remove the use of the ->pages array for that case.
v2: Rebase
Cc: Gerd Hoffmann kraxel@redhat.com Cc: Rob Herring robh@kernel.org Cc: Noralf Trønnes noralf@tronnes.org Signed-off-by: Daniel Vetter daniel.vetter@intel.com
drivers/gpu/drm/drm_gem_shmem_helper.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index b9cba5cc61c3..117a7841e284 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -551,6 +551,9 @@ int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 	/* Remove the fake offset */
 	vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);

+	if (obj->import_attach)
+		return dma_buf_mmap(obj->dma_buf, vma, 0);
Just a question: how does it work with the fake value in vm_pgoffset? The offset is a DRM-specific thing and the dma-buf exporter expects the real offset?
For a drm chardev file descriptor we have one file, but want to let userspace map arbitrary objects. So each object gets allocated an area within the overall drm mmap space, and that's the fake offset. For dma-buf mmap otoh there's a 1:1 mapping between fd and object, so no additional offset is needed. -Daniel
Hi Daniel,
what's your plan for this patch set? I'd need this patch for the udl shmem cleanup.
Best regards Thomas
On Wed, May 27, 2020 at 8:32 PM Thomas Zimmermann tzimmermann@suse.de wrote:
Hi Daniel,
what's your plan for this patch set? I'd need this patch for the udl shmem cleanup.
I was pinging some people for a tested-by, I kinda don't want to push this entirely untested. I think at least one of the rendering drivers using shmem would be nice to run this on, I've pinged Rob Herring a bit. -Daniel
Hi
Am 27.05.20 um 21:34 schrieb Daniel Vetter:
On Wed, May 27, 2020 at 8:32 PM Thomas Zimmermann tzimmermann@suse.de wrote:
Hi Daniel,
what's your plan for this patch set? I'd need this patch for the udl shmem cleanup.
I was pinging some people for a tested-by, I kinda don't want to push this entirely untested. I think at least one of the rendering drivers using shmem would be nice to run this on, I've pinged Rob Herring a bit.
OK, makes sense. FWIW I tested the patchset with udl as secondary adapter. No problems noticed.
Tested-by: Thomas Zimmermann tzimmermann@suse.de
Best regards Thomas
Just a bit of light paranoia. Also sprinkle this check over drm_gem_shmem_get_sg_table, which should only be called when exporting, same for the pin/unpin functions, on which it relies to work correctly.
Cc: Gerd Hoffmann kraxel@redhat.com Cc: Rob Herring robh@kernel.org Cc: Noralf Trønnes noralf@tronnes.org Signed-off-by: Daniel Vetter daniel.vetter@intel.com --- drivers/gpu/drm/drm_gem_shmem_helper.c | 10 ++++++++++ 1 file changed, 10 insertions(+)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 117a7841e284..f7011338813e 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -170,6 +170,8 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 {
 	int ret;

+	WARN_ON(shmem->base.import_attach);
+
 	ret = mutex_lock_interruptible(&shmem->pages_lock);
 	if (ret)
 		return ret;
@@ -225,6 +227,8 @@ int drm_gem_shmem_pin(struct drm_gem_object *obj)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

+	WARN_ON(shmem->base.import_attach);
+
 	return drm_gem_shmem_get_pages(shmem);
 }
 EXPORT_SYMBOL(drm_gem_shmem_pin);
@@ -240,6 +244,8 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

+	WARN_ON(shmem->base.import_attach);
+
 	drm_gem_shmem_put_pages(shmem);
 }
 EXPORT_SYMBOL(drm_gem_shmem_unpin);
@@ -510,6 +516,8 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 	int ret;

+	WARN_ON(shmem->base.import_attach);
+
 	ret = drm_gem_shmem_get_pages(shmem);
 	WARN_ON_ONCE(ret != 0);

@@ -611,6 +619,8 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_object *obj)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

+	WARN_ON(shmem->base.import_attach);
+
 	return drm_prime_pages_to_sg(shmem->pages, obj->size >> PAGE_SHIFT);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table);
Hi
Am 11.05.20 um 11:35 schrieb Daniel Vetter:
Just a bit of light paranoia. Also sprinkle this check over drm_gem_shmem_get_sg_table, which should only be called when exporting, same for the pin/unpin functions, on which it relies to work correctly.
Cc: Gerd Hoffmann kraxel@redhat.com Cc: Rob Herring robh@kernel.org Cc: Noralf Trønnes noralf@tronnes.org Signed-off-by: Daniel Vetter daniel.vetter@intel.com
drivers/gpu/drm/drm_gem_shmem_helper.c | 10 ++++++++++ 1 file changed, 10 insertions(+)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 117a7841e284..f7011338813e 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -170,6 +170,8 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 {
 	int ret;

+	WARN_ON(shmem->base.import_attach);
+
 	ret = mutex_lock_interruptible(&shmem->pages_lock);
 	if (ret)
 		return ret;
@@ -225,6 +227,8 @@ int drm_gem_shmem_pin(struct drm_gem_object *obj)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

+	WARN_ON(shmem->base.import_attach);
I don't understand this change. If a driver pins pages it now has to check that the pages are not imported?
+
 	return drm_gem_shmem_get_pages(shmem);
 }
 EXPORT_SYMBOL(drm_gem_shmem_pin);
@@ -240,6 +244,8 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

+	WARN_ON(shmem->base.import_attach);
+
 	drm_gem_shmem_put_pages(shmem);
 }
 EXPORT_SYMBOL(drm_gem_shmem_unpin);
@@ -510,6 +516,8 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 	int ret;

+	WARN_ON(shmem->base.import_attach);
+
 	ret = drm_gem_shmem_get_pages(shmem);
 	WARN_ON_ONCE(ret != 0);

@@ -611,6 +619,8 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_object *obj)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

+	WARN_ON(shmem->base.import_attach);
+
 	return drm_prime_pages_to_sg(shmem->pages, obj->size >> PAGE_SHIFT);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table);
On Thu, May 14, 2020 at 09:30:04AM +0200, Thomas Zimmermann wrote:
Hi
Am 11.05.20 um 11:35 schrieb Daniel Vetter:
Just a bit of light paranoia. Also sprinkle this check over drm_gem_shmem_get_sg_table, which should only be called when exporting, same for the pin/unpin functions, on which it relies to work correctly.
Cc: Gerd Hoffmann kraxel@redhat.com Cc: Rob Herring robh@kernel.org Cc: Noralf Trønnes noralf@tronnes.org Signed-off-by: Daniel Vetter daniel.vetter@intel.com
drivers/gpu/drm/drm_gem_shmem_helper.c | 10 ++++++++++ 1 file changed, 10 insertions(+)
I don't understand this change. If a driver pins pages it now has to check that the pages are not imported?
Nope. There's two classes of functions in the helpers, and I'm trying to unconfuse them:
- stuff used to implement gem_funcs. These are obviously only ever used on native objects, never on imported ones (on imported ones we try to forward through the dma-buf layer to the exporter). drm_gem_shmem_pin is only used in that role to implement gem_funcs->pin. Calling it on an imported buffer is indeed a bug.
- the other set of functions are for drivers to do their stuff. The interface which (implicitly) pins stuff into places is various set of get_pages, which do have different paths for native and imported objects.
Apologies that I missed your question here, I merged all the patches leading up to this one for now.
Thanks, Daniel
Hi
Am 03.06.20 um 15:12 schrieb Daniel Vetter:
On Thu, May 14, 2020 at 09:30:04AM +0200, Thomas Zimmermann wrote:
Hi
Am 11.05.20 um 11:35 schrieb Daniel Vetter:
Just a bit of light paranoia. Also sprinkle this check over drm_gem_shmem_get_sg_table, which should only be called when exporting, same for the pin/unpin functions, on which it relies to work correctly.
Cc: Gerd Hoffmann kraxel@redhat.com Cc: Rob Herring robh@kernel.org Cc: Noralf Trønnes noralf@tronnes.org Signed-off-by: Daniel Vetter daniel.vetter@intel.com
drivers/gpu/drm/drm_gem_shmem_helper.c | 10 ++++++++++ 1 file changed, 10 insertions(+)
I don't understand this change. If a driver pins pages it now has to check that the pages are not imported?
Nope. There's two classes of functions in the helpers, and I'm trying to unconfuse them:
stuff used to implement gem_funcs. These are obviously only ever used on native objects, never on imported ones (on imported ones we try to forward through the dma-buf layer to the exporter). drm_gem_shmem_pin is only used in that role to implement gem_funcs->pin. Calling it on an imported buffer is indeed a bug.
the other set of functions are for drivers to do their stuff. The interface which (implicitly) pins stuff into places is various set of get_pages, which do have different paths for native and imported objects.
Thanks for explaining. Patch is
Acked-by: Thomas Zimmermann tzimmermann@suse.de
On Mon, Jun 08, 2020 at 04:40:26PM +0200, Thomas Zimmermann wrote:
Hi
Am 03.06.20 um 15:12 schrieb Daniel Vetter:
On Thu, May 14, 2020 at 09:30:04AM +0200, Thomas Zimmermann wrote:
Hi
Am 11.05.20 um 11:35 schrieb Daniel Vetter:
Just a bit of light paranoia. Also sprinkle this check over drm_gem_shmem_get_sg_table, which should only be called when exporting, same for the pin/unpin functions, on which it relies to work correctly.
Cc: Gerd Hoffmann kraxel@redhat.com Cc: Rob Herring robh@kernel.org Cc: Noralf Trønnes noralf@tronnes.org Signed-off-by: Daniel Vetter daniel.vetter@intel.com
drivers/gpu/drm/drm_gem_shmem_helper.c | 10 ++++++++++ 1 file changed, 10 insertions(+)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index 117a7841e284..f7011338813e 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -170,6 +170,8 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 {
 	int ret;
+	WARN_ON(shmem->base.import_attach);
+
 	ret = mutex_lock_interruptible(&shmem->pages_lock);
 	if (ret)
 		return ret;
@@ -225,6 +227,8 @@ int drm_gem_shmem_pin(struct drm_gem_object *obj)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+	WARN_ON(shmem->base.import_attach);
+
I don't understand this change. If a driver pins pages it now has to check that the pages are not imported?
Nope. There's two classes of functions in the helpers, and I'm trying to unconfuse them:
- stuff used to implement gem_funcs. These are obviously only ever used on native objects, never on imported ones (on imported ones we try to forward through the dma-buf layer to the exporter). drm_gem_shmem_pin is only used in that role to implement gem_funcs->pin. Calling it on an imported buffer is indeed a bug.
- the other set of functions is for drivers to do their stuff. The interface which (implicitly) pins stuff into place is the various get_pages functions, which do have different paths for native and imported objects.
Thanks for explaining. Patch is
Thanks for taking a look at all this, last 2 patches now also merged to drm-misc-next. -Daniel
Acked-by: Thomas Zimmermann tzimmermann@suse.de
Apologies that I missed your question here, I merged all the patches leading up to this one for now.
Thanks, Daniel
 	return drm_gem_shmem_get_pages(shmem);
 }
 EXPORT_SYMBOL(drm_gem_shmem_pin);

@@ -240,6 +244,8 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

+	WARN_ON(shmem->base.import_attach);
+
 	drm_gem_shmem_put_pages(shmem);
 }
 EXPORT_SYMBOL(drm_gem_shmem_unpin);

@@ -510,6 +516,8 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 	int ret;

+	WARN_ON(shmem->base.import_attach);
+
 	ret = drm_gem_shmem_get_pages(shmem);
 	WARN_ON_ONCE(ret != 0);

@@ -611,6 +619,8 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_object *obj)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

+	WARN_ON(shmem->base.import_attach);
+
 	return drm_prime_pages_to_sg(shmem->pages, obj->size >> PAGE_SHIFT);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table);
-- Thomas Zimmermann Graphics Driver Developer SUSE Software Solutions Germany GmbH Maxfeldstr. 5, 90409 Nürnberg, Germany (HRB 36809, AG Nürnberg) Geschäftsführer: Felix Imendörffer
-- Thomas Zimmermann Graphics Driver Developer SUSE Software Solutions Germany GmbH Maxfeldstr. 5, 90409 Nürnberg, Germany (HRB 36809, AG Nürnberg) Geschäftsführer: Felix Imendörffer
- Ditch the ->pages array - Make it a private gem bo, which means no shmem object, which means fireworks if anyone calls drm_gem_object_get_pages. But we've just made sure that's all covered.
Cc: Gerd Hoffmann kraxel@redhat.com Cc: Rob Herring robh@kernel.org Cc: Noralf Trønnes noralf@tronnes.org Signed-off-by: Daniel Vetter daniel.vetter@intel.com --- drivers/gpu/drm/drm_gem_shmem_helper.c | 59 ++++++++++---------------- 1 file changed, 23 insertions(+), 36 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index f7011338813e..8c7d4f422b7b 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -35,22 +35,12 @@ static const struct drm_gem_object_funcs drm_gem_shmem_funcs = { .mmap = drm_gem_shmem_mmap, };
-/** - * drm_gem_shmem_create - Allocate an object with the given size - * @dev: DRM device - * @size: Size of the object to allocate - * - * This function creates a shmem GEM object. - * - * Returns: - * A struct drm_gem_shmem_object * on success or an ERR_PTR()-encoded negative - * error code on failure. - */ -struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size) +static struct drm_gem_shmem_object * +__drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private) { struct drm_gem_shmem_object *shmem; struct drm_gem_object *obj; - int ret; + int ret = 0;
size = PAGE_ALIGN(size);
@@ -64,7 +54,10 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t if (!obj->funcs) obj->funcs = &drm_gem_shmem_funcs;
- ret = drm_gem_object_init(dev, obj, size); + if (private) + drm_gem_private_object_init(dev, obj, size); + else + ret = drm_gem_object_init(dev, obj, size); if (ret) goto err_free;
@@ -96,6 +89,21 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
return ERR_PTR(ret); } +/** + * drm_gem_shmem_create - Allocate an object with the given size + * @dev: DRM device + * @size: Size of the object to allocate + * + * This function creates a shmem GEM object. + * + * Returns: + * A struct drm_gem_shmem_object * on success or an ERR_PTR()-encoded negative + * error code on failure. + */ +struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size) +{ + return __drm_gem_shmem_create(dev, size, false); +} EXPORT_SYMBOL_GPL(drm_gem_shmem_create);
/** @@ -115,7 +123,6 @@ void drm_gem_shmem_free_object(struct drm_gem_object *obj) if (obj->import_attach) { shmem->pages_use_count--; drm_prime_gem_destroy(obj, shmem->sgt); - kvfree(shmem->pages); } else { if (shmem->sgt) { dma_unmap_sg(obj->dev->dev, shmem->sgt->sgl, @@ -371,7 +378,7 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv, struct drm_gem_shmem_object *shmem; int ret;
- shmem = drm_gem_shmem_create(dev, size); + shmem = __drm_gem_shmem_create(dev, size, true); if (IS_ERR(shmem)) return shmem;
@@ -695,36 +702,16 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev, struct sg_table *sgt) { size_t size = PAGE_ALIGN(attach->dmabuf->size); - size_t npages = size >> PAGE_SHIFT; struct drm_gem_shmem_object *shmem; - int ret;
shmem = drm_gem_shmem_create(dev, size); if (IS_ERR(shmem)) return ERR_CAST(shmem);
- shmem->pages = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL); - if (!shmem->pages) { - ret = -ENOMEM; - goto err_free_gem; - } - - ret = drm_prime_sg_to_page_addr_arrays(sgt, shmem->pages, NULL, npages); - if (ret < 0) - goto err_free_array; - shmem->sgt = sgt; - shmem->pages_use_count = 1; /* Permanently pinned from our point of view */
DRM_DEBUG_PRIME("size = %zu\n", size);
return &shmem->base; - -err_free_array: - kvfree(shmem->pages); -err_free_gem: - drm_gem_object_put_unlocked(&shmem->base); - - return ERR_PTR(ret); } EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table);
Hi
Am 11.05.20 um 11:35 schrieb Daniel Vetter:
- Ditch the ->pages array
- Make it a private gem bo, which means no shmem object, which means fireworks if anyone calls drm_gem_object_get_pages. But we've just made sure that's all covered.
Cc: Gerd Hoffmann kraxel@redhat.com Cc: Rob Herring robh@kernel.org Cc: Noralf Trønnes noralf@tronnes.org Signed-off-by: Daniel Vetter daniel.vetter@intel.com
drivers/gpu/drm/drm_gem_shmem_helper.c | 59 ++++++++++---------------- 1 file changed, 23 insertions(+), 36 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index f7011338813e..8c7d4f422b7b 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -35,22 +35,12 @@ static const struct drm_gem_object_funcs drm_gem_shmem_funcs = { .mmap = drm_gem_shmem_mmap, };
-/**
- * drm_gem_shmem_create - Allocate an object with the given size
- * @dev: DRM device
- * @size: Size of the object to allocate
- *
- * This function creates a shmem GEM object.
- *
- * Returns:
- * A struct drm_gem_shmem_object * on success or an ERR_PTR()-encoded negative
- * error code on failure.
- */
-struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size)
+static struct drm_gem_shmem_object *
+__drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
 {
 	struct drm_gem_shmem_object *shmem;
 	struct drm_gem_object *obj;
-	int ret;
+	int ret = 0;

 	size = PAGE_ALIGN(size);

@@ -64,7 +54,10 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
 	if (!obj->funcs)
 		obj->funcs = &drm_gem_shmem_funcs;

-	ret = drm_gem_object_init(dev, obj, size);
+	if (private)
+		drm_gem_private_object_init(dev, obj, size);
+	else
+		ret = drm_gem_object_init(dev, obj, size);
 	if (ret)
 		goto err_free;

@@ -96,6 +89,21 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t

 	return ERR_PTR(ret);
 }

+/**
+ * drm_gem_shmem_create - Allocate an object with the given size
+ * @dev: DRM device
+ * @size: Size of the object to allocate
+ *
+ * This function creates a shmem GEM object.
+ *
+ * Returns:
+ * A struct drm_gem_shmem_object * on success or an ERR_PTR()-encoded negative
+ * error code on failure.
+ */
+struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size)
+{
+	return __drm_gem_shmem_create(dev, size, false);
+}
 EXPORT_SYMBOL_GPL(drm_gem_shmem_create);

 /**
@@ -115,7 +123,6 @@ void drm_gem_shmem_free_object(struct drm_gem_object *obj)
 	if (obj->import_attach) {
 		shmem->pages_use_count--;
 		drm_prime_gem_destroy(obj, shmem->sgt);
-		kvfree(shmem->pages);
 	} else {
 		if (shmem->sgt) {
 			dma_unmap_sg(obj->dev->dev, shmem->sgt->sgl,

@@ -371,7 +378,7 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
 	struct drm_gem_shmem_object *shmem;
 	int ret;

-	shmem = drm_gem_shmem_create(dev, size);
+	shmem = __drm_gem_shmem_create(dev, size, true);
 	if (IS_ERR(shmem))
 		return shmem;

@@ -695,36 +702,16 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev,
 				    struct sg_table *sgt)
 {
 	size_t size = PAGE_ALIGN(attach->dmabuf->size);
-	size_t npages = size >> PAGE_SHIFT;
 	struct drm_gem_shmem_object *shmem;
-	int ret;

 	shmem = drm_gem_shmem_create(dev, size);
 	if (IS_ERR(shmem))
 		return ERR_CAST(shmem);

-	shmem->pages = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
-	if (!shmem->pages) {
-		ret = -ENOMEM;
-		goto err_free_gem;
-	}
-
-	ret = drm_prime_sg_to_page_addr_arrays(sgt, shmem->pages, NULL, npages);
-	if (ret < 0)
-		goto err_free_array;
-
 	shmem->sgt = sgt;
-	shmem->pages_use_count = 1; /* Permanently pinned from our point of view */
This counter protected drm_gem_shmem_get_pages() from being executed on imported buffers. I guess that previous patches sorted out all the instances where this could occur. If so, the current patch looks correct. I'm not sure if the overall code is really better than what we have ATM, but anyway
Acked-by: Thomas Zimmermann tzimmermann@suse.de
 	DRM_DEBUG_PRIME("size = %zu\n", size);

 	return &shmem->base;
-
-err_free_array:
-	kvfree(shmem->pages);
-err_free_gem:
-	drm_gem_object_put_unlocked(&shmem->base);
-
-	return ERR_PTR(ret);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table);
On Thu, May 14, 2020 at 09:44:02AM +0200, Thomas Zimmermann wrote:
Hi
Am 11.05.20 um 11:35 schrieb Daniel Vetter:
- Ditch the ->pages array
- Make it a private gem bo, which means no shmem object, which means fireworks if anyone calls drm_gem_object_get_pages. But we've just made sure that's all covered.
Cc: Gerd Hoffmann kraxel@redhat.com Cc: Rob Herring robh@kernel.org Cc: Noralf Trønnes noralf@tronnes.org Signed-off-by: Daniel Vetter daniel.vetter@intel.com
drivers/gpu/drm/drm_gem_shmem_helper.c | 59 ++++++++++---------------- 1 file changed, 23 insertions(+), 36 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index f7011338813e..8c7d4f422b7b 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -35,22 +35,12 @@ static const struct drm_gem_object_funcs drm_gem_shmem_funcs = { .mmap = drm_gem_shmem_mmap, };
-/**
- * drm_gem_shmem_create - Allocate an object with the given size
- * @dev: DRM device
- * @size: Size of the object to allocate
- *
- * This function creates a shmem GEM object.
- *
- * Returns:
- * A struct drm_gem_shmem_object * on success or an ERR_PTR()-encoded negative
- * error code on failure.
- */
-struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size)
+static struct drm_gem_shmem_object *
+__drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
 {
 	struct drm_gem_shmem_object *shmem;
 	struct drm_gem_object *obj;
-	int ret;
+	int ret = 0;

 	size = PAGE_ALIGN(size);

@@ -64,7 +54,10 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
 	if (!obj->funcs)
 		obj->funcs = &drm_gem_shmem_funcs;

-	ret = drm_gem_object_init(dev, obj, size);
+	if (private)
+		drm_gem_private_object_init(dev, obj, size);
+	else
+		ret = drm_gem_object_init(dev, obj, size);
 	if (ret)
 		goto err_free;

@@ -96,6 +89,21 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t

 	return ERR_PTR(ret);
 }

+/**
+ * drm_gem_shmem_create - Allocate an object with the given size
+ * @dev: DRM device
+ * @size: Size of the object to allocate
+ *
+ * This function creates a shmem GEM object.
+ *
+ * Returns:
+ * A struct drm_gem_shmem_object * on success or an ERR_PTR()-encoded negative
+ * error code on failure.
+ */
+struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size)
+{
+	return __drm_gem_shmem_create(dev, size, false);
+}
 EXPORT_SYMBOL_GPL(drm_gem_shmem_create);

 /**
@@ -115,7 +123,6 @@ void drm_gem_shmem_free_object(struct drm_gem_object *obj)
 	if (obj->import_attach) {
 		shmem->pages_use_count--;
 		drm_prime_gem_destroy(obj, shmem->sgt);
-		kvfree(shmem->pages);
 	} else {
 		if (shmem->sgt) {
 			dma_unmap_sg(obj->dev->dev, shmem->sgt->sgl,

@@ -371,7 +378,7 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
 	struct drm_gem_shmem_object *shmem;
 	int ret;

-	shmem = drm_gem_shmem_create(dev, size);
+	shmem = __drm_gem_shmem_create(dev, size, true);
 	if (IS_ERR(shmem))
 		return shmem;

@@ -695,36 +702,16 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev,
 				    struct sg_table *sgt)
 {
 	size_t size = PAGE_ALIGN(attach->dmabuf->size);
-	size_t npages = size >> PAGE_SHIFT;
 	struct drm_gem_shmem_object *shmem;
-	int ret;

 	shmem = drm_gem_shmem_create(dev, size);
 	if (IS_ERR(shmem))
 		return ERR_CAST(shmem);

-	shmem->pages = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
-	if (!shmem->pages) {
-		ret = -ENOMEM;
-		goto err_free_gem;
-	}
-
-	ret = drm_prime_sg_to_page_addr_arrays(sgt, shmem->pages, NULL, npages);
-	if (ret < 0)
-		goto err_free_array;
-
 	shmem->sgt = sgt;
-	shmem->pages_use_count = 1; /* Permanently pinned from our point of view */
This counter protected drm_gem_shmem_get_pages() from being executed on imported buffers. I guess that previous patches sorted out all the instances where this could occur. If so, the current patch looks correct. I'm not sure if the overall code is really better than what we have ATM, but anyway
The goal was to clearly sort these cases out, iirc we had callers of get_pages doing the wrong thing, but I tried to review them all. Some got removed while this series was hanging around in my tree somewhere.
What I wanted to do in the end is replace all mutex_lock with dma_resv_lock, which now should be doable. Except I need to audit all the drivers, and some want a _locked variant since they are already holding the lock. That's roughly the point where I gave up on this endeavour, at least for now.
But if we'd get there then all the various helpers we have (cma, shmem, vram) would more or less properly use dma_resv_lock as their protecting concept. That's kinda neat since with the dynamic dma-buf stuff dma_resv_lock really becomes _the_ buffer lock for drivers, so some motivation to move towards that.
Anyway if you don't feel like this is all that useful without the dma_resv_lock work on top, I guess I can merge up to the doc patch and leave the others out. Not sure myself, thoughts?
Thanks for taking a look. -Daniel
Acked-by: Thomas Zimmermann tzimmermann@suse.de
 	DRM_DEBUG_PRIME("size = %zu\n", size);

 	return &shmem->base;
-
-err_free_array:
-	kvfree(shmem->pages);
-err_free_gem:
-	drm_gem_object_put_unlocked(&shmem->base);
-
-	return ERR_PTR(ret);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table);
-- Thomas Zimmermann Graphics Driver Developer SUSE Software Solutions Germany GmbH Maxfeldstr. 5, 90409 Nürnberg, Germany (HRB 36809, AG Nürnberg) Geschäftsführer: Felix Imendörffer
Hi
Am 14.05.20 um 14:55 schrieb Daniel Vetter:
On Thu, May 14, 2020 at 09:44:02AM +0200, Thomas Zimmermann wrote:
Hi
Am 11.05.20 um 11:35 schrieb Daniel Vetter:
- Ditch the ->pages array
- Make it a private gem bo, which means no shmem object, which means fireworks if anyone calls drm_gem_object_get_pages. But we've just made sure that's all covered.
Cc: Gerd Hoffmann kraxel@redhat.com Cc: Rob Herring robh@kernel.org Cc: Noralf Trønnes noralf@tronnes.org Signed-off-by: Daniel Vetter daniel.vetter@intel.com
drivers/gpu/drm/drm_gem_shmem_helper.c | 59 ++++++++++---------------- 1 file changed, 23 insertions(+), 36 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index f7011338813e..8c7d4f422b7b 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -35,22 +35,12 @@ static const struct drm_gem_object_funcs drm_gem_shmem_funcs = { .mmap = drm_gem_shmem_mmap, };
-/**
- * drm_gem_shmem_create - Allocate an object with the given size
- * @dev: DRM device
- * @size: Size of the object to allocate
- *
- * This function creates a shmem GEM object.
- *
- * Returns:
- * A struct drm_gem_shmem_object * on success or an ERR_PTR()-encoded negative
- * error code on failure.
- */
-struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size)
+static struct drm_gem_shmem_object *
+__drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
 {
 	struct drm_gem_shmem_object *shmem;
 	struct drm_gem_object *obj;
-	int ret;
+	int ret = 0;

 	size = PAGE_ALIGN(size);

@@ -64,7 +54,10 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
 	if (!obj->funcs)
 		obj->funcs = &drm_gem_shmem_funcs;

-	ret = drm_gem_object_init(dev, obj, size);
+	if (private)
+		drm_gem_private_object_init(dev, obj, size);
+	else
+		ret = drm_gem_object_init(dev, obj, size);
 	if (ret)
 		goto err_free;

@@ -96,6 +89,21 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t

 	return ERR_PTR(ret);
 }

+/**
+ * drm_gem_shmem_create - Allocate an object with the given size
+ * @dev: DRM device
+ * @size: Size of the object to allocate
+ *
+ * This function creates a shmem GEM object.
+ *
+ * Returns:
+ * A struct drm_gem_shmem_object * on success or an ERR_PTR()-encoded negative
+ * error code on failure.
+ */
+struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size)
+{
+	return __drm_gem_shmem_create(dev, size, false);
+}
 EXPORT_SYMBOL_GPL(drm_gem_shmem_create);

 /**
@@ -115,7 +123,6 @@ void drm_gem_shmem_free_object(struct drm_gem_object *obj)
 	if (obj->import_attach) {
 		shmem->pages_use_count--;
 		drm_prime_gem_destroy(obj, shmem->sgt);
-		kvfree(shmem->pages);
 	} else {
 		if (shmem->sgt) {
 			dma_unmap_sg(obj->dev->dev, shmem->sgt->sgl,

@@ -371,7 +378,7 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
 	struct drm_gem_shmem_object *shmem;
 	int ret;

-	shmem = drm_gem_shmem_create(dev, size);
+	shmem = __drm_gem_shmem_create(dev, size, true);
 	if (IS_ERR(shmem))
 		return shmem;

@@ -695,36 +702,16 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev,
 				    struct sg_table *sgt)
 {
 	size_t size = PAGE_ALIGN(attach->dmabuf->size);
-	size_t npages = size >> PAGE_SHIFT;
 	struct drm_gem_shmem_object *shmem;
-	int ret;

 	shmem = drm_gem_shmem_create(dev, size);
 	if (IS_ERR(shmem))
 		return ERR_CAST(shmem);

-	shmem->pages = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
-	if (!shmem->pages) {
-		ret = -ENOMEM;
-		goto err_free_gem;
-	}
-
-	ret = drm_prime_sg_to_page_addr_arrays(sgt, shmem->pages, NULL, npages);
-	if (ret < 0)
-		goto err_free_array;
-
 	shmem->sgt = sgt;
-	shmem->pages_use_count = 1; /* Permanently pinned from our point of view */
This counter protected drm_gem_shmem_get_pages() from being executed on imported buffers. I guess that previous patches sorted out all the instances where this could occur. If so, the current patch looks correct. I'm not sure if the overall code is really better than what we have ATM, but anyway
The goal was to clearly sort these cases out, iirc we had callers of get_pages doing the wrong thing, but I tried to review them all. Some got removed while this series was hanging around in my tree somewhere.
What I wanted to do in the end is replace all mutex_lock with dma_resv_lock, which now should be doable. Except I need to audit all the drivers, and some want a _locked variant since they are already holding the lock. That's roughly the point where I gave up on this endeavour, at least for now.
But if we'd get there then all the various helpers we have (cma, shmem, vram) would more or less properly use dma_resv_lock as their protecting concept. That's kinda neat since with the dynamic dma-buf stuff dma_resv_lock really becomes _the_ buffer lock for drivers, so some motivation to move towards that.
Anyway if you don't feel like this is all that useful without the dma_resv_lock work on top, I guess I can merge up to the doc patch and leave the others out. Not sure myself, thoughts?
Don't get me wrong, it's useful. The dma-buf support is just always a bit special and bolted-on in all memory managers.
Please go ahead and merge it if you like. The clean separation of dma-buf calls is better than what I did in my shmem patches. I'll rebase my stuff on top of whatever you end up merging.
Best regards Thomas
Thanks for taking a look. -Daniel
Acked-by: Thomas Zimmermann tzimmermann@suse.de
 	DRM_DEBUG_PRIME("size = %zu\n", size);

 	return &shmem->base;
-
-err_free_array:
-	kvfree(shmem->pages);
-err_free_gem:
-	drm_gem_object_put_unlocked(&shmem->base);
-
-	return ERR_PTR(ret);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table);
-- Thomas Zimmermann Graphics Driver Developer SUSE Software Solutions Germany GmbH Maxfeldstr. 5, 90409 Nürnberg, Germany (HRB 36809, AG Nürnberg) Geschäftsführer: Felix Imendörffer
- Ditch the ->pages array - Make it a private gem bo, which means no shmem object, which means fireworks if anyone calls drm_gem_object_get_pages. But we've just made sure that's all covered.
v2: Rebase
Acked-by: Thomas Zimmermann tzimmermann@suse.de Cc: Gerd Hoffmann kraxel@redhat.com Cc: Rob Herring robh@kernel.org Cc: Noralf Trønnes noralf@tronnes.org Signed-off-by: Daniel Vetter daniel.vetter@intel.com --- drivers/gpu/drm/drm_gem_shmem_helper.c | 59 ++++++++++---------------- 1 file changed, 23 insertions(+), 36 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index 06cee8e97d27..f6854af206d2 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -35,22 +35,12 @@ static const struct drm_gem_object_funcs drm_gem_shmem_funcs = { .mmap = drm_gem_shmem_mmap, };
-/** - * drm_gem_shmem_create - Allocate an object with the given size - * @dev: DRM device - * @size: Size of the object to allocate - * - * This function creates a shmem GEM object. - * - * Returns: - * A struct drm_gem_shmem_object * on success or an ERR_PTR()-encoded negative - * error code on failure. - */ -struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size) +static struct drm_gem_shmem_object * +__drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private) { struct drm_gem_shmem_object *shmem; struct drm_gem_object *obj; - int ret; + int ret = 0;
size = PAGE_ALIGN(size);
@@ -64,7 +54,10 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t if (!obj->funcs) obj->funcs = &drm_gem_shmem_funcs;
- ret = drm_gem_object_init(dev, obj, size); + if (private) + drm_gem_private_object_init(dev, obj, size); + else + ret = drm_gem_object_init(dev, obj, size); if (ret) goto err_free;
@@ -96,6 +89,21 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
return ERR_PTR(ret); } +/** + * drm_gem_shmem_create - Allocate an object with the given size + * @dev: DRM device + * @size: Size of the object to allocate + * + * This function creates a shmem GEM object. + * + * Returns: + * A struct drm_gem_shmem_object * on success or an ERR_PTR()-encoded negative + * error code on failure. + */ +struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size) +{ + return __drm_gem_shmem_create(dev, size, false); +} EXPORT_SYMBOL_GPL(drm_gem_shmem_create);
/** @@ -115,7 +123,6 @@ void drm_gem_shmem_free_object(struct drm_gem_object *obj) if (obj->import_attach) { shmem->pages_use_count--; drm_prime_gem_destroy(obj, shmem->sgt); - kvfree(shmem->pages); } else { if (shmem->sgt) { dma_unmap_sg(obj->dev->dev, shmem->sgt->sgl, @@ -371,7 +378,7 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv, struct drm_gem_shmem_object *shmem; int ret;
- shmem = drm_gem_shmem_create(dev, size); + shmem = __drm_gem_shmem_create(dev, size, true); if (IS_ERR(shmem)) return shmem;
@@ -695,36 +702,16 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev, struct sg_table *sgt) { size_t size = PAGE_ALIGN(attach->dmabuf->size); - size_t npages = size >> PAGE_SHIFT; struct drm_gem_shmem_object *shmem; - int ret;
shmem = drm_gem_shmem_create(dev, size); if (IS_ERR(shmem)) return ERR_CAST(shmem);
- shmem->pages = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL); - if (!shmem->pages) { - ret = -ENOMEM; - goto err_free_gem; - } - - ret = drm_prime_sg_to_page_addr_arrays(sgt, shmem->pages, NULL, npages); - if (ret < 0) - goto err_free_array; - shmem->sgt = sgt; - shmem->pages_use_count = 1; /* Permanently pinned from our point of view */
DRM_DEBUG_PRIME("size = %zu\n", size);
return &shmem->base; - -err_free_array: - kvfree(shmem->pages); -err_free_gem: - drm_gem_object_put(&shmem->base); - - return ERR_PTR(ret); } EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table);
Hi Daniel,
On Wed, 20 May 2020 20:02:32 +0200 Daniel Vetter daniel.vetter@ffwll.ch wrote:
- Ditch the ->pages array
- Make it a private gem bo, which means no shmem object, which means fireworks if anyone calls drm_gem_object_get_pages. But we've just made sure that's all covered.
v2: Rebase
Acked-by: Thomas Zimmermann tzimmermann@suse.de Cc: Gerd Hoffmann kraxel@redhat.com Cc: Rob Herring robh@kernel.org Cc: Noralf Trønnes noralf@tronnes.org Signed-off-by: Daniel Vetter daniel.vetter@intel.com
I see a bunch of
[   38.261313] ------------[ cut here ]------------
[   38.261740] WARNING: CPU: 4 PID: 2945 at drivers/gpu/drm/drm_gem_shmem_helper.c:137 drm_gem_shmem_free_object+0xb4/0xe0
[   38.262676] Modules linked in:
[   38.262949] CPU: 4 PID: 2945 Comm: Xwayland Not tainted 5.7.0-rc1-00111-g9c7e526c43d0 #520
[   38.263667] Hardware name: Radxa ROCK Pi 4 (DT)
[   38.264062] pstate: 60000005 (nZCv daif -PAN -UAO)
[   38.264482] pc : drm_gem_shmem_free_object+0xb4/0xe0
[   38.264916] lr : drm_gem_shmem_free_object+0x38/0xe0
[   38.265348] sp : ffff800011cebbb0
[   38.265639] x29: ffff800011cebbb0 x28: ffff800011cebd88
[   38.266102] x27: 0000000000000000 x26: ffff000072a1f400
[   38.266566] x25: 0000000000000009 x24: ffff000072a1f400
[   38.267029] x23: 0000000000000002 x22: ffff000079409080
[   38.267492] x21: ffff000079409280 x20: ffff00006c304800
[   38.267955] x19: ffff00006c304800 x18: 0000000000000000
[   38.268417] x17: 0000000000000000 x16: 0000000000000000
[   38.268880] x15: 0000000000000000 x14: 0000000000000000
[   38.269343] x13: 0001000000000000 x12: 0000000000000008
[   38.269806] x11: 000000000000ffff x10: 0000000000000000
[   38.270269] x9 : 0000000000000001 x8 : 0000000000210d00
[   38.270732] x7 : 0000000000000001 x6 : ffff800011307980
[   38.271195] x5 : ffff00006641c240 x4 : ffff00006ee1b400
[   38.271656] x3 : 0000000000000004 x2 : aafbc6f338cf6000
[   38.272119] x1 : 0000000000000000 x0 : 00000000ffffffff
[   38.272583] Call trace:
[   38.272799]  drm_gem_shmem_free_object+0xb4/0xe0
[   38.273203]  panfrost_gem_free_object+0xf0/0x128
[   38.273608]  drm_gem_object_free+0x18/0x40
[   38.273967]  drm_gem_object_handle_put_unlocked+0xe4/0xe8
[   38.274439]  drm_gem_object_release_handle+0x6c/0x98
[   38.274872]  drm_gem_handle_delete+0x84/0x140
[   38.275253]  drm_gem_close_ioctl+0x2c/0x40
[   38.275612]  drm_ioctl_kernel+0xb8/0x108
[   38.275954]  drm_ioctl+0x214/0x450
[   38.276255]  ksys_ioctl+0xa0/0xe0
[   38.276546]  __arm64_sys_ioctl+0x1c/0x28
[   38.276891]  el0_svc_common.constprop.0+0x68/0x160
[   38.277310]  do_el0_svc+0x20/0x80
[   38.277602]  el0_sync_handler+0x10c/0x178
[   38.277952]  el0_sync+0x140/0x180
[   38.278242] ---[ end trace db5754ef8b213ce5 ]---
after applying that patch. Didn't have time to dig through it, unfortunately.
drivers/gpu/drm/drm_gem_shmem_helper.c | 59 ++++++++++---------------- 1 file changed, 23 insertions(+), 36 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index 06cee8e97d27..f6854af206d2 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -35,22 +35,12 @@ static const struct drm_gem_object_funcs drm_gem_shmem_funcs = { .mmap = drm_gem_shmem_mmap, };
-/**
- * drm_gem_shmem_create - Allocate an object with the given size
- * @dev: DRM device
- * @size: Size of the object to allocate
- *
- * This function creates a shmem GEM object.
- *
- * Returns:
- * A struct drm_gem_shmem_object * on success or an ERR_PTR()-encoded negative
- * error code on failure.
- */
-struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size)
+static struct drm_gem_shmem_object *
+__drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
 {
 	struct drm_gem_shmem_object *shmem;
 	struct drm_gem_object *obj;
-	int ret;
+	int ret = 0;

 	size = PAGE_ALIGN(size);

@@ -64,7 +54,10 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
 	if (!obj->funcs)
 		obj->funcs = &drm_gem_shmem_funcs;

-	ret = drm_gem_object_init(dev, obj, size);
+	if (private)
+		drm_gem_private_object_init(dev, obj, size);
+	else
+		ret = drm_gem_object_init(dev, obj, size);
 	if (ret)
 		goto err_free;

@@ -96,6 +89,21 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t

 	return ERR_PTR(ret);
 }

+/**
+ * drm_gem_shmem_create - Allocate an object with the given size
+ * @dev: DRM device
+ * @size: Size of the object to allocate
+ *
+ * This function creates a shmem GEM object.
+ *
+ * Returns:
+ * A struct drm_gem_shmem_object * on success or an ERR_PTR()-encoded negative
+ * error code on failure.
+ */
+struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size)
+{
+	return __drm_gem_shmem_create(dev, size, false);
+}
 EXPORT_SYMBOL_GPL(drm_gem_shmem_create);

 /**
@@ -115,7 +123,6 @@ void drm_gem_shmem_free_object(struct drm_gem_object *obj)
 	if (obj->import_attach) {
 		shmem->pages_use_count--;
 		drm_prime_gem_destroy(obj, shmem->sgt);
-		kvfree(shmem->pages);
 	} else {
 		if (shmem->sgt) {
 			dma_unmap_sg(obj->dev->dev, shmem->sgt->sgl,

@@ -371,7 +378,7 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
 	struct drm_gem_shmem_object *shmem;
 	int ret;

-	shmem = drm_gem_shmem_create(dev, size);
+	shmem = __drm_gem_shmem_create(dev, size, true);
 	if (IS_ERR(shmem))
 		return shmem;

@@ -695,36 +702,16 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev,
 				    struct sg_table *sgt)
 {
 	size_t size = PAGE_ALIGN(attach->dmabuf->size);
-	size_t npages = size >> PAGE_SHIFT;
 	struct drm_gem_shmem_object *shmem;
-	int ret;

 	shmem = drm_gem_shmem_create(dev, size);
 	if (IS_ERR(shmem))
 		return ERR_CAST(shmem);

-	shmem->pages = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
-	if (!shmem->pages) {
-		ret = -ENOMEM;
-		goto err_free_gem;
-	}
-
-	ret = drm_prime_sg_to_page_addr_arrays(sgt, shmem->pages, NULL, npages);
-	if (ret < 0)
-		goto err_free_array;
-
 	shmem->sgt = sgt;
-	shmem->pages_use_count = 1; /* Permanently pinned from our point of view */

 	DRM_DEBUG_PRIME("size = %zu\n", size);

 	return &shmem->base;
-
-err_free_array:
-	kvfree(shmem->pages);
-err_free_gem:
-	drm_gem_object_put(&shmem->base);
-
-	return ERR_PTR(ret);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table);
On Fri, 29 May 2020 15:34:28 +0200 Boris Brezillon boris.brezillon@collabora.com wrote:
Hi Daniel,
On Wed, 20 May 2020 20:02:32 +0200 Daniel Vetter daniel.vetter@ffwll.ch wrote:
- Ditch the ->pages array
- Make it a private gem bo, which means no shmem object, which means fireworks if anyone calls drm_gem_object_get_pages. But we've just made sure that's all covered.
v2: Rebase
Acked-by: Thomas Zimmermann tzimmermann@suse.de Cc: Gerd Hoffmann kraxel@redhat.com Cc: Rob Herring robh@kernel.org Cc: Noralf Trønnes noralf@tronnes.org Signed-off-by: Daniel Vetter daniel.vetter@intel.com
I see a bunch of
[   38.261740] WARNING: CPU: 4 PID: 2945 at drivers/gpu/drm/drm_gem_shmem_helper.c:137 drm_gem_shmem_free_object+0xb4/0xe0
[...]
after applying that patch. Didn't have time to dig through it, unfortunately.
Sorry, I forgot to mention that I'm testing on panfrost.
On Wed, 20 May 2020 20:02:32 +0200 Daniel Vetter daniel.vetter@ffwll.ch wrote:
@@ -695,36 +702,16 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev,
 				    struct sg_table *sgt)
 {
 	size_t size = PAGE_ALIGN(attach->dmabuf->size);
-	size_t npages = size >> PAGE_SHIFT;
 	struct drm_gem_shmem_object *shmem;
-	int ret;
 
 	shmem = drm_gem_shmem_create(dev, size);
 	if (IS_ERR(shmem))
 		return ERR_CAST(shmem);
 
-	shmem->pages = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
-	if (!shmem->pages) {
-		ret = -ENOMEM;
-		goto err_free_gem;
-	}
-
-	ret = drm_prime_sg_to_page_addr_arrays(sgt, shmem->pages, NULL, npages);
-	if (ret < 0)
-		goto err_free_array;
-
 	shmem->sgt = sgt;
-	shmem->pages_use_count = 1; /* Permanently pinned from our point of view */
Keep the above line and that should be good.
 	DRM_DEBUG_PRIME("size = %zu\n", size);
 
 	return &shmem->base;
-
-err_free_array:
-	kvfree(shmem->pages);
-err_free_gem:
-	drm_gem_object_put(&shmem->base);
-
-	return ERR_PTR(ret);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table);
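[Editor's note: to spell out the imbalance Boris is pointing at, here is a toy model, not kernel code. With `shmem->pages_use_count = 1` dropped from the import path while the free path still decrements, the count underflows on an imported buffer, which is presumably why the `WARN_ON(shmem->pages_use_count)` check in drm_gem_shmem_free_object fires in the splat above. The struct and function names below merely mirror the kernel ones.]

```c
#include <assert.h>

/* Toy model (not kernel code) of the v2 pages_use_count imbalance. */
struct toy_shmem {
	int pages_use_count;
};

/* v2 import path: the "pages_use_count = 1" assignment was dropped */
static void import_v2(struct toy_shmem *s)
{
	(void)s; /* count stays 0 */
}

/* v2 free path still carried the decrement, so the count underflows;
 * a nonzero return models the WARN_ON(pages_use_count) firing */
static int free_fires_warn_v2(struct toy_shmem *s)
{
	s->pages_use_count--;           /* 0 -> -1 */
	return s->pages_use_count != 0;
}

/* v3 drops the decrement as well, so the count stays balanced */
static int free_fires_warn_v3(struct toy_shmem *s)
{
	return s->pages_use_count != 0;
}
```

Either fix restores balance: keep both the increment and the decrement (Boris's suggestion), or remove both (what v3 below does).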
- Ditch the ->pages array

- Make it a private gem bo, which means no shmem object, which means fireworks if anyone calls drm_gem_object_get_pages. But we've just made sure that's all covered.
v2: Rebase
v3: I forgot to remove the page_count mangling from the free path too. Noticed by Boris while testing.
Cc: Boris Brezillon boris.brezillon@collabora.com
Acked-by: Thomas Zimmermann tzimmermann@suse.de
Cc: Gerd Hoffmann kraxel@redhat.com
Cc: Rob Herring robh@kernel.org
Cc: Noralf Trønnes noralf@tronnes.org
Signed-off-by: Daniel Vetter daniel.vetter@intel.com
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 60 ++++++++++----------------
 1 file changed, 23 insertions(+), 37 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 06cee8e97d27..f750063968ef 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -35,22 +35,12 @@ static const struct drm_gem_object_funcs drm_gem_shmem_funcs = {
 	.mmap = drm_gem_shmem_mmap,
 };
 
-/**
- * drm_gem_shmem_create - Allocate an object with the given size
- * @dev: DRM device
- * @size: Size of the object to allocate
- *
- * This function creates a shmem GEM object.
- *
- * Returns:
- * A struct drm_gem_shmem_object * on success or an ERR_PTR()-encoded negative
- * error code on failure.
- */
-struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size)
+static struct drm_gem_shmem_object *
+__drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
 {
 	struct drm_gem_shmem_object *shmem;
 	struct drm_gem_object *obj;
-	int ret;
+	int ret = 0;
 
 	size = PAGE_ALIGN(size);
 
@@ -64,7 +54,10 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
 	if (!obj->funcs)
 		obj->funcs = &drm_gem_shmem_funcs;
 
-	ret = drm_gem_object_init(dev, obj, size);
+	if (private)
+		drm_gem_private_object_init(dev, obj, size);
+	else
+		ret = drm_gem_object_init(dev, obj, size);
 	if (ret)
 		goto err_free;
 
@@ -96,6 +89,21 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
 	return ERR_PTR(ret);
 }
 
+/**
+ * drm_gem_shmem_create - Allocate an object with the given size
+ * @dev: DRM device
+ * @size: Size of the object to allocate
+ *
+ * This function creates a shmem GEM object.
+ *
+ * Returns:
+ * A struct drm_gem_shmem_object * on success or an ERR_PTR()-encoded negative
+ * error code on failure.
+ */
+struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size)
+{
+	return __drm_gem_shmem_create(dev, size, false);
+}
 EXPORT_SYMBOL_GPL(drm_gem_shmem_create);
 
 /**
@@ -113,9 +121,7 @@ void drm_gem_shmem_free_object(struct drm_gem_object *obj)
 	WARN_ON(shmem->vmap_use_count);
 
 	if (obj->import_attach) {
-		shmem->pages_use_count--;
 		drm_prime_gem_destroy(obj, shmem->sgt);
-		kvfree(shmem->pages);
 	} else {
 		if (shmem->sgt) {
 			dma_unmap_sg(obj->dev->dev, shmem->sgt->sgl,
@@ -371,7 +377,7 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
 	struct drm_gem_shmem_object *shmem;
 	int ret;
 
-	shmem = drm_gem_shmem_create(dev, size);
+	shmem = __drm_gem_shmem_create(dev, size, true);
 	if (IS_ERR(shmem))
 		return shmem;
 
@@ -695,36 +701,16 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev,
 				    struct sg_table *sgt)
 {
 	size_t size = PAGE_ALIGN(attach->dmabuf->size);
-	size_t npages = size >> PAGE_SHIFT;
 	struct drm_gem_shmem_object *shmem;
-	int ret;
 
 	shmem = drm_gem_shmem_create(dev, size);
 	if (IS_ERR(shmem))
 		return ERR_CAST(shmem);
 
-	shmem->pages = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
-	if (!shmem->pages) {
-		ret = -ENOMEM;
-		goto err_free_gem;
-	}
-
-	ret = drm_prime_sg_to_page_addr_arrays(sgt, shmem->pages, NULL, npages);
-	if (ret < 0)
-		goto err_free_array;
-
 	shmem->sgt = sgt;
-	shmem->pages_use_count = 1; /* Permanently pinned from our point of view */
 
 	DRM_DEBUG_PRIME("size = %zu\n", size);
 
 	return &shmem->base;
-
-err_free_array:
-	kvfree(shmem->pages);
-err_free_gem:
-	drm_gem_object_put(&shmem->base);
-
-	return ERR_PTR(ret);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table);
On Fri, 29 May 2020 16:05:42 +0200 Daniel Vetter daniel.vetter@ffwll.ch wrote:
- Ditch the ->pages array
- Make it a private gem bo, which means no shmem object, which means fireworks if anyone calls drm_gem_object_get_pages. But we've just made sure that's all covered.
v2: Rebase
v3: I forgot to remove the page_count mangling from the free path too. Noticed by Boris while testing.
Cc: Boris Brezillon boris.brezillon@collabora.com
Tested-by: Boris Brezillon boris.brezillon@collabora.com
Acked-by: Thomas Zimmermann tzimmermann@suse.de Cc: Gerd Hoffmann kraxel@redhat.com Cc: Rob Herring robh@kernel.org Cc: Noralf Trønnes noralf@tronnes.org Signed-off-by: Daniel Vetter daniel.vetter@intel.com
[full v3 patch quoted above trimmed]
Hi Daniel,

this patch causes a segmentation fault.

On 11.05.20 at 11:35, Daniel Vetter wrote:
- Ditch the ->pages array
- Make it a private gem bo, which means no shmem object, which means fireworks if anyone calls drm_gem_object_get_pages. But we've just made sure that's all covered.
Cc: Gerd Hoffmann kraxel@redhat.com
Cc: Rob Herring robh@kernel.org
Cc: Noralf Trønnes noralf@tronnes.org
Signed-off-by: Daniel Vetter daniel.vetter@intel.com
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 59 ++++++++++----------------
 1 file changed, 23 insertions(+), 36 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index f7011338813e..8c7d4f422b7b 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -35,22 +35,12 @@ static const struct drm_gem_object_funcs drm_gem_shmem_funcs = {
 	.mmap = drm_gem_shmem_mmap,
 };
 
-/**
- * drm_gem_shmem_create - Allocate an object with the given size
- * @dev: DRM device
- * @size: Size of the object to allocate
- *
- * This function creates a shmem GEM object.
- *
- * Returns:
- * A struct drm_gem_shmem_object * on success or an ERR_PTR()-encoded negative
- * error code on failure.
- */
-struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size)
+static struct drm_gem_shmem_object *
+__drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
 {
 	struct drm_gem_shmem_object *shmem;
 	struct drm_gem_object *obj;
-	int ret;
+	int ret = 0;
 
 	size = PAGE_ALIGN(size);
 
@@ -64,7 +54,10 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
 	if (!obj->funcs)
 		obj->funcs = &drm_gem_shmem_funcs;
 
-	ret = drm_gem_object_init(dev, obj, size);
+	if (private)
+		drm_gem_private_object_init(dev, obj, size);
This call doesn't set obj->filp, which is dereferenced further below at [1]. This happens from dumb_create().
Best regards Thomas
[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/driv...
+	else
+		ret = drm_gem_object_init(dev, obj, size);
 	if (ret)
 		goto err_free;
 
@@ -96,6 +89,21 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
 	return ERR_PTR(ret);
 }
 
+/**
+ * drm_gem_shmem_create - Allocate an object with the given size
+ * @dev: DRM device
+ * @size: Size of the object to allocate
+ *
+ * This function creates a shmem GEM object.
+ *
+ * Returns:
+ * A struct drm_gem_shmem_object * on success or an ERR_PTR()-encoded negative
+ * error code on failure.
+ */
+struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size)
+{
+	return __drm_gem_shmem_create(dev, size, false);
+}
 EXPORT_SYMBOL_GPL(drm_gem_shmem_create);
 
 /**
@@ -115,7 +123,6 @@ void drm_gem_shmem_free_object(struct drm_gem_object *obj)
 	if (obj->import_attach) {
 		shmem->pages_use_count--;
 		drm_prime_gem_destroy(obj, shmem->sgt);
-		kvfree(shmem->pages);
 	} else {
 		if (shmem->sgt) {
 			dma_unmap_sg(obj->dev->dev, shmem->sgt->sgl,
@@ -371,7 +378,7 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
 	struct drm_gem_shmem_object *shmem;
 	int ret;
 
-	shmem = drm_gem_shmem_create(dev, size);
+	shmem = __drm_gem_shmem_create(dev, size, true);
 	if (IS_ERR(shmem))
 		return shmem;
 
@@ -695,36 +702,16 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev,
 				    struct sg_table *sgt)
 {
 	size_t size = PAGE_ALIGN(attach->dmabuf->size);
-	size_t npages = size >> PAGE_SHIFT;
 	struct drm_gem_shmem_object *shmem;
-	int ret;
 
 	shmem = drm_gem_shmem_create(dev, size);
 	if (IS_ERR(shmem))
 		return ERR_CAST(shmem);
 
-	shmem->pages = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
-	if (!shmem->pages) {
-		ret = -ENOMEM;
-		goto err_free_gem;
-	}
-
-	ret = drm_prime_sg_to_page_addr_arrays(sgt, shmem->pages, NULL, npages);
-	if (ret < 0)
-		goto err_free_array;
-
 	shmem->sgt = sgt;
-	shmem->pages_use_count = 1; /* Permanently pinned from our point of view */
 
 	DRM_DEBUG_PRIME("size = %zu\n", size);
 
 	return &shmem->base;
-
-err_free_array:
-	kvfree(shmem->pages);
-err_free_gem:
-	drm_gem_object_put_unlocked(&shmem->base);
-
-	return ERR_PTR(ret);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table);
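[Editor's note: the crash Thomas describes can be illustrated with a toy model, not kernel code. drm_gem_private_object_init() leaves obj->filp NULL because private objects have no backing shmem file, so any path that later dereferences obj->filp (as drm_gem_get_pages() does, reached here from dumb_create()) faults. The names below only mirror the kernel ones; the behavior is simplified to a NULL check.]

```c
#include <assert.h>
#include <stddef.h>

/* Toy model (not kernel code) of the obj->filp problem. */
struct toy_gem_obj {
	const char *filp; /* stands in for the shmem struct file */
};

/* models drm_gem_object_init(), which allocates a shmem file */
static void gem_object_init(struct toy_gem_obj *obj)
{
	obj->filp = "shmem-file";
}

/* models drm_gem_private_object_init(): no shmem backing at all */
static void gem_private_object_init(struct toy_gem_obj *obj)
{
	obj->filp = NULL;
}

/* models drm_gem_get_pages(), which reads pages through obj->filp;
 * returns 0 on the private path instead of crashing */
static int get_pages_would_work(const struct toy_gem_obj *obj)
{
	return obj->filp != NULL;
}
```

This is why routing dumb-buffer creation through the private path is fatal: the object looks valid until something asks for its pages.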
On Mon, 11 May 2020 11:35:45 +0200 Daniel Vetter daniel.vetter@ffwll.ch wrote:
Hi all,
I've started this a while ago, with the idea to move shmem helpers over to dma_resv_lock. Big prep work for that was to untangle the layering between functions called by drivers, and functions used to implement drm_gem_object_funcs.
I didn't ever get to the locking part, but I think the cleanups here are worth it stand-alone still.
Comments, review and testing very much welcome.
Cheers, Daniel
Daniel Vetter (9):
  drm/msm: Don't call dma_buf_vunmap without _vmap
  drm/gem: WARN if drm_gem_get_pages is called on a private obj
  drm/doc: Some polish for shmem helpers
  drm/virtio: Call the right shmem helpers
  drm/udl: Don't call get/put_pages on imported dma-buf
  drm/shmem-helpers: Don't call get/put_pages on imported dma-buf in vmap
  drm/shmem-helpers: Redirect mmap for imported dma-buf
  drm/shmem-helpers: Ensure get_pages is not called on imported dma-buf
  drm/shmem-helpers: Simplify dma-buf importing
With the fix suggested on patch 9 (or something similar to initialize pages_use_count to 1 when importing a dma-buf), this patchset seems to work on panfrost:
Tested-by: Boris Brezillon boris.brezillon@collabora.com
 Documentation/gpu/drm-kms-helpers.rst   |  12 ---
 Documentation/gpu/drm-mm.rst            |  12 +++
 drivers/gpu/drm/drm_gem.c               |   8 ++
 drivers/gpu/drm/drm_gem_shmem_helper.c  | 128 ++++++++++++++----------
 drivers/gpu/drm/msm/msm_gem.c           |   3 +-
 drivers/gpu/drm/udl/udl_gem.c           |  22 ++--
 drivers/gpu/drm/virtio/virtgpu_object.c |   2 +-
 7 files changed, 111 insertions(+), 76 deletions(-)