From: Gustavo Padovan gustavo.padovan@collabora.co.uk
Hi,
This is a first proposal to discuss the addition of in-fences support to DRM. It adds a new struct to fence.c to abstract the use of sync_file in DRM drivers. The new struct fence_collection contains an array with all the fences that an atomic commit needs to wait on:
/**
 * struct fence_collection - aggregate fences together
 * @num_fences: number of fences in the collection.
 * @user_data: user data.
 * @func: user callback to put user data.
 * @fences: array of @num_fences fences.
 */
struct fence_collection {
	int num_fences;
	void *user_data;
	collection_put_func_t func;
	struct fence *fences[];
};
The fence_collection is allocated and filled by sync_file_fences_get(), and the atomic_commit helpers can use fence_collection_wait() to wait for the fences to signal.
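To make the intended lifecycle concrete, here is a condensed sketch of how the pieces introduced by this series fit together (error handling elided):

	/* at atomic check time: resolve the user-supplied sync_file fd */
	state->fences = sync_file_fences_get(state->fence_fd);

	/* at commit time: block until every fence in the collection signals */
	fence_collection_wait(state->fences);

	/* at plane state destruction: drop the fence refs and the sync_file */
	fence_collection_put(state->fences);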
These patches depend on the sync ABI rework:
https://www.spinics.net/lists/dri-devel/msg102795.html
and the patch to de-stage the sync framework:
https://www.spinics.net/lists/dri-devel/msg102799.html
I also hacked together some sync support into modetest for testing:
https://git.collabora.com/cgit/user/padovan/libdrm.git/log/?h=atomic
Gustavo
Gustavo Padovan (6):
  drm/fence: add FENCE_FD property to planes
  dma-buf/fence: add struct fence_collection
  dma-buf/sync_file: add sync_file_fences_get()
  dma-buf/fence: add fence_collection_put()
  dma-buf/fence: add fence_collection_wait()
  drm/fence: support fence_collection on atomic commit
 drivers/dma-buf/fence.c             | 33 +++++++++++++++++++++++++++++++++
 drivers/dma-buf/sync_file.c         | 36 ++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/drm_atomic.c        | 13 +++++++++++++
 drivers/gpu/drm/drm_atomic_helper.c | 10 ++++++----
 drivers/gpu/drm/drm_crtc.c          |  7 +++++++
 include/drm/drm_crtc.h              |  5 ++++-
 include/linux/fence.h               | 19 +++++++++++++++++++
 include/linux/sync_file.h           |  8 ++++++++
 8 files changed, 126 insertions(+), 5 deletions(-)
From: Gustavo Padovan gustavo.padovan@collabora.co.uk
FENCE_FD can now be set by the user during an atomic IOCTL; it will be used by atomic_commit to wait until the sync_file is signalled, i.e., until the framebuffer is ready for scanout.
Signed-off-by: Gustavo Padovan gustavo.padovan@collabora.co.uk
---
 drivers/gpu/drm/drm_atomic.c        | 4 ++++
 drivers/gpu/drm/drm_atomic_helper.c | 1 +
 drivers/gpu/drm/drm_crtc.c          | 7 +++++++
 include/drm/drm_crtc.h              | 3 +++
 4 files changed, 15 insertions(+)
diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c
index 8fb469c..8bc364c 100644
--- a/drivers/gpu/drm/drm_atomic.c
+++ b/drivers/gpu/drm/drm_atomic.c
@@ -609,6 +609,8 @@ int drm_atomic_plane_set_property(struct drm_plane *plane,
 		drm_atomic_set_fb_for_plane(state, fb);
 		if (fb)
 			drm_framebuffer_unreference(fb);
+	} else if (property == config->prop_fence_fd) {
+		state->fence_fd = val;
 	} else if (property == config->prop_crtc_id) {
 		struct drm_crtc *crtc = drm_crtc_find(dev, val);
 		return drm_atomic_set_crtc_for_plane(state, crtc);
@@ -666,6 +668,8 @@ drm_atomic_plane_get_property(struct drm_plane *plane,
 
 	if (property == config->prop_fb_id) {
 		*val = (state->fb) ? state->fb->base.id : 0;
+	} else if (property == config->prop_fence_fd) {
+		*val = state->fence_fd;
 	} else if (property == config->prop_crtc_id) {
 		*val = (state->crtc) ? state->crtc->base.id : 0;
 	} else if (property == config->prop_crtc_x) {
diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
index 2b430b0..4f91f84 100644
--- a/drivers/gpu/drm/drm_atomic_helper.c
+++ b/drivers/gpu/drm/drm_atomic_helper.c
@@ -2594,6 +2594,7 @@ void drm_atomic_helper_plane_reset(struct drm_plane *plane)
 	if (plane->state) {
 		plane->state->plane = plane;
 		plane->state->rotation = BIT(DRM_ROTATE_0);
+		plane->state->fence_fd = -1;
 	}
 }
 EXPORT_SYMBOL(drm_atomic_helper_plane_reset);
diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
index 65258ac..165f199 100644
--- a/drivers/gpu/drm/drm_crtc.c
+++ b/drivers/gpu/drm/drm_crtc.c
@@ -1291,6 +1291,7 @@ int drm_universal_plane_init(struct drm_device *dev, struct drm_plane *plane,
 
 	if (drm_core_check_feature(dev, DRIVER_ATOMIC)) {
 		drm_object_attach_property(&plane->base, config->prop_fb_id, 0);
+		drm_object_attach_property(&plane->base, config->prop_fence_fd, -1);
 		drm_object_attach_property(&plane->base, config->prop_crtc_id, 0);
 		drm_object_attach_property(&plane->base, config->prop_crtc_x, 0);
 		drm_object_attach_property(&plane->base, config->prop_crtc_y, 0);
@@ -1546,6 +1547,12 @@ static int drm_mode_create_standard_properties(struct drm_device *dev)
 		return -ENOMEM;
 	dev->mode_config.prop_fb_id = prop;
 
+	prop = drm_property_create_signed_range(dev, DRM_MODE_PROP_ATOMIC,
+			"FENCE_FD", -1, INT_MAX);
+	if (!prop)
+		return -ENOMEM;
+	dev->mode_config.prop_fence_fd = prop;
+
 	prop = drm_property_create_object(dev, DRM_MODE_PROP_ATOMIC,
 			"CRTC_ID", DRM_MODE_OBJECT_CRTC);
 	if (!prop)
diff --git a/include/drm/drm_crtc.h b/include/drm/drm_crtc.h
index 8c7fb3d..a8f6ec0 100644
--- a/include/drm/drm_crtc.h
+++ b/include/drm/drm_crtc.h
@@ -1239,6 +1239,7 @@ struct drm_connector {
  * @crtc: currently bound CRTC, NULL if disabled
  * @fb: currently bound framebuffer
  * @fence: optional fence to wait for before scanning out @fb
+ * @fence_fd: fd representing the sync_fence
  * @crtc_x: left position of visible portion of plane on crtc
  * @crtc_y: upper position of visible portion of plane on crtc
  * @crtc_w: width of visible portion of plane on crtc
@@ -1257,6 +1258,7 @@ struct drm_plane_state {
 	struct drm_crtc *crtc;   /* do not write directly, use drm_atomic_set_crtc_for_plane() */
 	struct drm_framebuffer *fb;  /* do not write directly, use drm_atomic_set_fb_for_plane() */
 	struct fence *fence;
+	int fence_fd;
 
 	/* Signed dest location allows it to be partially off screen */
 	int32_t crtc_x, crtc_y;
@@ -2098,6 +2100,7 @@ struct drm_mode_config {
 	struct drm_property *prop_crtc_w;
 	struct drm_property *prop_crtc_h;
 	struct drm_property *prop_fb_id;
+	struct drm_property *prop_fence_fd;
 	struct drm_property *prop_crtc_id;
 	struct drm_property *prop_active;
 	struct drm_property *prop_mode_id;
Hi,
On 2016-03-24 03:47, Gustavo Padovan wrote:
From: Gustavo Padovan gustavo.padovan@collabora.co.uk
FENCE_FD can now be set by the user during an atomic IOCTL; it will be used by atomic_commit to wait until the sync_file is signalled, i.e., until the framebuffer is ready for scanout.
It seems that this patch makes Linux DRM dependent on Android, which uses explicit sync interfaces. So was there any consensus that the Android sync driver should be used for DMA buffer synchronization as a Linux standard?

I see the Android sync driver still exists in staging. Isn't the implicit sync interface for DMA devices the Linux standard? I don't see why Android-specific things are spreading into Linux DRM.
Thanks, Inki Dae
Signed-off-by: Gustavo Padovan gustavo.padovan@collabora.co.uk
---
 drivers/gpu/drm/drm_atomic.c        | 4 ++++
 drivers/gpu/drm/drm_atomic_helper.c | 1 +
 drivers/gpu/drm/drm_crtc.c          | 7 +++++++
 include/drm/drm_crtc.h              | 3 +++
 4 files changed, 15 insertions(+)

diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c
index 8fb469c..8bc364c 100644
--- a/drivers/gpu/drm/drm_atomic.c
+++ b/drivers/gpu/drm/drm_atomic.c
@@ -609,6 +609,8 @@ int drm_atomic_plane_set_property(struct drm_plane *plane,
 		drm_atomic_set_fb_for_plane(state, fb);
 		if (fb)
 			drm_framebuffer_unreference(fb);
+	} else if (property == config->prop_fence_fd) {
+		state->fence_fd = val;
 	} else if (property == config->prop_crtc_id) {
 		struct drm_crtc *crtc = drm_crtc_find(dev, val);
 		return drm_atomic_set_crtc_for_plane(state, crtc);
@@ -666,6 +668,8 @@ drm_atomic_plane_get_property(struct drm_plane *plane,
 
 	if (property == config->prop_fb_id) {
 		*val = (state->fb) ? state->fb->base.id : 0;
+	} else if (property == config->prop_fence_fd) {
+		*val = state->fence_fd;
 	} else if (property == config->prop_crtc_id) {
 		*val = (state->crtc) ? state->crtc->base.id : 0;
 	} else if (property == config->prop_crtc_x) {
diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
index 2b430b0..4f91f84 100644
--- a/drivers/gpu/drm/drm_atomic_helper.c
+++ b/drivers/gpu/drm/drm_atomic_helper.c
@@ -2594,6 +2594,7 @@ void drm_atomic_helper_plane_reset(struct drm_plane *plane)
 	if (plane->state) {
 		plane->state->plane = plane;
 		plane->state->rotation = BIT(DRM_ROTATE_0);
+		plane->state->fence_fd = -1;
 	}
 }
 EXPORT_SYMBOL(drm_atomic_helper_plane_reset);
diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
index 65258ac..165f199 100644
--- a/drivers/gpu/drm/drm_crtc.c
+++ b/drivers/gpu/drm/drm_crtc.c
@@ -1291,6 +1291,7 @@ int drm_universal_plane_init(struct drm_device *dev, struct drm_plane *plane,
 
 	if (drm_core_check_feature(dev, DRIVER_ATOMIC)) {
 		drm_object_attach_property(&plane->base, config->prop_fb_id, 0);
+		drm_object_attach_property(&plane->base, config->prop_fence_fd, -1);
 		drm_object_attach_property(&plane->base, config->prop_crtc_id, 0);
 		drm_object_attach_property(&plane->base, config->prop_crtc_x, 0);
 		drm_object_attach_property(&plane->base, config->prop_crtc_y, 0);
@@ -1546,6 +1547,12 @@ static int drm_mode_create_standard_properties(struct drm_device *dev)
 		return -ENOMEM;
 	dev->mode_config.prop_fb_id = prop;
 
+	prop = drm_property_create_signed_range(dev, DRM_MODE_PROP_ATOMIC,
+			"FENCE_FD", -1, INT_MAX);
+	if (!prop)
+		return -ENOMEM;
+	dev->mode_config.prop_fence_fd = prop;
+
 	prop = drm_property_create_object(dev, DRM_MODE_PROP_ATOMIC,
 			"CRTC_ID", DRM_MODE_OBJECT_CRTC);
 	if (!prop)
diff --git a/include/drm/drm_crtc.h b/include/drm/drm_crtc.h
index 8c7fb3d..a8f6ec0 100644
--- a/include/drm/drm_crtc.h
+++ b/include/drm/drm_crtc.h
@@ -1239,6 +1239,7 @@ struct drm_connector {
  * @crtc: currently bound CRTC, NULL if disabled
  * @fb: currently bound framebuffer
  * @fence: optional fence to wait for before scanning out @fb
+ * @fence_fd: fd representing the sync_fence
  * @crtc_x: left position of visible portion of plane on crtc
  * @crtc_y: upper position of visible portion of plane on crtc
  * @crtc_w: width of visible portion of plane on crtc
@@ -1257,6 +1258,7 @@ struct drm_plane_state {
 	struct drm_crtc *crtc;   /* do not write directly, use drm_atomic_set_crtc_for_plane() */
 	struct drm_framebuffer *fb;  /* do not write directly, use drm_atomic_set_fb_for_plane() */
 	struct fence *fence;
+	int fence_fd;
 
 	/* Signed dest location allows it to be partially off screen */
 	int32_t crtc_x, crtc_y;
@@ -2098,6 +2100,7 @@ struct drm_mode_config {
 	struct drm_property *prop_crtc_w;
 	struct drm_property *prop_crtc_h;
 	struct drm_property *prop_fb_id;
+	struct drm_property *prop_fence_fd;
 	struct drm_property *prop_crtc_id;
 	struct drm_property *prop_active;
 	struct drm_property *prop_mode_id;
On Wed, Mar 23, 2016 at 2:47 PM, Gustavo Padovan gustavo@padovan.org wrote:
From: Gustavo Padovan gustavo.padovan@collabora.co.uk
FENCE_FD can now be set by the user during an atomic IOCTL; it will be used by atomic_commit to wait until the sync_file is signalled, i.e., until the framebuffer is ready for scanout.
ok, so I've been slacking on writing up the reasons that I don't like the idea of using a property for fd's (including fence fd's).. I did at one point assume we'd use properties for fence fd's, but that idea falls apart pretty quickly when you think about the 'struct file' vs fd lifecycle. It could *possibly* work if it was a write-only property, but I don't think that is a good solution.
The issue is that 'struct file' / 'int fd' have a fundamentally different lifecycle compared to 'kms obj' / 'u32 obj_id'. For the kms objects (like framebuffers) where we can use them with properties, the id is tied to the kernel side object. This is not true for file descriptors. Resulting in a few problems:
1) if it was a readable property, reading it would (should) take a reference.. we can't just return fence_fd that was previously set (since in the mean time userspace might have close()d the fd). So we have to return a fresh fd. And if userspace (like modetest) doesn't realize it is responsible to close() that fd we have a leak
2) basically we shouldn't be tracking fd's on the kernel side, since we can only count on them being valid during the duration of the ioctl call. Once the call returns, you must assume userspace has close()d the fd. So in the ioctl we need to convert fd -> file, and from then on out track the file object (or in this case the underlying fence object).
I guess we *could* have something analogous to dmabuf, where we import the fence fd and get a kms drm_fence object (with an id tied to the kernel side object), and then use a property to attach the drm_fence object.. but that seems kind of pointless and just trying to force the 'everything is a property' mantra.
I think it is really better to pass in an array of 'struct { u32 plane; int fd }' (which the atomic ioctl code converts into 'struct fence's and attaches to the plane state) and an array of 'struct { u32 crtc; int fd }' filled in on the kernel side for the out-fences.
(I guess I can't think of any case where we'd have more out-fences than in-fences so we could possibly re-use the same array.. but I don't think we have to care about that sort of micro-optimization.. on the gpu side, the submit/execbuf ioctl already passes in a larger table of gem obj id's and reloc info, and that happens even more frequently than atomic ioctl.)
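To sketch it out - the struct and field names here are just invented for illustration - something like:

	struct drm_atomic_in_fence {
		__u32 plane;	/* plane object id */
		__s32 fd;	/* in-fence fd from userspace, or -1 */
	};

	struct drm_atomic_out_fence {
		__u32 crtc;	/* crtc object id */
		__s32 fd;	/* out-fence fd, filled in by the kernel */
	};

with the two arrays passed in alongside the rest of the atomic ioctl args.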
BR, -R
Signed-off-by: Gustavo Padovan gustavo.padovan@collabora.co.uk
---
 drivers/gpu/drm/drm_atomic.c        | 4 ++++
 drivers/gpu/drm/drm_atomic_helper.c | 1 +
 drivers/gpu/drm/drm_crtc.c          | 7 +++++++
 include/drm/drm_crtc.h              | 3 +++
 4 files changed, 15 insertions(+)
diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c
index 8fb469c..8bc364c 100644
--- a/drivers/gpu/drm/drm_atomic.c
+++ b/drivers/gpu/drm/drm_atomic.c
@@ -609,6 +609,8 @@ int drm_atomic_plane_set_property(struct drm_plane *plane,
 		drm_atomic_set_fb_for_plane(state, fb);
 		if (fb)
 			drm_framebuffer_unreference(fb);
+	} else if (property == config->prop_fence_fd) {
+		state->fence_fd = val;
 	} else if (property == config->prop_crtc_id) {
 		struct drm_crtc *crtc = drm_crtc_find(dev, val);
 		return drm_atomic_set_crtc_for_plane(state, crtc);
@@ -666,6 +668,8 @@ drm_atomic_plane_get_property(struct drm_plane *plane,
 
 	if (property == config->prop_fb_id) {
 		*val = (state->fb) ? state->fb->base.id : 0;
+	} else if (property == config->prop_fence_fd) {
+		*val = state->fence_fd;
 	} else if (property == config->prop_crtc_id) {
 		*val = (state->crtc) ? state->crtc->base.id : 0;
 	} else if (property == config->prop_crtc_x) {
diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
index 2b430b0..4f91f84 100644
--- a/drivers/gpu/drm/drm_atomic_helper.c
+++ b/drivers/gpu/drm/drm_atomic_helper.c
@@ -2594,6 +2594,7 @@ void drm_atomic_helper_plane_reset(struct drm_plane *plane)
 	if (plane->state) {
 		plane->state->plane = plane;
 		plane->state->rotation = BIT(DRM_ROTATE_0);
+		plane->state->fence_fd = -1;
 	}
 }
 EXPORT_SYMBOL(drm_atomic_helper_plane_reset);
diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
index 65258ac..165f199 100644
--- a/drivers/gpu/drm/drm_crtc.c
+++ b/drivers/gpu/drm/drm_crtc.c
@@ -1291,6 +1291,7 @@ int drm_universal_plane_init(struct drm_device *dev, struct drm_plane *plane,
 
 	if (drm_core_check_feature(dev, DRIVER_ATOMIC)) {
 		drm_object_attach_property(&plane->base, config->prop_fb_id, 0);
+		drm_object_attach_property(&plane->base, config->prop_fence_fd, -1);
 		drm_object_attach_property(&plane->base, config->prop_crtc_id, 0);
 		drm_object_attach_property(&plane->base, config->prop_crtc_x, 0);
 		drm_object_attach_property(&plane->base, config->prop_crtc_y, 0);
@@ -1546,6 +1547,12 @@ static int drm_mode_create_standard_properties(struct drm_device *dev)
 		return -ENOMEM;
 	dev->mode_config.prop_fb_id = prop;
 
+	prop = drm_property_create_signed_range(dev, DRM_MODE_PROP_ATOMIC,
+			"FENCE_FD", -1, INT_MAX);
+	if (!prop)
+		return -ENOMEM;
+	dev->mode_config.prop_fence_fd = prop;
+
 	prop = drm_property_create_object(dev, DRM_MODE_PROP_ATOMIC,
 			"CRTC_ID", DRM_MODE_OBJECT_CRTC);
 	if (!prop)
diff --git a/include/drm/drm_crtc.h b/include/drm/drm_crtc.h
index 8c7fb3d..a8f6ec0 100644
--- a/include/drm/drm_crtc.h
+++ b/include/drm/drm_crtc.h
@@ -1239,6 +1239,7 @@ struct drm_connector {
  * @crtc: currently bound CRTC, NULL if disabled
  * @fb: currently bound framebuffer
  * @fence: optional fence to wait for before scanning out @fb
+ * @fence_fd: fd representing the sync_fence
  * @crtc_x: left position of visible portion of plane on crtc
  * @crtc_y: upper position of visible portion of plane on crtc
  * @crtc_w: width of visible portion of plane on crtc
@@ -1257,6 +1258,7 @@ struct drm_plane_state {
 	struct drm_crtc *crtc;   /* do not write directly, use drm_atomic_set_crtc_for_plane() */
 	struct drm_framebuffer *fb;  /* do not write directly, use drm_atomic_set_fb_for_plane() */
 	struct fence *fence;
+	int fence_fd;
 
 	/* Signed dest location allows it to be partially off screen */
 	int32_t crtc_x, crtc_y;
@@ -2098,6 +2100,7 @@ struct drm_mode_config {
 	struct drm_property *prop_crtc_w;
 	struct drm_property *prop_crtc_h;
 	struct drm_property *prop_fb_id;
+	struct drm_property *prop_fence_fd;
 	struct drm_property *prop_crtc_id;
 	struct drm_property *prop_active;
 	struct drm_property *prop_mode_id;
--
2.5.0
Hi,
On 5 April 2016 at 13:36, Rob Clark robdclark@gmail.com wrote:
ok, so I've been slacking on writing up the reasons that I don't like the idea of using a property for fd's (including fence fd's).. I did at one point assume we'd use properties for fence fd's, but that idea falls apart pretty quickly when you think about the 'struct file' vs fd lifecycle. It could *possibly* work if it was a write-only property, but I don't think that is a good solution.
I've been assuming that it would have to be write-only; I don't believe there would be any meaningful usecases for read. Do you have any in mind, or is it just a symmetry/cleanliness thing?
The issue is that 'struct file' / 'int fd' have a fundamentally different lifecycle compared to 'kms obj' / 'u32 obj_id'. For the kms objects (like framebuffers) where we can use them with properties, the id is tied to the kernel side object. This is not true for file descriptors. Resulting in a few problems:
Surely the fence FD is tied to the kernel-side struct fence, in exactly the same way, no?
1) if it was a readable property, reading it would (should) take a reference.. we can't just return fence_fd that was previously set (since in the mean time userspace might have close()d the fd). So we have to return a fresh fd. And if userspace (like modetest) doesn't realize it is responsible to close() that fd we have a leak
Again, assuming that read would always return -1.
2) basically we shouldn't be tracking fd's on the kernel side, since we can only count on them being valid during the duration of the ioctl call. Once the call returns, you must assume userspace has close()d the fd. So in the ioctl we need to convert fd -> file, and from then on out track the file object (or in this case the underlying fence object).
Right, it would have to be the same as KMS objects, where userspace passes in an ID (in this case an entry in a non-IDR table, but still), and the kernel maps that to struct fence and handles the lifetime. So, almost exactly the same as what we do with KMS objects ...
I guess we *could* have something analogous to dmabuf, where we import the fence fd and get a kms drm_fence object (with an id tied to the kernel side object), and then use a property to attach the drm_fence object.. but that seems kind of pointless and just trying to force the 'everything is a property' mantra.
I think that would be pointless indirection as well.
I think it is really better to pass in an array of 'struct { u32 plane; int fd }' (which the atomic ioctl code converts into 'struct fence's and attaches to the plane state) and an array of 'struct { u32 crtc; int fd }' filled in on the kernel side for the out-fences.
Mmm, it definitely makes ioctl parsing harder, and still you rely on drivers to implement the more-difficult-to-not-screw-up part, which (analogous to crtc_state->event) is the driver managing the lifecycle of that component of the state. We already enforce this with events though, and the difficult part wasn't in the userspace interface, but the backend handling. This doesn't change at all regardless of whether we use a property or an external array, so ...
Cheers, Daniel
On Tue, Apr 5, 2016 at 8:57 AM, Daniel Stone daniel@fooishbar.org wrote:
Hi,
On 5 April 2016 at 13:36, Rob Clark robdclark@gmail.com wrote:
ok, so I've been slacking on writing up the reasons that I don't like the idea of using a property for fd's (including fence fd's).. I did at one point assume we'd use properties for fence fd's, but that idea falls apart pretty quickly when you think about the 'struct file' vs fd lifecycle. It could *possibly* work if it was a write-only property, but I don't think that is a good solution.
I've been assuming that it would have to be write-only; I don't believe there would be any meaningful usecases for read. Do you have any in mind, or is it just a symmetry/cleanliness thing?
no, don't see a use-case for it.. but this patch didn't seem to be preventing it. And it was storing the fence_fd on the kernel side which is a no-no as well.
The issue is that 'struct file' / 'int fd' have a fundamentally different lifecycle compared to 'kms obj' / 'u32 obj_id'. For the kms objects (like framebuffers) where we can use them with properties, the id is tied to the kernel side object. This is not true for file descriptors. Resulting in a few problems:
Surely the fence FD is tied to the kernel-side struct fence, in exactly the same way, no?
well, what I mean is you can't keep around the int fd on the kernel side, like this patch does
A write-only property, which immediately (ie. during the ioctl call) is converted into a fence object, would work. Although given that we need to handle fences differently (ie. not a property) for out-fences, it seems odd to shoehorn them into a property for in-fences.
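Ie. roughly something like this in the plane set_property path, reusing sync_file_fences_get() from this series - the point being that the fd is resolved immediately and never stored:

	} else if (property == config->prop_fence_fd) {
		/* the fd is only valid for the duration of the ioctl */
		state->fences = sync_file_fences_get(val);
		if (!state->fences)
			return -EINVAL;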
1) if it was a readable property, reading it would (should) take a reference.. we can't just return fence_fd that was previously set (since in the mean time userspace might have close()d the fd). So we have to return a fresh fd. And if userspace (like modetest) doesn't realize it is responsible to close() that fd we have a leak
Again, assuming that read would always return -1.
2) basically we shouldn't be tracking fd's on the kernel side, since we can only count on them being valid during the duration of the ioctl call. Once the call returns, you must assume userspace has close()d the fd. So in the ioctl we need to convert fd -> file, and from then on out track the file object (or in this case the underlying fence object).
Right, it would have to be the same as KMS objects, where userspace passes in an ID (in this case an entry in a non-IDR table, but still), and the kernel maps that to struct fence and handles the lifetime. So, almost exactly the same as what we do with KMS objects ...
I guess we *could* have something analogous to dmabuf, where we import the fence fd and get a kms drm_fence object (with an id tied to the kernel side object), and then use a property to attach the drm_fence object.. but that seems kind of pointless and just trying to force the 'everything is a property' mantra.
I think that would be pointless indirection as well.
I think it is really better to pass in an array of 'struct { u32 plane; int fd }' (which the atomic ioctl code converts into 'struct fence's and attaches to the plane state) and an array of 'struct { u32 crtc; int fd }' filled in on the kernel side for the out-fences.
Mmm, it definitely makes ioctl parsing harder, and still you rely on drivers to implement the more-difficult-to-not-screw-up part, which (analogous to crtc_state->event) is the driver managing the lifecycle of that component of the state. We already enforce this with events though, and the difficult part wasn't in the userspace interface, but the backend handling. This doesn't change at all regardless of whether we use a property or an external array, so ...
hmm, I'm assuming that the in/out arrays are handled in drm_mode_atomic_ioctl() and the drivers never see 'em..
BR, -R
Cheers, Daniel
Hi,
On 5 April 2016 at 15:04, Rob Clark robdclark@gmail.com wrote:
On Tue, Apr 5, 2016 at 8:57 AM, Daniel Stone daniel@fooishbar.org wrote:
I've been assuming that it would have to be write-only; I don't believe there would be any meaningful usecases for read. Do you have any in mind, or is it just a symmetry/cleanliness thing?
no, don't see a use-case for it.. but this patch didn't seem to be preventing it. And it was storing the fence_fd on the kernel side which is a no-no as well.
Yeah, both should be fixed. :)
The issue is that 'struct file' / 'int fd' have a fundamentally different lifecycle compared to 'kms obj' / 'u32 obj_id'. For the kms objects (like framebuffers) where we can use them with properties, the id is tied to the kernel side object. This is not true for file descriptors. Resulting in a few problems:
Surely the fence FD is tied to the kernel-side struct fence, in exactly the same way, no?
well, what I mean is you can't keep around the int fd on the kernel side, like this patch does
A write-only property, which immediately (ie. during the ioctl call) is converted into a fence object, would work. Although given that we need to handle fences differently (ie. not a property) for out-fences, it seems odd to shoehorn them into a property for in-fences.
Depends on how you look at it, I guess. From the point of view of all fence-like things being consistent, yes, it falls down. But from the point of view of in-fences being attached to an FB, and out-fences (like events) being separately attached to a request, it's a lot more consistent.
I think it is really better to pass in an array of 'struct { u32 plane; int fd }' (which the atomic ioctl code converts into 'struct fence's and attaches to the plane state) and an array of 'struct { u32 crtc; int fd }' filled in on the kernel side for the out-fences.
Mmm, it definitely makes ioctl parsing harder, and still you rely on drivers to implement the more-difficult-to-not-screw-up part, which (analogous to crtc_state->event) is the driver managing the lifecycle of that component of the state. We already enforce this with events though, and the difficult part wasn't in the userspace interface, but the backend handling. This doesn't change at all regardless of whether we use a property or an external array, so ...
hmm, I'm assuming that the in/out arrays are handled in drm_mode_atomic_ioctl() and the drivers never see 'em..
True, but it complicates the (already not hugely straightforward) parsing that the ioctl has to do. It also makes extending the ioctl a little harder to do in future, because you're adding in two variable-size elements, and have to do some fairly complicated parsing to even figure out what the size _should_ be. So I'd rather not do it if there was any way out of it; at the very least it'd have to be userptr rather than array.
Cheers, Daniel
On Tue, Apr 5, 2016 at 10:19 AM, Daniel Stone daniel@fooishbar.org wrote:
hmm, I'm assuming that the in/out arrays are handled in drm_mode_atomic_ioctl() and the drivers never see 'em..
True, but it complicates the (already not hugely straightforward) parsing that the ioctl has to do. It also makes extending the ioctl a little harder to do in future, because you're adding in two variable-size elements, and have to do some fairly complicated parsing to even figure out what the size _should_ be. So I'd rather not do it if there was any way out of it; at the very least it'd have to be userptr rather than array.
oh, I was assuming ptr to a user array (rather than var length ioctl struct) for both in and out fences. I guess my wording didn't make that clear. But yeah, it would be madness to do anything else.
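Ie. something along these lines appended to struct drm_mode_atomic (field names made up here), keeping the ioctl struct itself fixed-size with the variable-length data behind user pointers:

	__u32 count_in_fences;
	__u32 count_out_fences;
	__u64 in_fences_ptr;	/* userptr to array of { u32 plane; s32 fd; } */
	__u64 out_fences_ptr;	/* userptr to array of { u32 crtc; s32 fd; } */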
BR, -R
From: Gustavo Padovan gustavo.padovan@collabora.co.uk
The struct aggregates fences that we need to wait on before proceeding with some specific operation. In DRM, for example, we may wait for a group of fences to signal before we scan out the buffers related to those fences.
Signed-off-by: Gustavo Padovan gustavo.padovan@collabora.co.uk
---
 include/linux/fence.h | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)
diff --git a/include/linux/fence.h b/include/linux/fence.h
index 605bd88..3d1151f 100644
--- a/include/linux/fence.h
+++ b/include/linux/fence.h
@@ -104,6 +104,22 @@ struct fence_cb {
 	fence_func_t func;
 };
 
+typedef void (*collection_put_func_t)(void *data);
+
+/**
+ * struct fence_collection - aggregate fences together
+ * @num_fences: number of fences in the collection.
+ * @user_data: user data.
+ * @func: user callback to put user data.
+ * @fences: array of @num_fences fences.
+ */
+struct fence_collection {
+	int num_fences;
+	void *user_data;
+	collection_put_func_t func;
+	struct fence *fences[];
+};
+
 /**
  * struct fence_ops - operations implemented for fence
  * @get_driver_name: returns the driver name.
From: Gustavo Padovan gustavo.padovan@collabora.co.uk
Create a function that, given a sync_file descriptor, returns a fence_collection containing all the fences in the sync_file.
Signed-off-by: Gustavo Padovan gustavo.padovan@collabora.co.uk
---
 drivers/dma-buf/sync_file.c | 36 ++++++++++++++++++++++++++++++++++++
 include/linux/sync_file.h   |  8 ++++++++
 2 files changed, 44 insertions(+)
diff --git a/drivers/dma-buf/sync_file.c b/drivers/dma-buf/sync_file.c
index b67876b..16a3ef7 100644
--- a/drivers/dma-buf/sync_file.c
+++ b/drivers/dma-buf/sync_file.c
@@ -122,6 +122,42 @@ void sync_file_install(struct sync_file *sync_file, int fd)
 }
 EXPORT_SYMBOL(sync_file_install);
 
+static void sync_file_fence_collection_put(void *data)
+{
+	struct sync_file *sync_file = data;
+
+	sync_file_put(sync_file);
+}
+
+struct fence_collection *sync_file_fences_get(int fd)
+{
+	struct fence_collection *collection;
+	struct sync_file *sync_file;
+	int i;
+
+	sync_file = sync_file_fdget(fd);
+	if (!sync_file)
+		return NULL;
+
+	collection = kzalloc(offsetof(struct fence_collection,
+				      fences[sync_file->num_fences]),
+			     GFP_KERNEL);
+	if (!collection)
+		return NULL;
+
+	collection->num_fences = sync_file->num_fences;
+	collection->user_data = sync_file;
+	collection->func = sync_file_fence_collection_put;
+
+	for (i = 0; i < sync_file->num_fences; ++i) {
+		collection->fences[i] = sync_file->cbs[i].fence;
+		fence_get(collection->fences[i]);
+	}
+
+	return collection;
+}
+EXPORT_SYMBOL(sync_file_fences_get);
+
 static void sync_file_add_pt(struct sync_file *sync_file, int *i,
 			     struct fence *fence)
 {
diff --git a/include/linux/sync_file.h b/include/linux/sync_file.h
index 7b7a89d..1b9386a 100644
--- a/include/linux/sync_file.h
+++ b/include/linux/sync_file.h
@@ -103,4 +103,12 @@ void sync_file_put(struct sync_file *sync_file);
  */
 void sync_file_install(struct sync_file *sync_file, int fd);
 
+/**
+ * sync_file_fences_get - get the fence collection related to the fd
+ * @fd: file descriptor to look for a fence collection
+ *
+ * Ensures @fd references a valid sync_file, increments the refcount of the
+ * backing file. Returns the fence_collection or NULL in case of error.
+ */
+struct fence_collection *sync_file_fences_get(int fd);
+
 #endif /* _LINUX_SYNC_H */
From: Gustavo Padovan gustavo.padovan@collabora.co.uk
Put the fence_collection data: call fence_put() on all fences and then the user's put callback.
Signed-off-by: Gustavo Padovan gustavo.padovan@collabora.co.uk
---
 drivers/dma-buf/fence.c | 17 +++++++++++++++++
 include/linux/fence.h   |  2 ++
 2 files changed, 19 insertions(+)
diff --git a/drivers/dma-buf/fence.c b/drivers/dma-buf/fence.c
index 7b05dbe..a3fe3e7 100644
--- a/drivers/dma-buf/fence.c
+++ b/drivers/dma-buf/fence.c
@@ -530,3 +530,20 @@ fence_init(struct fence *fence, const struct fence_ops *ops,
 	trace_fence_init(fence);
 }
 EXPORT_SYMBOL(fence_init);
+
+/**
+ * fence_collection_put - put all the fences in a collection
+ * @collection:	[in]	the collection to put fences
+ *
+ * This function unrefs all fences in the collection.
+ */
+void fence_collection_put(struct fence_collection *collection)
+{
+	int i;
+
+	for (i = 0 ; i < collection->num_fences ; i++)
+		fence_put(collection->fences[i]);
+
+	collection->func(collection->user_data);
+}
+EXPORT_SYMBOL(fence_collection_put);
diff --git a/include/linux/fence.h b/include/linux/fence.h
index 3d1151f..3f871b0 100644
--- a/include/linux/fence.h
+++ b/include/linux/fence.h
@@ -244,6 +244,8 @@ int fence_add_callback(struct fence *fence, struct fence_cb *cb,
 bool fence_remove_callback(struct fence *fence, struct fence_cb *cb);
 void fence_enable_sw_signaling(struct fence *fence);
 
+void fence_collection_put(struct fence_collection *collection);
+
 /**
  * fence_is_signaled_locked - Return an indication if the fence is signaled yet.
  * @fence:	[in]	the fence to check
From: Gustavo Padovan gustavo.padovan@collabora.co.uk
Iterate over the array of fences and wait for all of them to signal.
Signed-off-by: Gustavo Padovan gustavo.padovan@collabora.co.uk
---
 drivers/dma-buf/fence.c | 16 ++++++++++++++++
 include/linux/fence.h   |  1 +
 2 files changed, 17 insertions(+)
diff --git a/drivers/dma-buf/fence.c b/drivers/dma-buf/fence.c
index a3fe3e7..31e554b 100644
--- a/drivers/dma-buf/fence.c
+++ b/drivers/dma-buf/fence.c
@@ -532,6 +532,22 @@ fence_init(struct fence *fence, const struct fence_ops *ops,
 EXPORT_SYMBOL(fence_init);
 
 /**
+ * fence_collection_wait - Wait for a collection of fences to signal
+ * @collection:	[in]	the collection to wait on
+ *
+ * This function simplifies the waiting process when one needs to
+ * wait for many fences at the same time.
+ */
+void fence_collection_wait(struct fence_collection *collection)
+{
+	int i;
+
+	for (i = 0 ; i < collection->num_fences ; i++)
+		fence_wait(collection->fences[i], false);
+}
+EXPORT_SYMBOL(fence_collection_wait);
+
+/**
  * fence_collection_put - put all the fences in a collection
  * @collection:	[in]	the collection to put fences
  *
diff --git a/include/linux/fence.h b/include/linux/fence.h
index 3f871b0..52f1aea 100644
--- a/include/linux/fence.h
+++ b/include/linux/fence.h
@@ -244,6 +244,7 @@ int fence_add_callback(struct fence *fence, struct fence_cb *cb,
 bool fence_remove_callback(struct fence *fence, struct fence_cb *cb);
 void fence_enable_sw_signaling(struct fence *fence);
 
+void fence_collection_wait(struct fence_collection *collection);
 void fence_collection_put(struct fence_collection *collection);
 
 /**
From: Gustavo Padovan gustavo.padovan@collabora.co.uk
Let atomic_commit() wait on a collection of fences before proceeding with the scanout.
Signed-off-by: Gustavo Padovan gustavo.padovan@collabora.co.uk
---
 drivers/gpu/drm/drm_atomic.c        | 9 +++++++++
 drivers/gpu/drm/drm_atomic_helper.c | 9 +++++----
 include/drm/drm_crtc.h              | 2 +-
 3 files changed, 15 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c
index 8bc364c..28a65d1 100644
--- a/drivers/gpu/drm/drm_atomic.c
+++ b/drivers/gpu/drm/drm_atomic.c
@@ -29,6 +29,7 @@
 #include <drm/drmP.h>
 #include <drm/drm_atomic.h>
 #include <drm/drm_plane_helper.h>
+#include <linux/sync_file.h>
 
 /**
  * drm_atomic_state_default_release -
@@ -795,6 +796,14 @@ static int drm_atomic_plane_check(struct drm_plane *plane,
 		return -EINVAL;
 	}
 
+#ifdef CONFIG_SYNC_FILE
+	if (state->fence_fd >= 0) {
+		state->fences = sync_file_fences_get(state->fence_fd);
+		if (!state->fences)
+			return -EINVAL;
+	}
+#endif
+
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
index 4f91f84..a6e34b6 100644
--- a/drivers/gpu/drm/drm_atomic_helper.c
+++ b/drivers/gpu/drm/drm_atomic_helper.c
@@ -977,14 +977,12 @@ static void wait_for_fences(struct drm_device *dev,
 	int i;
 
 	for_each_plane_in_state(state, plane, plane_state, i) {
-		if (!plane->state->fence)
+		if (!plane->state->fences)
 			continue;
 
 		WARN_ON(!plane->state->fb);
 
-		fence_wait(plane->state->fence, false);
-		fence_put(plane->state->fence);
-		plane->state->fence = NULL;
+		fence_collection_wait(plane->state->fences);
 	}
 }
 
@@ -2654,6 +2652,9 @@ void __drm_atomic_helper_plane_destroy_state(struct drm_plane *plane,
 {
 	if (state->fb)
 		drm_framebuffer_unreference(state->fb);
+
+	if (state->fences)
+		fence_collection_put(state->fences);
 }
 EXPORT_SYMBOL(__drm_atomic_helper_plane_destroy_state);
 
diff --git a/include/drm/drm_crtc.h b/include/drm/drm_crtc.h
index a8f6ec0..c221c28 100644
--- a/include/drm/drm_crtc.h
+++ b/include/drm/drm_crtc.h
@@ -1257,7 +1257,7 @@ struct drm_plane_state {
 	struct drm_crtc *crtc;   /* do not write directly, use drm_atomic_set_crtc_for_plane() */
 	struct drm_framebuffer *fb;  /* do not write directly, use drm_atomic_set_fb_for_plane() */
-	struct fence *fence;
+	struct fence_collection *fences;
 	int fence_fd;
 
 	/* Signed dest location allows it to be partially off screen */
Hey,
On 23-03-16 at 19:47, Gustavo Padovan wrote:
From: Gustavo Padovan gustavo.padovan@collabora.co.uk
Hi,
This is a first proposal to discuss the addition of in-fences support to DRM. It adds a new struct to fence.c to abstract the use of sync_file in DRM drivers. The new struct fence_collection contains an array with all the fences that an atomic commit needs to wait on:
/**
 * struct fence_collection - aggregate fences together
 * @num_fences: number of fences in the collection.
 * @user_data: user data.
 * @func: user callback to put user data.
 * @fences: array of @num_fences fences.
 */
struct fence_collection {
	int num_fences;
	void *user_data;
	collection_put_func_t func;
	struct fence *fences[];
};
The fence_collection is allocated and filled by sync_file_fences_get(), and the atomic_commit helpers can use fence_collection_wait() to wait for the fences to signal.
These patches depend on the sync ABI rework:
https://www.spinics.net/lists/dri-devel/msg102795.html
and the patch to de-stage the sync framework:
https://www.spinics.net/lists/dri-devel/msg102799.html
I also hacked together some sync support into modetest for testing:
https://git.collabora.com/cgit/user/padovan/libdrm.git/log/?h=atomic
Why did you choose to add fence_collection, rather than putting sync_file in state?
There used to be a sync_fence_wait function, which would mean you'd have everything you need.
~Maarten
This patch series reminds me of another thought I had recently, though I don't know whether the idea is appropriate: sometimes a caller only needs to wait for any one fence in a fence array, but doesn't want to call fence_wait_any() since that would block its thread. A mechanism to let the caller register a callback on the fence array would be very convenient. So I want to add an event fence: a special kind of fence that indicates the event is satisfied, and that is signalled by any fence in the array.
What do you think of it?
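A rough sketch of what I mean, built on fence_add_callback() - the event_fence struct itself is hypothetical:

	struct event_fence_cb {
		struct fence_cb cb;
		struct event_fence *ef;	/* hypothetical fence subclass */
	};

	static void event_fence_check(struct fence *f, struct fence_cb *cb)
	{
		struct event_fence_cb *ecb =
			container_of(cb, struct event_fence_cb, cb);

		/* the first fence to signal satisfies the event; later
		 * callbacks find the fence already signalled and do nothing */
		fence_signal(&ecb->ef->base);
	}

	/* register the callback on every fence in the array */
	for (i = 0; i < num_fences; i++) {
		cbs[i].ef = ef;
		fence_add_callback(fences[i], &cbs[i].cb, event_fence_check);
	}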
Regards, David Zhou
On 2016-03-24 15:20, Maarten Lankhorst wrote:
Hey,
On 23-03-16 at 19:47, Gustavo Padovan wrote:
From: Gustavo Padovan gustavo.padovan@collabora.co.uk
Hi,
This is a first proposal to discuss the addition of in-fences support to DRM. It adds a new struct to fence.c to abstract the use of sync_file in DRM drivers. The new struct fence_collection contains an array with all the fences that an atomic commit needs to wait on:
/**
- struct fence_collection - aggregate fences together
- @num_fences: number of fence in the collection.
- @user_data: user data.
- @func: user callback to put user data.
- @fences: array of @num_fences fences.
*/ struct fence_collection { int num_fences; void *user_data; collection_put_func_t func; struct fence *fences[]; };
The fence_collection is allocated and filled by sync_file_fences_get(), and the atomic_commit helpers can use fence_collection_wait() to wait for the fences to signal.
These patches depend on the sync ABI rework:
https://www.spinics.net/lists/dri-devel/msg102795.html
and the patch to de-stage the sync framework:
https://www.spinics.net/lists/dri-devel/msg102799.html
I also hacked together some sync support into modetest for testing:
https://git.collabora.com/cgit/user/padovan/libdrm.git/log/?h=atomic
Why did you choose to add fence_collection, rather than putting sync_file in state?
There used to be a sync_fence_wait function, which would mean you'd have everything you need.
~Maarten
Hi Maarten,
2016-03-24 Maarten Lankhorst maarten.lankhorst@linux.intel.com:
Hey,
On 23-03-16 at 19:47, Gustavo Padovan wrote:
From: Gustavo Padovan gustavo.padovan@collabora.co.uk
Hi,
This is a first proposal to discuss the addition of in-fences support to DRM. It adds a new struct to fence.c to abstract the use of sync_file in DRM drivers. The new struct fence_collection contains an array with all the fences that an atomic commit needs to wait on:
/**
- struct fence_collection - aggregate fences together
- @num_fences: number of fence in the collection.
- @user_data: user data.
- @func: user callback to put user data.
- @fences: array of @num_fences fences.
*/ struct fence_collection { int num_fences; void *user_data; collection_put_func_t func; struct fence *fences[]; };
The fence_collection is allocated and filled by sync_file_fences_get(), and the atomic_commit helpers can use fence_collection_wait() to wait for the fences to signal.
These patches depend on the sync ABI rework:
https://www.spinics.net/lists/dri-devel/msg102795.html
and the patch to de-stage the sync framework:
https://www.spinics.net/lists/dri-devel/msg102799.html
I also hacked together some sync support into modetest for testing:
https://git.collabora.com/cgit/user/padovan/libdrm.git/log/?h=atomic
Why did you choose to add fence_collection, rather than putting sync_file in state?
There used to be a sync_fence_wait function, which would mean you'd have everything you need.
We discussed this on #dri-devel a few days ago. The idea behind this is to abstract sync_file away from the DRM drivers and let only the DRM core deal with sync_file.

In the next iteration even fence_collection will be gone, so the drivers will deal only with struct fence, and fence_collection will become a subclass of fence.
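Roughly this shape - just a sketch of the direction, not the final code:

	struct fence_collection {
		struct fence base;	/* signals once all fences below signal */
		int num_fences;
		struct fence *fences[];
	};

so the atomic helpers and the drivers only ever see &collection->base, a plain struct fence.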
Gustavo
Hi,
On 2016-03-24 03:47, Gustavo Padovan wrote:
From: Gustavo Padovan gustavo.padovan@collabora.co.uk
Hi,
This is a first proposal to discuss the addition of in-fences support to DRM. It adds a new struct to fence.c to abstract the use of sync_file in DRM drivers. The new struct fence_collection contains an array with all the fences that an atomic commit needs to wait on.
As I already mentioned here: http://www.spinics.net/lists/dri-devel/msg103225.html

I don't see why an Android-specific thing is being propagated into Linux DRM. Linux mainline already has implicit sync interfaces for DMA devices, called dma fences, which force registering a fence object to a DMABUF through a reservation object when the dmabuf object is created. However, the Android sync driver creates a new file for a sync object, and this takes a different point of view.

Is there anyone who can explain why an Android-specific thing is being spread into Linux DRM? Was there any consensus to use the Android sync driver - which uses explicit sync interfaces - as a Linux standard?
Thanks, Inki Dae
/**
 * struct fence_collection - aggregate fences together
 * @num_fences: number of fences in the collection.
 * @user_data: user data.
 * @func: user callback to put user data.
 * @fences: array of @num_fences fences.
 */
struct fence_collection {
	int num_fences;
	void *user_data;
	collection_put_func_t func;
	struct fence *fences[];
};
The fence_collection is allocated and filled by sync_file_fences_get(), and the atomic_commit helpers can use fence_collection_wait() to wait for the fences to signal.
These patches depend on the sync ABI rework:
https://www.spinics.net/lists/dri-devel/msg102795.html
and the patch to de-stage the sync framework:
https://www.spinics.net/lists/dri-devel/msg102799.html
I also hacked together some sync support into modetest for testing:
https://git.collabora.com/cgit/user/padovan/libdrm.git/log/?h=atomic
Gustavo
Gustavo Padovan (6):
  drm/fence: add FENCE_FD property to planes
  dma-buf/fence: add struct fence_collection
  dma-buf/sync_file: add sync_file_fences_get()
  dma-buf/fence: add fence_collection_put()
  dma-buf/fence: add fence_collection_wait()
  drm/fence: support fence_collection on atomic commit
 drivers/dma-buf/fence.c             | 33 +++++++++++++++++++++++++++++++++
 drivers/dma-buf/sync_file.c         | 36 ++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/drm_atomic.c        | 13 +++++++++++++
 drivers/gpu/drm/drm_atomic_helper.c | 10 ++++++----
 drivers/gpu/drm/drm_crtc.c          |  7 +++++++
 include/drm/drm_crtc.h              |  5 ++++-
 include/linux/fence.h               | 19 +++++++++++++++++++
 include/linux/sync_file.h           |  8 ++++++++
 8 files changed, 126 insertions(+), 5 deletions(-)
Hi Inki,
2016-03-24 Inki Dae inki.dae@samsung.com:
Hi,
On 2016-03-24 03:47, Gustavo Padovan wrote:
From: Gustavo Padovan gustavo.padovan@collabora.co.uk
Hi,
This is a first proposal to discuss the addition of in-fences support to DRM. It adds a new struct to fence.c to abstract the use of sync_file in DRM drivers. The new struct fence_collection contains an array with all the fences that an atomic commit needs to wait on.
As I already mentioned here: http://www.spinics.net/lists/dri-devel/msg103225.html

I don't see why an Android-specific thing is being propagated into Linux DRM. Linux mainline already has implicit sync interfaces for DMA devices, called dma fences, which force registering a fence object to a DMABUF through a reservation object when the dmabuf object is created. However, the Android sync driver creates a new file for a sync object, and this takes a different point of view.

Is there anyone who can explain why an Android-specific thing is being spread into Linux DRM? Was there any consensus to use the Android sync driver - which uses explicit sync interfaces - as a Linux standard?
Because we want explicit fencing as the Linux standard in the future, to be able to do smart scheduling: e.g., send async jobs to the GPU and at the same time send async atomic commits with a sync_file fd attached, so they can wait for the GPU to finish and we don't block in userspace anymore - quite similar to what Android does.
This would still use dma-buf fences at the driver level, but it has a lot more advantages than implicit fencing.
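From the userspace side the flow would look more or less like this - the submit-side flag is hypothetical and driver-specific, the property is the FENCE_FD one from this series:

	/* ask the driver's submit ioctl for an explicit out-fence fd */
	submit.flags |= SUBMIT_OUT_FENCE_FD;
	ioctl(gpu_fd, DRM_IOCTL_FOO_SUBMIT, &submit);

	/* attach it to the plane: scanout will wait for rendering */
	drmModeAtomicAddProperty(req, plane_id, fence_fd_prop_id,
				 submit.out_fence_fd);
	drmModeAtomicCommit(kms_fd, req, DRM_MODE_ATOMIC_NONBLOCK, NULL);

	/* we are done with the fd on this side */
	close(submit.out_fence_fd);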
Gustavo
Hi Gustavo,
On 2016-03-24 23:39, Gustavo Padovan wrote:
Hi Inki,
2016-03-24 Inki Dae inki.dae@samsung.com:
Hi,
On 2016-03-24 03:47, Gustavo Padovan wrote:
From: Gustavo Padovan gustavo.padovan@collabora.co.uk
Hi,
This is a first proposal to discuss the addition of in-fences support to DRM. It adds a new struct to fence.c to abstract the use of sync_file in DRM drivers. The new struct fence_collection contains an array with all the fences that an atomic commit needs to wait on.
As I already mentioned here: http://www.spinics.net/lists/dri-devel/msg103225.html

I don't see why an Android-specific thing is being propagated into Linux DRM. Linux mainline already has implicit sync interfaces for DMA devices, called dma fences, which force registering a fence object to a DMABUF through a reservation object when the dmabuf object is created. However, the Android sync driver creates a new file for a sync object, and this takes a different point of view.

Is there anyone who can explain why an Android-specific thing is being spread into Linux DRM? Was there any consensus to use the Android sync driver - which uses explicit sync interfaces - as a Linux standard?
Because we want explicit fencing as the Linux standard in the future, to be able to do smart scheduling: e.g., send async jobs to the GPU and at the same time send async atomic commits with a sync_file fd attached, so they can wait for the GPU to finish and we don't block in userspace anymore - quite similar to what Android does.
The GPU is also a DMA device, so I think the synchronization should be handled transparently to user-space. And I know the Chromium guys already did a similar thing (with non-atomic commits only) using implicit sync: https://chromium.googlesource.com/chromiumos/third_party/kernel branch name: chromeos-3.14

Of course, that approach uses a new helper framework placed in the drm directory, but I think this framework could be moved into the dma-buf directory after some cleanup and refactoring if necessary. Anyway, I'm not sure I understood the smart scheduling you mentioned, but I think we could do what you are trying to do without explicit fences.
This would still use dma-buf fences in the driver level, but it has a lot more advantages than implicit fencing.
You mean things like rendering-pipeline debugging and merging sync fences?
Thanks, Inki Dae
Gustavo
On Thu, Mar 24, 2016 at 4:18 AM, Inki Dae inki.dae@samsung.com wrote:
Hi,
On 2016-03-24 03:47, Gustavo Padovan wrote:
From: Gustavo Padovan gustavo.padovan@collabora.co.uk
Hi,
This is a first proposal to discuss the addition of in-fences support to DRM. It adds a new struct to fence.c to abstract the use of sync_file in DRM drivers. The new struct fence_collection contains an array with all the fences that an atomic commit needs to wait on.
As I already mentioned here: http://www.spinics.net/lists/dri-devel/msg103225.html

I don't see why an Android-specific thing is being propagated into Linux DRM. Linux mainline already has implicit sync interfaces for DMA devices, called dma fences, which force registering a fence object to a DMABUF through a reservation object when the dmabuf object is created. However, the Android sync driver creates a new file for a sync object, and this takes a different point of view.

Is there anyone who can explain why an Android-specific thing is being spread into Linux DRM? Was there any consensus to use the Android sync driver - which uses explicit sync interfaces - as a Linux standard?
btw, there is already plane_state->fence .. which I don't think has any users yet, but I've started to use it in my patchset that converts drm/msm to 'struct fence'
That said, we do need syncpt as the way to expose fences to userspace for explicit synchronization, but I'm not entirely sure that the various drivers ever need to see that (vs just struct fence), at least on the kms side of things.
BR, -R
Thanks, Inki Dae
/**
 * struct fence_collection - aggregate fences together
 * @num_fences: number of fences in the collection.
 * @user_data: user data.
 * @func: user callback to put user data.
 * @fences: array of @num_fences fences.
 */
struct fence_collection {
	int num_fences;
	void *user_data;
	collection_put_func_t func;
	struct fence *fences[];
};
The fence_collection is allocated and filled by sync_file_fences_get(), and the atomic_commit helpers can use fence_collection_wait() to wait for the fences to signal.
These patches depend on the sync ABI rework:
https://www.spinics.net/lists/dri-devel/msg102795.html
and the patch to de-stage the sync framework:
https://www.spinics.net/lists/dri-devel/msg102799.html
I also hacked together some sync support into modetest for testing:
https://git.collabora.com/cgit/user/padovan/libdrm.git/log/?h=atomic
Gustavo
Gustavo Padovan (6):
  drm/fence: add FENCE_FD property to planes
  dma-buf/fence: add struct fence_collection
  dma-buf/sync_file: add sync_file_fences_get()
  dma-buf/fence: add fence_collection_put()
  dma-buf/fence: add fence_collection_wait()
  drm/fence: support fence_collection on atomic commit
 drivers/dma-buf/fence.c             | 33 +++++++++++++++++++++++++++++++++
 drivers/dma-buf/sync_file.c         | 36 ++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/drm_atomic.c        | 13 +++++++++++++
 drivers/gpu/drm/drm_atomic_helper.c | 10 ++++++----
 drivers/gpu/drm/drm_crtc.c          |  7 +++++++
 include/drm/drm_crtc.h              |  5 ++++-
 include/linux/fence.h               | 19 +++++++++++++++++++
 include/linux/sync_file.h           |  8 ++++++++
 8 files changed, 126 insertions(+), 5 deletions(-)
On 2016-03-25 00:40, Rob Clark wrote:
On Thu, Mar 24, 2016 at 4:18 AM, Inki Dae inki.dae@samsung.com wrote:
Hi,
On 2016-03-24 03:47, Gustavo Padovan wrote:
From: Gustavo Padovan gustavo.padovan@collabora.co.uk
Hi,
This is a first proposal to discuss the addition of in-fences support to DRM. It adds a new struct to fence.c to abstract the use of sync_file in DRM drivers. The new struct fence_collection contains an array with all the fences that an atomic commit needs to wait on.
As I already mentioned here: http://www.spinics.net/lists/dri-devel/msg103225.html

I don't see why an Android-specific thing is being propagated into Linux DRM. Linux mainline already has implicit sync interfaces for DMA devices, called dma fences, which force registering a fence object to a DMABUF through a reservation object when the dmabuf object is created. However, the Android sync driver creates a new file for a sync object, and this takes a different point of view.

Is there anyone who can explain why an Android-specific thing is being spread into Linux DRM? Was there any consensus to use the Android sync driver - which uses explicit sync interfaces - as a Linux standard?
btw, there is already plane_state->fence .. which I don't think has any users yet, but I've started to use it in my patchset that converts drm/msm to 'struct fence'
Yes, Exynos also started it.
That said, we do need syncpt as the way to expose fences to userspace for explicit synchronization, but I'm not entirely sure that the
It's definitely a different case. This tries to add new user-space interfaces to expose fences to user-space. At least the implicit interfaces are embedded in the drivers. So I'd like to ask you a question: why is exposing fences to user-space required? To provide an easy-to-debug solution for the rendering pipeline? To provide a merge-fence feature?

And if we really need to expose fences to user-space and there is a real user, then we already have good candidates: DMA-BUF-IOCTL-SYNC, or maybe the fcntl system call, because we already share DMA buffers between CPU <-> DMA and DMA <-> DMA using DMABUF. For DMA-BUF-IOCTL-SYNC, I think you remember that is what I tried a long time ago, because you were there. Several years ago I tried to couple exposing fences to user-space with cache operations, although at that time I really misunderstood the fence mechanism. That attempt was also aimed at the potential users.

Anyway, my opinion is that we could expose the fences hidden behind DMABUF to user-space using interfaces that already exist around us. And for this, the Chromium solution below would also give us some help: https://chromium.googlesource.com/chromiumos/third_party/kernel/+/chromeos-3...

And in /driver/dma-buf/ there are DMABUF-centric modules, so it looks strange that Android's sync_file module is placed in that directory - the Android sync driver doesn't really use DMABUF but instead creates a new file for its sync fence. For implicit sync interfaces for DMA devices we use DMABUF, but for explicit sync interfaces for user-space we use sync_file, not DMABUF? That doesn't make sense.

I really love Android, but I feel as if we are trying to make a seat available for Android somehow.
Thanks, Inki Dae
various drivers ever need to see that (vs just struct fence), at least on the kms side of things.
BR, -R
Thanks, Inki Dae
/**
 * struct fence_collection - aggregate fences together
 * @num_fences: number of fences in the collection.
 * @user_data: user data.
 * @func: user callback to put user data.
 * @fences: array of @num_fences fences.
 */
struct fence_collection {
	int num_fences;
	void *user_data;
	collection_put_func_t func;
	struct fence *fences[];
};
The fence_collection is allocated and filled by sync_file_fences_get(), and the atomic_commit helpers can use fence_collection_wait() to wait for the fences to signal.
These patches depend on the sync ABI rework:
https://www.spinics.net/lists/dri-devel/msg102795.html
and the patch to de-stage the sync framework:
https://www.spinics.net/lists/dri-devel/msg102799.html
I also hacked together some sync support into modetest for testing:
https://git.collabora.com/cgit/user/padovan/libdrm.git/log/?h=atomic
Gustavo
Gustavo Padovan (6):
  drm/fence: add FENCE_FD property to planes
  dma-buf/fence: add struct fence_collection
  dma-buf/sync_file: add sync_file_fences_get()
  dma-buf/fence: add fence_collection_put()
  dma-buf/fence: add fence_collection_wait()
  drm/fence: support fence_collection on atomic commit
 drivers/dma-buf/fence.c             | 33 +++++++++++++++++++++++++++++++++
 drivers/dma-buf/sync_file.c         | 36 ++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/drm_atomic.c        | 13 +++++++++++++
 drivers/gpu/drm/drm_atomic_helper.c | 10 ++++++----
 drivers/gpu/drm/drm_crtc.c          |  7 +++++++
 include/drm/drm_crtc.h              |  5 ++++-
 include/linux/fence.h               | 19 +++++++++++++++++++
 include/linux/sync_file.h           |  8 ++++++++
 8 files changed, 126 insertions(+), 5 deletions(-)
On Thu, Mar 24, 2016 at 7:49 PM, Inki Dae inki.dae@samsung.com wrote:
On 2016-03-25 00:40, Rob Clark wrote:
On Thu, Mar 24, 2016 at 4:18 AM, Inki Dae inki.dae@samsung.com wrote:
Hi,
On 2016-03-24 03:47, Gustavo Padovan wrote:
From: Gustavo Padovan gustavo.padovan@collabora.co.uk
Hi,
This is a first proposal to discuss the addition of in-fences support to DRM. It adds a new struct to fence.c to abstract the use of sync_file in DRM drivers. The new struct fence_collection contains an array with all the fences that an atomic commit needs to wait on.
As I already mentioned here: http://www.spinics.net/lists/dri-devel/msg103225.html
I don't see why an Android-specific thing is being pushed into Linux DRM. Linux mainline already has implicit sync interfaces for DMA devices, called dma-fence, which force registering a fence object with a DMABUF through a reservation object when the dmabuf object is created. However, the Android sync driver creates a new file for a sync object, and that is a different point of view.
Is there anyone who can explain why an Android-specific thing is being spread into Linux DRM? Was there any consensus to use the Android sync driver - which uses explicit sync interfaces - as a Linux standard?
btw, there is already plane_state->fence .. which I don't think has any users yet, but I started to use it in my patchset that converts drm/msm to 'struct fence'
Yes, Exynos also started it.
That said, we do need syncpt as the way to expose fences to userspace for explicit synchronization, but I'm not entirely sure that the
It's definitely a different case. This tries to add new user-space interfaces to expose fences to user-space; at least the implicit interfaces are embedded in the drivers. So let me ask you a question: why is exposing fences to user-space required? To provide an easy-to-debug solution for the rendering pipeline? To provide a merge-fence feature?
Well, implicit sync and explicit sync are two different cases. Implicit sync ofc remains the default, but userspace could opt-in to explicit sync instead. For example, on the gpu side of things, depending on flags userspace passes in to the submit ioctl we would either attach the fence to all the written buffers (implicit) or return it as a fence fd to userspace (explicit), which userspace could then pass in to atomic ioctl to synchronize pageflip.
And vice versa: we can pass the pageflip (atomic) completion fence back in to the gpu so it doesn't start rendering the next frame until the buffer is off screen.
fwiw, currently android is the first user of explicit sync (although I expect wayland/weston to follow suit). A couple linaro folks have android running with an upstream kernel + mesa + atomic/kms hwc on a couple devices (nexus7 and db410c with freedreno, and qemu with virgl). But there are some limitations due to missing the EGL_ANDROID_native_fence_sync extension in mesa. I plan to implement that, but I ofc need the fence fd stuff in order to do so ;-)
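To make that flow concrete, a rough userspace sketch of one frame (the submit ioctl, its flag, and struct drm_my_submit are hypothetical driver-specific names; only the FENCE_FD plane property comes from this series, and the prop_* ids are assumed to have been looked up beforehand via drmModeObjectGetProperties()):

#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

static void flip_one_frame(int gpu_fd, int kms_fd, uint32_t plane_id,
                           uint32_t prop_fb_id, uint32_t prop_fence_fd,
                           uint32_t fb_id)
{
        /* render, asking the (hypothetical) driver for an explicit fence fd */
        struct drm_my_submit submit = { .flags = MY_SUBMIT_FENCE_FD_OUT };
        drmIoctl(gpu_fd, DRM_IOCTL_MY_SUBMIT, &submit);

        /* hand the fence to KMS so the flip waits for rendering to finish */
        drmModeAtomicReq *req = drmModeAtomicAlloc();
        drmModeAtomicAddProperty(req, plane_id, prop_fb_id, fb_id);
        drmModeAtomicAddProperty(req, plane_id, prop_fence_fd, submit.fence_fd);
        drmModeAtomicCommit(kms_fd, req, DRM_MODE_ATOMIC_NONBLOCK, NULL);
        drmModeAtomicFree(req);
}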
And if we really need to expose fences to user-space, and there are real users, then we already have good candidates: DMA-BUF-IOCTL-SYNC, or maybe the fcntl system call, because we already share DMA buffers between CPU <-> DMA and DMA <-> DMA using DMABUF. For DMA-BUF-IOCTL-SYNC, I think you remember that this is what I tried a long time ago, because you were there. Several years ago I tried to couple exposing fences to user-space with cache operations, although at that time I badly misunderstood the fence mechanism. That attempt was also aimed at potential users.
Note that this is not (just) about sw sync, but also sync between multiple hw devices.
BR, -R
Anyway, my opinion is that we could expose the fences hidden behind DMABUF to user-space using interfaces that already exist around us. And for this, the Chromium solution below could also give us some help: https://chromium.googlesource.com/chromiumos/third_party/kernel/+/chromeos-3...
And in /drivers/dma-buf/ there are DMABUF-centric modules, so it looks strange that Android's sync_file module is placed in that directory - the Android sync driver doesn't really use DMABUF, but creates a new file for its sync fence instead. For implicit sync interfaces for DMA devices we use DMABUF, and for explicit sync interfaces for user-space we use sync_file, not DMABUF? That doesn't make sense.
I really love Android, but I feel as if we are trying to make a seat available for Android somehow.
Thanks, Inki Dae
various drivers ever need to see that (vs just struct fence), at least on the kms side of things.
BR, -R
Thanks, Inki Dae
Hi all,
On 25 March 2016 at 11:58, Rob Clark robdclark@gmail.com wrote:
On Thu, Mar 24, 2016 at 7:49 PM, Inki Dae inki.dae@samsung.com wrote:
It's definitely a different case. This tries to add new user-space interfaces to expose fences to user-space; at least the implicit interfaces are embedded in the drivers. So let me ask you a question: why is exposing fences to user-space required? To provide an easy-to-debug solution for the rendering pipeline? To provide a merge-fence feature?
Well, implicit sync and explicit sync are two different cases. Implicit sync ofc remains the default, but userspace could opt-in to explicit sync instead. For example, on the gpu side of things, depending on flags userspace passes in to the submit ioctl we would either attach the fence to all the written buffers (implicit) or return it as a fence fd to userspace (explicit), which userspace could then pass in to atomic ioctl to synchronize pageflip.
And vice versa: we can pass the pageflip (atomic) completion fence back in to the gpu so it doesn't start rendering the next frame until the buffer is off screen.
fwiw, currently android is the first user of explicit sync (although I expect wayland/weston to follow suit).
Second, really. Vulkan avoids implicit sync entirely, and exposes fence-like primitives throughout its whole API. These include being able to pass prerequisite fences for display (what Gustavo is adding here: something to block on before display), and also when the user acquires a buffer as a render target, it is given another prerequisite fence (the other side of what Gustavo is suggesting, i.e. the fence triggers when the buffer is no longer displayed and becomes available for rendering).
In order to implement this correctly, and avoid performance bubbles, we need a primitive like this exposed through the KMS API, from both sides. This is especially important when you take the case of userspace suballocation, where userspace allocates larger blocks and divides the allocation internally for different uses. Implicit sync does not work at all for that case.
As stated before, there are other benefits, including much better traceability. I would expect Wayland/Weston to also start pushing support for this API relatively soon.
A couple linaro folks have android running with an upstream kernel + mesa + atomic/kms hwc on a couple devices (nexus7 and db410c with freedreno, and qemu with virgl). But there are some limitations due to missing the EGL_ANDROID_native_fence_sync extension in mesa. I plan to implement that, but I ofc need the fence fd stuff in order to do so ;-)
Yes, having that would be a godsend for a lot of people.
And if we really need to expose fences to user-space, and there are real users, then we already have good candidates: DMA-BUF-IOCTL-SYNC, or maybe the fcntl system call, because we already share DMA buffers between CPU <-> DMA and DMA <-> DMA using DMABUF. For DMA-BUF-IOCTL-SYNC, I think you remember that this is what I tried a long time ago, because you were there. Several years ago I tried to couple exposing fences to user-space with cache operations, although at that time I badly misunderstood the fence mechanism. That attempt was also aimed at potential users.
Note that this is not (just) about sw sync, but also sync between multiple hw devices.
Sync isn't quite good enough, because it's a mandatory blocking point for userspace. We want to push the explicit fences further down the line, so userspace can parallelise its work.
Even if none of the above requirements held true, I don't think being able to support Android is a bad thing. It's completely right to be worried about pushing in Android work and APIs for the sake of it - hence why we didn't take ADF! - but in this case it's definitely a good thing. This is also the model that ChromeOS is moving towards, so it becomes more important from that point of view as well.
Cheers, Daniel
Hi Rob and Daniel,
On 2016-03-25 21:10, Daniel Stone wrote:
Hi all,
On 25 March 2016 at 11:58, Rob Clark robdclark@gmail.com wrote:
On Thu, Mar 24, 2016 at 7:49 PM, Inki Dae inki.dae@samsung.com wrote:
It's definitely a different case. This tries to add new user-space interfaces to expose fences to user-space; at least the implicit interfaces are embedded in the drivers. So let me ask you a question: why is exposing fences to user-space required? To provide an easy-to-debug solution for the rendering pipeline? To provide a merge-fence feature?
Well, implicit sync and explicit sync are two different cases. Implicit sync ofc remains the default, but userspace could opt-in to explicit sync instead. For example, on the gpu side of things, depending on flags userspace passes in to the submit ioctl we would either attach the fence to all the written buffers (implicit) or return it as a fence fd to userspace (explicit), which userspace could then pass in to atomic ioctl to synchronize pageflip.
And vice versa: we can pass the pageflip (atomic) completion fence back in to the gpu so it doesn't start rendering the next frame until the buffer is off screen.
fwiw, currently android is the first user of explicit sync (although I expect wayland/weston to follow suit).
Second, really. Vulkan avoids implicit sync entirely, and exposes fence-like primitives throughout its whole API. These include being able to pass prerequisite fences for display (what Gustavo is adding here: something to block on before display), and also when the user acquires a buffer as a render target, it is given another prerequisite fence (the other side of what Gustavo is suggesting, i.e. the fence triggers when the buffer is no longer displayed and becomes available for rendering).
In order to implement this correctly, and avoid performance bubbles, we need a primitive like this exposed through the KMS API, from both sides. This is especially important when you take the case of userspace suballocation, where userspace allocates larger blocks and divides the allocation internally for different uses. Implicit sync does not work at all for that case.
Can you give me more details on why implicit sync cannot take care of the case of userspace suballocation? And is there any reason that the fence fd shouldn't depend on DMABUF - right now the fence fd is a new file, not a DMABUF fd?
As stated before, there are other benefits, including much better traceability. I would expect Wayland/Weston to also start pushing support for this API relatively soon.
A couple linaro folks have android running with an upstream kernel + mesa + atomic/kms hwc on a couple devices (nexus7 and db410c with freedreno, and qemu with virgl). But there are some limitations due to missing the EGL_ANDROID_native_fence_sync extension in mesa. I plan to implement that, but I ofc need the fence fd stuff in order to do so ;-)
Yes, having that would be a godsend for a lot of people.
And if we really need to expose fences to user-space, and there are real users, then we already have good candidates: DMA-BUF-IOCTL-SYNC, or maybe the fcntl system call, because we already share DMA buffers between CPU <-> DMA and DMA <-> DMA using DMABUF. For DMA-BUF-IOCTL-SYNC, I think you remember that this is what I tried a long time ago, because you were there. Several years ago I tried to couple exposing fences to user-space with cache operations, although at that time I badly misunderstood the fence mechanism. That attempt was also aimed at potential users.
Note that this is not (just) about sw sync, but also sync between multiple hw devices.
Sync isn't quite good enough, because it's a mandatory blocking point for userspace. We want to push the explicit fences further down the line, so userspace can parallelise its work.
Even if none of the above requirements held true, I don't think being able to support Android is a bad thing. It's completely right to be worried about pushing in Android work and APIs for the sake of it - hence why we didn't take ADF! - but in this case it's definitely a
At least Google's ADF boosted atomic KMS. :)
good thing. This is also the model that ChromeOS is moving towards, so it becomes more important from that point of view as well.
I think Gustavo should have explained this patch series more fully when posting it - i.e., what relationship explicit and implicit fences have, why an explicit fence - which is independent of DMABUF - is required, what use cases there are from real users, etc.
Thanks, Inki Dae
Cheers, Daniel
Hi Inki,
On 28 March 2016 at 02:26, Inki Dae inki.dae@samsung.com wrote:
On 2016-03-25 21:10, Daniel Stone wrote:
Second, really. Vulkan avoids implicit sync entirely, and exposes fence-like primitives throughout its whole API. These include being able to pass prerequisite fences for display (what Gustavo is adding here: something to block on before display), and also when the user acquires a buffer as a render target, it is given another prerequisite fence (the other side of what Gustavo is suggesting, i.e. the fence triggers when the buffer is no longer displayed and becomes available for rendering).
In order to implement this correctly, and avoid performance bubbles, we need a primitive like this exposed through the KMS API, from both sides. This is especially important when you take the case of userspace suballocation, where userspace allocates larger blocks and divides the allocation internally for different uses. Implicit sync does not work at all for that case.
Can you give me more details on why implicit sync cannot take care of the case of userspace suballocation?
Implicit sync does not know about suballocation, so implicit will operate for every range in the buffer, not just the one you want.
Say you have one kernel buffer, which userspace subdivides into four independent buffers. It can perform operations on these buffers which are completely independent of each other, and an explicit sync model allows this independence to be kept. Implicit sync ties them together, so that you cannot do any operations on buffer 1 until all operations on buffer 2 have completed.
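The same point in code form (purely illustrative, not anything from the series):

#include <stddef.h>

/* one kernel buffer, one reservation object */
struct suballoc_pool {
        int dmabuf_fd;        /* the only handle the kernel sees */
        size_t offsets[4];    /* four independent sub-buffers, in userspace */
};

/* Implicit sync keys off dmabuf_fd alone: a pending write fence on the
 * range at offsets[2] stalls a scanout of the range at offsets[1], even
 * though the two ranges never overlap. Explicit sync attaches a fence
 * to each operation instead, so only real dependencies are waited on. */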
And is there any reason that the fence fd shouldn't depend on DMABUF - right now the fence fd is a new file, not a DMABUF fd?
Because dmabuf is for buffer sharing, and fences aren't buffers (they will never export page ranges). Is there any particular benefit you think you would get from doing this?
good thing. This is also the model that ChromeOS is moving towards, so it becomes more important from that point of view as well.
I think Gustavo should have explained this patch series more fully when posting it - i.e., what relationship explicit and implicit fences have, why an explicit fence - which is independent of DMABUF - is required, what use cases there are from real users, etc.
Fair enough, the summary could perhaps contain something like this.
Cheers, Daniel
Hi Daniel,
On 2016-03-28 22:26, Daniel Stone wrote:
Hi Inki,
On 28 March 2016 at 02:26, Inki Dae inki.dae@samsung.com wrote:
On 2016-03-25 21:10, Daniel Stone wrote:
Second, really. Vulkan avoids implicit sync entirely, and exposes fence-like primitives throughout its whole API. These include being able to pass prerequisite fences for display (what Gustavo is adding here: something to block on before display), and also when the user acquires a buffer as a render target, it is given another prerequisite fence (the other side of what Gustavo is suggesting, i.e. the fence triggers when the buffer is no longer displayed and becomes available for rendering).
In order to implement this correctly, and avoid performance bubbles, we need a primitive like this exposed through the KMS API, from both sides. This is especially important when you take the case of userspace suballocation, where userspace allocates larger blocks and divides the allocation internally for different uses. Implicit sync does not work at all for that case.
Can you give me more details on why implicit sync cannot take care of the case of userspace suballocation?
Implicit sync does not know about suballocation, so implicit will operate for every range in the buffer, not just the one you want.
Say you have one kernel buffer, which userspace subdivides into four independent buffers. It can perform operations on these buffers which are completely independent of each other, and an explicit sync model allows this independence to be kept. Implicit sync ties them together, so that you cannot do any operations on buffer 1 until all operations on buffer 2 have completed.
And is there any reason that the fence fd shouldn't depend on DMABUF - right now the fence fd is a new file, not a DMABUF fd?
Because dmabuf is for buffer sharing, and fences aren't buffers (they will never export page ranges). Is there any particular benefit you think you would get from doing this?
Just for consistency. As you know, implicit sync hangs a DMA fence on the dmabuf object through the reservation object, so DMABUF-independent explicit sync looked strange to me. As you mentioned above, suballocation would be the reason why explicit sync should be independent of DMABUF.
In addition, I wonder how explicit and implicit fences could coexist together. Rob said, "Implicit sync ofc remains the default, but userspace could opt-in to explicit sync instead"
This would mean that if we use explicit sync for user-space, then it coexists with implicit sync. However, these two sync fences can't see the same DMA buffer, because the explicit fence has a different file object from the implicit one. So in this case, I think the explicit fence would need to be hung on the reservation object of the dmabuf object somehow. Otherwise, although they coexist, are these fences - explicit and implicit - used separately for different purposes?
Thanks, Inki Dae
good thing. This is also the model that ChromeOS is moving towards, so it becomes more important from that point of view as well.
I think Gustavo should have explained this patch series more fully when posting it - i.e., what relationship explicit and implicit fences have, why an explicit fence - which is independent of DMABUF - is required, what use cases there are from real users, etc.
Fair enough, the summary could perhaps contain something like this.
Cheers, Daniel
On Mon, Mar 28, 2016 at 10:18 PM, Inki Dae inki.dae@samsung.com wrote:
In addition, I wonder how explicit and implicit fences could coexist together. Rob said, "Implicit sync ofc remains the default, but userspace could opt-in to explicit sync instead"
This would mean that if we use explicit sync for user-space, then it coexists with implicit sync. However, these two sync fences can't see the same DMA buffer, because the explicit fence has a different file object from the implicit one. So in this case, I think the explicit fence would need to be hung on the reservation object of the dmabuf object somehow. Otherwise, although they coexist, are these fences - explicit and implicit - used separately for different purposes?
I'm not entirely sure about coexistence at the same time. It ofc shouldn't be a problem for one kernel to support both kinds of userspace (pure explicit and pure implicit). And how this would work on the kms atomic ioctl (compositor/consumer) side seems clear enough.. ie. some sort of flag which, if set, means the user provides an explicit fence fd, and if not set we fall back to the current behaviour (ie. get fences from the resv object).
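In terms of this series' helpers, that per-plane fallback could look roughly like this (a sketch only; I'm guessing at the exact signatures of the sync_file_fences_get()/fence_collection_wait() helpers, and ignoring locking around the reservation object):

        if (plane_state->fence_fd >= 0) {
                /* explicit: userspace passed a sync_file fd via FENCE_FD */
                struct fence_collection *c =
                        sync_file_fences_get(plane_state->fence_fd);

                fence_collection_wait(c);
                fence_collection_put(c);
        } else {
                /* implicit: current behaviour, take the exclusive fence
                 * from the dmabuf's reservation object */
                struct fence *f = reservation_object_get_excl(bo->resv);

                if (f)
                        fence_wait(f, false);
        }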
On the gpu/producer side, I think what makes sense is to both attach the fence to the resv objects and (optionally, specified by a submit ioctl flag) return a fence fd. The other option is to add a new ioctl to convert an internal per-ring fence/seqno (which is already returned by the submit ioctl) to a fence fd.. but I think that would end up with duplicate 'struct fence' objects on the kernel side (not sure if that would cause issues somehow), and might be unneeded since, with EGL_ANDROID_native_fence_sync, we should know before glFlush() is called whether we want an fd or not. But the main thing I'm pondering here is how to sanely support the old way of userspace gl driver internal synchronization with new userspace on an old kernel, but also conditionally support EGL_ANDROID_native_fence_sync if you have a new enough kernel.
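On the producer side that could be as small as this (sketch; SUBMIT_FENCE_FD_OUT is a placeholder name, and I'm assuming a sync_file_create()-style constructor from the de-staged framework):

        /* in the driver's submit ioctl, after creating the fence */
        reservation_object_add_excl_fence(obj->resv, fence);  /* implicit, always */

        if (args->flags & SUBMIT_FENCE_FD_OUT) {              /* explicit, opt-in */
                struct sync_file *sf = sync_file_create(fence);
                int fd = get_unused_fd_flags(O_CLOEXEC);

                fd_install(fd, sf->file);
                args->fence_fd = fd;
        }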
BR, -R
On 2016-03-29 22:23, Rob Clark wrote:
On Mon, Mar 28, 2016 at 10:18 PM, Inki Dae inki.dae@samsung.com wrote:
In addition, I wonder how explicit and implicit fences could coexist together. Rob said, "Implicit sync ofc remains the default, but userspace could opt-in to explicit sync instead"
This would mean that if we use explicit sync for user-space, then it coexists with implicit sync. However, these two sync fences can't see the same DMA buffer, because the explicit fence has a different file object from the implicit one. So in this case, I think the explicit fence would need to be hung on the reservation object of the dmabuf object somehow. Otherwise, although they coexist, are these fences - explicit and implicit - used separately for different purposes?
I'm not entirely sure about coexistence at the same time. It ofc shouldn't be a problem for one kernel to support both kinds of userspace (pure explicit and pure implicit). And how this would work on the kms atomic ioctl (compositor/consumer) side seems clear enough.. ie. some sort of flag which, if set, means the user provides an explicit fence fd, and if not set we fall back to the current behaviour (ie. get fences from the resv object).
With this patch series, users can register explicit fence(s) with atomic KMS (the consumer side) through the KMS property interface, for explicit sync.
However, several DRM drivers (also consumers) have already been using implicit fences. So while the GPU (the producer side) is accessing a DMA buffer after registering its explicit fence with atomic KMS, if an atomic commit is requested by user-space, the atomic helper framework will try to synchronize with the producer - waiting for the GPU (producer) side to signal - and the device-specific page flip function will also try to do the same thing.
As of now, it seems this wouldn't be optional but mandatory if explicit fence support is added to the atomic helper framework. This would definitely be duplication, and it doesn't seem clear enough even if one of them is just skipped at runtime.
On the gpu/producer side, I think what makes sense is to both attach the fence to the resv objects and (optionally, specified by a submit ioctl flag) return a fence fd. The other option is to add a new ioctl to convert an internal per-ring fence/seqno (which is already returned by the submit ioctl) to a fence fd.. but I think that would end up with duplicate 'struct fence' objects on the kernel side (not sure if that
I think the problem is not that the kernel just keeps duplicate fence objects separately, but that these fences can be processed separately for the same purpose.
would cause issues somehow), and might be unneeded since, with EGL_ANDROID_native_fence_sync, we should know before glFlush() is called whether we want an fd or not. But the main thing I'm pondering
So I think this is not a user-space issue. Users shouldn't have to know whether DMA drivers support implicit fences or not, so as soon as a user uses an explicit fence, the duplication would happen.
There may be something I missed, so your comments would be helpful in understanding it.
Thanks, Inki Dae
here is how to sanely support the old way of userspace gl driver internal synchronization with new userspace on an old kernel, but also conditionally support EGL_ANDROID_native_fence_sync if you have a new enough kernel.
BR, -R
Hi Inki,
On 31 March 2016 at 08:45, Inki Dae inki.dae@samsung.com wrote:
On 2016-03-29 22:23, Rob Clark wrote:
On Mon, Mar 28, 2016 at 10:18 PM, Inki Dae inki.dae@samsung.com wrote:
In addition, I wonder how explicit and implicit fences could coexist together. Rob said, "Implicit sync ofc remains the default, but userspace could opt-in to explicit sync instead"
This would mean that if we use explicit sync for user-space, then it coexists with implicit sync. However, these two sync fences can't see the same DMA buffer, because the explicit fence has a different file object from the implicit one. So in this case, I think the explicit fence would need to be hung on the reservation object of the dmabuf object somehow. Otherwise, although they coexist, are these fences - explicit and implicit - used separately for different purposes?
I'm not entirely sure about coexistence at the same time. It ofc shouldn't be a problem for one kernel to support both kinds of userspace (pure explicit and pure implicit). And how this would work on the kms atomic ioctl (compositor/consumer) side seems clear enough.. ie. some sort of flag which, if set, means the user provides an explicit fence fd, and if not set we fall back to the current behaviour (ie. get fences from the resv object).
With this patch series, users can register explicit fence(s) with atomic KMS (the consumer side) through the KMS property interface, for explicit sync.
However, several DRM drivers (also consumers) have already been using implicit fences. So while the GPU (the producer side) is accessing a DMA buffer after registering its explicit fence with atomic KMS, if an atomic commit is requested by user-space, the atomic helper framework will try to synchronize with the producer - waiting for the GPU (producer) side to signal - and the device-specific page flip function will also try to do the same thing.
Well, it has to be one or the other: mixing explicit and implicit defeats the purpose of using explicit fencing. So, when explicit fencing is in use, implicit fences must be ignored.
As of now, it seems this wouldn't be optional but mandatory if explicit fence support is added to the atomic helper framework. This would definitely be duplication, and it doesn't seem clear enough even if one of them is just skipped at runtime.
Drivers would have to opt in to explicit fencing support, and part of that would be ensuring that the driver does not wait on implicit fences when the user has requested explicit fencing be used.
Cheers, Daniel
On Thu, Mar 31, 2016 at 10:35:11AM +0100, Daniel Stone wrote:
Well, it has to be one or the other: mixing explicit and implicit defeats the purpose of using explicit fencing. So, when explicit fencing is in use, implicit fences must be ignored.
You can mix it, if you're careful. CrOS wants that, to better mesh android with Ozone, and we'll be discussing what needs to be added to make implicit and explicit fencing work together, in both directions. Of course this should only be used for shared buffers, e.g. explicit syncing in an android client running on top of implicitly synced ozone/kms. -Daniel
2016-03-31 19:04 GMT+09:00 Daniel Vetter daniel@ffwll.ch:
On Thu, Mar 31, 2016 at 10:35:11AM +0100, Daniel Stone wrote:
Well, it has to be one or the other: mixing explicit and implicit defeats the purpose of using explicit fencing. So, when explicit fencing is in use, implicit fences must be ignored.
You can mix it, if you're careful. CrOS wants that, to better mesh android with Ozone, and we'll be discussing what needs to be added to make implicit and explicit fencing work together, in both directions. Of course this should only be used for shared buffers, e.g. explicit syncing in an android client running on top of implicitly synced ozone/kms.
Good idea. I hope the mainline fence work gets discussed more, so that it can be considered for many cases.
Thanks, Inki Dae
-Daniel
Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch
Hi Daniel,
On 2016-03-31 18:35, Daniel Stone wrote:
Hi Inki,
On 31 March 2016 at 08:45, Inki Dae inki.dae@samsung.com wrote:
On 2016-03-29 22:23, Rob Clark wrote:
On Mon, Mar 28, 2016 at 10:18 PM, Inki Dae inki.dae@samsung.com wrote:
In addition, I wonder how explicit and implicit fences could coexist together. Rob said, "Implicit sync ofc remains the default, but userspace could opt-in to explicit sync instead"
This would mean that if we use explicit sync for user-space, then it coexists with implicit sync. However, these two sync fences can't see the same DMA buffer, because the explicit fence has a different file object from the implicit one. So in this case, I think the explicit fence would need to be hung on the reservation object of the dmabuf object somehow. Otherwise, although they coexist, are these fences - explicit and implicit - used separately for different purposes?
I'm not entirely sure about coexistence at the same time. It ofc shouldn't be a problem for one kernel to support both kinds of userspace (pure explicit and pure implicit). And how this would work on the kms atomic ioctl (compositor/consumer) side seems clear enough.. ie. some sort of flag which, if set, means the user provides an explicit fence fd, and if not set we fall back to the current behaviour (ie. get fences from the resv object).
With this patch series, users can register explicit fence(s) with atomic KMS (the consumer side) through the KMS property interface, for explicit sync.
However, several DRM drivers (also consumers) have already been using implicit fences. So while the GPU (the producer side) is accessing a DMA buffer after registering its explicit fence with atomic KMS, if an atomic commit is requested by user-space, the atomic helper framework will try to synchronize with the producer - waiting for the GPU (producer) side to signal - and the device-specific page flip function will also try to do the same thing.
Well, it has to be one or the other: mixing explicit and implicit defeats the purpose of using explicit fencing. So, when explicit fencing is in use, implicit fences must be ignored.
As of now, it seems this wouldn't be optional but mandatory if explicit fence support is added to the atomic helper framework. This would definitely be duplication, and it doesn't seem clear enough even if one of them is just skipped at runtime.
Drivers would have to opt in to explicit fencing support, and part of that would be ensuring that the driver does not wait on implicit fences when the user has requested explicit fencing be used.
Then, existing drivers would need additional work for explicit fencing support. This wouldn't really be something the drivers should have to do on their own, but something that should be handled with this patch series, because it would affect existing device drivers which use implicit fencing.
Thanks, Inki Dae
Cheers, Daniel
Hi Inki,
On 31 March 2016 at 11:05, Inki Dae inki.dae@samsung.com wrote:
On 2016-03-31 18:35, Daniel Stone wrote:
On 31 March 2016 at 08:45, Inki Dae inki.dae@samsung.com wrote:
As of now, it seems this wouldn't be optional but mandatory if explicit fence support is added to the atomic helper framework. This would definitely be duplication, and it doesn't seem clear enough even if one of them is just skipped at runtime.
Drivers would have to opt in to explicit fencing support, and part of that would be ensuring that the driver does not wait on implicit fences when the user has requested explicit fencing be used.
Then, existing drivers would need additional work for explicit fencing support. This wouldn't really be something the drivers should have to do on their own, but something that should be handled with this patch series, because it would affect existing device drivers which use implicit fencing.
Well, yes. Anyone implementing their own atomic commit would need to ensure that the commit works properly for fences. The helpers could also add it, but the helpers are not mandatory, and you are not required to use every part of the helper to use one part of the helper. There is no magic wand you can wave that instantly makes it work for every driver.
Cheers, Daniel
Hi Daniel,
2016-03-31 19:56 GMT+09:00 Daniel Stone daniel@fooishbar.org:
Hi Inki,
On 31 March 2016 at 11:05, Inki Dae inki.dae@samsung.com wrote:
On 2016-03-31 18:35, Daniel Stone wrote:
On 31 March 2016 at 08:45, Inki Dae inki.dae@samsung.com wrote:
As of now, it seems this wouldn't be optional but mandatory if explicit fence support is added to the atomic helper framework. This would definitely be duplication, and it doesn't seem clear enough even if one of them is just skipped at runtime.
Drivers would have to opt in to explicit fencing support, and part of that would be ensuring that the driver does not wait on implicit fences when the user has requested explicit fencing be used.
Then, existing drivers would need additional work for explicit fencing support. This wouldn't really be something the drivers should have to do on their own, but something that should be handled with this patch series, because it would affect existing device drivers which use implicit fencing.
Well, yes. Anyone implementing their own atomic commit would need to ensure that the commit works properly for fences. The helpers could also add it, but the helpers are not mandatory, and you are not required to use every part of the helper to use one part of the helper. There is no magic wand you can wave that instantly makes it work for every driver
I meant there are already several DRM drivers which work properly with implicit fences. So if the atomic helper framework of DRM core is designed only around explicit fences, the fencing operation would affect the existing DRM drivers. So I hope this effort takes existing implicit fence users into account.
Thanks, Inki Dae
Cheers, Daniel
Hi Inki,
On 31 March 2016 at 12:26, Inki Dae daeinki@gmail.com wrote:
2016-03-31 19:56 GMT+09:00 Daniel Stone daniel@fooishbar.org:
On 31 March 2016 at 11:05, Inki Dae inki.dae@samsung.com wrote:
Then, existing drivers would need additional work for explicit fencing support. This wouldn't really be something the drivers should have to do on their own, but something that should be handled with this patch series, because it would affect existing device drivers which use implicit fencing.
Well, yes. Anyone implementing their own atomic commit would need to ensure that the commit works properly for fences. The helpers could also add it, but the helpers are not mandatory, and you are not required to use every part of the helper to use one part of the helper. There is no magic wand you can wave that instantly makes it work for every driver
I meant there are already several DRM drivers which work properly with implicit fences. So if the atomic helper framework of DRM core is designed only around explicit fences, the fencing operation would affect the existing DRM drivers. So I hope this effort takes existing implicit fence users into account.
Yes, absolutely. Implicit fencing is already part of userspace ABI that we can effectively never remove: it would break everyone's desktops on Intel alone, as well as many others. So explicit will be opt-in from the user and the driver both, and only when the combination is fully supported will explicit fencing be used.
Cheers, Daniel
On Thu, Mar 31, 2016 at 7:26 AM, Inki Dae daeinki@gmail.com wrote:
Hi Daniel,
2016-03-31 19:56 GMT+09:00 Daniel Stone daniel@fooishbar.org:
Hi Inki,
On 31 March 2016 at 11:05, Inki Dae inki.dae@samsung.com wrote:
On 2016-03-31 18:35, Daniel Stone wrote:
On 31 March 2016 at 08:45, Inki Dae inki.dae@samsung.com wrote:
As of now, it seems this wouldn't be optional but mandatory if explicit fence support is added to the atomic helper framework. This would definitely be duplication, and it doesn't seem clear enough even if one of them is just skipped at runtime.
Drivers would have to opt in to explicit fencing support, and part of that would be ensuring that the driver does not wait on implicit fences when the user has requested explicit fencing be used.
Then, existing drivers would need additional work for explicit fencing support. This wouldn't really be something the drivers should have to do on their own, but something that should be handled with this patch series, because it would affect existing device drivers which use implicit fencing.
Well, yes. Anyone implementing their own atomic commit would need to ensure that the commit works properly for fences. The helpers could also add it, but the helpers are not mandatory, and you are not required to use every part of the helper to use one part of the helper. There is no magic wand you can wave that instantly makes it work for every driver
I meant there are already several DRM drivers which work properly with implicit fences. So if the atomic helper framework of DRM core is designed only around explicit fences, the fencing operation would affect the existing DRM drivers. So I hope this effort takes existing implicit fence users into account.
Note that there would be a new flag on the atomic ioctl to request explicit fencing, and with an old kernel or a driver that does not support it, the ioctl would be rejected and an error returned. The atomic/kms framework would of course continue to support implicit fencing. And an explicit-fencing userspace would require a sufficiently new kernel and possibly some minor driver support (above and beyond 'struct fence' conversion).
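So the safe userspace pattern would simply be try-and-fall-back (sketch; error handling elided):

        ret = drmModeAtomicCommit(kms_fd, req, flags, NULL);
        if (ret < 0) {
                /* old kernel, or driver without explicit-fence support:
                 * drop the FENCE_FD property and rely on implicit sync */
        }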
BR, -R
On 2016-03-31 23:10, Rob Clark wrote:
On Thu, Mar 31, 2016 at 7:26 AM, Inki Dae daeinki@gmail.com wrote:
Hi Daniel,
2016-03-31 19:56 GMT+09:00 Daniel Stone daniel@fooishbar.org:
Hi Inki,
On 31 March 2016 at 11:05, Inki Dae inki.dae@samsung.com wrote:
On 2016-03-31 18:35, Daniel Stone wrote:
On 31 March 2016 at 08:45, Inki Dae inki.dae@samsung.com wrote:
As of now, it seems this wouldn't be optional but mandatory if explicit fence support is added to the atomic helper framework. This would definitely be duplication, and it doesn't seem clear enough even if one of them is just skipped at runtime.
Drivers would have to opt in to explicit fencing support, and part of that would be ensuring that the driver does not wait on implicit fences when the user has requested explicit fencing be used.
Then, existing drivers would need additional work for explicit fencing support. This wouldn't really be something the drivers should have to do on their own, but something that should be handled with this patch series, because it would affect existing device drivers which use implicit fencing.
Well, yes. Anyone implementing their own atomic commit would need to ensure that the commit works properly for fences. The helpers could also add it, but the helpers are not mandatory, and you are not required to use every part of the helper to use one part of the helper. There is no magic wand you can wave that instantly makes it work for every driver
I meant there are already several DRM drivers which work properly with implicit fences. So if the atomic helper framework of DRM core is designed only around explicit fences, the fencing operation would affect the existing DRM drivers. So I hope this effort takes existing implicit fence users into account.
Note that there would be a new flag on the atomic ioctl to request
What is the new flag? And where can I find it?
explicit fencing, and with an old kernel or a driver that does not support it, the ioctl would be rejected and an error returned. The atomic/kms framework would of course continue to support implicit
I couldn't find where such exceptions are considered. And as of now, I think implicit fences are implemented by drivers and thus hidden from the DRM core framework. So how does the atomic/kms framework know whether explicit or implicit fencing is supported by a driver? Or do you mean such things are TODO for the future?
There may be some logic I don't understand yet.
Thanks, Inki Dae
fencing. And an explicit-fencing userspace would require a sufficiently new kernel and possibly some minor driver support (above and beyond 'struct fence' conversion).
BR, -R
On Sun, Apr 3, 2016 at 8:14 PM, Inki Dae inki.dae@samsung.com wrote:
On 2016-03-31 23:10, Rob Clark wrote:
On Thu, Mar 31, 2016 at 7:26 AM, Inki Dae daeinki@gmail.com wrote:
Hi Daniel,
2016-03-31 19:56 GMT+09:00 Daniel Stone daniel@fooishbar.org:
Hi Inki,
On 31 March 2016 at 11:05, Inki Dae inki.dae@samsung.com wrote:
On 2016-03-31 18:35, Daniel Stone wrote:
On 31 March 2016 at 08:45, Inki Dae inki.dae@samsung.com wrote:
As of now, it seems this wouldn't be optional but mandatory if explicit fence support is added to the atomic helper framework. This would definitely be duplication, and it doesn't seem clear enough even if one of them is just skipped at runtime.
Drivers would have to opt in to explicit fencing support, and part of that would be ensuring that the driver does not wait on implicit fences when the user has requested explicit fencing be used.
Then, existing drivers would need additional work for explicit fencing support. This wouldn't really be something the drivers should have to do on their own, but something that should be handled with this patch series, because it would affect existing device drivers which use implicit fencing.
Well, yes. Anyone implementing their own atomic commit would need to ensure that the commit works properly for fences. The helpers could also add it, but the helpers are not mandatory, and you are not required to use every part of the helper to use one part of the helper. There is no magic wand you can wave that instantly makes it work for every driver
I meant there are already several DRM drivers which work properly with implicit fences. So if the atomic helper framework of DRM core is designed only around explicit fences, the fencing operation would affect the existing DRM drivers. So I hope this effort takes existing implicit fence users into account.
Note that there would be a new flag on the atomic ioctl to request
What is the new flag? And where can I find it?
https://git.kernel.org/cgit/linux/kernel/git/padovan/linux.git/commit/?h=fen...
Note that on the in-fence side of things, the current proposal from Gustavo is to have a new property, rather than a separate flag and additional args to the atomic ioctl. But the end result would be the same.
Keep in mind, this is not merged yet, so things could change. Compared to his current patchset[1] I think we at least need to add a driver feature flag.
[1] https://git.kernel.org/cgit/linux/kernel/git/padovan/linux.git/log/?h=fences
explicit fencing, and with an old kernel or a driver that does not support it, the ioctl would be rejected and an error returned. The atomic/kms framework would of course continue to support implicit
I couldn't find where such exceptions are considered. And as of now, I think implicit fences are implemented by drivers and thus hidden from the DRM core framework. So how does the atomic/kms framework know whether explicit or implicit fencing is supported by a driver? Or do you mean such things are TODO for the future?
I think we need to add a flag to driver_features (ie. DRIVER_EXPLICIT_FENCE or something like that)
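ie. something along these lines (sketch; the flag name and value are placeholders):

        #define DRIVER_EXPLICIT_FENCE  0x10000  /* hypothetical driver_features bit */

        /* in the atomic ioctl, when userspace supplies a fence fd */
        if (!drm_core_check_feature(dev, DRIVER_EXPLICIT_FENCE))
                return -EINVAL;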
BR, -R
There may be some logic I don't understand yet.
Thanks, Inki Dae
fencing. And an explicit-fencing userspace would require a sufficiently new kernel and possibly some minor driver support (above and beyond 'struct fence' conversion).
BR, -R
On Mon, Apr 4, 2016 at 11:41 AM, Rob Clark robdclark@gmail.com wrote:
On Sun, Apr 3, 2016 at 8:14 PM, Inki Dae inki.dae@samsung.com wrote:
On 2016-03-31 23:10, Rob Clark wrote:
On Thu, Mar 31, 2016 at 7:26 AM, Inki Dae daeinki@gmail.com wrote:
Hi Daniel,
2016-03-31 19:56 GMT+09:00 Daniel Stone daniel@fooishbar.org:
Hi Inki,
On 31 March 2016 at 11:05, Inki Dae inki.dae@samsung.com wrote:
On 2016-03-31 18:35, Daniel Stone wrote:
On 31 March 2016 at 08:45, Inki Dae inki.dae@samsung.com wrote:
As of now, it seems this wouldn't be optional but mandatory if explicit fence support is added to the atomic helper framework. This would definitely be duplication, and it doesn't seem clear enough even if one of them is just skipped at runtime.
Drivers would have to opt in to explicit fencing support, and part of that would be ensuring that the driver does not wait on implicit fences when the user has requested explicit fencing be used.
Then, existing drivers would need additional work for explicit fencing support. This wouldn't really be something the drivers should have to do on their own, but something that should be handled with this patch series, because it would affect existing device drivers which use implicit fencing.
Well, yes. Anyone implementing their own atomic commit would need to ensure that the commit works properly for fences. The helpers could also add it, but the helpers are not mandatory, and you are not required to use every part of the helper to use one part of the helper. There is no magic wand you can wave that instantly makes it work for every driver
I meant there are already several DRM drivers which work properly with implicit fences. So if the atomic helper framework of DRM core is designed only around explicit fences, the fencing operation would affect the existing DRM drivers. So I hope this effort takes existing implicit fence users into account.
Note that there would be a new flag on the atomic ioctl to request
What is the new flag? And where can I find it?
https://git.kernel.org/cgit/linux/kernel/git/padovan/linux.git/commit/?h=fen...
Note that on the in-fence side of things, the current proposal from Gustavo is to have a new property, rather than a separate flag and additional args to the atomic ioctl. But the end result would be the same.
Keep in mind, this is not merged yet, so things could change. Compared to his current patchset[1] I think we at least need to add a driver feature flag.
[1] https://git.kernel.org/cgit/linux/kernel/git/padovan/linux.git/log/?h=fences
explicit fencing, and with an old kernel or a driver that does not support it, the ioctl would be rejected and an error returned. The atomic/kms framework would of course continue to support implicit
I couldn't find where such exceptions are considered. And as of now, I think implicit fences are implemented by drivers and thus hidden from the DRM core framework. So how does the atomic/kms framework know whether explicit or implicit fencing is supported by a driver? Or do you mean such things are TODO for the future?
I think we need to add a flag to driver_features (ie. DRIVER_EXPLICIT_FENCE or something like that)
well, I should say, there is also the possibility that we could handle fences entirely in drm core. Although it would mean that atomic-helper would need to know about the driver's workqueue for blocking on the fence(s) for _NONBLOCK commits..
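Roughly like this (sketch; my_commit and commit_work are illustrative names, reusing the existing plane_state->fence pointer):

        static void commit_work(struct work_struct *work)
        {
                struct my_commit *c = container_of(work, struct my_commit, work);
                struct drm_plane *plane;
                struct drm_plane_state *plane_state;
                int i;

                /* block here, on the driver's workqueue, not in the ioctl */
                for_each_plane_in_state(c->state, plane, plane_state, i)
                        if (plane_state->fence)
                                fence_wait(plane_state->fence, false);

                /* ...then apply the new state (commit_planes etc.) and
                 * signal flip completion */
        }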
BR, -R
BR, -R
There may be some logic I don't understand yet.
Thanks, Inki Dae
fencing. And an explicit-fencing userspace would require a sufficiently new kernel and possibly some minor driver support (above and beyond 'struct fence' conversion).
BR, -R