drm_atomic_commit could previously already fail when its waits were interrupted, but locking was always done uninterruptibly. Add infrastructure to make it possible for callers to choose interruptible locking, and convert the 4 most common ioctls to use it.
All other atomic helpers can be converted when Daniel's "acquire_ctx for everyone!" patch series lands.
Maarten Lankhorst (5):
  drm/atomic: Prepare drm_modeset_lock infrastructure for interruptible waiting
  drm/atomic: Convert atomic ioctl locking to interruptible.
  drm/atomic: Convert pageflip ioctl locking to interruptible.
  drm/legacy: Convert cursor ioctl locking to interruptible.
  drm/legacy: Convert setplane ioctl locking to interruptible.

 drivers/gpu/drm/drm_atomic.c       |  7 ++--
 drivers/gpu/drm/drm_debugfs_crc.c  |  2 +-
 drivers/gpu/drm/drm_modeset_lock.c | 75 ++++++++++++++++++--------------------
 drivers/gpu/drm/drm_plane.c        | 21 ++++++-----
 include/drm/drm_modeset_lock.h     | 12 ++++--
 5 files changed, 60 insertions(+), 57 deletions(-)
When we want to make drm_atomic_commit interruptible, there are a lot of places that call the lock function, which we don't have control over.
Rather than trying to convert every single one, it's easier to toggle interruptible waiting per acquire_ctx. If drm_modeset_acquire_init is called with DRM_MODESET_ACQUIRE_INTERRUPTIBLE, then we will perform interruptible waits in drm_modeset_lock and drm_modeset_backoff.
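As an illustration (not part of the patch), here is a minimal sketch of the retry loop a caller would use after opting in with the new flag. The function name example_ioctl_locked() is made up for the example, but the calls match the interfaces touched below:

static int example_ioctl_locked(struct drm_crtc *crtc)
{
	struct drm_modeset_acquire_ctx ctx;
	int ret;

	/* All drm_modeset_lock()/drm_modeset_backoff() calls on this ctx
	 * will now perform interruptible waits. */
	drm_modeset_acquire_init(&ctx, DRM_MODESET_ACQUIRE_INTERRUPTIBLE);
retry:
	ret = drm_modeset_lock(&crtc->mutex, &ctx);
	if (ret)
		goto fail;

	/* ... perform the update while holding the lock ... */

fail:
	if (ret == -EDEADLK) {
		/* Backoff may now return -ERESTARTSYS, so only retry on success. */
		ret = drm_modeset_backoff(&ctx);
		if (!ret)
			goto retry;
	}

	drm_modeset_drop_locks(&ctx);
	drm_modeset_acquire_fini(&ctx);
	return ret;
}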
Signed-off-by: Maarten Lankhorst maarten.lankhorst@linux.intel.com
---
 drivers/gpu/drm/drm_debugfs_crc.c  |  2 +-
 drivers/gpu/drm/drm_modeset_lock.c | 75 ++++++++++++++++++--------------------
 include/drm/drm_modeset_lock.h     | 12 ++++--
 3 files changed, 44 insertions(+), 45 deletions(-)

diff --git a/drivers/gpu/drm/drm_debugfs_crc.c b/drivers/gpu/drm/drm_debugfs_crc.c
index f9e26dda56d6..9dd879589a2c 100644
--- a/drivers/gpu/drm/drm_debugfs_crc.c
+++ b/drivers/gpu/drm/drm_debugfs_crc.c
@@ -155,7 +155,7 @@ static int crtc_crc_open(struct inode *inode, struct file *filep)
 	int ret = 0;
 
 	if (drm_drv_uses_atomic_modeset(crtc->dev)) {
-		ret = drm_modeset_lock_interruptible(&crtc->mutex, NULL);
+		ret = drm_modeset_lock_single_interruptible(&crtc->mutex);
 		if (ret)
 			return ret;
 
diff --git a/drivers/gpu/drm/drm_modeset_lock.c b/drivers/gpu/drm/drm_modeset_lock.c
index af4e906c630d..5db47e04e68e 100644
--- a/drivers/gpu/drm/drm_modeset_lock.c
+++ b/drivers/gpu/drm/drm_modeset_lock.c
@@ -178,7 +178,11 @@ EXPORT_SYMBOL(drm_warn_on_modeset_not_all_locked);
 /**
  * drm_modeset_acquire_init - initialize acquire context
  * @ctx: the acquire context
- * @flags: for future
+ * @flags: 0 or %DRM_MODESET_ACQUIRE_INTERRUPTIBLE
+ *
+ * When passing %DRM_MODESET_ACQUIRE_INTERRUPTIBLE to @flags,
+ * all calls to drm_modeset_lock() will perform an interruptible
+ * wait.
  */
 void drm_modeset_acquire_init(struct drm_modeset_acquire_ctx *ctx,
 		uint32_t flags)
@@ -186,6 +190,9 @@ void drm_modeset_acquire_init(struct drm_modeset_acquire_ctx *ctx,
 	memset(ctx, 0, sizeof(*ctx));
 	ww_acquire_init(&ctx->ww_ctx, &crtc_ww_class);
 	INIT_LIST_HEAD(&ctx->locked);
+
+	if (flags & DRM_MODESET_ACQUIRE_INTERRUPTIBLE)
+		ctx->interruptible = true;
 }
 EXPORT_SYMBOL(drm_modeset_acquire_init);
 
@@ -261,8 +268,19 @@ static inline int modeset_lock(struct drm_modeset_lock *lock,
 	return ret;
 }
 
-static int modeset_backoff(struct drm_modeset_acquire_ctx *ctx,
-			   bool interruptible)
+/**
+ * drm_modeset_backoff - deadlock avoidance backoff
+ * @ctx: the acquire context
+ *
+ * If deadlock is detected (ie. drm_modeset_lock() returns -EDEADLK),
+ * you must call this function to drop all currently held locks and
+ * block until the contended lock becomes available.
+ *
+ * This function returns 0 on success, or -ERESTARTSYS if this context
+ * is initialized with %DRM_MODESET_ACQUIRE_INTERRUPTIBLE and the
+ * wait has been interrupted.
+ */
+int drm_modeset_backoff(struct drm_modeset_acquire_ctx *ctx)
 {
 	struct drm_modeset_lock *contended = ctx->contended;
 
@@ -273,36 +291,11 @@ static int modeset_backoff(struct drm_modeset_acquire_ctx *ctx,
 
 	drm_modeset_drop_locks(ctx);
 
-	return modeset_lock(contended, ctx, interruptible, true);
-}
-
-/**
- * drm_modeset_backoff - deadlock avoidance backoff
- * @ctx: the acquire context
- *
- * If deadlock is detected (ie. drm_modeset_lock() returns -EDEADLK),
- * you must call this function to drop all currently held locks and
- * block until the contended lock becomes available.
- */
-void drm_modeset_backoff(struct drm_modeset_acquire_ctx *ctx)
-{
-	modeset_backoff(ctx, false);
+	return modeset_lock(contended, ctx, ctx->interruptible, true);
 }
 EXPORT_SYMBOL(drm_modeset_backoff);
 
 /**
- * drm_modeset_backoff_interruptible - deadlock avoidance backoff
- * @ctx: the acquire context
- *
- * Interruptible version of drm_modeset_backoff()
- */
-int drm_modeset_backoff_interruptible(struct drm_modeset_acquire_ctx *ctx)
-{
-	return modeset_backoff(ctx, true);
-}
-EXPORT_SYMBOL(drm_modeset_backoff_interruptible);
-
-/**
  * drm_modeset_lock_init - initialize lock
  * @lock: lock to init
  */
@@ -324,14 +317,18 @@ EXPORT_SYMBOL(drm_modeset_lock_init);
  * deadlock scenario has been detected and it is an error to attempt
  * to take any more locks without first calling drm_modeset_backoff().
  *
+ * If the @ctx is not NULL and initialized with
+ * %DRM_MODESET_ACQUIRE_INTERRUPTIBLE, this function will behave
+ * as drm_modeset_lock_interruptible().
+ *
  * If @ctx is NULL then the function call behaves like a normal,
- * non-nesting mutex_lock() call.
+ * uninterruptible non-nesting mutex_lock() call.
  */
 int drm_modeset_lock(struct drm_modeset_lock *lock,
 		struct drm_modeset_acquire_ctx *ctx)
 {
 	if (ctx)
-		return modeset_lock(lock, ctx, false, false);
+		return modeset_lock(lock, ctx, ctx->interruptible, false);
 
 	ww_mutex_lock(&lock->mutex, NULL);
 	return 0;
@@ -339,21 +336,19 @@ int drm_modeset_lock(struct drm_modeset_lock *lock,
 EXPORT_SYMBOL(drm_modeset_lock);
 
 /**
- * drm_modeset_lock_interruptible - take modeset lock
+ * drm_modeset_lock_single_interruptible - take a single modeset lock
  * @lock: lock to take
- * @ctx: acquire ctx
 *
- * Interruptible version of drm_modeset_lock()
+ * This function behaves as drm_modeset_lock() with a NULL context,
+ * but performs interruptible waits.
+ *
+ * This function returns 0 on success, or -ERESTARTSYS when interrupted.
  */
-int drm_modeset_lock_interruptible(struct drm_modeset_lock *lock,
-		struct drm_modeset_acquire_ctx *ctx)
+int drm_modeset_lock_single_interruptible(struct drm_modeset_lock *lock)
 {
-	if (ctx)
-		return modeset_lock(lock, ctx, true, false);
-
 	return ww_mutex_lock_interruptible(&lock->mutex, NULL);
 }
-EXPORT_SYMBOL(drm_modeset_lock_interruptible);
+EXPORT_SYMBOL(drm_modeset_lock_single_interruptible);
 
 /**
  * drm_modeset_unlock - drop modeset lock
diff --git a/include/drm/drm_modeset_lock.h b/include/drm/drm_modeset_lock.h
index 4b27c2bb955c..a685d1bb21f2 100644
--- a/include/drm/drm_modeset_lock.h
+++ b/include/drm/drm_modeset_lock.h
@@ -34,6 +34,7 @@ struct drm_modeset_lock;
  * @contended: used internally for -EDEADLK handling
  * @locked: list of held locks
  * @trylock_only: trylock mode used in atomic contexts/panic notifiers
+ * @interruptible: whether interruptible locking should be used.
  *
  * Each thread competing for a set of locks must use one acquire
  * ctx. And if any lock fxn returns -EDEADLK, it must backoff and
@@ -59,6 +60,9 @@ struct drm_modeset_acquire_ctx {
 	 * Trylock mode, use only for panic handlers!
 	 */
 	bool trylock_only;
+
+	/* Perform interruptible waits on this context. */
+	bool interruptible;
 };
 
 /**
@@ -82,12 +86,13 @@ struct drm_modeset_lock {
 	struct list_head head;
 };
 
+#define DRM_MODESET_ACQUIRE_INTERRUPTIBLE BIT(0)
+
 void drm_modeset_acquire_init(struct drm_modeset_acquire_ctx *ctx,
 		uint32_t flags);
 void drm_modeset_acquire_fini(struct drm_modeset_acquire_ctx *ctx);
 void drm_modeset_drop_locks(struct drm_modeset_acquire_ctx *ctx);
-void drm_modeset_backoff(struct drm_modeset_acquire_ctx *ctx);
-int drm_modeset_backoff_interruptible(struct drm_modeset_acquire_ctx *ctx);
+int drm_modeset_backoff(struct drm_modeset_acquire_ctx *ctx);
 
 void drm_modeset_lock_init(struct drm_modeset_lock *lock);
 
@@ -111,8 +116,7 @@ static inline bool drm_modeset_is_locked(struct drm_modeset_lock *lock)
 
 int drm_modeset_lock(struct drm_modeset_lock *lock,
 		struct drm_modeset_acquire_ctx *ctx);
-int drm_modeset_lock_interruptible(struct drm_modeset_lock *lock,
-		struct drm_modeset_acquire_ctx *ctx);
+int __must_check drm_modeset_lock_single_interruptible(struct drm_modeset_lock *lock);
 void drm_modeset_unlock(struct drm_modeset_lock *lock);
 
 struct drm_device;
On Thu, Jul 27, 2017 at 02:58:36PM +0200, Maarten Lankhorst wrote:
When we want to make drm_atomic_commit interruptible, there are a lot of places that call the lock function, which we don't have control over.
Rather than trying to convert every single one, it's easier to toggle interruptible waiting per acquire_ctx. If drm_modeset_acquire_init is called with DRM_MODESET_ACQUIRE_INTERRUPTIBLE, then we will perform interruptible waits in drm_modeset_lock and drm_modeset_backoff.
Signed-off-by: Maarten Lankhorst maarten.lankhorst@linux.intel.com
I wonder whether we should do a drm_modeset_lock_single without the context attribute too, for OCD. But not really needed.
Reviewed-by: Daniel Vetter daniel.vetter@ffwll.ch
On 27-07-17 at 17:19, Daniel Vetter wrote:
On Thu, Jul 27, 2017 at 02:58:36PM +0200, Maarten Lankhorst wrote:
When we want to make drm_atomic_commit interruptible, there are a lot of places that call the lock function, which we don't have control over.
Rather than trying to convert every single one, it's easier to toggle interruptible waiting per acquire_ctx. If drm_modeset_acquire_init is called with DRM_MODESET_ACQUIRE_INTERRUPTIBLE, then we will perform interruptible waits in drm_modeset_lock and drm_modeset_backoff.
Signed-off-by: Maarten Lankhorst maarten.lankhorst@linux.intel.com
I wonder whether we should do a drm_modeset_lock_single without the context attribute too, for OCD. But not really needed.
Reviewed-by: Daniel Vetter daniel.vetter@ffwll.ch
If it were up to me, we wouldn't have renamed drm_modeset_lock_interruptible and would have kept letting it take a ctx. Most likely it would have been NULL, but at least we'd have symmetry.
The same discussion was held for ww_mutex_lock iirc, and we ended up with a single locking function where we could pass a NULL context if needed. :)
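Purely as an illustration of that analogy (a hypothetical helper, not proposed API): the two ww_mutex entry points this patch already calls both take an optional acquire context and accept NULL, so a single wrapper per wait type is all that's needed:

/* Illustration only: one entry point per wait type, NULL ctx allowed. */
static int example_single_lock(struct drm_modeset_lock *lock, bool interruptible)
{
	if (interruptible)
		return ww_mutex_lock_interruptible(&lock->mutex, NULL);

	ww_mutex_lock(&lock->mutex, NULL);
	return 0;
}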
~Maarten
Pass DRM_MODESET_ACQUIRE_INTERRUPTIBLE to acquire_init, and handle drm_modeset_backoff which can now fail by returning the error.
Signed-off-by: Maarten Lankhorst maarten.lankhorst@linux.intel.com
---
 drivers/gpu/drm/drm_atomic.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c
index 01192dd3ed79..4a1ff90fd73e 100644
--- a/drivers/gpu/drm/drm_atomic.c
+++ b/drivers/gpu/drm/drm_atomic.c
@@ -2217,7 +2217,7 @@ int drm_mode_atomic_ioctl(struct drm_device *dev,
 	    (arg->flags & DRM_MODE_PAGE_FLIP_EVENT))
 		return -EINVAL;
 
-	drm_modeset_acquire_init(&ctx, 0);
+	drm_modeset_acquire_init(&ctx, DRM_MODESET_ACQUIRE_INTERRUPTIBLE);
 
 	state = drm_atomic_state_alloc(dev);
 	if (!state)
@@ -2327,8 +2327,9 @@ int drm_mode_atomic_ioctl(struct drm_device *dev,
 
 	if (ret == -EDEADLK) {
 		drm_atomic_state_clear(state);
-		drm_modeset_backoff(&ctx);
-		goto retry;
+		ret = drm_modeset_backoff(&ctx);
+		if (!ret)
+			goto retry;
 	}
 
 	drm_atomic_state_put(state);
On Thu, Jul 27, 2017 at 02:58:37PM +0200, Maarten Lankhorst wrote:
Pass DRM_MODESET_ACQUIRE_INTERRUPTIBLE to acquire_init, and handle drm_modeset_backoff which can now fail by returning the error.
Signed-off-by: Maarten Lankhorst maarten.lankhorst@linux.intel.com
Requires a testcase, with that addressed:
Reviewed-by: Daniel Vetter daniel.vetter@ffwll.ch
I think the test should be:
1. One thread is stuck on a blocking plane update (using a vgem fence that doesn't get signalled).
2. A second thread does a blocking plane update on that same plane.
3. Gets interrupted by a signal, and we confirm that we receive that signal before we've unblocked the first thread.
This would give us positive confirmation that interruptible actually works.
Cheers, Daniel
Pass DRM_MODESET_ACQUIRE_INTERRUPTIBLE to acquire_init, and handle drm_modeset_backoff which can now fail by returning the error.
Signed-off-by: Maarten Lankhorst maarten.lankhorst@linux.intel.com
---
 drivers/gpu/drm/drm_plane.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_plane.c b/drivers/gpu/drm/drm_plane.c
index 5dc8c4350602..4997229d397b 100644
--- a/drivers/gpu/drm/drm_plane.c
+++ b/drivers/gpu/drm/drm_plane.c
@@ -866,7 +866,7 @@ int drm_mode_page_flip_ioctl(struct drm_device *dev,
 		return -EINVAL;
 	}
 
-	drm_modeset_acquire_init(&ctx, 0);
+	drm_modeset_acquire_init(&ctx, DRM_MODESET_ACQUIRE_INTERRUPTIBLE);
 retry:
 	ret = drm_modeset_lock(&crtc->mutex, &ctx);
 	if (ret)
@@ -955,8 +955,9 @@ int drm_mode_page_flip_ioctl(struct drm_device *dev,
 	crtc->primary->old_fb = NULL;
 
 	if (ret == -EDEADLK) {
-		drm_modeset_backoff(&ctx);
-		goto retry;
+		ret = drm_modeset_backoff(&ctx);
+		if (!ret)
+			goto retry;
 	}
 
 	drm_modeset_drop_locks(&ctx);
On Thu, Jul 27, 2017 at 02:58:38PM +0200, Maarten Lankhorst wrote:
Pass DRM_MODESET_ACQUIRE_INTERRUPTIBLE to acquire_init, and handle drm_modeset_backoff which can now fail by returning the error.
Signed-off-by: Maarten Lankhorst maarten.lankhorst@linux.intel.com
Same comment as the atomic ioctl. With the testcase added this gets my
Reviewed-by: Daniel Vetter daniel.vetter@ffwll.ch
Pass DRM_MODESET_ACQUIRE_INTERRUPTIBLE to acquire_init, and handle drm_modeset_backoff which can now fail by returning the error.
Signed-off-by: Maarten Lankhorst maarten.lankhorst@linux.intel.com
---
 drivers/gpu/drm/drm_plane.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_plane.c b/drivers/gpu/drm/drm_plane.c
index 4997229d397b..63ced2e69386 100644
--- a/drivers/gpu/drm/drm_plane.c
+++ b/drivers/gpu/drm/drm_plane.c
@@ -715,7 +715,7 @@ static int drm_mode_cursor_common(struct drm_device *dev,
 		return -ENOENT;
 	}
 
-	drm_modeset_acquire_init(&ctx, 0);
+	drm_modeset_acquire_init(&ctx, DRM_MODESET_ACQUIRE_INTERRUPTIBLE);
 retry:
 	ret = drm_modeset_lock(&crtc->mutex, &ctx);
 	if (ret)
@@ -757,8 +757,9 @@ static int drm_mode_cursor_common(struct drm_device *dev,
 	}
 out:
 	if (ret == -EDEADLK) {
-		drm_modeset_backoff(&ctx);
-		goto retry;
+		ret = drm_modeset_backoff(&ctx);
+		if (!ret)
+			goto retry;
 	}
 
 	drm_modeset_drop_locks(&ctx);
Pass DRM_MODESET_ACQUIRE_INTERRUPTIBLE to acquire_init, and handle drm_modeset_backoff which can now fail by returning the error.
Signed-off-by: Maarten Lankhorst maarten.lankhorst@linux.intel.com
---
 drivers/gpu/drm/drm_plane.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_plane.c b/drivers/gpu/drm/drm_plane.c
index 63ced2e69386..7aef8611ce9d 100644
--- a/drivers/gpu/drm/drm_plane.c
+++ b/drivers/gpu/drm/drm_plane.c
@@ -549,7 +549,7 @@ static int setplane_internal(struct drm_plane *plane,
 	struct drm_modeset_acquire_ctx ctx;
 	int ret;
 
-	drm_modeset_acquire_init(&ctx, 0);
+	drm_modeset_acquire_init(&ctx, DRM_MODESET_ACQUIRE_INTERRUPTIBLE);
 retry:
 	ret = drm_modeset_lock_all_ctx(plane->dev, &ctx);
 	if (ret)
@@ -560,8 +560,9 @@ static int setplane_internal(struct drm_plane *plane,
 
 fail:
 	if (ret == -EDEADLK) {
-		drm_modeset_backoff(&ctx);
-		goto retry;
+		ret = drm_modeset_backoff(&ctx);
+		if (!ret)
+			goto retry;
 	}
 	drm_modeset_drop_locks(&ctx);
 	drm_modeset_acquire_fini(&ctx);
Hi Maarten
On 27 July 2017 at 13:58, Maarten Lankhorst maarten.lankhorst@linux.intel.com wrote:
drm_atomic_commit could previously already fail when its waits were interrupted, but locking was always done uninterruptibly. Add infrastructure to make it possible for callers to choose interruptible locking, and convert the 4 most common ioctls to use it.
All other atomic helpers can be converted when Daniel's "acquire_ctx for everyone!" patch series lands.
There's a KMS locking documentation/example in drm_modeset_lock.c. Can you please update that as well?
Thanks Emil
On Thu, Jul 27, 2017 at 03:45:11PM +0100, Emil Velikov wrote:
Hi Maarten
On 27 July 2017 at 13:58, Maarten Lankhorst maarten.lankhorst@linux.intel.com wrote:
drm_atomic_commit could previously already fail when its waits were interrupted, but locking was always done uninterruptibly. Add infrastructure to make it possible for callers to choose interruptible locking, and convert the 4 most common ioctls to use it.
All other atomic helpers can be converted when Daniel's "acquire_ctx for everyone!" patch series lands.
There's a KMS locking documentation/example in drm_modeset_lock.c. Can you please update that as well?
Not sure what we should update there? As-is it still works for the non-interruptible case. Or do you mean we should have an interruptible variant of it too?
Anyway, with the testcase added also for cursor and plane code, patches 4&5 also get my Reviewed-by: Daniel Vetter daniel.vetter@ffwll.ch
Any reason you didn't convert the set_config path too? That's probably the one most will be stuck in when something goes wrong ... Another one is the various get* and setprop calls (but the latter needs my patch series first). -Daniel
On 27 July 2017 at 16:30, Daniel Vetter daniel@ffwll.ch wrote:
On Thu, Jul 27, 2017 at 03:45:11PM +0100, Emil Velikov wrote:
Hi Maarten
On 27 July 2017 at 13:58, Maarten Lankhorst maarten.lankhorst@linux.intel.com wrote:
drm_atomic_commit could previously already fail when its waits were interrupted, but locking was always done uninterruptibly. Add infrastructure to make it possible for callers to choose interruptible locking, and convert the 4 most common ioctls to use it.
All other atomic helpers can be converted when Daniel's "acquire_ctx for everyone!" patch series lands.
There's a KMS locking documentation/example in drm_modeset_lock.c. Can you please update that as well?
Not sure what we should update there? As-is it still works for the non-interruptible case. Or do you mean we should have an interruptible variant of it too?
Don't think another example is needed.
After the example, add a line which says "hey, you can have interruptible locking", pointing to drm_modeset_acquire_init() and/or drm_modeset_backoff(). Updating the drm_modeset_acquire_init() call in the example to show the flags argument would also be nice.
-Emil
On 27-07-17 at 17:30, Daniel Vetter wrote:
On Thu, Jul 27, 2017 at 03:45:11PM +0100, Emil Velikov wrote:
Hi Maarten
On 27 July 2017 at 13:58, Maarten Lankhorst maarten.lankhorst@linux.intel.com wrote:
drm_atomic_commit could previously already fail when its waits were interrupted, but locking was always done uninterruptibly. Add infrastructure to make it possible for callers to choose interruptible locking, and convert the 4 most common ioctls to use it.
All other atomic helpers can be converted when Daniel's "acquire_ctx for everyone!" patch series lands.
There's a KMS locking documentation/example in drm_modeset_lock.c. Can you please update that as well?
Not sure what we should update there? As-is it still works for the non-interruptible case. Or do you mean we should have an interruptible variant of it too?
Anyway, with the testcase added also for cursor and plane code, patches 4&5 also get my Reviewed-by: Daniel Vetter daniel.vetter@ffwll.ch
Any reason you didn't convert the set_config path too? That's probably the one most will be stuck in when something goes wrong ... Another one is the various get* and setprop calls (but the later needs my patch series first). -Daniel
I only did the ones I expect to be called most often by frequency. The atomic ioctl, pageflip, cursor update and setplane for overlays are likely to be done most. I think we could test all additions like that, perhaps even the interruptible backoff case, at least for the atomic ioctl. :)