This patch set implements a basic GPU scheduler for amdgpu. A GPU scheduler
is required for proper GPU reset and optimal utilization of hw resources.
It's currently disabled by default as there are a few things we'd like to
improve first:
- better integration with kernel fences
- clean up scheduler involvement in the ISR

Enable it with amdgpu.enable_scheduler=1. Performance is comparable to the
non-scheduler paths.
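For convenience, here are the two usual ways to flip the switch (a sketch,
assuming the standard module_param_named() plumbing this series adds):

  # built-in driver: append to the kernel command line
  amdgpu.enable_scheduler=1

  # modular driver: e.g. in /etc/modprobe.d/amdgpu.conf
  options amdgpu enable_scheduler=1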
Patches are also available in the drm-next-4.3-wip branch of my git tree: http://cgit.freedesktop.org/~agd5f/linux/log/?h=drm-next-4.3-wip
Christian König (1):
  drm/amdgpu: fix syncing to VM updates

Chunming Zhou (20):
  drm/amdgpu: add context entity init
  drm/amdgpu: disable hw semaphore with scheduler
  drm/amdgpu: add backend implementation of gpu scheduler (v2)
  drm/amdgpu: add bo list copy
  drm/amdgpu: dispatch jobs in cs
  drm/amdgpu: use scheduler user seq instead of previous user seq
  drm/amdgpu: make sure the fence is emitted before ring to get it
  drm/amdgpu: prepare job before push to sw queue for pte ring
  drm/amdgpu: add kernel ctx support (v2)
  drm/amdgpu: dispatch job for vm
  drm/amdgpu: add sched isr to fence process
  drm/amdgpu: protect fence_process from multiple context
  drm/amdgpu: add check for callback
  drm/amdgpu: wait forever for wait emit
  drm/amdgpu: fix seq in ctx_add_fence
  drm/amdgpu: add helper function for kernel submission
  drm/amdgpu: use gpu scheduler for gfx ring ib test
  drm/amdgpu: use gpu scheduler for sdma ib test
  drm/amdgpu: use scheduler for UVD ib test
  drm/amdgpu: use scheduler for VCE ib test

Jammy Zhou (6):
  drm/amd: add basic scheduling framework
  drm/amdgpu: add scheduler initialization
  drm/amdgpu: add enable_scheduler module option
  drm/amdgpu: silent the message for GPU scheduler creation
  drm/amdgpu: add amdgpu.sched_jobs option
  drm/amdgpu: add amdgpu.sched_hw_submission option

monk.liu (4):
  drm/amdgpu: use kernel fence interface when possible
  drm/amdgpu: new implement for fence_wait_any (v2)
  drm/amdgpu: re-implement fence_default_wait
  drm/amdgpu: move wait_queue_head from adev to ring (v2)
 drivers/gpu/drm/amd/amdgpu/Makefile           |   8 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu.h           |  63 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c   |  50 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c        | 234 ++++++++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c       | 156 ++++++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    |   9 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c       |  12 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c     | 372 ++++++++----------
 drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c        |   6 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c      |   4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_sa.c        |   3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c     | 145 +++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c      |  21 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c       |  61 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c       | 158 ++++----
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        | 245 +++++++++---
 drivers/gpu/drm/amd/amdgpu/cik_sdma.c         |  26 +-
 drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c         |  28 +-
 drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c         |  27 +-
 drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c        |  27 +-
 drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c        |  26 +-
 drivers/gpu/drm/amd/scheduler/gpu_scheduler.c | 532 ++++++++++++++++++++++++++
 drivers/gpu/drm/amd/scheduler/gpu_scheduler.h | 159 ++++++++
 23 files changed, 1861 insertions(+), 511 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
 create mode 100644 drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
 create mode 100644 drivers/gpu/drm/amd/scheduler/gpu_scheduler.h
From: Jammy Zhou <Jammy.Zhou@amd.com>
run queue: A set of entities scheduling commands for the same ring. It implements the scheduling policy that selects the next entity to emit commands from.
entity: A scheduler entity is a wrapper around a job queue or a group of other entities. This can be used to build hierarchies of entities. For example all job queue entities belonging to the same process may be placed in a higher level entity and scheduled against other process entities. Entities take turns emitting jobs from their job queue to the corresponding hardware ring, in accordance with the scheduler policy.
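To make the intended flow concrete, here is a rough driver-side sketch
against the API introduced below. The my_* callbacks and my_job are
placeholders for whatever the backend needs, not part of this patch
(amdgpu's real backend lands later in the series as amdgpu_sched_ops):

#include "gpu_scheduler.h"

/* Hypothetical backend hooks; the scheduler calls these from its kthread/ISR */
static int my_prepare_job(struct amd_gpu_scheduler *sched,
			  struct amd_context_entity *c_entity, void *job)
{
	return 0;	/* last-minute setup before the job hits the ring */
}

static void my_run_job(struct amd_gpu_scheduler *sched,
		       struct amd_context_entity *c_entity, void *job)
{
	/* emit the IBs carried by 'job' to the hardware ring */
}

static void my_process_job(struct amd_gpu_scheduler *sched, void *job)
{
	/* EOP time: retire 'job' and let waiters make progress */
}

static struct amd_sched_backend_ops my_ops = {
	.prepare_job = my_prepare_job,
	.run_job     = my_run_job,
	.process_job = my_process_job,
};

	/* one scheduler per ring; one entity per (context, ring) pair */
	struct amd_gpu_scheduler *sched;
	struct amd_context_entity entity;

	sched = amd_sched_create(adev, &my_ops, ring_id,
				 5 /* granularity, ms */, 0 /* no preemption */);
	amd_context_entity_init(sched, &entity, NULL, &sched->sched_rq, ctx_id);
	amd_sched_push_job(sched, &entity, my_job);	/* async; kthread consumes */

The push is asynchronous: the scheduler's kernel thread later selects the
entity round-robin, calls prepare_job, and submits via run_job; amd_sched_isr()
is expected to be wired into the ring's EOP interrupt so process_job can
retire completed work.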
Signed-off-by: Shaoyun Liu <Shaoyun.Liu@amd.com>
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
Signed-off-by: Jammy Zhou <Jammy.Zhou@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Jammy Zhou <Jammy.Zhou@amd.com>
---
 drivers/gpu/drm/amd/scheduler/gpu_scheduler.c | 531 ++++++++++++++++++++++++++
 drivers/gpu/drm/amd/scheduler/gpu_scheduler.h | 160 ++++++++
 2 files changed, 691 insertions(+)
 create mode 100644 drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
 create mode 100644 drivers/gpu/drm/amd/scheduler/gpu_scheduler.h
diff --git a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
new file mode 100644
index 0000000..296496c
--- /dev/null
+++ b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
@@ -0,0 +1,531 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ *
+ */
+#include <linux/kthread.h>
+#include <linux/wait.h>
+#include <linux/sched.h>
+#include <drm/drmP.h>
+#include "gpu_scheduler.h"
+
+/* Initialize a given run queue struct */
+static void init_rq(struct amd_run_queue *rq)
+{
+	INIT_LIST_HEAD(&rq->head.list);
+	rq->head.belongto_rq = rq;
+	mutex_init(&rq->lock);
+	atomic_set(&rq->nr_entity, 0);
+	rq->current_entity = &rq->head;
+}
+
+/* Note: caller must hold the lock or be in an atomic context */
+static void rq_remove_entity(struct amd_run_queue *rq,
+			     struct amd_sched_entity *entity)
+{
+	if (rq->current_entity == entity)
+		rq->current_entity = list_entry(entity->list.prev,
+						typeof(*entity), list);
+	list_del_init(&entity->list);
+	atomic_dec(&rq->nr_entity);
+}
+
+static void rq_add_entity(struct amd_run_queue *rq,
+			  struct amd_sched_entity *entity)
+{
+	list_add_tail(&entity->list, &rq->head.list);
+	atomic_inc(&rq->nr_entity);
+}
+
+/**
+ * Select the next entity from a given run queue with a round-robin policy.
+ * It may return the same entity as the current one if that is the only
+ * available one in the queue. Returns NULL if nothing is available.
+ */
+static struct amd_sched_entity *rq_select_entity(struct amd_run_queue *rq)
+{
+	struct amd_sched_entity *p = rq->current_entity;
+	int i = atomic_read(&rq->nr_entity) + 1; /* real count + dummy head */
+
+	while (i) {
+		p = list_entry(p->list.next, typeof(*p), list);
+		if (!rq->check_entity_status(p)) {
+			rq->current_entity = p;
+			break;
+		}
+		i--;
+	}
+	return i ? p : NULL;
+}
+
+static bool context_entity_is_waiting(struct amd_context_entity *entity)
+{
+	/* TODO: sync obj for multi-ring synchronization */
+	return false;
+}
+
+static int gpu_entity_check_status(struct amd_sched_entity *entity)
+{
+	struct amd_context_entity *tmp = NULL;
+
+	if (entity == &entity->belongto_rq->head)
+		return -1;
+
+	tmp = container_of(entity, typeof(*tmp), generic_entity);
+	if (kfifo_is_empty(&tmp->job_queue) ||
+	    context_entity_is_waiting(tmp))
+		return -1;
+
+	return 0;
+}
+
+/**
+ * Note: This function should only be called inside the scheduler main
+ * loop for thread safety; there is no other protection here.
+ * Returns true if the scheduler has something ready to run.
+ *
+ * For active_hw_rq, there is only one producer (the scheduler thread) and
+ * one consumer (the ISR). It should be safe to use this function in the
+ * scheduler main thread to decide whether to continue emitting more IBs.
+ */
+static bool is_scheduler_ready(struct amd_gpu_scheduler *sched)
+{
+	return !kfifo_is_full(&sched->active_hw_rq);
+}
+
+/**
+ * Select the next entity from the kernel run queue; if none is available,
+ * return NULL.
+ */
+static struct amd_context_entity *kernel_rq_select_context(
+	struct amd_gpu_scheduler *sched)
+{
+	struct amd_sched_entity *sched_entity = NULL;
+	struct amd_context_entity *tmp = NULL;
+	struct amd_run_queue *rq = &sched->kernel_rq;
+
+	mutex_lock(&rq->lock);
+	sched_entity = rq_select_entity(rq);
+	if (sched_entity)
+		tmp = container_of(sched_entity,
+				   typeof(*tmp),
+				   generic_entity);
+	mutex_unlock(&rq->lock);
+	return tmp;
+}
+
+/**
+ * Select the next entity containing real IB submissions
+ */
+static struct amd_context_entity *select_context(
+	struct amd_gpu_scheduler *sched)
+{
+	struct amd_context_entity *wake_entity = NULL;
+	struct amd_context_entity *tmp;
+	struct amd_run_queue *rq;
+
+	if (!is_scheduler_ready(sched))
+		return NULL;
+
+	/* The kernel run queue has higher priority than the normal run queue */
+	tmp = kernel_rq_select_context(sched);
+	if (tmp != NULL)
+		goto exit;
+
+	WARN_ON(offsetof(struct amd_context_entity, generic_entity) != 0);
+
+	rq = &sched->sched_rq;
+	mutex_lock(&rq->lock);
+	tmp = container_of(rq_select_entity(rq),
+			   typeof(*tmp), generic_entity);
+	mutex_unlock(&rq->lock);
+exit:
+	if (sched->current_entity && (sched->current_entity != tmp))
+		wake_entity = sched->current_entity;
+	sched->current_entity = tmp;
+	if (wake_entity)
+		wake_up(&wake_entity->wait_queue);
+	return tmp;
+}
+
+/**
+ * Init a context entity used by the scheduler when submitting to a HW ring.
+ *
+ * @sched	The pointer to the scheduler
+ * @entity	The pointer to a valid amd_context_entity
+ * @parent	The parent entity of this amd_context_entity
+ * @rq		The run queue this entity belongs to
+ * @context_id	The context id for this entity
+ *
+ * Returns 0 on success, a negative error code on failure.
+ */
+int amd_context_entity_init(struct amd_gpu_scheduler *sched,
+			    struct amd_context_entity *entity,
+			    struct amd_sched_entity *parent,
+			    struct amd_run_queue *rq,
+			    uint32_t context_id)
+{
+	uint64_t seq_ring = 0;
+
+	if (!(sched && entity && rq))
+		return -EINVAL;
+
+	memset(entity, 0, sizeof(struct amd_context_entity));
+	seq_ring = ((uint64_t)sched->ring_id) << 60;
+	spin_lock_init(&entity->lock);
+	entity->generic_entity.belongto_rq = rq;
+	entity->generic_entity.parent = parent;
+	entity->scheduler = sched;
+	init_waitqueue_head(&entity->wait_queue);
+	init_waitqueue_head(&entity->wait_emit);
+	if (kfifo_alloc(&entity->job_queue,
+			AMD_MAX_JOB_ENTRY_PER_CONTEXT * sizeof(void *),
+			GFP_KERNEL))
+		return -EINVAL;
+
+	spin_lock_init(&entity->queue_lock);
+	entity->tgid = (context_id == AMD_KERNEL_CONTEXT_ID) ?
+		AMD_KERNEL_PROCESS_ID : current->tgid;
+	entity->context_id = context_id;
+	atomic64_set(&entity->last_emitted_v_seq, seq_ring);
+	atomic64_set(&entity->last_queued_v_seq, seq_ring);
+	atomic64_set(&entity->last_signaled_v_seq, seq_ring);
+
+	/* Add the entity to the run queue */
+	mutex_lock(&rq->lock);
+	rq_add_entity(rq, &entity->generic_entity);
+	mutex_unlock(&rq->lock);
+	return 0;
+}
+
+/**
+ * Query whether an entity is initialized
+ *
+ * @sched	Pointer to the scheduler instance
+ * @entity	The pointer to a valid scheduler entity
+ *
+ * Returns true if the entity is initialized, false otherwise.
+ */
+static bool is_context_entity_initialized(struct amd_gpu_scheduler *sched,
+					  struct amd_context_entity *entity)
+{
+	return entity->scheduler == sched &&
+		entity->generic_entity.belongto_rq != NULL;
+}
+
+static bool is_context_entity_idle(struct amd_gpu_scheduler *sched,
+				   struct amd_context_entity *entity)
+{
+	/**
+	 * Idle means no pending IBs, and the entity is not
+	 * currently being used.
+	 */
+	barrier();
+	if ((sched->current_entity != entity) &&
+	    kfifo_is_empty(&entity->job_queue))
+		return true;
+
+	return false;
+}
+
+/**
+ * Destroy a context entity
+ *
+ * @sched	Pointer to the scheduler instance
+ * @entity	The pointer to a valid scheduler entity
+ *
+ * Returns 0 on success, a negative error code on failure.
+ */
+int amd_context_entity_fini(struct amd_gpu_scheduler *sched,
+			    struct amd_context_entity *entity)
+{
+	int r = 0;
+	struct amd_run_queue *rq = entity->generic_entity.belongto_rq;
+
+	if (!is_context_entity_initialized(sched, entity))
+		return 0;
+
+	/**
+	 * The client will not queue more IBs during this fini; consume
+	 * the existing queued IBs.
+	 */
+	r = wait_event_timeout(
+		entity->wait_queue,
+		is_context_entity_idle(sched, entity),
+		msecs_to_jiffies(AMD_GPU_WAIT_IDLE_TIMEOUT_IN_MS)
+		) ? 0 : -1;
+
+	if (r) {
+		if (entity->is_pending)
+			DRM_INFO("Entity %u is in waiting state during fini, all pending ibs will be canceled.\n",
+				 entity->context_id);
+	}
+
+	mutex_lock(&rq->lock);
+	rq_remove_entity(rq, &entity->generic_entity);
+	mutex_unlock(&rq->lock);
+	kfifo_free(&entity->job_queue);
+	return r;
+}
+
+/**
+ * Submit a normal job to the job queue
+ *
+ * @sched	The pointer to the scheduler
+ * @c_entity	The pointer to the amd_context_entity
+ * @job		The pointer to the job to submit
+ *
+ * Returns 0 on success, -1 on other failure.
+ * -2 indicates the queue is full for this client; the client should
+ * wait until the scheduler has consumed some queued commands.
+ */
+int amd_sched_push_job(struct amd_gpu_scheduler *sched,
+		       struct amd_context_entity *c_entity,
+		       void *job)
+{
+	while (kfifo_in_spinlocked(&c_entity->job_queue, &job, sizeof(void *),
+				   &c_entity->queue_lock) != sizeof(void *)) {
+		/**
+		 * The current context used up all its IB slots;
+		 * wait here, or we need to check whether the GPU is hung.
+		 */
+		schedule();
+	}
+
+	wake_up_interruptible(&sched->wait_queue);
+	return 0;
+}
+
+/**
+ * Check the virtual sequence number for a given context
+ *
+ * @seq		The virtual sequence number to check
+ * @c_entity	The pointer to a valid amd_context_entity
+ *
+ * Returns 0 if signaled, -1 otherwise.
+ */
+int amd_sched_check_ts(struct amd_context_entity *c_entity, uint64_t seq)
+{
+	return (seq <= atomic64_read(&c_entity->last_signaled_v_seq)) ?
+		0 : -1;
+}
+
+/**
+ * Wait for a virtual sequence number to be signaled or for a timeout
+ *
+ * @c_entity	The pointer to a valid context entity
+ * @seq		The virtual sequence number to wait for
+ * @intr	Interruptible or not
+ * @timeout	Timeout in ms, wait infinitely if <0
+ * @emit	Wait for emit or for signal
+ *
+ * Returns 0 if signaled, <0 on failure.
+ */
+static int amd_sched_wait(struct amd_context_entity *c_entity,
+			  uint64_t seq,
+			  bool intr,
+			  long timeout,
+			  bool emit)
+{
+	atomic64_t *v_seq = emit ? &c_entity->last_emitted_v_seq :
+		&c_entity->last_signaled_v_seq;
+	wait_queue_head_t *wait_queue = emit ? &c_entity->wait_emit :
+		&c_entity->wait_queue;
+
+	if (intr && (timeout < 0)) {
+		wait_event_interruptible(
+			*wait_queue,
+			seq <= atomic64_read(v_seq));
+		return 0;
+	} else if (intr && (timeout >= 0)) {
+		wait_event_interruptible_timeout(
+			*wait_queue,
+			seq <= atomic64_read(v_seq),
+			msecs_to_jiffies(timeout));
+		return (seq <= atomic64_read(v_seq)) ?
+			0 : -1;
+	} else if (!intr && (timeout < 0)) {
+		wait_event(
+			*wait_queue,
+			seq <= atomic64_read(v_seq));
+		return 0;
+	} else if (!intr && (timeout >= 0)) {
+		wait_event_timeout(
+			*wait_queue,
+			seq <= atomic64_read(v_seq),
+			msecs_to_jiffies(timeout));
+		return (seq <= atomic64_read(v_seq)) ?
+			0 : -1;
+	}
+	return 0;
+}
+
+int amd_sched_wait_signal(struct amd_context_entity *c_entity,
+			  uint64_t seq,
+			  bool intr,
+			  long timeout)
+{
+	return amd_sched_wait(c_entity, seq, intr, timeout, false);
+}
+
+int amd_sched_wait_emit(struct amd_context_entity *c_entity,
+			uint64_t seq,
+			bool intr,
+			long timeout)
+{
+	return amd_sched_wait(c_entity, seq, intr, timeout, true);
+}
+
+static int amd_sched_main(void *param)
+{
+	int r;
+	void *job;
+	struct sched_param sparam = {.sched_priority = 1};
+	struct amd_context_entity *c_entity = NULL;
+	struct amd_gpu_scheduler *sched = (struct amd_gpu_scheduler *)param;
+
+	sched_setscheduler(current, SCHED_FIFO, &sparam);
+
+	while (!kthread_should_stop()) {
+		wait_event_interruptible(sched->wait_queue,
+					 is_scheduler_ready(sched) &&
+					 (c_entity = select_context(sched)));
+		r = kfifo_out(&c_entity->job_queue, &job, sizeof(void *));
+		if (r != sizeof(void *))
+			continue;
+		r = sched->ops->prepare_job(sched, c_entity, job);
+		if (!r)
+			WARN_ON(kfifo_in_spinlocked(
+					&sched->active_hw_rq,
+					&job,
+					sizeof(void *),
+					&sched->queue_lock) != sizeof(void *));
+		mutex_lock(&sched->sched_lock);
+		sched->ops->run_job(sched, c_entity, job);
+		mutex_unlock(&sched->sched_lock);
+	}
+	return 0;
+}
+
+uint64_t amd_sched_get_handled_seq(struct amd_gpu_scheduler *sched)
+{
+	return sched->last_handled_seq;
+}
+
+/**
+ * ISR to handle EOP interrupts
+ *
+ * @sched: gpu scheduler
+ *
+ */
+void amd_sched_isr(struct amd_gpu_scheduler *sched)
+{
+	int r;
+	void *job;
+
+	r = kfifo_out_spinlocked(&sched->active_hw_rq,
+				 &job, sizeof(void *),
+				 &sched->queue_lock);
+
+	if (r != sizeof(void *))
+		job = NULL;
+
+	sched->ops->process_job(sched, job);
+	sched->last_handled_seq++;
+	wake_up_interruptible(&sched->wait_queue);
+}
+
+/**
+ * Create a gpu scheduler
+ *
+ * @device	The device context for this scheduler
+ * @ops		The backend operations for this scheduler
+ * @ring	The scheduler is per ring; this is the ring id
+ * @granularity	The minimum scheduling unit in ms
+ * @preemption	Indicates whether this ring supports preemption, 0 is no
+ *
+ * Returns the pointer to the scheduler on success, otherwise NULL.
+ */
+struct amd_gpu_scheduler *amd_sched_create(void *device,
+					   struct amd_sched_backend_ops *ops,
+					   unsigned ring,
+					   unsigned granularity,
+					   unsigned preemption)
+{
+	struct amd_gpu_scheduler *sched;
+	char name[20] = "gpu_sched[0]";
+
+	sched = kzalloc(sizeof(struct amd_gpu_scheduler), GFP_KERNEL);
+	if (!sched)
+		return NULL;
+
+	sched->device = device;
+	sched->ops = ops;
+	sched->granularity = granularity;
+	sched->ring_id = ring;
+	sched->preemption = preemption;
+	sched->last_handled_seq = 0;
+
+	snprintf(name, sizeof(name), "gpu_sched[%d]", ring);
+	mutex_init(&sched->sched_lock);
+	spin_lock_init(&sched->queue_lock);
+	init_rq(&sched->sched_rq);
+	sched->sched_rq.check_entity_status = gpu_entity_check_status;
+
+	init_rq(&sched->kernel_rq);
+	sched->kernel_rq.check_entity_status = gpu_entity_check_status;
+
+	init_waitqueue_head(&sched->wait_queue);
+	if (kfifo_alloc(&sched->active_hw_rq,
+			AMD_MAX_ACTIVE_HW_SUBMISSION * sizeof(void *),
+			GFP_KERNEL)) {
+		kfree(sched);
+		return NULL;
+	}
+
+	/* Each scheduler will run on a separate kernel thread */
+	sched->thread = kthread_create(amd_sched_main, sched, name);
+	if (sched->thread) {
+		wake_up_process(sched->thread);
+		DRM_INFO("Create gpu scheduler for id %d successfully.\n",
+			 ring);
+		return sched;
+	}
+
+	DRM_ERROR("Failed to create scheduler for id %d.\n", ring);
+	kfifo_free(&sched->active_hw_rq);
+	kfree(sched);
+	return NULL;
+}
+
+/**
+ * Destroy a gpu scheduler
+ *
+ * @sched	The pointer to the scheduler
+ *
+ * Returns 0 on success, -1 on failure.
+ */
+int amd_sched_destroy(struct amd_gpu_scheduler *sched)
+{
+	kthread_stop(sched->thread);
+	kfifo_free(&sched->active_hw_rq);
+	kfree(sched);
+	return 0;
+}
+
diff --git a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h
new file mode 100644
index 0000000..a6226e1
--- /dev/null
+++ b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h
@@ -0,0 +1,160 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef _GPU_SCHEDULER_H_
+#define _GPU_SCHEDULER_H_
+
+#include <linux/kfifo.h>
+
+#define AMD_MAX_ACTIVE_HW_SUBMISSION	2
+#define AMD_MAX_JOB_ENTRY_PER_CONTEXT	16
+
+#define AMD_KERNEL_CONTEXT_ID		0
+#define AMD_KERNEL_PROCESS_ID		0
+
+#define AMD_GPU_WAIT_IDLE_TIMEOUT_IN_MS	3000
+
+struct amd_gpu_scheduler;
+struct amd_run_queue;
+
+/**
+ * A scheduler entity is a wrapper around a job queue or a group
+ * of other entities. Entities take turns emitting jobs from their
+ * job queues to the corresponding hardware ring, based on the
+ * scheduling policy.
+ */
+struct amd_sched_entity {
+	struct list_head list;
+	struct amd_run_queue *belongto_rq;
+	struct amd_sched_entity *parent;
+};
+
+/**
+ * A run queue is a set of entities scheduling command submissions for
+ * one specific ring. It implements the scheduling policy that selects
+ * the next entity to emit commands from.
+ */
+struct amd_run_queue {
+	struct mutex lock;
+	atomic_t nr_entity;
+	struct amd_sched_entity head;
+	struct amd_sched_entity *current_entity;
+	/**
+	 * Return 0 if this entity can be scheduled.
+	 * Return -1 if this entity cannot be scheduled for a reason,
+	 * e.g., it is the head, or there is no job, etc.
+	 */
+	int (*check_entity_status)(struct amd_sched_entity *entity);
+};
+
+/**
+ * Context-based scheduler entity; there can be multiple entities for
+ * each context, one entity per ring.
+ */
+struct amd_context_entity {
+	struct amd_sched_entity generic_entity;
+	spinlock_t lock;
+	/* the virtual_seq is unique per context per ring */
+	atomic64_t last_queued_v_seq;
+	atomic64_t last_emitted_v_seq;
+	atomic64_t last_signaled_v_seq;
+	pid_t tgid;
+	uint32_t context_id;
+	/* the job_queue maintains the jobs submitted by clients */
+	struct kfifo job_queue;
+	spinlock_t queue_lock;
+	struct amd_gpu_scheduler *scheduler;
+	wait_queue_head_t wait_queue;
+	wait_queue_head_t wait_emit;
+	bool is_pending;
+};
+
+/**
+ * Define the backend operations called by the scheduler;
+ * these functions should be implemented on the driver side.
+ */
+struct amd_sched_backend_ops {
+	int (*prepare_job)(struct amd_gpu_scheduler *sched,
+			   struct amd_context_entity *c_entity,
+			   void *job);
+	void (*run_job)(struct amd_gpu_scheduler *sched,
+			struct amd_context_entity *c_entity,
+			void *job);
+	void (*process_job)(struct amd_gpu_scheduler *sched, void *job);
+};
+
+/**
+ * One scheduler is implemented for each hardware ring
+ */
+struct amd_gpu_scheduler {
+	void *device;
+	struct task_struct *thread;
+	struct amd_run_queue sched_rq;
+	struct amd_run_queue kernel_rq;
+	struct kfifo active_hw_rq;
+	struct amd_sched_backend_ops *ops;
+	uint32_t ring_id;
+	uint32_t granularity; /* in ms unit */
+	uint32_t preemption;
+	uint64_t last_handled_seq;
+	wait_queue_head_t wait_queue;
+	struct amd_context_entity *current_entity;
+	struct mutex sched_lock;
+	spinlock_t queue_lock;
+};
+
+
+struct amd_gpu_scheduler *amd_sched_create(void *device,
+					   struct amd_sched_backend_ops *ops,
+					   uint32_t ring,
+					   uint32_t granularity,
+					   uint32_t preemption);
+
+int amd_sched_destroy(struct amd_gpu_scheduler *sched);
+
+int amd_sched_push_job(struct amd_gpu_scheduler *sched,
+		       struct amd_context_entity *c_entity,
+		       void *job);
+
+int amd_sched_check_ts(struct amd_context_entity *c_entity, uint64_t seq);
+
+int amd_sched_wait_signal(struct amd_context_entity *c_entity,
+			  uint64_t seq, bool intr, long timeout);
+int amd_sched_wait_emit(struct amd_context_entity *c_entity,
+			uint64_t seq,
+			bool intr,
+			long timeout);
+
+void amd_sched_isr(struct amd_gpu_scheduler *sched);
+uint64_t amd_sched_get_handled_seq(struct amd_gpu_scheduler *sched);
+
+int amd_context_entity_fini(struct amd_gpu_scheduler *sched,
+			    struct amd_context_entity *entity);
+
+int amd_context_entity_init(struct amd_gpu_scheduler *sched,
+			    struct amd_context_entity *entity,
+			    struct amd_sched_entity *parent,
+			    struct amd_run_queue *rq,
+			    uint32_t context_id);
+
+#endif
From: Jammy Zhou <Jammy.Zhou@amd.com>
1. Add kernel parameter option, default 0
2. Add scheduler initialization for amdgpu
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
Signed-off-by: Jammy Zhou <Jammy.Zhou@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Jammy Zhou <Jammy.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/Makefile       |  7 ++++++-
 drivers/gpu/drm/amd/amdgpu/amdgpu.h       |  4 ++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   |  1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 10 ++++++++++
 4 files changed, 21 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
index e07a250..8709026 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -6,7 +6,8 @@ ccflags-y := -Iinclude/drm -Idrivers/gpu/drm/amd/include/asic_reg \
 	-Idrivers/gpu/drm/amd/include \
 	-Idrivers/gpu/drm/amd/include/bus \
 	-Idrivers/gpu/drm/amd/acp/include \
-	-Idrivers/gpu/drm/amd/amdgpu
+	-Idrivers/gpu/drm/amd/amdgpu \
+	-Idrivers/gpu/drm/amd/scheduler

 amdgpu-y := amdgpu_drv.o

@@ -93,6 +94,10 @@ include drivers/gpu/drm/amd/acp/Makefile
 amdgpu-y += $(AMD_ACP_FILES)
 endif

+# GPU scheduler
+amdgpu-y += \
+	../scheduler/gpu_scheduler.o
+
 amdgpu-$(CONFIG_COMPAT) += amdgpu_ioc32.o
 amdgpu-$(CONFIG_VGA_SWITCHEROO) += amdgpu_atpx_handler.o
 amdgpu-$(CONFIG_ACPI) += amdgpu_acpi.o
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 16e7d16..a311029 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -54,6 +54,8 @@
 #include "amdgpu_gds.h"
 #include "amdgpu_acp.h"

+#include "gpu_scheduler.h"
+
 /*
  * Modules parameters.
  */
@@ -78,6 +80,7 @@ extern int amdgpu_bapm;
 extern int amdgpu_deep_color;
 extern int amdgpu_vm_size;
 extern int amdgpu_vm_block_size;
+extern int amdgpu_enable_scheduler;

 #define AMDGPU_MAX_USEC_TIMEOUT		100000	/* 100 ms */
 #define AMDGPU_FENCE_JIFFIES_TIMEOUT	(HZ / 2)
@@ -859,6 +862,7 @@ struct amdgpu_ring {
 	struct amdgpu_device		*adev;
 	const struct amdgpu_ring_funcs	*funcs;
 	struct amdgpu_fence_driver	fence_drv;
+	struct amd_gpu_scheduler	*scheduler;

 	struct mutex		*ring_lock;
 	struct amdgpu_bo	*ring_obj;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index c3f9b49..c69611a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -75,6 +75,7 @@ int amdgpu_deep_color = 0;
 int amdgpu_vm_size = 8;
 int amdgpu_vm_block_size = -1;
 int amdgpu_exp_hw_support = 0;
+int amdgpu_enable_scheduler = 0;

 MODULE_PARM_DESC(vramlimit, "Restrict VRAM for testing, in megabytes");
 module_param_named(vramlimit, amdgpu_vram_limit, int, 0600);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index b89dafe..6cb3290 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -902,6 +902,14 @@ void amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring)
 	INIT_DELAYED_WORK(&ring->fence_drv.lockup_work,
 			  amdgpu_fence_check_lockup);
 	ring->fence_drv.ring = ring;
+
+	if (amdgpu_enable_scheduler) {
+		ring->scheduler = amd_sched_create((void *)ring->adev,
+						   NULL, ring->idx, 5, 0);
+		if (!ring->scheduler)
+			DRM_ERROR("Failed to create scheduler on ring %d.\n",
+				  ring->idx);
+	}
 }

 /**
@@ -950,6 +958,8 @@ void amdgpu_fence_driver_fini(struct amdgpu_device *adev)
 		wake_up_all(&adev->fence_queue);
 		amdgpu_irq_put(adev, ring->fence_drv.irq_src,
 			       ring->fence_drv.irq_type);
+		if (ring->scheduler)
+			amd_sched_destroy(ring->scheduler);
 		ring->fence_drv.initialized = false;
 	}
 	mutex_unlock(&adev->ring_lock);
From: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Chunming Zhou <david1.zhou@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Jammy Zhou <Jammy.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h     |  2 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 36 ++++++++++++++++++++++++++++++++-
 2 files changed, 37 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index a311029..12ac818 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -993,10 +993,12 @@ struct amdgpu_vm_manager {
 struct amdgpu_ctx_ring {
 	uint64_t	sequence;
 	struct fence	*fences[AMDGPU_CTX_MAX_CS_PENDING];
+	struct amd_context_entity c_entity;
 };

 struct amdgpu_ctx {
 	struct kref		refcount;
+	struct amdgpu_device	*adev;
 	unsigned		reset_counter;
 	spinlock_t		ring_lock;
 	struct amdgpu_ctx_ring	rings[AMDGPU_MAX_RINGS];
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
index 144edc9..557fb60 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
@@ -28,13 +28,23 @@
 static void amdgpu_ctx_do_release(struct kref *ref)
 {
 	struct amdgpu_ctx *ctx;
+	struct amdgpu_device *adev;
 	unsigned i, j;

 	ctx = container_of(ref, struct amdgpu_ctx, refcount);
+	adev = ctx->adev;
+
 	for (i = 0; i < AMDGPU_MAX_RINGS; ++i)
 		for (j = 0; j < AMDGPU_CTX_MAX_CS_PENDING; ++j)
 			fence_put(ctx->rings[i].fences[j]);
+
+	if (amdgpu_enable_scheduler) {
+		for (i = 0; i < adev->num_rings; i++)
+			amd_context_entity_fini(adev->rings[i]->scheduler,
+						&ctx->rings[i].c_entity);
+	}
+
 	kfree(ctx);
 }

@@ -43,7 +53,7 @@ int amdgpu_ctx_alloc(struct amdgpu_device *adev, struct amdgpu_fpriv *fpriv,
 {
 	struct amdgpu_ctx *ctx;
 	struct amdgpu_ctx_mgr *mgr = &fpriv->ctx_mgr;
-	int i, r;
+	int i, j, r;

 	ctx = kmalloc(sizeof(*ctx), GFP_KERNEL);
 	if (!ctx)
@@ -59,11 +69,35 @@ int amdgpu_ctx_alloc(struct amdgpu_device *adev, struct amdgpu_fpriv *fpriv,
 	*id = (uint32_t)r;

 	memset(ctx, 0, sizeof(*ctx));
+	ctx->adev = adev;
 	kref_init(&ctx->refcount);
 	spin_lock_init(&ctx->ring_lock);
 	for (i = 0; i < AMDGPU_MAX_RINGS; ++i)
 		ctx->rings[i].sequence = 1;
 	mutex_unlock(&mgr->lock);
+	if (amdgpu_enable_scheduler) {
+		/* create context entity for each ring */
+		for (i = 0; i < adev->num_rings; i++) {
+			struct amd_run_queue *rq;
+			if (fpriv)
+				rq = &adev->rings[i]->scheduler->sched_rq;
+			else
+				rq = &adev->rings[i]->scheduler->kernel_rq;
+			r = amd_context_entity_init(adev->rings[i]->scheduler,
+						    &ctx->rings[i].c_entity,
+						    NULL, rq, *id);
+			if (r)
+				break;
+		}
+
+		if (i < adev->num_rings) {
+			for (j = 0; j < i; j++)
+				amd_context_entity_fini(adev->rings[j]->scheduler,
+							&ctx->rings[j].c_entity);
+			kfree(ctx);
+			return -EINVAL;
+		}
+	}

 	return 0;
 }
From: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Chunming Zhou <david1.zhou@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Jammy Zhou <Jammy.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
index 9c292cf..105a3b5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
@@ -165,7 +165,7 @@ int amdgpu_sync_rings(struct amdgpu_sync *sync,
 			return -EINVAL;
 		}

-		if (count >= AMDGPU_NUM_SYNCS) {
+		if (amdgpu_enable_scheduler || (count >= AMDGPU_NUM_SYNCS)) {
 			/* not enough room, wait manually */
 			r = amdgpu_fence_wait(fence, false);
 			if (r)
From: Chunming Zhou <david1.zhou@amd.com>
v2: fix rebase breakage
Signed-off-by: Chunming Zhou <david1.zhou@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Jammy Zhou <Jammy.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/Makefile       |   3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu.h       |   8 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c |   3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c | 107 ++++++++++++++++++++++++++++++
 4 files changed, 119 insertions(+), 2 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
index 8709026..41d526e 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -96,7 +96,8 @@ endif

 # GPU scheduler
 amdgpu-y += \
-	../scheduler/gpu_scheduler.o
+	../scheduler/gpu_scheduler.o \
+	amdgpu_sched.o

 amdgpu-$(CONFIG_COMPAT) += amdgpu_ioc32.o
 amdgpu-$(CONFIG_VGA_SWITCHEROO) += amdgpu_atpx_handler.o
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 12ac818..dd05335 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -417,6 +417,7 @@ struct amdgpu_user_fence {
 	struct amdgpu_bo *bo;
 	/* write-back address offset to bo start */
 	uint32_t offset;
+	uint64_t sequence;
 };

 int amdgpu_fence_driver_init(struct amdgpu_device *adev);
@@ -858,6 +859,8 @@ enum amdgpu_ring_type {
 	AMDGPU_RING_TYPE_VCE
 };

+extern struct amd_sched_backend_ops amdgpu_sched_ops;
+
 struct amdgpu_ring {
 	struct amdgpu_device		*adev;
 	const struct amdgpu_ring_funcs	*funcs;
@@ -1228,6 +1231,11 @@ struct amdgpu_cs_parser {

 	/* user fence */
 	struct amdgpu_user_fence uf;
+
+	struct mutex job_lock;
+	struct work_struct job_work;
+	int (*prepare_job)(struct amdgpu_cs_parser *sched_job);
+	int (*run_job)(struct amdgpu_cs_parser *sched_job);
 };

 static inline u32 amdgpu_get_ib_value(struct amdgpu_cs_parser *p, uint32_t ib_idx, int idx)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index 6cb3290..fdb3105 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -905,7 +905,8 @@ void amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring)

 	if (amdgpu_enable_scheduler) {
 		ring->scheduler = amd_sched_create((void *)ring->adev,
-						   NULL, ring->idx, 5, 0);
+						   &amdgpu_sched_ops,
+						   ring->idx, 5, 0);
 		if (!ring->scheduler)
 			DRM_ERROR("Failed to create scheduler on ring %d.\n",
 				  ring->idx);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
new file mode 100644
index 0000000..1f7bf31
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
@@ -0,0 +1,107 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ *
+ */
+#include <linux/kthread.h>
+#include <linux/wait.h>
+#include <linux/sched.h>
+#include <drm/drmP.h>
+#include "amdgpu.h"
+
+static int amdgpu_sched_prepare_job(struct amd_gpu_scheduler *sched,
+				    struct amd_context_entity *c_entity,
+				    void *job)
+{
+	int r = 0;
+	struct amdgpu_cs_parser *sched_job = (struct amdgpu_cs_parser *)job;
+
+	if (sched_job->prepare_job)
+		r = sched_job->prepare_job(sched_job);
+	if (r) {
+		DRM_ERROR("Prepare job error\n");
+		schedule_work(&sched_job->job_work);
+	}
+	return r;
+}
+
+static void amdgpu_sched_run_job(struct amd_gpu_scheduler *sched,
+				 struct amd_context_entity *c_entity,
+				 void *job)
+{
+	int r = 0;
+	struct amdgpu_cs_parser *sched_job = (struct amdgpu_cs_parser *)job;
+
+	mutex_lock(&sched_job->job_lock);
+	r = amdgpu_ib_schedule(sched_job->adev,
+			       sched_job->num_ibs,
+			       sched_job->ibs,
+			       sched_job->filp);
+	if (r)
+		goto err;
+
+	if (sched_job->run_job) {
+		r = sched_job->run_job(sched_job);
+		if (r)
+			goto err;
+	}
+	mutex_unlock(&sched_job->job_lock);
+	return;
+err:
+	DRM_ERROR("Run job error\n");
+	mutex_unlock(&sched_job->job_lock);
+	schedule_work(&sched_job->job_work);
+}
+
+static void amdgpu_sched_process_job(struct amd_gpu_scheduler *sched, void *job)
+{
+	struct amdgpu_cs_parser *sched_job = NULL;
+	struct amdgpu_fence *fence = NULL;
+	struct amdgpu_ring *ring = NULL;
+	struct amdgpu_device *adev = NULL;
+	struct amd_context_entity *c_entity = NULL;
+
+	if (!job)
+		return;
+	sched_job = (struct amdgpu_cs_parser *)job;
+	fence = sched_job->ibs[sched_job->num_ibs - 1].fence;
+	if (!fence)
+		return;
+	ring = fence->ring;
+	adev = ring->adev;
+
+	if (sched_job->ctx) {
+		c_entity = &sched_job->ctx->rings[ring->idx].c_entity;
+		atomic64_set(&c_entity->last_signaled_v_seq,
+			     sched_job->uf.sequence);
+		/* wake up users waiting for time stamp */
+		wake_up_all(&c_entity->wait_queue);
+	}
+
+	schedule_work(&sched_job->job_work);
+}
+
+struct amd_sched_backend_ops amdgpu_sched_ops = {
+	.prepare_job = amdgpu_sched_prepare_job,
+	.run_job = amdgpu_sched_run_job,
+	.process_job = amdgpu_sched_process_job
+};
From: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Chunming Zhou <david1.zhou@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Jammy Zhou <Jammy.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h         |  3 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c | 50 +++++++++++++++++++++++++++++
 2 files changed, 53 insertions(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index dd05335..da924ed 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -1061,6 +1061,9 @@ struct amdgpu_bo_list {
 struct amdgpu_bo_list *
 amdgpu_bo_list_get(struct amdgpu_fpriv *fpriv, int id);
 void amdgpu_bo_list_put(struct amdgpu_bo_list *list);
+void amdgpu_bo_list_copy(struct amdgpu_device *adev,
+			 struct amdgpu_bo_list *dst,
+			 struct amdgpu_bo_list *src);
 void amdgpu_bo_list_free(struct amdgpu_bo_list *list);

 /*
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
index f82a2dd..4d27fa1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
@@ -166,6 +166,56 @@ void amdgpu_bo_list_put(struct amdgpu_bo_list *list)
 	mutex_unlock(&list->lock);
 }

+void amdgpu_bo_list_copy(struct amdgpu_device *adev,
+			 struct amdgpu_bo_list *dst,
+			 struct amdgpu_bo_list *src)
+{
+	struct amdgpu_bo_list_entry *array;
+	struct amdgpu_bo *gds_obj = adev->gds.gds_gfx_bo;
+	struct amdgpu_bo *gws_obj = adev->gds.gws_gfx_bo;
+	struct amdgpu_bo *oa_obj = adev->gds.oa_gfx_bo;
+
+	bool has_userptr = false;
+	unsigned i;
+
+	array = drm_calloc_large(src->num_entries, sizeof(struct amdgpu_bo_list_entry));
+	if (!array)
+		return;
+	/* copy the whole source array once, then fix up each entry */
+	memcpy(array, src->array,
+	       src->num_entries * sizeof(struct amdgpu_bo_list_entry));
+
+	for (i = 0; i < src->num_entries; ++i) {
+		array[i].robj = amdgpu_bo_ref(src->array[i].robj);
+		if (amdgpu_ttm_tt_has_userptr(array[i].robj->tbo.ttm)) {
+			has_userptr = true;
+			array[i].prefered_domains = AMDGPU_GEM_DOMAIN_GTT;
+			array[i].allowed_domains = AMDGPU_GEM_DOMAIN_GTT;
+		}
+		array[i].tv.bo = &array[i].robj->tbo;
+		array[i].tv.shared = true;
+
+		if (array[i].prefered_domains == AMDGPU_GEM_DOMAIN_GDS)
+			gds_obj = array[i].robj;
+		if (array[i].prefered_domains == AMDGPU_GEM_DOMAIN_GWS)
+			gws_obj = array[i].robj;
+		if (array[i].prefered_domains == AMDGPU_GEM_DOMAIN_OA)
+			oa_obj = array[i].robj;
+	}
+
+	for (i = 0; i < dst->num_entries; ++i)
+		amdgpu_bo_unref(&dst->array[i].robj);
+
+	drm_free_large(dst->array);
+
+	dst->gds_obj = gds_obj;
+	dst->gws_obj = gws_obj;
+	dst->oa_obj = oa_obj;
+	dst->has_userptr = has_userptr;
+	dst->array = array;
+	dst->num_entries = src->num_entries;
+}
+
 void amdgpu_bo_list_free(struct amdgpu_bo_list *list)
 {
 	unsigned i;
From: Chunming Zhou <david1.zhou@amd.com>
BO validation is moved to the scheduler, except for userptr BOs, which
must be validated in the user process.
Signed-off-by: Chunming Zhou <david1.zhou@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Jammy Zhou <Jammy.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h    |   1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 256 +++++++++++++++++++++++++--------
 2 files changed, 200 insertions(+), 57 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index da924ed..20639d1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -1239,6 +1239,7 @@ struct amdgpu_cs_parser {
 	struct work_struct job_work;
 	int (*prepare_job)(struct amdgpu_cs_parser *sched_job);
 	int (*run_job)(struct amdgpu_cs_parser *sched_job);
+	int (*free_job)(struct amdgpu_cs_parser *sched_job);
 };

 static inline u32 amdgpu_get_ib_value(struct amdgpu_cs_parser *p, uint32_t ib_idx, int idx)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index bc0a704..f9d4fe9 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -41,6 +41,11 @@ struct amdgpu_cs_buckets {
 	struct list_head bucket[AMDGPU_CS_NUM_BUCKETS];
 };

+static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser,
+				  int error, bool backoff);
+static void amdgpu_cs_parser_fini_early(struct amdgpu_cs_parser *parser, int error, bool backoff);
+static void amdgpu_cs_parser_fini_late(struct amdgpu_cs_parser *parser);
+
 static void amdgpu_cs_buckets_init(struct amdgpu_cs_buckets *b)
 {
 	unsigned i;
@@ -126,12 +131,52 @@ int amdgpu_cs_get_ring(struct amdgpu_device *adev, u32 ip_type,
 	return 0;
 }

+static void amdgpu_job_work_func(struct work_struct *work)
+{
+	struct amdgpu_cs_parser *sched_job =
+		container_of(work, struct amdgpu_cs_parser,
+			     job_work);
+	mutex_lock(&sched_job->job_lock);
+	sched_job->free_job(sched_job);
+	mutex_unlock(&sched_job->job_lock);
+	/* after processing job, free memory */
+	kfree(sched_job);
+}
+
+struct amdgpu_cs_parser *amdgpu_cs_parser_create(struct amdgpu_device *adev,
+						 struct drm_file *filp,
+						 struct amdgpu_ctx *ctx,
+						 struct amdgpu_ib *ibs,
+						 uint32_t num_ibs)
+{
+	struct amdgpu_cs_parser *parser;
+	int i;
+
+	parser = kzalloc(sizeof(struct amdgpu_cs_parser), GFP_KERNEL);
+	if (!parser)
+		return NULL;
+
+	parser->adev = adev;
+	parser->filp = filp;
+	parser->ctx = ctx;
+	parser->ibs = ibs;
+	parser->num_ibs = num_ibs;
+	if (amdgpu_enable_scheduler) {
+		mutex_init(&parser->job_lock);
+		INIT_WORK(&parser->job_work, amdgpu_job_work_func);
+	}
+	for (i = 0; i < num_ibs; i++)
+		ibs[i].ctx = ctx;
+
+	return parser;
+}
+
 int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, void *data)
 {
 	union drm_amdgpu_cs *cs = data;
 	uint64_t *chunk_array_user;
 	uint64_t *chunk_array = NULL;
 	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
+	struct amdgpu_bo_list *bo_list = NULL;
 	unsigned size, i;
 	int r = 0;

@@ -143,7 +188,17 @@ int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, void *data)
 		r = -EINVAL;
 		goto out;
 	}
-	p->bo_list = amdgpu_bo_list_get(fpriv, cs->in.bo_list_handle);
+	bo_list = amdgpu_bo_list_get(fpriv, cs->in.bo_list_handle);
+	if (bo_list && !bo_list->has_userptr) {
+		p->bo_list = kzalloc(sizeof(struct amdgpu_bo_list), GFP_KERNEL);
+		if (!p->bo_list)
+			return -ENOMEM;
+		amdgpu_bo_list_copy(p->adev, p->bo_list, bo_list);
+		amdgpu_bo_list_put(bo_list);
+	} else if (bo_list && bo_list->has_userptr)
+		p->bo_list = bo_list;
+	else
+		p->bo_list = NULL;

 	/* get chunks */
 	INIT_LIST_HEAD(&p->validated);
@@ -424,8 +479,26 @@ static int cmp_size_smaller_first(void *priv, struct list_head *a,
  **/
 static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser, int error, bool backoff)
 {
-	unsigned i;
+	amdgpu_cs_parser_fini_early(parser, error, backoff);
+	amdgpu_cs_parser_fini_late(parser);
+}

+static int amdgpu_cs_parser_run_job(
+	struct amdgpu_cs_parser *sched_job)
+{
+	amdgpu_cs_parser_fini_early(sched_job, 0, true);
+	return 0;
+}
+
+static int amdgpu_cs_parser_free_job(
+	struct amdgpu_cs_parser *sched_job)
+{
+	amdgpu_cs_parser_fini_late(sched_job);
+	return 0;
+}
+
+static void amdgpu_cs_parser_fini_early(struct amdgpu_cs_parser *parser, int error, bool backoff)
+{
 	if (!error) {
 		/* Sort the buffer list from the smallest to largest buffer,
 		 * which affects the order of buffers in the LRU list.
@@ -446,11 +519,19 @@ static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser, int error, bo
 		ttm_eu_backoff_reservation(&parser->ticket,
 					   &parser->validated);
 	}
+}

+static void amdgpu_cs_parser_fini_late(struct amdgpu_cs_parser *parser)
+{
+	unsigned i;
 	if (parser->ctx)
 		amdgpu_ctx_put(parser->ctx);
-	if (parser->bo_list)
-		amdgpu_bo_list_put(parser->bo_list);
+	if (parser->bo_list) {
+		if (!parser->bo_list->has_userptr)
+			amdgpu_bo_list_free(parser->bo_list);
+		else
+			amdgpu_bo_list_put(parser->bo_list);
+	}
 	drm_free_large(parser->vm_bos);
 	for (i = 0; i < parser->nchunks; i++)
 		drm_free_large(parser->chunks[i].kdata);
@@ -461,6 +542,9 @@ static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser, int error, bo
 	kfree(parser->ibs);
 	if (parser->uf.bo)
 		drm_gem_object_unreference_unlocked(&parser->uf.bo->gem_base);
+
+	if (!amdgpu_enable_scheduler)
+		kfree(parser);
 }

 static int amdgpu_bo_vm_update_pte(struct amdgpu_cs_parser *p,
@@ -533,9 +617,9 @@ static int amdgpu_cs_ib_vm_chunk(struct amdgpu_device *adev,
 		goto out;
 	}
 	amdgpu_cs_sync_rings(parser);
-
-	r = amdgpu_ib_schedule(adev, parser->num_ibs, parser->ibs,
-			       parser->filp);
+	if (!amdgpu_enable_scheduler)
+		r = amdgpu_ib_schedule(adev, parser->num_ibs, parser->ibs,
+				       parser->filp);

 out:
 	mutex_unlock(&vm->mutex);
@@ -731,35 +815,16 @@ static int amdgpu_cs_dependencies(struct amdgpu_device *adev,
 	return 0;
 }

-int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
+static int amdgpu_cs_parser_prepare_job(struct amdgpu_cs_parser *sched_job)
 {
-	struct amdgpu_device *adev = dev->dev_private;
-	union drm_amdgpu_cs *cs = data;
-	struct amdgpu_cs_parser parser;
-	int r, i;
-	bool reserved_buffers = false;
-
-	down_read(&adev->exclusive_lock);
-	if (!adev->accel_working) {
-		up_read(&adev->exclusive_lock);
-		return -EBUSY;
-	}
-	/* initialize parser */
-	memset(&parser, 0, sizeof(struct amdgpu_cs_parser));
-	parser.filp = filp;
-	parser.adev = adev;
-	r = amdgpu_cs_parser_init(&parser, data);
-	if (r) {
-		DRM_ERROR("Failed to initialize parser !\n");
-		amdgpu_cs_parser_fini(&parser, r, false);
-		up_read(&adev->exclusive_lock);
-		r = amdgpu_cs_handle_lockup(adev, r);
-		return r;
-	}
-
-	r = amdgpu_cs_parser_relocs(&parser);
-	if (r) {
-		if (r != -ERESTARTSYS) {
+	int r, i;
+	struct amdgpu_cs_parser *parser = sched_job;
+	struct amdgpu_device *adev = sched_job->adev;
+	bool reserved_buffers = false;
+
+	r = amdgpu_cs_parser_relocs(parser);
+	if (r) {
+		if (r != -ERESTARTSYS) {
 			if (r == -ENOMEM)
 				DRM_ERROR("Not enough memory for command submission!\n");
 			else
@@ -769,33 +834,104 @@
 	if (!r) {
 		reserved_buffers = true;
-		r = amdgpu_cs_ib_fill(adev, &parser);
+		r = amdgpu_cs_ib_fill(adev, parser);
 	}
-
 	if (!r) {
-		r = amdgpu_cs_dependencies(adev, &parser);
+		r = amdgpu_cs_dependencies(adev, parser);
 		if (r)
 			DRM_ERROR("Failed in the dependencies handling %d!\n", r);
 	}
+	if (r) {
+		amdgpu_cs_parser_fini(parser, r, reserved_buffers);
+		return r;
+	}
+
+	for (i = 0; i < parser->num_ibs; i++)
+		trace_amdgpu_cs(parser, i);
+
+	r = amdgpu_cs_ib_vm_chunk(adev, parser);
+	return r;
+}
+
+static struct amdgpu_ring *amdgpu_cs_parser_get_ring(
+	struct amdgpu_device *adev,
+	struct amdgpu_cs_parser *parser)
+{
+	int i, r;
+
+	struct amdgpu_cs_chunk *chunk;
+	struct drm_amdgpu_cs_chunk_ib *chunk_ib;
+	struct amdgpu_ring *ring = NULL;
+
+	for (i = 0; i < parser->nchunks; i++) {
+		chunk = &parser->chunks[i];
+		chunk_ib = (struct drm_amdgpu_cs_chunk_ib *)chunk->kdata;
+
+		if (chunk->chunk_id != AMDGPU_CHUNK_ID_IB)
+			continue;
+
+		r = amdgpu_cs_get_ring(adev, chunk_ib->ip_type,
+				       chunk_ib->ip_instance, chunk_ib->ring,
+				       &ring);
+		if (r)
+			return NULL;
+		break;
+	}
+	return ring;
+}
+
+int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
+{
+	struct amdgpu_device *adev = dev->dev_private;
+	union drm_amdgpu_cs *cs = data;
+	struct amdgpu_cs_parser *parser;
+	int r;
+
+	down_read(&adev->exclusive_lock);
+	if (!adev->accel_working) {
+		up_read(&adev->exclusive_lock);
+		return -EBUSY;
+	}
+
+	parser = amdgpu_cs_parser_create(adev, filp, NULL, NULL, 0);
+	if (!parser) {
+		up_read(&adev->exclusive_lock);
+		return -ENOMEM;
+	}
+	r = amdgpu_cs_parser_init(parser, data);
 	if (r) {
-		amdgpu_cs_parser_fini(&parser, r, reserved_buffers);
+		DRM_ERROR("Failed to initialize parser !\n");
+		amdgpu_cs_parser_fini(parser, r, false);
 		up_read(&adev->exclusive_lock);
 		r = amdgpu_cs_handle_lockup(adev, r);
 		return r;
 	}

-	for (i = 0; i < parser.num_ibs; i++)
-		trace_amdgpu_cs(&parser, i);
-
-	r = amdgpu_cs_ib_vm_chunk(adev, &parser);
-	if (r) {
-		goto out;
+	if (amdgpu_enable_scheduler && parser->num_ibs) {
+		struct amdgpu_ring * ring =
+			amdgpu_cs_parser_get_ring(adev, parser);
+		parser->uf.sequence = atomic64_inc_return(
+			&parser->ctx->rings[ring->idx].c_entity.last_queued_v_seq);
+		if ((parser->bo_list && parser->bo_list->has_userptr)) {
+			r = amdgpu_cs_parser_prepare_job(parser);
+			if (r)
+				goto out;
+		} else
+			parser->prepare_job = amdgpu_cs_parser_prepare_job;

+		parser->run_job = amdgpu_cs_parser_run_job;
+		parser->free_job = amdgpu_cs_parser_free_job;
+		amd_sched_push_job(ring->scheduler,
+				   &parser->ctx->rings[ring->idx].c_entity,
+				   parser);
+		cs->out.handle = parser->uf.sequence;
+		up_read(&adev->exclusive_lock);
+		return 0;
 	}
+	r = amdgpu_cs_parser_prepare_job(parser);
+	if (r)
+		goto out;

-	cs->out.handle = parser.ibs[parser.num_ibs - 1].sequence;
+	cs->out.handle = parser->ibs[parser->num_ibs - 1].sequence;
 out:
-	amdgpu_cs_parser_fini(&parser, r, true);
+	amdgpu_cs_parser_fini(parser, r, true);
 	up_read(&adev->exclusive_lock);
 	r = amdgpu_cs_handle_lockup(adev, r);
 	return r;
@@ -829,18 +965,24 @@ int amdgpu_cs_wait_ioctl(struct drm_device *dev, void *data,
 	ctx = amdgpu_ctx_get(filp->driver_priv, wait->in.ctx_id);
 	if (ctx == NULL)
 		return -EINVAL;
-
-	fence = amdgpu_ctx_get_fence(ctx, ring, wait->in.handle);
-	if (IS_ERR(fence))
-		r = PTR_ERR(fence);
-
-	else if (fence) {
-		r = fence_wait_timeout(fence, true, timeout);
-		fence_put(fence);
-
-	} else
+	if (amdgpu_enable_scheduler) {
+		r = amd_sched_wait_ts(&ctx->rings[ring->idx].c_entity,
+				      wait->in.handle, true, timeout);
+		if (r)
+			return r;
 		r = 1;
+	} else {
+		fence = amdgpu_ctx_get_fence(ctx, ring, wait->in.handle);
+		if (IS_ERR(fence))
+			r = PTR_ERR(fence);

+		else if (fence) {
+			r = fence_wait_timeout(fence, true, timeout);
+			fence_put(fence);
+
+		} else
+			r = 1;
+	}
 	amdgpu_ctx_put(ctx);
 	if (r < 0)
 		return r;
From: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Chunming Zhou <david1.zhou@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Jammy Zhou <Jammy.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 26 ++++++++++++++++++++------
 1 file changed, 20 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
index 557fb60..b9be250 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
@@ -225,10 +225,16 @@ uint64_t amdgpu_ctx_add_fence(struct amdgpu_ctx *ctx, struct amdgpu_ring *ring,
 			      struct fence *fence)
 {
 	struct amdgpu_ctx_ring *cring = & ctx->rings[ring->idx];
-	uint64_t seq = cring->sequence;
-	unsigned idx = seq % AMDGPU_CTX_MAX_CS_PENDING;
-	struct fence *other = cring->fences[idx];
+	uint64_t seq = 0;
+	unsigned idx = 0;
+	struct fence *other = NULL;

+	if (amdgpu_enable_scheduler)
+		seq = atomic64_read(&cring->c_entity.last_queued_v_seq);
+	else
+		seq = cring->sequence;
+	idx = seq % AMDGPU_CTX_MAX_CS_PENDING;
+	other = cring->fences[idx];
 	if (other) {
 		signed long r;
 		r = fence_wait_timeout(other, false, MAX_SCHEDULE_TIMEOUT);
@@ -240,7 +246,8 @@ uint64_t amdgpu_ctx_add_fence(struct amdgpu_ctx *ctx, struct amdgpu_ring *ring,

 	spin_lock(&ctx->ring_lock);
 	cring->fences[idx] = fence;
-	cring->sequence++;
+	if (!amdgpu_enable_scheduler)
+		cring->sequence++;
 	spin_unlock(&ctx->ring_lock);

 	fence_put(other);
@@ -253,14 +260,21 @@ struct fence *amdgpu_ctx_get_fence(struct amdgpu_ctx *ctx,
 {
 	struct amdgpu_ctx_ring *cring = & ctx->rings[ring->idx];
 	struct fence *fence;
+	uint64_t queued_seq;

 	spin_lock(&ctx->ring_lock);
-	if (seq >= cring->sequence) {
+	if (amdgpu_enable_scheduler)
+		queued_seq = atomic64_read(&cring->c_entity.last_queued_v_seq) + 1;
+	else
+		queued_seq = cring->sequence;
+
+	if (seq >= queued_seq) {
 		spin_unlock(&ctx->ring_lock);
 		return ERR_PTR(-EINVAL);
 	}

-	if (seq + AMDGPU_CTX_MAX_CS_PENDING < cring->sequence) {
+
+	if (seq + AMDGPU_CTX_MAX_CS_PENDING < queued_seq) {
 		spin_unlock(&ctx->ring_lock);
 		return NULL;
 	}
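To illustrate what the hunks above preserve: only the source of the queued
sequence changes (cring->sequence vs. last_queued_v_seq); the pending-window
check itself is untouched. A standalone sketch of the invariant, using the
constants from the patch (illustration only, not driver code):

/* A user handle 'seq' is retrievable iff it has been queued and its
 * fence slot has not been reused yet. */
static bool amdgpu_seq_in_pending_window(uint64_t seq, uint64_t queued_seq)
{
	if (seq >= queued_seq)		/* not queued yet */
		return false;
	if (seq + AMDGPU_CTX_MAX_CS_PENDING < queued_seq)
		return false;		/* slot already reused */
	return true;	/* lives in fences[seq % AMDGPU_CTX_MAX_CS_PENDING] */
}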
From: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Chunming Zhou <david1.zhou@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Jammy Zhou <Jammy.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h       |  2 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c    | 26 +++++++++-----------------
 drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c   | 10 ++++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c |  5 ++++-
 4 files changed, 25 insertions(+), 18 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 20639d1..754519e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -82,6 +82,7 @@ extern int amdgpu_vm_size;
 extern int amdgpu_vm_block_size;
 extern int amdgpu_enable_scheduler;

+#define AMDGPU_WAIT_IDLE_TIMEOUT_IN_MS	3000
 #define AMDGPU_MAX_USEC_TIMEOUT		100000	/* 100 ms */
 #define AMDGPU_FENCE_JIFFIES_TIMEOUT	(HZ / 2)
 /* AMDGPU_IB_POOL_SIZE must be a power of 2 */
@@ -1235,6 +1236,7 @@ struct amdgpu_cs_parser {
 	/* user fence */
 	struct amdgpu_user_fence uf;

+	struct amdgpu_ring *ring;
 	struct mutex job_lock;
 	struct work_struct job_work;
 	int (*prepare_job)(struct amdgpu_cs_parser *sched_job);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index f9d4fe9..5f24038 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -915,7 +915,7 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
 				goto out;
 		} else
 			parser->prepare_job = amdgpu_cs_parser_prepare_job;
-
+		parser->ring = ring;
 		parser->run_job = amdgpu_cs_parser_run_job;
 		parser->free_job = amdgpu_cs_parser_free_job;
 		amd_sched_push_job(ring->scheduler,
@@ -965,24 +965,16 @@ int amdgpu_cs_wait_ioctl(struct drm_device *dev, void *data,
 	ctx = amdgpu_ctx_get(filp->driver_priv, wait->in.ctx_id);
 	if (ctx == NULL)
 		return -EINVAL;
-	if (amdgpu_enable_scheduler) {
-		r = amd_sched_wait_ts(&ctx->rings[ring->idx].c_entity,
-				      wait->in.handle, true, timeout);
-		if (r)
-			return r;
-		r = 1;
-	} else {
-		fence = amdgpu_ctx_get_fence(ctx, ring, wait->in.handle);
-		if (IS_ERR(fence))
-			r = PTR_ERR(fence);

-		else if (fence) {
-			r = fence_wait_timeout(fence, true, timeout);
-			fence_put(fence);
+	fence = amdgpu_ctx_get_fence(ctx, ring, wait->in.handle);
+	if (IS_ERR(fence))
+		r = PTR_ERR(fence);
+	else if (fence) {
+		r = fence_wait_timeout(fence, true, timeout);
+		fence_put(fence);
+	} else
+		r = 1;

-		} else
-			r = 1;
-	}
 	amdgpu_ctx_put(ctx);
 	if (r < 0)
 		return r;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
index b9be250..41bc7fc 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
@@ -261,6 +261,16 @@ struct fence *amdgpu_ctx_get_fence(struct amdgpu_ctx *ctx,
 	struct amdgpu_ctx_ring *cring = & ctx->rings[ring->idx];
 	struct fence *fence;
 	uint64_t queued_seq;
+	int r;
+
+	if (amdgpu_enable_scheduler) {
+		r = amd_sched_wait_emit(&cring->c_entity,
+					seq,
+					true,
+					AMDGPU_WAIT_IDLE_TIMEOUT_IN_MS);
+		if (r)
+			return NULL;
+	}

 	spin_lock(&ctx->ring_lock);
 	if (amdgpu_enable_scheduler)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
index 1f7bf31..46ec915 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
@@ -56,12 +56,15 @@ static void amdgpu_sched_run_job(struct amd_gpu_scheduler *sched,
 			       sched_job->filp);
 	if (r)
 		goto err;
-
 	if (sched_job->run_job) {
 		r = sched_job->run_job(sched_job);
 		if (r)
 			goto err;
 	}
+	atomic64_set(&c_entity->last_emitted_v_seq,
+		     sched_job->uf.sequence);
+	wake_up_all(&c_entity->wait_emit);
+
 	mutex_unlock(&sched_job->job_lock);
 	return;
 err:
From: Chunming Zhou david1.zhou@amd.com
User mode still uses the PTE ring as a normal ring. If the prepare step generates another command (a PTE update) on that same ring from inside the scheduler, the scheduler deadlocks: it waits on a later job while the job that could unblock it is still pending. So prepare such jobs in the caller before pushing them to the software queue, as sketched below.
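A minimal sketch of the ordering this enforces in amdgpu_cs_ioctl() (names from this series; error handling elided, not part of the diff below):

    if (ring->is_pte_ring ||
        (parser->bo_list && parser->bo_list->has_userptr)) {
            /* Run the prepare step (and any PTE updates it emits)
             * right here in the submitting thread ... */
            r = amdgpu_cs_parser_prepare_job(parser);
            if (r)
                    goto out;
    } else
            /* ... otherwise let the scheduler thread do it later. */
            parser->prepare_job = amdgpu_cs_parser_prepare_job;

    amd_sched_push_job(ring->scheduler,
                       &parser->ctx->rings[ring->idx].c_entity, parser);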
Signed-off-by: Chunming Zhou david1.zhou@amd.com Acked-by: Christian König christian.koenig@amd.com Reviewed-by: Jammy Zhou Jammy.Zhou@amd.com --- drivers/gpu/drm/amd/amdgpu/amdgpu.h | 1 + drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 2 +- drivers/gpu/drm/amd/amdgpu/cik_sdma.c | 1 + drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c | 1 + drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c | 1 + 5 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h index 754519e..d85df45 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h @@ -901,6 +901,7 @@ struct amdgpu_ring { struct amdgpu_ctx *current_ctx; enum amdgpu_ring_type type; char name[16]; + bool is_pte_ring; };
/* diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c index 5f24038..9ff4d27 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c @@ -909,7 +909,7 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp) amdgpu_cs_parser_get_ring(adev, parser); parser->uf.sequence = atomic64_inc_return( &parser->ctx->rings[ring->idx].c_entity.last_queued_v_seq); - if ((parser->bo_list && parser->bo_list->has_userptr)) { + if (ring->is_pte_ring || (parser->bo_list && parser->bo_list->has_userptr)) { r = amdgpu_cs_parser_prepare_job(parser); if (r) goto out; diff --git a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c index ab83cc1..6bb9d2f 100644 --- a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c +++ b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c @@ -1403,5 +1403,6 @@ static void cik_sdma_set_vm_pte_funcs(struct amdgpu_device *adev) if (adev->vm_manager.vm_pte_funcs == NULL) { adev->vm_manager.vm_pte_funcs = &cik_sdma_vm_pte_funcs; adev->vm_manager.vm_pte_funcs_ring = &adev->sdma[0].ring; + adev->vm_manager.vm_pte_funcs_ring->is_pte_ring = true; } } diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c b/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c index d789588..78d4bbd 100644 --- a/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c @@ -1413,5 +1413,6 @@ static void sdma_v2_4_set_vm_pte_funcs(struct amdgpu_device *adev) if (adev->vm_manager.vm_pte_funcs == NULL) { adev->vm_manager.vm_pte_funcs = &sdma_v2_4_vm_pte_funcs; adev->vm_manager.vm_pte_funcs_ring = &adev->sdma[0].ring; + adev->vm_manager.vm_pte_funcs_ring->is_pte_ring = true; } } diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c index 7bb37b9..763e2cc 100644 --- a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c @@ -1507,5 +1507,6 @@ static void sdma_v3_0_set_vm_pte_funcs(struct amdgpu_device *adev) if (adev->vm_manager.vm_pte_funcs == NULL) { adev->vm_manager.vm_pte_funcs = &sdma_v3_0_vm_pte_funcs; adev->vm_manager.vm_pte_funcs_ring = &adev->sdma[0].ring; + adev->vm_manager.vm_pte_funcs_ring->is_pte_ring = true; } }
From: Chunming Zhou david1.zhou@amd.com
v2: rebase against kfd changes
Signed-off-by: Chunming Zhou david1.zhou@amd.com Acked-by: Christian König christian.koenig@amd.com Reviewed-by: Jammy Zhou Jammy.Zhou@amd.com --- drivers/gpu/drm/amd/amdgpu/amdgpu.h | 3 ++ drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 83 +++++++++++++++++++++--------- drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 9 ++++ 3 files changed, 71 insertions(+), 24 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h index d85df45..a9be614 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h @@ -2058,6 +2058,9 @@ struct amdgpu_device {
/* amdkfd interface */ struct kfd_dev *kfd; + + /* kernel context for IB submission */ + struct amdgpu_ctx *kernel_ctx; };
bool amdgpu_device_is_px(struct drm_device *dev); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c index 41bc7fc..a5d8242 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c @@ -48,33 +48,53 @@ static void amdgpu_ctx_do_release(struct kref *ref) kfree(ctx); }
+static void amdgpu_ctx_init(struct amdgpu_device *adev, + struct amdgpu_fpriv *fpriv, + struct amdgpu_ctx *ctx, + uint32_t id) +{ + int i; + memset(ctx, 0, sizeof(*ctx)); + ctx->adev = adev; + kref_init(&ctx->refcount); + spin_lock_init(&ctx->ring_lock); + for (i = 0; i < AMDGPU_MAX_RINGS; ++i) + ctx->rings[i].sequence = 1; +} + int amdgpu_ctx_alloc(struct amdgpu_device *adev, struct amdgpu_fpriv *fpriv, uint32_t *id) { struct amdgpu_ctx *ctx; - struct amdgpu_ctx_mgr *mgr = &fpriv->ctx_mgr; int i, j, r;
ctx = kmalloc(sizeof(*ctx), GFP_KERNEL); if (!ctx) return -ENOMEM; - - mutex_lock(&mgr->lock); - r = idr_alloc(&mgr->ctx_handles, ctx, 0, 0, GFP_KERNEL); - if (r < 0) { + if (fpriv) { + struct amdgpu_ctx_mgr *mgr = &fpriv->ctx_mgr; + mutex_lock(&mgr->lock); + r = idr_alloc(&mgr->ctx_handles, ctx, 1, 0, GFP_KERNEL); + if (r < 0) { + mutex_unlock(&mgr->lock); + kfree(ctx); + return r; + } + *id = (uint32_t)r; + amdgpu_ctx_init(adev, fpriv, ctx, *id); mutex_unlock(&mgr->lock); - kfree(ctx); - return r; + } else { + if (adev->kernel_ctx) { + DRM_ERROR("kernel context has been created.\n"); + kfree(ctx); + return 0; + } + *id = AMD_KERNEL_CONTEXT_ID; + amdgpu_ctx_init(adev, fpriv, ctx, *id); + + adev->kernel_ctx = ctx; } - *id = (uint32_t)r;
- memset(ctx, 0, sizeof(*ctx)); - ctx->adev = adev; - kref_init(&ctx->refcount); - spin_lock_init(&ctx->ring_lock); - for (i = 0; i < AMDGPU_MAX_RINGS; ++i) - ctx->rings[i].sequence = 1; - mutex_unlock(&mgr->lock); if (amdgpu_enable_scheduler) { /* create context entity for each ring */ for (i = 0; i < adev->num_rings; i++) { @@ -105,17 +125,23 @@ int amdgpu_ctx_alloc(struct amdgpu_device *adev, struct amdgpu_fpriv *fpriv, int amdgpu_ctx_free(struct amdgpu_device *adev, struct amdgpu_fpriv *fpriv, uint32_t id) { struct amdgpu_ctx *ctx; - struct amdgpu_ctx_mgr *mgr = &fpriv->ctx_mgr;
- mutex_lock(&mgr->lock); - ctx = idr_find(&mgr->ctx_handles, id); - if (ctx) { - idr_remove(&mgr->ctx_handles, id); - kref_put(&ctx->refcount, amdgpu_ctx_do_release); + if (fpriv) { + struct amdgpu_ctx_mgr *mgr = &fpriv->ctx_mgr; + mutex_lock(&mgr->lock); + ctx = idr_find(&mgr->ctx_handles, id); + if (ctx) { + idr_remove(&mgr->ctx_handles, id); + kref_put(&ctx->refcount, amdgpu_ctx_do_release); + mutex_unlock(&mgr->lock); + return 0; + } mutex_unlock(&mgr->lock); + } else { + ctx = adev->kernel_ctx; + kref_put(&ctx->refcount, amdgpu_ctx_do_release); return 0; } - mutex_unlock(&mgr->lock); return -EINVAL; }
@@ -124,9 +150,13 @@ static int amdgpu_ctx_query(struct amdgpu_device *adev, union drm_amdgpu_ctx_out *out) { struct amdgpu_ctx *ctx; - struct amdgpu_ctx_mgr *mgr = &fpriv->ctx_mgr; + struct amdgpu_ctx_mgr *mgr; unsigned reset_counter;
+ if (!fpriv) + return -EINVAL; + + mgr = &fpriv->ctx_mgr; mutex_lock(&mgr->lock); ctx = idr_find(&mgr->ctx_handles, id); if (!ctx) { @@ -202,7 +232,12 @@ int amdgpu_ctx_ioctl(struct drm_device *dev, void *data, struct amdgpu_ctx *amdgpu_ctx_get(struct amdgpu_fpriv *fpriv, uint32_t id) { struct amdgpu_ctx *ctx; - struct amdgpu_ctx_mgr *mgr = &fpriv->ctx_mgr; + struct amdgpu_ctx_mgr *mgr; + + if (!fpriv) + return NULL; + + mgr = &fpriv->ctx_mgr;
mutex_lock(&mgr->lock); ctx = idr_find(&mgr->ctx_handles, id); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c index 4b641f3..7e39b72 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c @@ -1517,6 +1517,14 @@ int amdgpu_device_init(struct amdgpu_device *adev, return r; }
+ if (!adev->kernel_ctx) { + uint32_t id = 0; + r = amdgpu_ctx_alloc(adev, NULL, &id); + if (r) { + dev_err(adev->dev, "failed to create kernel context (%d).\n", r); + return r; + } + } r = amdgpu_ib_ring_tests(adev); if (r) DRM_ERROR("ib ring test failed (%d).\n", r); @@ -1578,6 +1586,7 @@ void amdgpu_device_fini(struct amdgpu_device *adev) adev->shutdown = true; /* evict vram memory */ amdgpu_bo_evict_vram(adev); + amdgpu_ctx_free(adev, NULL, 0); amdgpu_ib_pool_fini(adev); amdgpu_fence_driver_fini(adev); amdgpu_fbdev_fini(adev);
From: Chunming Zhou david1.zhou@amd.com
Use the kernel context to submit commands for VM page table updates.
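A minimal sketch of the submission pattern this patch repeats for the clear, page-directory, and mapping updates (names from this series; error handling elided):

    struct amd_context_entity *entity =
            &adev->kernel_ctx->rings[ring->idx].c_entity;
    uint64_t v_seq;

    sched_job = amdgpu_cs_parser_create(adev, AMDGPU_FENCE_OWNER_VM,
                                        adev->kernel_ctx, ib, 1);
    sched_job->run_job  = amdgpu_vm_run_job;   /* fences the BO on emit */
    sched_job->free_job = amdgpu_vm_free_job;  /* frees the IBs afterwards */
    v_seq = atomic64_inc_return(&entity->last_queued_v_seq);
    sched_job->uf.sequence = v_seq;
    amd_sched_push_job(ring->scheduler, entity, sched_job);
    /* block until the scheduler has actually emitted this job */
    r = amd_sched_wait_emit(entity, v_seq, true, -1);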
Signed-off-by: Chunming Zhou david1.zhou@amd.com Acked-by: Christian König christian.koenig@amd.com Reviewed-by: Jammy Zhou Jammy.Zhou@amd.com --- drivers/gpu/drm/amd/amdgpu/amdgpu.h | 20 +++ drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 245 ++++++++++++++++++++++++++------- 2 files changed, 217 insertions(+), 48 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h index a9be614..6a71047 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h @@ -1217,6 +1217,19 @@ struct amdgpu_cs_chunk { void __user *user_ptr; };
+union amdgpu_sched_job_param { + struct { + struct amdgpu_vm *vm; + uint64_t start; + uint64_t last; + struct amdgpu_fence **fence; + + } vm_mapping; + struct { + struct amdgpu_bo *bo; + } vm; +}; + struct amdgpu_cs_parser { struct amdgpu_device *adev; struct drm_file *filp; @@ -1241,6 +1254,7 @@ struct amdgpu_cs_parser { struct mutex job_lock; struct work_struct job_work; int (*prepare_job)(struct amdgpu_cs_parser *sched_job); + union amdgpu_sched_job_param job_param; int (*run_job)(struct amdgpu_cs_parser *sched_job); int (*free_job)(struct amdgpu_cs_parser *sched_job); }; @@ -2248,6 +2262,12 @@ void amdgpu_pci_config_reset(struct amdgpu_device *adev); bool amdgpu_card_posted(struct amdgpu_device *adev); void amdgpu_update_display_priority(struct amdgpu_device *adev); bool amdgpu_boot_test_post_card(struct amdgpu_device *adev); +struct amdgpu_cs_parser *amdgpu_cs_parser_create(struct amdgpu_device *adev, + struct drm_file *filp, + struct amdgpu_ctx *ctx, + struct amdgpu_ib *ibs, + uint32_t num_ibs); + int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, void *data); int amdgpu_cs_get_ring(struct amdgpu_device *adev, u32 ip_type, u32 ip_instance, u32 ring, diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c index fd8395f..34938d2 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c @@ -306,6 +306,24 @@ static void amdgpu_vm_update_pages(struct amdgpu_device *adev, } }
+static int amdgpu_vm_free_job( + struct amdgpu_cs_parser *sched_job) +{ + int i; + for (i = 0; i < sched_job->num_ibs; i++) + amdgpu_ib_free(sched_job->adev, &sched_job->ibs[i]); + kfree(sched_job->ibs); + return 0; +} + +static int amdgpu_vm_run_job( + struct amdgpu_cs_parser *sched_job) +{ + amdgpu_bo_fence(sched_job->job_param.vm.bo, + sched_job->ibs[sched_job->num_ibs -1].fence, true); + return 0; +} + /** * amdgpu_vm_clear_bo - initially clear the page dir/table * @@ -316,7 +334,8 @@ static int amdgpu_vm_clear_bo(struct amdgpu_device *adev, struct amdgpu_bo *bo) { struct amdgpu_ring *ring = adev->vm_manager.vm_pte_funcs_ring; - struct amdgpu_ib ib; + struct amdgpu_cs_parser *sched_job = NULL; + struct amdgpu_ib *ib; unsigned entries; uint64_t addr; int r; @@ -336,24 +355,54 @@ static int amdgpu_vm_clear_bo(struct amdgpu_device *adev, addr = amdgpu_bo_gpu_offset(bo); entries = amdgpu_bo_size(bo) / 8;
- r = amdgpu_ib_get(ring, NULL, entries * 2 + 64, &ib); - if (r) + ib = kzalloc(sizeof(struct amdgpu_ib), GFP_KERNEL); + if (!ib) goto error_unreserve;
- ib.length_dw = 0; - - amdgpu_vm_update_pages(adev, &ib, addr, 0, entries, 0, 0, 0); - amdgpu_vm_pad_ib(adev, &ib); - WARN_ON(ib.length_dw > 64); - - r = amdgpu_ib_schedule(adev, 1, &ib, AMDGPU_FENCE_OWNER_VM); + r = amdgpu_ib_get(ring, NULL, entries * 2 + 64, ib); if (r) goto error_free;
- amdgpu_bo_fence(bo, ib.fence, true); + ib->length_dw = 0; + + amdgpu_vm_update_pages(adev, ib, addr, 0, entries, 0, 0, 0); + amdgpu_vm_pad_ib(adev, ib); + WARN_ON(ib->length_dw > 64); + + if (amdgpu_enable_scheduler) { + int r; + uint64_t v_seq; + sched_job = amdgpu_cs_parser_create(adev, AMDGPU_FENCE_OWNER_VM, + adev->kernel_ctx, ib, 1); + if(!sched_job) + goto error_free; + sched_job->job_param.vm.bo = bo; + sched_job->run_job = amdgpu_vm_run_job; + sched_job->free_job = amdgpu_vm_free_job; + v_seq = atomic64_inc_return(&adev->kernel_ctx->rings[ring->idx].c_entity.last_queued_v_seq); + sched_job->uf.sequence = v_seq; + amd_sched_push_job(ring->scheduler, + &adev->kernel_ctx->rings[ring->idx].c_entity, + sched_job); + r = amd_sched_wait_emit(&adev->kernel_ctx->rings[ring->idx].c_entity, + v_seq, + true, + -1); + if (r) + DRM_ERROR("emit timeout\n"); + + amdgpu_bo_unreserve(bo); + return 0; + } else { + r = amdgpu_ib_schedule(adev, 1, ib, AMDGPU_FENCE_OWNER_VM); + if (r) + goto error_free; + amdgpu_bo_fence(bo, ib->fence, true); + }
error_free: - amdgpu_ib_free(adev, &ib); + amdgpu_ib_free(adev, ib); + kfree(ib);
error_unreserve: amdgpu_bo_unreserve(bo); @@ -406,7 +455,9 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev, uint32_t incr = AMDGPU_VM_PTE_COUNT * 8; uint64_t last_pde = ~0, last_pt = ~0; unsigned count = 0, pt_idx, ndw; - struct amdgpu_ib ib; + struct amdgpu_ib *ib; + struct amdgpu_cs_parser *sched_job = NULL; + int r;
/* padding, etc. */ @@ -419,10 +470,14 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev, if (ndw > 0xfffff) return -ENOMEM;
- r = amdgpu_ib_get(ring, NULL, ndw * 4, &ib); + ib = kzalloc(sizeof(struct amdgpu_ib), GFP_KERNEL); + if (!ib) + return -ENOMEM; + + r = amdgpu_ib_get(ring, NULL, ndw * 4, ib); if (r) return r; - ib.length_dw = 0; + ib->length_dw = 0;
/* walk over the address space and update the page directory */ for (pt_idx = 0; pt_idx <= vm->max_pde_used; ++pt_idx) { @@ -442,7 +497,7 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev, ((last_pt + incr * count) != pt)) {
if (count) { - amdgpu_vm_update_pages(adev, &ib, last_pde, + amdgpu_vm_update_pages(adev, ib, last_pde, last_pt, count, incr, AMDGPU_PTE_VALID, 0); } @@ -456,23 +511,59 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev, }
if (count) - amdgpu_vm_update_pages(adev, &ib, last_pde, last_pt, count, + amdgpu_vm_update_pages(adev, ib, last_pde, last_pt, count, incr, AMDGPU_PTE_VALID, 0);
- if (ib.length_dw != 0) { - amdgpu_vm_pad_ib(adev, &ib); - amdgpu_sync_resv(adev, &ib.sync, pd->tbo.resv, AMDGPU_FENCE_OWNER_VM); - WARN_ON(ib.length_dw > ndw); - r = amdgpu_ib_schedule(adev, 1, &ib, AMDGPU_FENCE_OWNER_VM); - if (r) { - amdgpu_ib_free(adev, &ib); - return r; + if (ib->length_dw != 0) { + amdgpu_vm_pad_ib(adev, ib); + amdgpu_sync_resv(adev, &ib->sync, pd->tbo.resv, AMDGPU_FENCE_OWNER_VM); + WARN_ON(ib->length_dw > ndw); + + if (amdgpu_enable_scheduler) { + int r; + uint64_t v_seq; + sched_job = amdgpu_cs_parser_create(adev, AMDGPU_FENCE_OWNER_VM, + adev->kernel_ctx, + ib, 1); + if(!sched_job) + goto error_free; + sched_job->job_param.vm.bo = pd; + sched_job->run_job = amdgpu_vm_run_job; + sched_job->free_job = amdgpu_vm_free_job; + v_seq = atomic64_inc_return(&adev->kernel_ctx->rings[ring->idx].c_entity.last_queued_v_seq); + sched_job->uf.sequence = v_seq; + amd_sched_push_job(ring->scheduler, + &adev->kernel_ctx->rings[ring->idx].c_entity, + sched_job); + r = amd_sched_wait_emit(&adev->kernel_ctx->rings[ring->idx].c_entity, + v_seq, + true, + -1); + if (r) + DRM_ERROR("emit timeout\n"); + } else { + r = amdgpu_ib_schedule(adev, 1, ib, AMDGPU_FENCE_OWNER_VM); + if (r) { + amdgpu_ib_free(adev, ib); + return r; + } + amdgpu_bo_fence(pd, ib->fence, true); } - amdgpu_bo_fence(pd, ib.fence, true); } - amdgpu_ib_free(adev, &ib); + + if (!amdgpu_enable_scheduler || ib->length_dw == 0) { + amdgpu_ib_free(adev, ib); + kfree(ib); + }
return 0; + +error_free: + if (sched_job) + kfree(sched_job); + amdgpu_ib_free(adev, ib); + kfree(ib); + return -ENOMEM; }
/** @@ -657,6 +748,20 @@ static void amdgpu_vm_fence_pts(struct amdgpu_vm *vm, amdgpu_bo_fence(vm->page_tables[i].bo, fence, true); }
+static int amdgpu_vm_bo_update_mapping_run_job( + struct amdgpu_cs_parser *sched_job) +{ + struct amdgpu_fence **fence = sched_job->job_param.vm_mapping.fence; + amdgpu_vm_fence_pts(sched_job->job_param.vm_mapping.vm, + sched_job->job_param.vm_mapping.start, + sched_job->job_param.vm_mapping.last + 1, + sched_job->ibs[sched_job->num_ibs -1].fence); + if (fence) { + amdgpu_fence_unref(fence); + *fence = amdgpu_fence_ref(sched_job->ibs[sched_job->num_ibs -1].fence); + } + return 0; +} /** * amdgpu_vm_bo_update_mapping - update a mapping in the vm page table * @@ -681,7 +786,8 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev, struct amdgpu_ring *ring = adev->vm_manager.vm_pte_funcs_ring; unsigned nptes, ncmds, ndw; uint32_t flags = gtt_flags; - struct amdgpu_ib ib; + struct amdgpu_ib *ib; + struct amdgpu_cs_parser *sched_job = NULL; int r;
/* normally,bo_va->flags only contians READABLE and WIRTEABLE bit go here @@ -728,48 +834,91 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev, if (ndw > 0xfffff) return -ENOMEM;
- r = amdgpu_ib_get(ring, NULL, ndw * 4, &ib); - if (r) + ib = kzalloc(sizeof(struct amdgpu_ib), GFP_KERNEL); + if (!ib) + return -ENOMEM; + + r = amdgpu_ib_get(ring, NULL, ndw * 4, ib); + if (r) { + kfree(ib); return r; - ib.length_dw = 0; + } + + ib->length_dw = 0;
if (!(flags & AMDGPU_PTE_VALID)) { unsigned i;
for (i = 0; i < AMDGPU_MAX_RINGS; ++i) { struct amdgpu_fence *f = vm->ids[i].last_id_use; - r = amdgpu_sync_fence(adev, &ib.sync, &f->base); + r = amdgpu_sync_fence(adev, &ib->sync, &f->base); if (r) return r; } }
- r = amdgpu_vm_update_ptes(adev, vm, &ib, mapping->it.start, + r = amdgpu_vm_update_ptes(adev, vm, ib, mapping->it.start, mapping->it.last + 1, addr + mapping->offset, flags, gtt_flags);
if (r) { - amdgpu_ib_free(adev, &ib); + amdgpu_ib_free(adev, ib); + kfree(ib); return r; }
- amdgpu_vm_pad_ib(adev, &ib); - WARN_ON(ib.length_dw > ndw); + amdgpu_vm_pad_ib(adev, ib); + WARN_ON(ib->length_dw > ndw);
- r = amdgpu_ib_schedule(adev, 1, &ib, AMDGPU_FENCE_OWNER_VM); - if (r) { - amdgpu_ib_free(adev, &ib); - return r; - } - amdgpu_vm_fence_pts(vm, mapping->it.start, - mapping->it.last + 1, ib.fence); - if (fence) { - amdgpu_fence_unref(fence); - *fence = amdgpu_fence_ref(ib.fence); - } - amdgpu_ib_free(adev, &ib); + if (amdgpu_enable_scheduler) { + int r; + uint64_t v_seq; + sched_job = amdgpu_cs_parser_create(adev, AMDGPU_FENCE_OWNER_VM, + adev->kernel_ctx, ib, 1); + if(!sched_job) + goto error_free; + sched_job->job_param.vm_mapping.vm = vm; + sched_job->job_param.vm_mapping.start = mapping->it.start; + sched_job->job_param.vm_mapping.last = mapping->it.last; + sched_job->job_param.vm_mapping.fence = fence; + sched_job->run_job = amdgpu_vm_bo_update_mapping_run_job; + sched_job->free_job = amdgpu_vm_free_job; + v_seq = atomic64_inc_return(&adev->kernel_ctx->rings[ring->idx].c_entity.last_queued_v_seq); + sched_job->uf.sequence = v_seq; + amd_sched_push_job(ring->scheduler, + &adev->kernel_ctx->rings[ring->idx].c_entity, + sched_job); + r = amd_sched_wait_emit(&adev->kernel_ctx->rings[ring->idx].c_entity, + v_seq, + true, + -1); + if (r) + DRM_ERROR("emit timeout\n"); + } else { + r = amdgpu_ib_schedule(adev, 1, ib, AMDGPU_FENCE_OWNER_VM); + if (r) { + amdgpu_ib_free(adev, ib); + return r; + } + + amdgpu_vm_fence_pts(vm, mapping->it.start, + mapping->it.last + 1, ib->fence); + if (fence) { + amdgpu_fence_unref(fence); + *fence = amdgpu_fence_ref(ib->fence); + }
+ amdgpu_ib_free(adev, ib); + kfree(ib); + } return 0; + +error_free: + if (sched_job) + kfree(sched_job); + amdgpu_ib_free(adev, ib); + kfree(ib); + return -ENOMEM; }
/**
From: Chunming Zhou david1.zhou@amd.com
Signed-off-by: Chunming Zhou david1.zhou@amd.com Reviewed-by: Jammy Zhou Jammy.Zhou@amd.com --- drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 18 +++++++++++++++++- 1 file changed, 17 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c index fdb3105..601f264 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c @@ -346,8 +346,24 @@ void amdgpu_fence_process(struct amdgpu_ring *ring) } } while (atomic64_xchg(&ring->fence_drv.last_seq, seq) > seq);
- if (wake) + if (wake) { + if (amdgpu_enable_scheduler) { + uint64_t handled_seq = + amd_sched_get_handled_seq(ring->scheduler); + uint64_t latest_seq = + atomic64_read(&ring->fence_drv.last_seq); + if (handled_seq == latest_seq) { + DRM_ERROR("ring %d, EOP without seq update (latest_seq=%llu)\n", + ring->idx, latest_seq); + return; + } + do { + amd_sched_isr(ring->scheduler); + } while (amd_sched_get_handled_seq(ring->scheduler) < latest_seq); + } + wake_up_all(&ring->adev->fence_queue); + } }
/**
From: Chunming Zhou david1.zhou@amd.com
amdgpu_fence_process() may be called from a kthread, a user thread, or interrupt context, and these calls can run concurrently. Without serialization, the fence queue can be woken multiple times for the same event, so guard the processing with a per-ring spinlock, as sketched below.
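A minimal sketch of the serialization this patch adds (the real function also walks the sequence numbers and drives the scheduler ISR):

    unsigned long irqflags;

    spin_lock_irqsave(&ring->fence_lock, irqflags);
    /* read and advance fence_drv.last_seq; only one caller at a
     * time observes newly signaled fences ... */
    if (wake)       /* ... so waiters are woken exactly once */
            wake_up_all(&ring->adev->fence_queue);
    spin_unlock_irqrestore(&ring->fence_lock, irqflags);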
Signed-off-by: Chunming Zhou david1.zhou@amd.com Reviewed-by: Jammy Zhou Jammy.Zhou@amd.com --- drivers/gpu/drm/amd/amdgpu/amdgpu.h | 1 + drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 6 +++++- drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c | 2 +- 3 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h index 6a71047..5a7a3c4 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h @@ -868,6 +868,7 @@ struct amdgpu_ring { struct amdgpu_fence_driver fence_drv; struct amd_gpu_scheduler *scheduler;
+ spinlock_t fence_lock; struct mutex *ring_lock; struct amdgpu_bo *ring_obj; volatile uint32_t *ring; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c index 601f264..faee350 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c @@ -295,6 +295,7 @@ void amdgpu_fence_process(struct amdgpu_ring *ring) uint64_t seq, last_seq, last_emitted; unsigned count_loop = 0; bool wake = false; + unsigned long irqflags;
/* Note there is a scenario here for an infinite loop but it's * very unlikely to happen. For it to happen, the current polling * have temporarly set the last_seq not to the true real last * seq but to an older one. */ + spin_lock_irqsave(&ring->fence_lock, irqflags); last_seq = atomic64_read(&ring->fence_drv.last_seq); do { last_emitted = ring->fence_drv.sync_seq[ring->idx]; @@ -355,7 +357,7 @@ void amdgpu_fence_process(struct amdgpu_ring *ring) if (handled_seq == latest_seq) { DRM_ERROR("ring %d, EOP without seq update (latest_seq=%llu)\n", ring->idx, latest_seq); - return; + goto exit; } do { amd_sched_isr(ring->scheduler); @@ -364,6 +366,8 @@ void amdgpu_fence_process(struct amdgpu_ring *ring)
wake_up_all(&ring->adev->fence_queue); } +exit: + spin_unlock_irqrestore(&ring->fence_lock, irqflags); }
/** diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c index 855e219..1e68a56 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c @@ -367,7 +367,7 @@ int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring, } ring->next_rptr_gpu_addr = adev->wb.gpu_addr + (ring->next_rptr_offs * 4); ring->next_rptr_cpu_addr = &adev->wb.wb[ring->next_rptr_offs]; - + spin_lock_init(&ring->fence_lock); r = amdgpu_fence_driver_start_ring(ring, irq_src, irq_type); if (r) { dev_err(adev->dev, "failed initializing fences (%d).\n", r);
From: Jammy Zhou Jammy.Zhou@amd.com
Signed-off-by: Jammy Zhou Jammy.Zhou@amd.com Acked-by: Christian König christian.koenig@amd.com Reviewed-by: Jammy Zhou Jammy.Zhou@amd.com --- drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c index c69611a..d278909 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c @@ -140,6 +140,9 @@ module_param_named(vm_block_size, amdgpu_vm_block_size, int, 0444); MODULE_PARM_DESC(exp_hw_support, "experimental hw support (1 = enable, 0 = disable (default))"); module_param_named(exp_hw_support, amdgpu_exp_hw_support, int, 0444);
+MODULE_PARM_DESC(enable_scheduler, "enable SW GPU scheduler (1 = enable, 0 = disable (default))"); +module_param_named(enable_scheduler, amdgpu_enable_scheduler, int, 0444); + static struct pci_device_id pciidlist[] = { #ifdef CONFIG_DRM_AMDGPU_CIK /* Kaveri */
From: Chunming Zhou david1.zhou@amd.com
The free_job callback may not be defined for every job, so check it before calling.
Signed-off-by: Chunming Zhou david1.zhou@amd.com Reviewed-by: Christian König christian.koenig@amd.com --- drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c index 9ff4d27..c41360e 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c @@ -137,7 +137,8 @@ static void amdgpu_job_work_func(struct work_struct *work) container_of(work, struct amdgpu_cs_parser, job_work); mutex_lock(&sched_job->job_lock); - sched_job->free_job(sched_job); + if (sched_job->free_job) + sched_job->free_job(sched_job); mutex_unlock(&sched_job->job_lock); /* after processing job, free memory */ kfree(sched_job);
From: Christian König christian.koenig@amd.com
Signed-off-by: Christian König christian.koenig@amd.com Reviewed-by: Alex Deucher alexander.deucher@amd.com --- drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c | 19 ++++++++++++++++--- 1 file changed, 16 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c index 105a3b5..2c42f50 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c @@ -122,11 +122,24 @@ int amdgpu_sync_resv(struct amdgpu_device *adev, f = rcu_dereference_protected(flist->shared[i], reservation_object_held(resv)); fence = f ? to_amdgpu_fence(f) : NULL; - if (fence && fence->ring->adev == adev && - fence->owner == owner && - fence->owner != AMDGPU_FENCE_OWNER_UNDEFINED) + if (fence && fence->ring->adev == adev) { + /* VM updates are only interesting + * for other VM updates and moves. + */ + if ((owner != AMDGPU_FENCE_OWNER_MOVE) && + (fence->owner != AMDGPU_FENCE_OWNER_MOVE) && + ((owner == AMDGPU_FENCE_OWNER_VM) != + (fence->owner == AMDGPU_FENCE_OWNER_VM))) continue;
+ /* Ignore fence from the same owner as + * long as it isn't undefined. + */ + if (owner != AMDGPU_FENCE_OWNER_UNDEFINED && + fence->owner == owner) + continue; + } + r = amdgpu_sync_fence(adev, sync, f); if (r) break;
From: Jammy Zhou Jammy.Zhou@amd.com
Signed-off-by: Jammy Zhou Jammy.Zhou@amd.com Reviewed-by: Chunming Zhou david1.zhou@amd.com --- drivers/gpu/drm/amd/scheduler/gpu_scheduler.c | 2 -- 1 file changed, 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c index 296496c..5799474 100644 --- a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c +++ b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c @@ -503,8 +503,6 @@ struct amd_gpu_scheduler *amd_sched_create(void *device, sched->thread = kthread_create(amd_sched_main, sched, name); if (sched->thread) { wake_up_process(sched->thread); - DRM_INFO("Create gpu scheduler for id %d successfully.\n", - ring); return sched; }
From: Jammy Zhou Jammy.Zhou@amd.com
This option specifies the maximum number of jobs held in the software job queue; the default is 16.
Signed-off-by: Jammy Zhou Jammy.Zhou@amd.com Reviewed-by: Chunming Zhou david1.zhou@amd.com --- drivers/gpu/drm/amd/amdgpu/amdgpu.h | 1 + drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 3 ++- drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 4 ++++ drivers/gpu/drm/amd/scheduler/gpu_scheduler.c | 6 ++++-- drivers/gpu/drm/amd/scheduler/gpu_scheduler.h | 4 ++-- 5 files changed, 13 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h index 5a7a3c4..e4311ac 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h @@ -81,6 +81,7 @@ extern int amdgpu_deep_color; extern int amdgpu_vm_size; extern int amdgpu_vm_block_size; extern int amdgpu_enable_scheduler; +extern int amdgpu_sched_jobs;
#define AMDGPU_WAIT_IDLE_TIMEOUT_IN_MS 3000 #define AMDGPU_MAX_USEC_TIMEOUT 100000 /* 100 ms */ diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c index a5d8242..58ce265 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c @@ -105,7 +105,8 @@ int amdgpu_ctx_alloc(struct amdgpu_device *adev, struct amdgpu_fpriv *fpriv, rq = &adev->rings[i]->scheduler->kernel_rq; r = amd_context_entity_init(adev->rings[i]->scheduler, &ctx->rings[i].c_entity, - NULL, rq, *id); + NULL, rq, *id, + amdgpu_sched_jobs); if (r) break; } diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c index d278909..0e7e0c3 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c @@ -76,6 +76,7 @@ int amdgpu_vm_size = 8; int amdgpu_vm_block_size = -1; int amdgpu_exp_hw_support = 0; int amdgpu_enable_scheduler = 0; +int amdgpu_sched_jobs = 16;
MODULE_PARM_DESC(vramlimit, "Restrict VRAM for testing, in megabytes"); module_param_named(vramlimit, amdgpu_vram_limit, int, 0600); @@ -143,6 +144,9 @@ module_param_named(exp_hw_support, amdgpu_exp_hw_support, int, 0444); MODULE_PARM_DESC(enable_scheduler, "enable SW GPU scheduler (1 = enable, 0 = disable (default))"); module_param_named(enable_scheduler, amdgpu_enable_scheduler, int, 0444);
+MODULE_PARM_DESC(sched_jobs, "the max number of jobs supported in the sw queue (default 16)"); +module_param_named(sched_jobs, amdgpu_sched_jobs, int, 0444); + static struct pci_device_id pciidlist[] = { #ifdef CONFIG_DRM_AMDGPU_CIK /* Kaveri */ diff --git a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c index 5799474..87993e0 100644 --- a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c +++ b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c @@ -173,6 +173,7 @@ exit: * @parent The parent entity of this amd_context_entity * @rq The run queue this entity belongs * @context_id The context id for this entity + * @jobs The max number of jobs in the job queue * * return 0 if succeed. negative error code on failure */ @@ -180,7 +181,8 @@ int amd_context_entity_init(struct amd_gpu_scheduler *sched, struct amd_context_entity *entity, struct amd_sched_entity *parent, struct amd_run_queue *rq, - uint32_t context_id) + uint32_t context_id, + uint32_t jobs) { uint64_t seq_ring = 0;
@@ -196,7 +198,7 @@ int amd_context_entity_init(struct amd_gpu_scheduler *sched, init_waitqueue_head(&entity->wait_queue); init_waitqueue_head(&entity->wait_emit); if(kfifo_alloc(&entity->job_queue, - AMD_MAX_JOB_ENTRY_PER_CONTEXT * sizeof(void *), + jobs * sizeof(void *), GFP_KERNEL)) return -EINVAL;
diff --git a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h index a6226e1..52577a88 100644 --- a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h +++ b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h @@ -27,7 +27,6 @@ #include <linux/kfifo.h>
#define AMD_MAX_ACTIVE_HW_SUBMISSION 2 -#define AMD_MAX_JOB_ENTRY_PER_CONTEXT 16
#define AMD_KERNEL_CONTEXT_ID 0 #define AMD_KERNEL_PROCESS_ID 0 @@ -155,6 +154,7 @@ int amd_context_entity_init(struct amd_gpu_scheduler *sched, struct amd_context_entity *entity, struct amd_sched_entity *parent, struct amd_run_queue *rq, - uint32_t context_id); + uint32_t context_id, + uint32_t jobs);
#endif
From: Jammy Zhou Jammy.Zhou@amd.com
This option specifies the maximum number of submissions in the active hardware queue; the default is currently 2. Both scheduler tunables can be combined with the enable switch, as shown below.
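For reference, all three options can be set together on the kernel command line or at module load time; the values here are only an example, not recommended settings:

    amdgpu.enable_scheduler=1 amdgpu.sched_jobs=32 amdgpu.sched_hw_submission=4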
Signed-off-by: Jammy Zhou Jammy.Zhou@amd.com Reviewed-by: Chunming Zhou david1.zhou@amd.com --- drivers/gpu/drm/amd/amdgpu/amdgpu.h | 1 + drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 4 ++++ drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 3 ++- drivers/gpu/drm/amd/scheduler/gpu_scheduler.c | 5 +++-- drivers/gpu/drm/amd/scheduler/gpu_scheduler.h | 5 ++--- 5 files changed, 12 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h index e4311ac..ee55099 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h @@ -82,6 +82,7 @@ extern int amdgpu_vm_size; extern int amdgpu_vm_block_size; extern int amdgpu_enable_scheduler; extern int amdgpu_sched_jobs; +extern int amdgpu_sched_hw_submission;
#define AMDGPU_WAIT_IDLE_TIMEOUT_IN_MS 3000 #define AMDGPU_MAX_USEC_TIMEOUT 100000 /* 100 ms */ diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c index 0e7e0c3..6578258 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c @@ -77,6 +77,7 @@ int amdgpu_vm_block_size = -1; int amdgpu_exp_hw_support = 0; int amdgpu_enable_scheduler = 0; int amdgpu_sched_jobs = 16; +int amdgpu_sched_hw_submission = 2;
MODULE_PARM_DESC(vramlimit, "Restrict VRAM for testing, in megabytes"); module_param_named(vramlimit, amdgpu_vram_limit, int, 0600); @@ -147,6 +148,9 @@ module_param_named(enable_scheduler, amdgpu_enable_scheduler, int, 0444); MODULE_PARM_DESC(sched_jobs, "the max number of jobs supported in the sw queue (default 16)"); module_param_named(sched_jobs, amdgpu_sched_jobs, int, 0444);
+MODULE_PARM_DESC(sched_hw_submission, "the max number of HW submissions (default 2)"); +module_param_named(sched_hw_submission, amdgpu_sched_hw_submission, int, 0444); + static struct pci_device_id pciidlist[] = { #ifdef CONFIG_DRM_AMDGPU_CIK /* Kaveri */ diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c index faee350..c4ad6bb 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c @@ -926,7 +926,8 @@ void amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring) if (amdgpu_enable_scheduler) { ring->scheduler = amd_sched_create((void *)ring->adev, &amdgpu_sched_ops, - ring->idx, 5, 0); + ring->idx, 5, 0, + amdgpu_sched_hw_submission); if (!ring->scheduler) DRM_ERROR("Failed to create scheduler on ring %d.\n", ring->idx); diff --git a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c index 87993e0..042da7d 100644 --- a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c +++ b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c @@ -468,7 +468,8 @@ struct amd_gpu_scheduler *amd_sched_create(void *device, struct amd_sched_backend_ops *ops, unsigned ring, unsigned granularity, - unsigned preemption) + unsigned preemption, + unsigned hw_submission) { struct amd_gpu_scheduler *sched; char name[20] = "gpu_sched[0]"; @@ -495,7 +496,7 @@ struct amd_gpu_scheduler *amd_sched_create(void *device,
init_waitqueue_head(&sched->wait_queue); if(kfifo_alloc(&sched->active_hw_rq, - AMD_MAX_ACTIVE_HW_SUBMISSION * sizeof(void *), + hw_submission * sizeof(void *), GFP_KERNEL)) { kfree(sched); return NULL; diff --git a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h index 52577a88..7f6bc26 100644 --- a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h +++ b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h @@ -26,8 +26,6 @@
#include <linux/kfifo.h>
-#define AMD_MAX_ACTIVE_HW_SUBMISSION 2 - #define AMD_KERNEL_CONTEXT_ID 0 #define AMD_KERNEL_PROCESS_ID 0
@@ -127,7 +125,8 @@ struct amd_gpu_scheduler *amd_sched_create(void *device, struct amd_sched_backend_ops *ops, uint32_t ring, uint32_t granularity, - uint32_t preemption); + uint32_t preemption, + uint32_t hw_submission);
int amd_sched_destroy(struct amd_gpu_scheduler *sched);
From: Chunming Zhou david1.zhou@amd.com
Every job must eventually be emitted by the scheduler; if it is not, the scheduler itself is malfunctioning. A finite timeout would only mask such a failure, so wait forever instead, as illustrated below.
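The resulting wait, as used in amdgpu_ctx_get_fence() and the VM paths (sketch of the hunks below):

    r = amd_sched_wait_emit(&cring->c_entity, seq,
                            false /* not interruptible */,
                            -1    /* no timeout: wait forever */);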
Signed-off-by: Chunming Zhou david1.zhou@amd.com Reviewed-by: Christian König christian.koenig@amd.com --- drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 4 ++-- drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 6 +++--- 2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c index 58ce265..95807b6 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c @@ -302,8 +302,8 @@ struct fence *amdgpu_ctx_get_fence(struct amdgpu_ctx *ctx, if (amdgpu_enable_scheduler) { r = amd_sched_wait_emit(&cring->c_entity, seq, - true, - AMDGPU_WAIT_IDLE_TIMEOUT_IN_MS); + false, + -1); if (r) return NULL; } diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c index 34938d2..26c55a7 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c @@ -386,7 +386,7 @@ static int amdgpu_vm_clear_bo(struct amdgpu_device *adev, sched_job); r = amd_sched_wait_emit(&adev->kernel_ctx->rings[ring->idx].c_entity, v_seq, - true, + false, -1); if (r) DRM_ERROR("emit timeout\n"); @@ -537,7 +537,7 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev, sched_job); r = amd_sched_wait_emit(&adev->kernel_ctx->rings[ring->idx].c_entity, v_seq, - true, + false, -1); if (r) DRM_ERROR("emit timeout\n"); @@ -890,7 +890,7 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev, sched_job); r = amd_sched_wait_emit(&adev->kernel_ctx->rings[ring->idx].c_entity, v_seq, - true, + false, -1); if (r) DRM_ERROR("emit timeout\n");
From: Chunming Zhou david1.zhou@amd.com
When the scheduler is enabled, the queued sequence number is assigned at push time, before the job is emitted, so it must be carried on the job and passed through to amdgpu_ctx_add_fence() rather than re-read from last_queued_v_seq at emit time.
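A minimal sketch of the resulting flow (names from this series; push and emit happen on different threads):

    /* at push time, in amdgpu_cs_ioctl(): */
    parser->ibs[parser->num_ibs - 1].sequence = atomic64_inc_return(
            &parser->ctx->rings[ring->idx].c_entity.last_queued_v_seq);

    /* at emit time, in amdgpu_ib_schedule(): reuse that value instead
     * of re-reading last_queued_v_seq, which may already have moved
     * on to a later job */
    sequence = amdgpu_enable_scheduler ? ib->sequence : 0;
    ib->sequence = amdgpu_ctx_add_fence(ib->ctx, ring,
                                        &ib->fence->base, sequence);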
Signed-off-by: Chunming Zhou david1.zhou@amd.com Reviewed-by: Christian König christian.koenig@amd.com --- drivers/gpu/drm/amd/amdgpu/amdgpu.h | 3 +-- drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 5 ++--- drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 4 ++-- drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 6 +++++- drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c | 4 ++-- drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 6 +++--- 6 files changed, 15 insertions(+), 13 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h index ee55099..3dfff89 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h @@ -420,7 +420,6 @@ struct amdgpu_user_fence { struct amdgpu_bo *bo; /* write-back address offset to bo start */ uint32_t offset; - uint64_t sequence; };
int amdgpu_fence_driver_init(struct amdgpu_device *adev); @@ -1030,7 +1029,7 @@ struct amdgpu_ctx *amdgpu_ctx_get(struct amdgpu_fpriv *fpriv, uint32_t id); int amdgpu_ctx_put(struct amdgpu_ctx *ctx);
uint64_t amdgpu_ctx_add_fence(struct amdgpu_ctx *ctx, struct amdgpu_ring *ring, - struct fence *fence); + struct fence *fence, uint64_t queued_seq); struct fence *amdgpu_ctx_get_fence(struct amdgpu_ctx *ctx, struct amdgpu_ring *ring, uint64_t seq);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c index c41360e..40e85bf 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c @@ -739,7 +739,6 @@ static int amdgpu_cs_ib_fill(struct amdgpu_device *adev, ib->oa_size = amdgpu_bo_size(oa); } } - /* wrap the last IB with user fence */ if (parser->uf.bo) { struct amdgpu_ib *ib = &parser->ibs[parser->num_ibs - 1]; @@ -908,7 +907,7 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp) if (amdgpu_enable_scheduler && parser->num_ibs) { struct amdgpu_ring * ring = amdgpu_cs_parser_get_ring(adev, parser); - parser->uf.sequence = atomic64_inc_return( + parser->ibs[parser->num_ibs - 1].sequence = atomic64_inc_return( &parser->ctx->rings[ring->idx].c_entity.last_queued_v_seq); if (ring->is_pte_ring || (parser->bo_list && parser->bo_list->has_userptr)) { r = amdgpu_cs_parser_prepare_job(parser); @@ -922,7 +921,7 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp) amd_sched_push_job(ring->scheduler, &parser->ctx->rings[ring->idx].c_entity, parser); - cs->out.handle = parser->uf.sequence; + cs->out.handle = parser->ibs[parser->num_ibs - 1].sequence; up_read(&adev->exclusive_lock); return 0; } diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c index 95807b6..e0eaa55 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c @@ -258,7 +258,7 @@ int amdgpu_ctx_put(struct amdgpu_ctx *ctx) }
uint64_t amdgpu_ctx_add_fence(struct amdgpu_ctx *ctx, struct amdgpu_ring *ring, - struct fence *fence) + struct fence *fence, uint64_t queued_seq) { struct amdgpu_ctx_ring *cring = & ctx->rings[ring->idx]; uint64_t seq = 0; @@ -266,7 +266,7 @@ uint64_t amdgpu_ctx_add_fence(struct amdgpu_ctx *ctx, struct amdgpu_ring *ring, struct fence *other = NULL;
if (amdgpu_enable_scheduler) - seq = atomic64_read(&cring->c_entity.last_queued_v_seq); + seq = queued_seq; else seq = cring->sequence; idx = seq % AMDGPU_CTX_MAX_CS_PENDING; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c index 42d6298..eed409c 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c @@ -143,6 +143,7 @@ int amdgpu_ib_schedule(struct amdgpu_device *adev, unsigned num_ibs, struct amdgpu_ring *ring; struct amdgpu_ctx *ctx, *old_ctx; struct amdgpu_vm *vm; + uint64_t sequence; unsigned i; int r = 0;
@@ -215,9 +216,12 @@ int amdgpu_ib_schedule(struct amdgpu_device *adev, unsigned num_ibs, return r; }
+ sequence = amdgpu_enable_scheduler ? ib->sequence : 0; + if (ib->ctx) ib->sequence = amdgpu_ctx_add_fence(ib->ctx, ring, - &ib->fence->base); + &ib->fence->base, + sequence);
/* wrap the last IB with fence */ if (ib->user) { diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c index 46ec915..b913c22 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c @@ -62,7 +62,7 @@ static void amdgpu_sched_run_job(struct amd_gpu_scheduler *sched, goto err; } atomic64_set(&c_entity->last_emitted_v_seq, - sched_job->uf.sequence); + sched_job->ibs[sched_job->num_ibs - 1].sequence); wake_up_all(&c_entity->wait_emit);
mutex_unlock(&sched_job->job_lock); @@ -93,7 +93,7 @@ static void amdgpu_sched_process_job(struct amd_gpu_scheduler *sched, void *job) if (sched_job->ctx) { c_entity = &sched_job->ctx->rings[ring->idx].c_entity; atomic64_set(&c_entity->last_signaled_v_seq, - sched_job->uf.sequence); + sched_job->ibs[sched_job->num_ibs - 1].sequence); }
/* wake up users waiting for time stamp */ diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c index 26c55a7..5624d44 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c @@ -380,7 +380,7 @@ static int amdgpu_vm_clear_bo(struct amdgpu_device *adev, sched_job->run_job = amdgpu_vm_run_job; sched_job->free_job = amdgpu_vm_free_job; v_seq = atomic64_inc_return(&adev->kernel_ctx->rings[ring->idx].c_entity.last_queued_v_seq); - sched_job->uf.sequence = v_seq; + ib->sequence = v_seq; amd_sched_push_job(ring->scheduler, &adev->kernel_ctx->rings[ring->idx].c_entity, sched_job); @@ -531,7 +531,7 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev, sched_job->run_job = amdgpu_vm_run_job; sched_job->free_job = amdgpu_vm_free_job; v_seq = atomic64_inc_return(&adev->kernel_ctx->rings[ring->idx].c_entity.last_queued_v_seq); - sched_job->uf.sequence = v_seq; + ib->sequence = v_seq; amd_sched_push_job(ring->scheduler, &adev->kernel_ctx->rings[ring->idx].c_entity, sched_job); @@ -884,7 +884,7 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev, sched_job->run_job = amdgpu_vm_bo_update_mapping_run_job; sched_job->free_job = amdgpu_vm_free_job; v_seq = atomic64_inc_return(&adev->kernel_ctx->rings[ring->idx].c_entity.last_queued_v_seq); - sched_job->uf.sequence = v_seq; + ib->sequence = v_seq; amd_sched_push_job(ring->scheduler, &adev->kernel_ctx->rings[ring->idx].c_entity, sched_job);
From: Chunming Zhou david1.zhou@amd.com
Signed-off-by: Chunming Zhou david1.zhou@amd.com Reviewed-by: Christian König christian.koenig@amd.com --- drivers/gpu/drm/amd/amdgpu/amdgpu.h | 7 +++++++ drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c | 35 +++++++++++++++++++++++++++++++ 2 files changed, 42 insertions(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h index 3dfff89..7fecb44 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h @@ -863,6 +863,13 @@ enum amdgpu_ring_type {
extern struct amd_sched_backend_ops amdgpu_sched_ops;
+int amdgpu_sched_ib_submit_kernel_helper(struct amdgpu_device *adev, + struct amdgpu_ring *ring, + struct amdgpu_ib *ibs, + unsigned num_ibs, + int (*free_job)(struct amdgpu_cs_parser *), + void *owner); + struct amdgpu_ring { struct amdgpu_device *adev; const struct amdgpu_ring_funcs *funcs; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c index b913c22..d682fab 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c @@ -108,3 +108,38 @@ struct amd_sched_backend_ops amdgpu_sched_ops = { .process_job = amdgpu_sched_process_job };
+int amdgpu_sched_ib_submit_kernel_helper(struct amdgpu_device *adev, + struct amdgpu_ring *ring, + struct amdgpu_ib *ibs, + unsigned num_ibs, + int (*free_job)(struct amdgpu_cs_parser *), + void *owner) +{ + int r = 0; + if (amdgpu_enable_scheduler) { + uint64_t v_seq; + struct amdgpu_cs_parser *sched_job = + amdgpu_cs_parser_create(adev, + owner, + adev->kernel_ctx, + ibs, 1); + if(!sched_job) { + return -ENOMEM; + } + sched_job->free_job = free_job; + v_seq = atomic64_inc_return(&adev->kernel_ctx->rings[ring->idx].c_entity.last_queued_v_seq); + ibs[num_ibs - 1].sequence = v_seq; + amd_sched_push_job(ring->scheduler, + &adev->kernel_ctx->rings[ring->idx].c_entity, + sched_job); + r = amd_sched_wait_emit( + &adev->kernel_ctx->rings[ring->idx].c_entity, + v_seq, + false, + -1); + if (r) + WARN(true, "emit timeout\n"); + } else + r = amdgpu_ib_schedule(adev, 1, ibs, owner); + return r; +}
From: Chunming Zhou david1.zhou@amd.com
Signed-off-by: Chunming Zhou david1.zhou@amd.com Reviewed-by: Christian König christian.koenig@amd.com --- drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c | 28 ++++++++++++++-------------- drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 27 +++++++++++++-------------- 2 files changed, 27 insertions(+), 28 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c index c968ddf..5445599 100644 --- a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c @@ -2662,26 +2662,22 @@ static int gfx_v7_0_ring_test_ib(struct amdgpu_ring *ring) r = amdgpu_ib_get(ring, NULL, 256, &ib); if (r) { DRM_ERROR("amdgpu: failed to get ib (%d).\n", r); - amdgpu_gfx_scratch_free(adev, scratch); - return r; + goto err1; } ib.ptr[0] = PACKET3(PACKET3_SET_UCONFIG_REG, 1); ib.ptr[1] = ((scratch - PACKET3_SET_UCONFIG_REG_START)); ib.ptr[2] = 0xDEADBEEF; ib.length_dw = 3; - r = amdgpu_ib_schedule(adev, 1, &ib, AMDGPU_FENCE_OWNER_UNDEFINED); - if (r) { - amdgpu_gfx_scratch_free(adev, scratch); - amdgpu_ib_free(adev, &ib); - DRM_ERROR("amdgpu: failed to schedule ib (%d).\n", r); - return r; - } + + r = amdgpu_sched_ib_submit_kernel_helper(adev, ring, &ib, 1, NULL, + AMDGPU_FENCE_OWNER_UNDEFINED); + if (r) + goto err2; + r = amdgpu_fence_wait(ib.fence, false); if (r) { DRM_ERROR("amdgpu: fence wait failed (%d).\n", r); - amdgpu_gfx_scratch_free(adev, scratch); - amdgpu_ib_free(adev, &ib); - return r; + goto err2; } for (i = 0; i < adev->usec_timeout; i++) { tmp = RREG32(scratch); @@ -2691,14 +2687,18 @@ static int gfx_v7_0_ring_test_ib(struct amdgpu_ring *ring) } if (i < adev->usec_timeout) { DRM_INFO("ib test on ring %d succeeded in %u usecs\n", - ib.fence->ring->idx, i); + ring->idx, i); + goto err2; } else { DRM_ERROR("amdgpu: ib test failed (scratch(0x%04X)=0x%08X)\n", scratch, tmp); r = -EINVAL; } - amdgpu_gfx_scratch_free(adev, scratch); + +err2: amdgpu_ib_free(adev, &ib); +err1: + amdgpu_gfx_scratch_free(adev, scratch); return r; }
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c index e9df2d1..14768dc 100644 --- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c @@ -540,26 +540,22 @@ static int gfx_v8_0_ring_test_ib(struct amdgpu_ring *ring) r = amdgpu_ib_get(ring, NULL, 256, &ib); if (r) { DRM_ERROR("amdgpu: failed to get ib (%d).\n", r); - amdgpu_gfx_scratch_free(adev, scratch); - return r; + goto err1; } ib.ptr[0] = PACKET3(PACKET3_SET_UCONFIG_REG, 1); ib.ptr[1] = ((scratch - PACKET3_SET_UCONFIG_REG_START)); ib.ptr[2] = 0xDEADBEEF; ib.length_dw = 3; - r = amdgpu_ib_schedule(adev, 1, &ib, AMDGPU_FENCE_OWNER_UNDEFINED); - if (r) { - amdgpu_gfx_scratch_free(adev, scratch); - amdgpu_ib_free(adev, &ib); - DRM_ERROR("amdgpu: failed to schedule ib (%d).\n", r); - return r; - } + + r = amdgpu_sched_ib_submit_kernel_helper(adev, ring, &ib, 1, NULL, + AMDGPU_FENCE_OWNER_UNDEFINED); + if (r) + goto err2; + r = amdgpu_fence_wait(ib.fence, false); if (r) { DRM_ERROR("amdgpu: fence wait failed (%d).\n", r); - amdgpu_gfx_scratch_free(adev, scratch); - amdgpu_ib_free(adev, &ib); - return r; + goto err2; } for (i = 0; i < adev->usec_timeout; i++) { tmp = RREG32(scratch); @@ -569,14 +565,17 @@ static int gfx_v8_0_ring_test_ib(struct amdgpu_ring *ring) } if (i < adev->usec_timeout) { DRM_INFO("ib test on ring %d succeeded in %u usecs\n", - ib.fence->ring->idx, i); + ring->idx, i); + goto err2; } else { DRM_ERROR("amdgpu: ib test failed (scratch(0x%04X)=0x%08X)\n", scratch, tmp); r = -EINVAL; } - amdgpu_gfx_scratch_free(adev, scratch); +err2: amdgpu_ib_free(adev, &ib); +err1: + amdgpu_gfx_scratch_free(adev, scratch); return r; }
From: Chunming Zhou david1.zhou@amd.com
Signed-off-by: Chunming Zhou david1.zhou@amd.com Reviewed-by: Christian König christian.koenig@amd.com --- drivers/gpu/drm/amd/amdgpu/cik_sdma.c | 25 +++++++++++-------------- drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c | 26 ++++++++++++-------------- drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c | 25 +++++++++++-------------- 3 files changed, 34 insertions(+), 42 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c index 6bb9d2f..d5d2c77 100644 --- a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c +++ b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c @@ -628,12 +628,10 @@ static int cik_sdma_ring_test_ib(struct amdgpu_ring *ring) gpu_addr = adev->wb.gpu_addr + (index * 4); tmp = 0xCAFEDEAD; adev->wb.wb[index] = cpu_to_le32(tmp); - r = amdgpu_ib_get(ring, NULL, 256, &ib); if (r) { - amdgpu_wb_free(adev, index); DRM_ERROR("amdgpu: failed to get ib (%d).\n", r); - return r; + goto err0; }
ib.ptr[0] = SDMA_PACKET(SDMA_OPCODE_WRITE, SDMA_WRITE_SUB_OPCODE_LINEAR, 0); @@ -642,20 +640,15 @@ static int cik_sdma_ring_test_ib(struct amdgpu_ring *ring) ib.ptr[3] = 1; ib.ptr[4] = 0xDEADBEEF; ib.length_dw = 5; + r = amdgpu_sched_ib_submit_kernel_helper(adev, ring, &ib, 1, NULL, + AMDGPU_FENCE_OWNER_UNDEFINED); + if (r) + goto err1;
- r = amdgpu_ib_schedule(adev, 1, &ib, AMDGPU_FENCE_OWNER_UNDEFINED); - if (r) { - amdgpu_ib_free(adev, &ib); - amdgpu_wb_free(adev, index); - DRM_ERROR("amdgpu: failed to schedule ib (%d).\n", r); - return r; - } r = amdgpu_fence_wait(ib.fence, false); if (r) { - amdgpu_ib_free(adev, &ib); - amdgpu_wb_free(adev, index); DRM_ERROR("amdgpu: fence wait failed (%d).\n", r); - return r; + goto err1; } for (i = 0; i < adev->usec_timeout; i++) { tmp = le32_to_cpu(adev->wb.wb[index]); @@ -665,12 +658,16 @@ static int cik_sdma_ring_test_ib(struct amdgpu_ring *ring) } if (i < adev->usec_timeout) { DRM_INFO("ib test on ring %d succeeded in %u usecs\n", - ib.fence->ring->idx, i); + ring->idx, i); + goto err1; } else { DRM_ERROR("amdgpu: ib test failed (0x%08X)\n", tmp); r = -EINVAL; } + +err1: amdgpu_ib_free(adev, &ib); +err0: amdgpu_wb_free(adev, index); return r; } diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c b/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c index 78d4bbd..247cfa7 100644 --- a/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c @@ -686,12 +686,10 @@ static int sdma_v2_4_ring_test_ib(struct amdgpu_ring *ring) gpu_addr = adev->wb.gpu_addr + (index * 4); tmp = 0xCAFEDEAD; adev->wb.wb[index] = cpu_to_le32(tmp); - r = amdgpu_ib_get(ring, NULL, 256, &ib); if (r) { - amdgpu_wb_free(adev, index); DRM_ERROR("amdgpu: failed to get ib (%d).\n", r); - return r; + goto err0; }
ib.ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) | @@ -705,19 +703,15 @@ static int sdma_v2_4_ring_test_ib(struct amdgpu_ring *ring) ib.ptr[7] = SDMA_PKT_HEADER_OP(SDMA_OP_NOP); ib.length_dw = 8;
- r = amdgpu_ib_schedule(adev, 1, &ib, AMDGPU_FENCE_OWNER_UNDEFINED); - if (r) { - amdgpu_ib_free(adev, &ib); - amdgpu_wb_free(adev, index); - DRM_ERROR("amdgpu: failed to schedule ib (%d).\n", r); - return r; - } + r = amdgpu_sched_ib_submit_kernel_helper(adev, ring, &ib, 1, NULL, + AMDGPU_FENCE_OWNER_UNDEFINED); + if (r) + goto err1; + r = amdgpu_fence_wait(ib.fence, false); if (r) { - amdgpu_ib_free(adev, &ib); - amdgpu_wb_free(adev, index); DRM_ERROR("amdgpu: fence wait failed (%d).\n", r); - return r; + goto err1; } for (i = 0; i < adev->usec_timeout; i++) { tmp = le32_to_cpu(adev->wb.wb[index]); @@ -727,12 +721,16 @@ static int sdma_v2_4_ring_test_ib(struct amdgpu_ring *ring) } if (i < adev->usec_timeout) { DRM_INFO("ib test on ring %d succeeded in %u usecs\n", - ib.fence->ring->idx, i); + ring->idx, i); + goto err1; } else { DRM_ERROR("amdgpu: ib test failed (0x%08X)\n", tmp); r = -EINVAL; } + +err1: amdgpu_ib_free(adev, &ib); +err0: amdgpu_wb_free(adev, index); return r; } diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c index 763e2cc..2b7cb33 100644 --- a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c @@ -776,12 +776,10 @@ static int sdma_v3_0_ring_test_ib(struct amdgpu_ring *ring) gpu_addr = adev->wb.gpu_addr + (index * 4); tmp = 0xCAFEDEAD; adev->wb.wb[index] = cpu_to_le32(tmp); - r = amdgpu_ib_get(ring, NULL, 256, &ib); if (r) { - amdgpu_wb_free(adev, index); DRM_ERROR("amdgpu: failed to get ib (%d).\n", r); - return r; + goto err0; }
ib.ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) | @@ -795,19 +793,15 @@ static int sdma_v3_0_ring_test_ib(struct amdgpu_ring *ring) ib.ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); ib.length_dw = 8;
- r = amdgpu_ib_schedule(adev, 1, &ib, AMDGPU_FENCE_OWNER_UNDEFINED); - if (r) { - amdgpu_ib_free(adev, &ib); - amdgpu_wb_free(adev, index); - DRM_ERROR("amdgpu: failed to schedule ib (%d).\n", r); - return r; - } + r = amdgpu_sched_ib_submit_kernel_helper(adev, ring, &ib, 1, NULL, + AMDGPU_FENCE_OWNER_UNDEFINED); + if (r) + goto err1; + r = amdgpu_fence_wait(ib.fence, false); if (r) { - amdgpu_ib_free(adev, &ib); - amdgpu_wb_free(adev, index); DRM_ERROR("amdgpu: fence wait failed (%d).\n", r); - return r; + goto err1; } for (i = 0; i < adev->usec_timeout; i++) { tmp = le32_to_cpu(adev->wb.wb[index]); @@ -817,12 +811,15 @@ static int sdma_v3_0_ring_test_ib(struct amdgpu_ring *ring) } if (i < adev->usec_timeout) { DRM_INFO("ib test on ring %d succeeded in %u usecs\n", - ib.fence->ring->idx, i); + ring->idx, i); + goto err1; } else { DRM_ERROR("amdgpu: ib test failed (0x%08X)\n", tmp); r = -EINVAL; } +err1: amdgpu_ib_free(adev, &ib); +err0: amdgpu_wb_free(adev, index); return r; }
From: Chunming Zhou david1.zhou@amd.com
Signed-off-by: Chunming Zhou david1.zhou@amd.com Reviewed-by: Christian König christian.koenig@amd.com --- drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c | 61 +++++++++++++++++++++++---------- 1 file changed, 42 insertions(+), 19 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c index 2f7a5ef..7d20aca 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c @@ -792,6 +792,14 @@ int amdgpu_uvd_ring_parse_cs(struct amdgpu_cs_parser *parser, uint32_t ib_idx) return 0; }
+static int amdgpu_uvd_free_job( + struct amdgpu_cs_parser *sched_job) +{ + amdgpu_ib_free(sched_job->adev, sched_job->ibs); + kfree(sched_job->ibs); + return 0; +} + static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, struct amdgpu_bo *bo, struct amdgpu_fence **fence) @@ -799,7 +807,8 @@ static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, struct ttm_validate_buffer tv; struct ww_acquire_ctx ticket; struct list_head head; - struct amdgpu_ib ib; + struct amdgpu_ib *ib = NULL; + struct amdgpu_device *adev = ring->adev; uint64_t addr; int i, r;
@@ -821,34 +830,48 @@ static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, r = ttm_bo_validate(&bo->tbo, &bo->placement, true, false); if (r) goto err; - - r = amdgpu_ib_get(ring, NULL, 64, &ib); - if (r) + ib = kzalloc(sizeof(struct amdgpu_ib), GFP_KERNEL); + if (!ib) { + r = -ENOMEM; goto err; + } + r = amdgpu_ib_get(ring, NULL, 64, ib); + if (r) + goto err1;
addr = amdgpu_bo_gpu_offset(bo); - ib.ptr[0] = PACKET0(mmUVD_GPCOM_VCPU_DATA0, 0); - ib.ptr[1] = addr; - ib.ptr[2] = PACKET0(mmUVD_GPCOM_VCPU_DATA1, 0); - ib.ptr[3] = addr >> 32; - ib.ptr[4] = PACKET0(mmUVD_GPCOM_VCPU_CMD, 0); - ib.ptr[5] = 0; + ib->ptr[0] = PACKET0(mmUVD_GPCOM_VCPU_DATA0, 0); + ib->ptr[1] = addr; + ib->ptr[2] = PACKET0(mmUVD_GPCOM_VCPU_DATA1, 0); + ib->ptr[3] = addr >> 32; + ib->ptr[4] = PACKET0(mmUVD_GPCOM_VCPU_CMD, 0); + ib->ptr[5] = 0; for (i = 6; i < 16; ++i) - ib.ptr[i] = PACKET2(0); - ib.length_dw = 16; + ib->ptr[i] = PACKET2(0); + ib->length_dw = 16;
- r = amdgpu_ib_schedule(ring->adev, 1, &ib, AMDGPU_FENCE_OWNER_UNDEFINED); + r = amdgpu_sched_ib_submit_kernel_helper(adev, ring, ib, 1, + &amdgpu_uvd_free_job, + AMDGPU_FENCE_OWNER_UNDEFINED); if (r) - goto err; - ttm_eu_fence_buffer_objects(&ticket, &head, &ib.fence->base); + goto err2;
- if (fence) - *fence = amdgpu_fence_ref(ib.fence); + ttm_eu_fence_buffer_objects(&ticket, &head, &ib->fence->base);
- amdgpu_ib_free(ring->adev, &ib); + if (fence) + *fence = amdgpu_fence_ref(ib->fence); amdgpu_bo_unref(&bo); - return 0;
+ if (amdgpu_enable_scheduler) + return 0; + + amdgpu_ib_free(ring->adev, ib); + kfree(ib); + return 0; +err2: + amdgpu_ib_free(ring->adev, ib); +err1: + kfree(ib); err: ttm_eu_backoff_reservation(&ticket, &head); return r;
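The ownership rule introduced here (and reused for VCE below): with the scheduler enabled the IB has to outlive the submitting function, so it is heap-allocated and released by the free-job callback after the job has run; only on the direct-submission path does the caller free it inline. A minimal sketch of that pattern, assuming the helper and module option from the earlier patches (free_job_cb is a placeholder name):

static int free_job_cb(struct amdgpu_cs_parser *sched_job)
{
	/* called by the scheduler once the job has executed */
	amdgpu_ib_free(sched_job->adev, sched_job->ibs);
	kfree(sched_job->ibs);
	return 0;
}

	ib = kzalloc(sizeof(*ib), GFP_KERNEL);	/* must survive our return */
	if (!ib)
		return -ENOMEM;
	/* ... amdgpu_ib_get() and command packet setup ... */
	r = amdgpu_sched_ib_submit_kernel_helper(adev, ring, ib, 1,
						 &free_job_cb,
						 AMDGPU_FENCE_OWNER_UNDEFINED);
	if (r)
		goto err;
	if (fence)
		*fence = amdgpu_fence_ref(ib->fence);
	if (amdgpu_enable_scheduler)
		return 0;		/* free_job_cb now owns ib */
	amdgpu_ib_free(adev, ib);	/* direct path: free immediately */
	kfree(ib);
	return 0;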
From: Chunming Zhou david1.zhou@amd.com
Signed-off-by: Chunming Zhou david1.zhou@amd.com Reviewed-by: Christian König christian.koenig@amd.com --- drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c | 158 ++++++++++++++++++-------------- 1 file changed, 90 insertions(+), 68 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c index d3ca730..e17467f 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c @@ -334,6 +334,14 @@ void amdgpu_vce_free_handles(struct amdgpu_device *adev, struct drm_file *filp) } }
+static int amdgpu_vce_free_job( + struct amdgpu_cs_parser *sched_job) +{ + amdgpu_ib_free(sched_job->adev, sched_job->ibs); + kfree(sched_job->ibs); + return 0; +} + /** * amdgpu_vce_get_create_msg - generate a VCE create msg * @@ -348,56 +356,63 @@ int amdgpu_vce_get_create_msg(struct amdgpu_ring *ring, uint32_t handle, struct amdgpu_fence **fence) { const unsigned ib_size_dw = 1024; - struct amdgpu_ib ib; + struct amdgpu_ib *ib = NULL; + struct amdgpu_device *adev = ring->adev; uint64_t dummy; int i, r;
- r = amdgpu_ib_get(ring, NULL, ib_size_dw * 4, &ib); + ib = kzalloc(sizeof(struct amdgpu_ib), GFP_KERNEL); + if (!ib) + return -ENOMEM; + r = amdgpu_ib_get(ring, NULL, ib_size_dw * 4, ib); if (r) { DRM_ERROR("amdgpu: failed to get ib (%d).\n", r); + kfree(ib); return r; }
- dummy = ib.gpu_addr + 1024; + dummy = ib->gpu_addr + 1024;
/* stitch together an VCE create msg */ - ib.length_dw = 0; - ib.ptr[ib.length_dw++] = 0x0000000c; /* len */ - ib.ptr[ib.length_dw++] = 0x00000001; /* session cmd */ - ib.ptr[ib.length_dw++] = handle; - - ib.ptr[ib.length_dw++] = 0x00000030; /* len */ - ib.ptr[ib.length_dw++] = 0x01000001; /* create cmd */ - ib.ptr[ib.length_dw++] = 0x00000000; - ib.ptr[ib.length_dw++] = 0x00000042; - ib.ptr[ib.length_dw++] = 0x0000000a; - ib.ptr[ib.length_dw++] = 0x00000001; - ib.ptr[ib.length_dw++] = 0x00000080; - ib.ptr[ib.length_dw++] = 0x00000060; - ib.ptr[ib.length_dw++] = 0x00000100; - ib.ptr[ib.length_dw++] = 0x00000100; - ib.ptr[ib.length_dw++] = 0x0000000c; - ib.ptr[ib.length_dw++] = 0x00000000; - - ib.ptr[ib.length_dw++] = 0x00000014; /* len */ - ib.ptr[ib.length_dw++] = 0x05000005; /* feedback buffer */ - ib.ptr[ib.length_dw++] = upper_32_bits(dummy); - ib.ptr[ib.length_dw++] = dummy; - ib.ptr[ib.length_dw++] = 0x00000001; - - for (i = ib.length_dw; i < ib_size_dw; ++i) - ib.ptr[i] = 0x0; - - r = amdgpu_ib_schedule(ring->adev, 1, &ib, AMDGPU_FENCE_OWNER_UNDEFINED); - if (r) { - DRM_ERROR("amdgpu: failed to schedule ib (%d).\n", r); - } - + ib->length_dw = 0; + ib->ptr[ib->length_dw++] = 0x0000000c; /* len */ + ib->ptr[ib->length_dw++] = 0x00000001; /* session cmd */ + ib->ptr[ib->length_dw++] = handle; + + ib->ptr[ib->length_dw++] = 0x00000030; /* len */ + ib->ptr[ib->length_dw++] = 0x01000001; /* create cmd */ + ib->ptr[ib->length_dw++] = 0x00000000; + ib->ptr[ib->length_dw++] = 0x00000042; + ib->ptr[ib->length_dw++] = 0x0000000a; + ib->ptr[ib->length_dw++] = 0x00000001; + ib->ptr[ib->length_dw++] = 0x00000080; + ib->ptr[ib->length_dw++] = 0x00000060; + ib->ptr[ib->length_dw++] = 0x00000100; + ib->ptr[ib->length_dw++] = 0x00000100; + ib->ptr[ib->length_dw++] = 0x0000000c; + ib->ptr[ib->length_dw++] = 0x00000000; + + ib->ptr[ib->length_dw++] = 0x00000014; /* len */ + ib->ptr[ib->length_dw++] = 0x05000005; /* feedback buffer */ + ib->ptr[ib->length_dw++] = upper_32_bits(dummy); + ib->ptr[ib->length_dw++] = dummy; + ib->ptr[ib->length_dw++] = 0x00000001; + + for (i = ib->length_dw; i < ib_size_dw; ++i) + ib->ptr[i] = 0x0; + + r = amdgpu_sched_ib_submit_kernel_helper(adev, ring, ib, 1, + &amdgpu_vce_free_job, + AMDGPU_FENCE_OWNER_UNDEFINED); + if (r) + goto err; if (fence) - *fence = amdgpu_fence_ref(ib.fence); - - amdgpu_ib_free(ring->adev, &ib); - + *fence = amdgpu_fence_ref(ib->fence); + if (amdgpu_enable_scheduler) + return 0; +err: + amdgpu_ib_free(adev, ib); + kfree(ib); return r; }
@@ -415,46 +430,53 @@ int amdgpu_vce_get_destroy_msg(struct amdgpu_ring *ring, uint32_t handle, struct amdgpu_fence **fence) { const unsigned ib_size_dw = 1024; - struct amdgpu_ib ib; + struct amdgpu_ib *ib = NULL; + struct amdgpu_device *adev = ring->adev; uint64_t dummy; int i, r;
- r = amdgpu_ib_get(ring, NULL, ib_size_dw * 4, &ib); + ib = kzalloc(sizeof(struct amdgpu_ib), GFP_KERNEL); + if (!ib) + return -ENOMEM; + + r = amdgpu_ib_get(ring, NULL, ib_size_dw * 4, ib); if (r) { + kfree(ib); DRM_ERROR("amdgpu: failed to get ib (%d).\n", r); return r; }
- dummy = ib.gpu_addr + 1024; + dummy = ib->gpu_addr + 1024;
/* stitch together an VCE destroy msg */ - ib.length_dw = 0; - ib.ptr[ib.length_dw++] = 0x0000000c; /* len */ - ib.ptr[ib.length_dw++] = 0x00000001; /* session cmd */ - ib.ptr[ib.length_dw++] = handle; - - ib.ptr[ib.length_dw++] = 0x00000014; /* len */ - ib.ptr[ib.length_dw++] = 0x05000005; /* feedback buffer */ - ib.ptr[ib.length_dw++] = upper_32_bits(dummy); - ib.ptr[ib.length_dw++] = dummy; - ib.ptr[ib.length_dw++] = 0x00000001; - - ib.ptr[ib.length_dw++] = 0x00000008; /* len */ - ib.ptr[ib.length_dw++] = 0x02000001; /* destroy cmd */ - - for (i = ib.length_dw; i < ib_size_dw; ++i) - ib.ptr[i] = 0x0; - - r = amdgpu_ib_schedule(ring->adev, 1, &ib, AMDGPU_FENCE_OWNER_UNDEFINED); - if (r) { - DRM_ERROR("amdgpu: failed to schedule ib (%d).\n", r); - } - + ib->length_dw = 0; + ib->ptr[ib->length_dw++] = 0x0000000c; /* len */ + ib->ptr[ib->length_dw++] = 0x00000001; /* session cmd */ + ib->ptr[ib->length_dw++] = handle; + + ib->ptr[ib->length_dw++] = 0x00000014; /* len */ + ib->ptr[ib->length_dw++] = 0x05000005; /* feedback buffer */ + ib->ptr[ib->length_dw++] = upper_32_bits(dummy); + ib->ptr[ib->length_dw++] = dummy; + ib->ptr[ib->length_dw++] = 0x00000001; + + ib->ptr[ib->length_dw++] = 0x00000008; /* len */ + ib->ptr[ib->length_dw++] = 0x02000001; /* destroy cmd */ + + for (i = ib->length_dw; i < ib_size_dw; ++i) + ib->ptr[i] = 0x0; + r = amdgpu_sched_ib_submit_kernel_helper(adev, ring, ib, 1, + &amdgpu_vce_free_job, + AMDGPU_FENCE_OWNER_UNDEFINED); + if (r) + goto err; if (fence) - *fence = amdgpu_fence_ref(ib.fence); - - amdgpu_ib_free(ring->adev, &ib); - + *fence = amdgpu_fence_ref(ib->fence); + if (amdgpu_enable_scheduler) + return 0; +err: + amdgpu_ib_free(adev, ib); + kfree(ib); return r; }
From: "monk.liu" monk.liu@amd.com
Signed-off-by: monk.liu monk.liu@amd.com Reviewed-by: Christian König christian.koenig@amd.com --- drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 13 +++---------- 1 file changed, 3 insertions(+), 10 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c index c4ad6bb..c1af262 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c @@ -622,18 +622,11 @@ static long amdgpu_fence_wait_seq_timeout(struct amdgpu_device *adev, */ int amdgpu_fence_wait(struct amdgpu_fence *fence, bool intr) { - uint64_t seq[AMDGPU_MAX_RINGS] = {}; long r;
- seq[fence->ring->idx] = fence->seq; - r = amdgpu_fence_wait_seq_timeout(fence->ring->adev, seq, intr, MAX_SCHEDULE_TIMEOUT); - if (r < 0) { - return r; - } - - r = fence_signal(&fence->base); - if (!r) - FENCE_TRACE(&fence->base, "signaled from fence_wait\n"); + r = fence_wait_timeout(&fence->base, intr, MAX_SCHEDULE_TIMEOUT); + if (r < 0) + return r; return 0; }
From: "monk.liu" monk.liu@amd.com
the original method would sleep/schedule at a granularity of HZ/2 based on the seq signal method; the new implementation is based on the kernel fence interface, with no unnecessary scheduling at all
v2: replace logic of original amdgpu_fence_wait_any
Signed-off-by: monk.liu monk.liu@amd.com Reviewed-by: Christian König christian.koenig@amd.com --- drivers/gpu/drm/amd/amdgpu/amdgpu.h | 6 +- drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 115 +++++++++++++++++++----------- drivers/gpu/drm/amd/amdgpu/amdgpu_sa.c | 3 +- 3 files changed, 77 insertions(+), 47 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h index 7fecb44..bf26f4b 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h @@ -439,9 +439,9 @@ unsigned amdgpu_fence_count_emitted(struct amdgpu_ring *ring);
bool amdgpu_fence_signaled(struct amdgpu_fence *fence); int amdgpu_fence_wait(struct amdgpu_fence *fence, bool interruptible); -int amdgpu_fence_wait_any(struct amdgpu_device *adev, +signed long amdgpu_fence_wait_any(struct amdgpu_device *adev, struct amdgpu_fence **fences, - bool intr); + bool intr, long t); struct amdgpu_fence *amdgpu_fence_ref(struct amdgpu_fence *fence); void amdgpu_fence_unref(struct amdgpu_fence **fence);
@@ -486,7 +486,7 @@ static inline bool amdgpu_fence_is_earlier(struct amdgpu_fence *a, return a->seq < b->seq; }
-int amdgpu_user_fence_emit(struct amdgpu_ring *ring, struct amdgpu_user_fence *user, +int amdgpu_user_fence_emit(struct amdgpu_ring *ring, struct amdgpu_user_fence *user, void *owner, struct amdgpu_fence **fence);
/* diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c index c1af262..f59f737 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c @@ -631,49 +631,6 @@ int amdgpu_fence_wait(struct amdgpu_fence *fence, bool intr) }
/** - * amdgpu_fence_wait_any - wait for a fence to signal on any ring - * - * @adev: amdgpu device pointer - * @fences: amdgpu fence object(s) - * @intr: use interruptable sleep - * - * Wait for any requested fence to signal (all asics). Fence - * array is indexed by ring id. @intr selects whether to use - * interruptable (true) or non-interruptable (false) sleep when - * waiting for the fences. Used by the suballocator. - * Returns 0 if any fence has passed, error for all other cases. - */ -int amdgpu_fence_wait_any(struct amdgpu_device *adev, - struct amdgpu_fence **fences, - bool intr) -{ - uint64_t seq[AMDGPU_MAX_RINGS]; - unsigned i, num_rings = 0; - long r; - - for (i = 0; i < AMDGPU_MAX_RINGS; ++i) { - seq[i] = 0; - - if (!fences[i]) { - continue; - } - - seq[i] = fences[i]->seq; - ++num_rings; - } - - /* nothing to wait for ? */ - if (num_rings == 0) - return -ENOENT; - - r = amdgpu_fence_wait_seq_timeout(adev, seq, intr, MAX_SCHEDULE_TIMEOUT); - if (r < 0) { - return r; - } - return 0; -} - -/** * amdgpu_fence_wait_next - wait for the next fence to signal * * @adev: amdgpu device pointer @@ -1067,6 +1024,22 @@ static inline bool amdgpu_test_signaled(struct amdgpu_fence *fence) return test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->base.flags); }
+static inline bool amdgpu_test_signaled_any(struct amdgpu_fence **fences) +{ + int idx; + struct amdgpu_fence *fence; + + idx = 0; + for (idx = 0; idx < AMDGPU_MAX_RINGS; ++idx) { + fence = fences[idx]; + if (fence) { + if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->base.flags)) + return true; + } + } + return false; +} + struct amdgpu_wait_cb { struct fence_cb base; struct task_struct *task; @@ -1121,6 +1094,62 @@ static signed long amdgpu_fence_default_wait(struct fence *f, bool intr, return t; }
+/* wait until any fence in array signaled */ +signed long amdgpu_fence_wait_any(struct amdgpu_device *adev, + struct amdgpu_fence **array, bool intr, signed long t) +{ + long idx = 0; + struct amdgpu_wait_cb cb[AMDGPU_MAX_RINGS]; + struct amdgpu_fence *fence; + + BUG_ON(!array); + + for (idx = 0; idx < AMDGPU_MAX_RINGS; ++idx) { + fence = array[idx]; + if (fence) { + cb[idx].task = current; + if (fence_add_callback(&fence->base, + &cb[idx].base, amdgpu_fence_wait_cb)) + return t; /* return if fence is already signaled */ + } + } + + while (t > 0) { + if (intr) + set_current_state(TASK_INTERRUPTIBLE); + else + set_current_state(TASK_UNINTERRUPTIBLE); + + /* + * amdgpu_test_signaled_any must be called after + * set_current_state to prevent a race with wake_up_process + */ + if (amdgpu_test_signaled_any(array)) + break; + + if (adev->needs_reset) { + t = -EDEADLK; + break; + } + + t = schedule_timeout(t); + + if (t > 0 && intr && signal_pending(current)) + t = -ERESTARTSYS; + } + + __set_current_state(TASK_RUNNING); + + idx = 0; + for (idx = 0; idx < AMDGPU_MAX_RINGS; ++idx) { + fence = array[idx]; + if (fence) + fence_remove_callback(&fence->base, &cb[idx].base); + } + + return t; +} + const struct fence_ops amdgpu_fence_ops = { .get_driver_name = amdgpu_fence_get_driver_name, .get_timeline_name = amdgpu_fence_get_timeline_name, diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sa.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sa.c index eb20987..f4e20ea 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sa.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sa.c @@ -350,7 +350,8 @@ int amdgpu_sa_bo_new(struct amdgpu_device *adev, } while (amdgpu_sa_bo_next_hole(sa_manager, fences, tries));
spin_unlock(&sa_manager->wq.lock); - r = amdgpu_fence_wait_any(adev, fences, false); + r = amdgpu_fence_wait_any(adev, fences, false, MAX_SCHEDULE_TIMEOUT); + r = (r > 0) ? 0 : r; spin_lock(&sa_manager->wq.lock); /* if we have nothing to wait for block */ if (r == -ENOENT) {
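Note the changed contract: amdgpu_fence_wait_any() now follows the kernel fence convention of returning the remaining jiffies on success rather than 0/-errno, which is why the suballocator hunk above collapses the result. A caller that only cares about success or failure does the same:

	signed long t;

	t = amdgpu_fence_wait_any(adev, fences, false, MAX_SCHEDULE_TIMEOUT);
	r = (t > 0) ? 0 : t;	/* >0: a fence signaled; 0: timed out; <0: error */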
From: "monk.liu" monk.liu@amd.com
use fence_wait_any to implement fence_default_wait
Signed-off-by: monk.liu monk.liu@amd.com Reviewed-by: Christian König christian.koenig@amd.com --- drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 37 ++++--------------------------- 1 file changed, 4 insertions(+), 33 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c index f59f737..c0f4910 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c @@ -1055,43 +1055,14 @@ static void amdgpu_fence_wait_cb(struct fence *fence, struct fence_cb *cb) static signed long amdgpu_fence_default_wait(struct fence *f, bool intr, signed long t) { + struct amdgpu_fence *array[AMDGPU_MAX_RINGS]; struct amdgpu_fence *fence = to_amdgpu_fence(f); struct amdgpu_device *adev = fence->ring->adev; - struct amdgpu_wait_cb cb;
- cb.task = current; + memset(&array[0], 0, sizeof(array)); + array[0] = fence;
- if (fence_add_callback(f, &cb.base, amdgpu_fence_wait_cb)) - return t; - - while (t > 0) { - if (intr) - set_current_state(TASK_INTERRUPTIBLE); - else - set_current_state(TASK_UNINTERRUPTIBLE); - - /* - * amdgpu_test_signaled must be called after - * set_current_state to prevent a race with wake_up_process - */ - if (amdgpu_test_signaled(fence)) - break; - - if (adev->needs_reset) { - t = -EDEADLK; - break; - } - - t = schedule_timeout(t); - - if (t > 0 && intr && signal_pending(current)) - t = -ERESTARTSYS; - } - - __set_current_state(TASK_RUNNING); - fence_remove_callback(f, &cb.base); - - return t; + return amdgpu_fence_wait_any(adev, array, intr, t); }
/* wait until any fence in array signaled */
From: "monk.liu" monk.liu@amd.com
thus unnecessary wake-ups between rings can be avoided v2: move wait_queue_head from the ring into fence_drv
Signed-off-by: monk.liu monk.liu@amd.com Reviewed-by: Christian König christian.koenig@amd.com --- drivers/gpu/drm/amd/amdgpu/amdgpu.h | 2 +- drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 225 ++++++++++-------------------- drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c | 2 + 3 files changed, 77 insertions(+), 152 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h index bf26f4b..7de6ab5 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h @@ -392,6 +392,7 @@ struct amdgpu_fence_driver { struct amdgpu_irq_src *irq_src; unsigned irq_type; struct delayed_work lockup_work; + wait_queue_head_t fence_queue; };
/* some special values for the owner field */ @@ -2029,7 +2030,6 @@ struct amdgpu_device { struct amdgpu_irq_src hpd_irq;
/* rings */ - wait_queue_head_t fence_queue; unsigned fence_context; struct mutex ring_lock; unsigned num_rings; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c index c0f4910..08fdfb3 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c @@ -126,7 +126,8 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, void *owner, (*fence)->ring = ring; (*fence)->owner = owner; fence_init(&(*fence)->base, &amdgpu_fence_ops, - &adev->fence_queue.lock, adev->fence_context + ring->idx, + &ring->fence_drv.fence_queue.lock, + adev->fence_context + ring->idx, (*fence)->seq); amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr, (*fence)->seq, @@ -164,7 +165,7 @@ static int amdgpu_fence_check_signaled(wait_queue_t *wait, unsigned mode, int fl else FENCE_TRACE(&fence->base, "was already signaled\n");
- __remove_wait_queue(&adev->fence_queue, &fence->fence_wake); + __remove_wait_queue(&fence->ring->fence_drv.fence_queue, &fence->fence_wake); fence_put(&fence->base); } else FENCE_TRACE(&fence->base, "pending\n"); @@ -265,8 +266,9 @@ static void amdgpu_fence_check_lockup(struct work_struct *work) return; }
- if (amdgpu_fence_activity(ring)) - wake_up_all(&ring->adev->fence_queue); + if (amdgpu_fence_activity(ring)) { + wake_up_all(&ring->fence_drv.fence_queue); + } else if (amdgpu_ring_is_lockup(ring)) { /* good news we believe it's a lockup */ dev_warn(ring->adev->dev, "GPU lockup (current fence id " @@ -276,7 +278,7 @@ static void amdgpu_fence_check_lockup(struct work_struct *work)
/* remember that we need an reset */ ring->adev->needs_reset = true; - wake_up_all(&ring->adev->fence_queue); + wake_up_all(&ring->fence_drv.fence_queue); } up_read(&ring->adev->exclusive_lock); } @@ -364,7 +366,7 @@ void amdgpu_fence_process(struct amdgpu_ring *ring) } while (amd_sched_get_handled_seq(ring->scheduler) < latest_seq); }
- wake_up_all(&ring->adev->fence_queue); + wake_up_all(&ring->fence_drv.fence_queue); } exit: spin_unlock_irqrestore(&ring->fence_lock, irqflags); @@ -427,7 +429,6 @@ static bool amdgpu_fence_enable_signaling(struct fence *f) { struct amdgpu_fence *fence = to_amdgpu_fence(f); struct amdgpu_ring *ring = fence->ring; - struct amdgpu_device *adev = ring->adev;
if (atomic64_read(&ring->fence_drv.last_seq) >= fence->seq) return false; @@ -435,7 +436,7 @@ static bool amdgpu_fence_enable_signaling(struct fence *f) fence->fence_wake.flags = 0; fence->fence_wake.private = NULL; fence->fence_wake.func = amdgpu_fence_check_signaled; - __add_wait_queue(&adev->fence_queue, &fence->fence_wake); + __add_wait_queue(&ring->fence_drv.fence_queue, &fence->fence_wake); fence_get(f); FENCE_TRACE(&fence->base, "armed on ring %i!\n", ring->idx); return true; @@ -463,152 +464,79 @@ bool amdgpu_fence_signaled(struct amdgpu_fence *fence) return false; }
-/** - * amdgpu_fence_any_seq_signaled - check if any sequence number is signaled - * - * @adev: amdgpu device pointer - * @seq: sequence numbers - * - * Check if the last signaled fence sequnce number is >= the requested - * sequence number (all asics). - * Returns true if any has signaled (current value is >= requested value) - * or false if it has not. Helper function for amdgpu_fence_wait_seq. - */ -static bool amdgpu_fence_any_seq_signaled(struct amdgpu_device *adev, u64 *seq) -{ - unsigned i; - - for (i = 0; i < AMDGPU_MAX_RINGS; ++i) { - if (!adev->rings[i] || !seq[i]) - continue; - - if (amdgpu_fence_seq_signaled(adev->rings[i], seq[i])) - return true; - } - - return false; -} - -/** - * amdgpu_fence_wait_seq_timeout - wait for a specific sequence numbers - * - * @adev: amdgpu device pointer - * @target_seq: sequence number(s) we want to wait for - * @intr: use interruptable sleep - * @timeout: maximum time to wait, or MAX_SCHEDULE_TIMEOUT for infinite wait +/* + * amdgpu_ring_wait_seq_timeout - wait for seq of the specific ring to signal + * @ring: ring to wait on for the seq number + * @seq: seq number to wait for + * @intr: if interruptible + * @timeout: jiffies before timeout * - * Wait for the requested sequence number(s) to be written by any ring - * (all asics). Sequnce number array is indexed by ring id. - * @intr selects whether to use interruptable (true) or non-interruptable - * (false) sleep when waiting for the sequence number. Helper function - * for amdgpu_fence_wait_*(). - * Returns remaining time if the sequence number has passed, 0 when - * the wait timeout, or an error for all other cases. - * -EDEADLK is returned when a GPU lockup has been detected. + * return value: + * 0: timed out, seq not signaled and no GPU hang + * X (X > 0): seq signaled, X jiffies remaining before timeout + * -EDEADLK: GPU hang detected before timeout + * -ERESTARTSYS: interrupted before seq signaled + * -EINVAL: some parameter is not valid */ -static long amdgpu_fence_wait_seq_timeout(struct amdgpu_device *adev, - u64 *target_seq, bool intr, - long timeout) +static long amdgpu_fence_ring_wait_seq_timeout(struct amdgpu_ring *ring, uint64_t seq, + bool intr, long timeout) { - uint64_t last_seq[AMDGPU_MAX_RINGS]; - bool signaled; - int i; - long r; - - if (timeout == 0) { - return amdgpu_fence_any_seq_signaled(adev, target_seq); - } - - while (!amdgpu_fence_any_seq_signaled(adev, target_seq)) { - - /* Save current sequence values, used to check for GPU lockups */ - for (i = 0; i < AMDGPU_MAX_RINGS; ++i) { - struct amdgpu_ring *ring = adev->rings[i]; + struct amdgpu_device *adev = ring->adev; + long r = 0; + bool signaled = false;
- if (!ring || !target_seq[i]) - continue; + BUG_ON(!ring); + if (seq > ring->fence_drv.sync_seq[ring->idx]) + return -EINVAL;
- last_seq[i] = atomic64_read(&ring->fence_drv.last_seq); - trace_amdgpu_fence_wait_begin(adev->ddev, i, target_seq[i]); - } + if (atomic64_read(&ring->fence_drv.last_seq) >= seq) + return timeout;
+ while (1) { if (intr) { - r = wait_event_interruptible_timeout(adev->fence_queue, ( - (signaled = amdgpu_fence_any_seq_signaled(adev, target_seq)) - || adev->needs_reset), AMDGPU_FENCE_JIFFIES_TIMEOUT); + r = wait_event_interruptible_timeout(ring->fence_drv.fence_queue, ( + (signaled = amdgpu_fence_seq_signaled(ring, seq)) + || adev->needs_reset), AMDGPU_FENCE_JIFFIES_TIMEOUT); + + if (r == -ERESTARTSYS) /* interrupted */ + return r; } else { - r = wait_event_timeout(adev->fence_queue, ( - (signaled = amdgpu_fence_any_seq_signaled(adev, target_seq)) - || adev->needs_reset), AMDGPU_FENCE_JIFFIES_TIMEOUT); + r = wait_event_timeout(ring->fence_drv.fence_queue, ( + (signaled = amdgpu_fence_seq_signaled(ring, seq)) + || adev->needs_reset), AMDGPU_FENCE_JIFFIES_TIMEOUT); }
- for (i = 0; i < AMDGPU_MAX_RINGS; ++i) { - struct amdgpu_ring *ring = adev->rings[i]; - - if (!ring || !target_seq[i]) - continue; - - trace_amdgpu_fence_wait_end(adev->ddev, i, target_seq[i]); + if (signaled) { + /* seq signaled */ + if (timeout == MAX_SCHEDULE_TIMEOUT) + return timeout; + return (timeout - AMDGPU_FENCE_JIFFIES_TIMEOUT - r); + } + else if (adev->needs_reset) { + return -EDEADLK; }
- if (unlikely(r < 0)) - return r; - - if (unlikely(!signaled)) { - - if (adev->needs_reset) - return -EDEADLK; - - /* we were interrupted for some reason and fence - * isn't signaled yet, resume waiting */ - if (r) - continue; - - for (i = 0; i < AMDGPU_MAX_RINGS; ++i) { - struct amdgpu_ring *ring = adev->rings[i]; - - if (!ring || !target_seq[i]) - continue; - - if (last_seq[i] != atomic64_read(&ring->fence_drv.last_seq)) - break; - } - - if (i != AMDGPU_MAX_RINGS) - continue; - - for (i = 0; i < AMDGPU_MAX_RINGS; ++i) { - if (!adev->rings[i] || !target_seq[i]) - continue; - - if (amdgpu_ring_is_lockup(adev->rings[i])) - break; - } - - if (i < AMDGPU_MAX_RINGS) { - /* good news we believe it's a lockup */ - dev_warn(adev->dev, "GPU lockup (waiting for " + /* check if it's a lockup */ + if (amdgpu_ring_is_lockup(ring)) { + uint64_t last_seq = atomic64_read(&ring->fence_drv.last_seq); + /* ring lockup */ + dev_warn(adev->dev, "GPU lockup (waiting for " "0x%016llx last fence id 0x%016llx on" " ring %d)\n", - target_seq[i], last_seq[i], i); - - /* remember that we need an reset */ - adev->needs_reset = true; - wake_up_all(&adev->fence_queue); - return -EDEADLK; - } + seq, last_seq, ring->idx); + wake_up_all(&ring->fence_drv.fence_queue); + return -EDEADLK; + }
- if (timeout < MAX_SCHEDULE_TIMEOUT) { - timeout -= AMDGPU_FENCE_JIFFIES_TIMEOUT; - if (timeout <= 0) { - return 0; - } - } + if (timeout < MAX_SCHEDULE_TIMEOUT) { + timeout -= AMDGPU_FENCE_JIFFIES_TIMEOUT; + if (timeout < 1) + return 0; } } - return timeout; }
+ /** * amdgpu_fence_wait - wait for a fence to signal * @@ -642,18 +570,15 @@ int amdgpu_fence_wait(struct amdgpu_fence *fence, bool intr) */ int amdgpu_fence_wait_next(struct amdgpu_ring *ring) { - uint64_t seq[AMDGPU_MAX_RINGS] = {}; long r;
- seq[ring->idx] = atomic64_read(&ring->fence_drv.last_seq) + 1ULL; - if (seq[ring->idx] >= ring->fence_drv.sync_seq[ring->idx]) { - /* nothing to wait for, last_seq is - already the last emited fence */ + uint64_t seq = atomic64_read(&ring->fence_drv.last_seq) + 1ULL; + if (seq >= ring->fence_drv.sync_seq[ring->idx]) return -ENOENT; - } - r = amdgpu_fence_wait_seq_timeout(ring->adev, seq, false, MAX_SCHEDULE_TIMEOUT); + r = amdgpu_fence_ring_wait_seq_timeout(ring, seq, false, MAX_SCHEDULE_TIMEOUT); if (r < 0) return r; + return 0; }
@@ -669,21 +594,20 @@ int amdgpu_fence_wait_next(struct amdgpu_ring *ring) */ int amdgpu_fence_wait_empty(struct amdgpu_ring *ring) { - struct amdgpu_device *adev = ring->adev; - uint64_t seq[AMDGPU_MAX_RINGS] = {}; long r;
- seq[ring->idx] = ring->fence_drv.sync_seq[ring->idx]; - if (!seq[ring->idx]) + uint64_t seq = ring->fence_drv.sync_seq[ring->idx]; + if (!seq) return 0;
- r = amdgpu_fence_wait_seq_timeout(adev, seq, false, MAX_SCHEDULE_TIMEOUT); + r = amdgpu_fence_ring_wait_seq_timeout(ring, seq, false, MAX_SCHEDULE_TIMEOUT); + if (r < 0) { if (r == -EDEADLK) return -EDEADLK;
- dev_err(adev->dev, "error waiting for ring[%d] to become idle (%ld)\n", - ring->idx, r); + dev_err(ring->adev->dev, "error waiting for ring[%d] to become idle (%ld)\n", + ring->idx, r); } return 0; } @@ -898,7 +822,6 @@ void amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring) */ int amdgpu_fence_driver_init(struct amdgpu_device *adev) { - init_waitqueue_head(&adev->fence_queue); if (amdgpu_debugfs_fence_init(adev)) dev_err(adev->dev, "fence debugfs file creation failed\n");
@@ -927,7 +850,7 @@ void amdgpu_fence_driver_fini(struct amdgpu_device *adev) /* no need to trigger GPU reset as we are unloading */ amdgpu_fence_driver_force_completion(adev); } - wake_up_all(&adev->fence_queue); + wake_up_all(&ring->fence_drv.fence_queue); amdgpu_irq_put(adev, ring->fence_drv.irq_src, ring->fence_drv.irq_type); if (ring->scheduler) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c index 1e68a56..7d442c5 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c @@ -342,6 +342,8 @@ int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring, amdgpu_fence_driver_init_ring(ring); }
+ init_waitqueue_head(&ring->fence_drv.fence_queue); + r = amdgpu_wb_get(adev, &ring->rptr_offs); if (r) { dev_err(adev->dev, "(%d) ring rptr_offs wb alloc failed\n", r);
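With the wait queue moved into each ring's fence_drv, fence activity on one ring no longer wakes waiters blocked on another; both sides operate on the ring-local queue. Reduced to its essentials (field and helper names as in the diff above; simplified):

	/* signal side (fence processing / lockup handler):
	 * wake only this ring's waiters */
	if (amdgpu_fence_activity(ring))
		wake_up_all(&ring->fence_drv.fence_queue);

	/* wait side: block on the ring-local queue, re-checking every
	 * AMDGPU_FENCE_JIFFIES_TIMEOUT so lockups can still be detected */
	r = wait_event_timeout(ring->fence_drv.fence_queue,
			       amdgpu_fence_seq_signaled(ring, seq) ||
			       adev->needs_reset,
			       AMDGPU_FENCE_JIFFIES_TIMEOUT);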