On Thu, Feb 6, 2020 at 1:49 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> On Wed, Feb 05, 2020 at 10:19:53AM -0800, Chia-I Wu wrote:
> > Make sure elemcnt does not exceed the maximum element count in
> > virtio_gpu_queue_ctrl_sgs.  We should improve our error handling,
> > or impose a size limit on execbuffer; both are TODOs.
>
> Hmm, virtio supports indirect ring entries, so large execbuffers
> should not be a problem ...
>
> So I've waded through the virtio code.  Figured out our logic is
> wrong.  Luckily we err on the safe side (waiting for more free
> entries than we actually need).  The patch below should fix that
> (not tested yet).
That is good to know!  I was not sure whether we have
VIRTIO_RING_F_INDIRECT_DESC, so I kept our logic.  I will drop this
patch in v2.
> cheers,
>   Gerd
> diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
> index aa25e8781404..535399b3a3ea 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_vq.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
> @@ -328,7 +328,7 @@ static bool virtio_gpu_queue_ctrl_sgs(struct virtio_gpu_device *vgdev,
>  {
>  	struct virtqueue *vq = vgdev->ctrlq.vq;
>  	bool notify = false;
> -	int ret;
> +	int vqcnt, ret;
>  
>  again:
>  	spin_lock(&vgdev->ctrlq.qlock);
> @@ -341,9 +341,10 @@ static bool virtio_gpu_queue_ctrl_sgs(struct virtio_gpu_device *vgdev,
>  		return notify;
>  	}
>  
> -	if (vq->num_free < elemcnt) {
> +	vqcnt = virtqueue_use_indirect(vq, elemcnt) ? 1 : elemcnt;
> +	if (vq->num_free < vqcnt) {
>  		spin_unlock(&vgdev->ctrlq.qlock);
> -		wait_event(vgdev->ctrlq.ack_queue, vq->num_free >= elemcnt);
> +		wait_event(vgdev->ctrlq.ack_queue, vq->num_free >= vqcnt);
>  		goto again;
>  	}