On Mon, May 24, 2021 at 03:43:11PM +0200, Michal Wajdeczko wrote:
On 06.05.2021 21:13, Matthew Brost wrote:
With the introduction of non-blocking CTBs more than one CTB can be in flight at a time. Increasing the size of the CTBs should reduce how often software hits the case where no space is available in the CTB buffer.
Cc: John Harrison <john.c.harrison@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index 77dfbc94dcc3..d6895d29ed2d 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -63,11 +63,16 @@ static inline struct drm_device *ct_to_drm(struct intel_guc_ct *ct)
  *      +--------+-----------------------------------------------+------+
  *
  * Size of each `CT Buffer`_ must be multiple of 4K.
- * As we don't expect too many messages, for now use minimum sizes.
+ * We don't expect too many messages in flight at any time, unless we are
+ * using the GuC submission. In that case each request requires a minimum
+ * 16 bytes which gives us a maximum 256 queue'd requests. Hopefully this
nit: all our CTB calculations are in dwords now, not bytes
I can change the wording to DW sizes.
+ * is enough space to avoid backpressure on the driver. We increase the size
+ * of the receive buffer (relative to the send) to ensure a G2H response
+ * CTB has a landing spot.
Hmm, but we are not checking the G2H CTB yet; we will start doing that around patch 54/97. So maybe that other patch should be introduced earlier?
Yes, that patch is going to be pulled down to an earlier spot in the series.
  */
 #define CTB_DESC_SIZE		ALIGN(sizeof(struct guc_ct_buffer_desc), SZ_2K)
 #define CTB_H2G_BUFFER_SIZE	(SZ_4K)
-#define CTB_G2H_BUFFER_SIZE	(SZ_4K)
+#define CTB_G2H_BUFFER_SIZE	(4 * CTB_H2G_BUFFER_SIZE)
In theory, we (the host) should be faster than the GuC, so the G2H CTB should be almost always empty. If that is not the case, maybe we should start monitoring what is happening and report a warning if G2H is half full?
Certainly some IGTs put more pressure on the G2H channel than the H2G channel, at least I think so. This is something we can tune over time after this lands upstream. IMO a warning at this point is overkill.
Matt
#define MAX_US_STALL_CTB 1000000
@@ -753,7 +758,7 @@ static int ct_read(struct intel_guc_ct *ct, struct ct_incoming_msg **msg)
 	/* beware of buffer wrap case */
 	if (unlikely(available < 0))
 		available += size;
-	CT_DEBUG(ct, "available %d (%u:%u)\n", available, head, tail);
+	CT_DEBUG(ct, "available %d (%u:%u:%u)\n", available, head, tail, size);
 	GEM_BUG_ON(available < 0);

 	header = cmds[head];