On Xe-HP and later devices, we use dedicated compression control state (CCS) stored in local memory for each surface, to support the 3D and media compression formats.
The memory required for the CCS of the entire local memory is 1/256 of the local memory size. So before the kernel boots, the required memory is reserved for the CCS data, and a secure register is programmed with the CCS base address.
So when we allocate an object in local memory, we don't need to explicitly allocate space for the CCS data. But when we evict the object into smem, we need smem space of obj_size + (obj_size/256) to hold the compression-related data along with the object.

Hence, when we create smem backing for an object that can also be placed in lmem, we create it with the extra space.
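As a rough sketch of that sizing rule (helper name illustrative only; the series does the equivalent in i915_ttm_tt_create()):

/* Pages needed to back an lmem-capable object in smem, including CCS. */
static unsigned long smem_backing_pages(unsigned long obj_size)
{
	unsigned long ccs_bytes = DIV_ROUND_UP(obj_size, 256); /* 1/256 of main memory */

	return (PAGE_ALIGN(obj_size) >> PAGE_SHIFT) +
	       DIV_ROUND_UP(ccs_bytes, PAGE_SIZE);
}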
When we swap out a local memory object on a flat-ccs capable platform, we need to capture the CCS data along with the main memory, and we need to restore it when we swap the content back in.

When an lmem object is swapped into a smem object, the smem object will have the extra pages required to hold the CCS data corresponding to the lmem main memory. So the main memory of the lmem object is copied into the initial pages of the smem object, and then the CCS data corresponding to that main memory is copied into the subsequent pages of smem.

Swapin happens in exactly the reverse order: first the main memory of the lmem object is restored from the smem object's initial pages, and then the CCS data is restored from the subsequent pages of smem.
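The resulting smem layout can be pictured as follows (a sketch, assuming an object of N bytes and 4K pages):

/*
 * smem backing object for an lmem object of N bytes:
 *
 *	pages [0 .. N / PAGE_SIZE)	main memory contents
 *	pages after that		CCS data, DIV_ROUND_UP(N / 256, PAGE_SIZE) pages
 *
 * Swapout fills the two regions in that order; swapin reads them back in
 * the same order.
 */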
Extracting and restoring the CCS data is done through a special cmd called XY_CTRL_SURF_COPY_BLT.
Test-with: 20220301212513.30772-1-ramalingam.c@intel.com
Ayaz A Siddiqui (1):
  drm/i915/gt: Clear compress metadata for Xe_HP platforms

Ramalingam C (3):
  drm/ttm: parameter to add extra pages into ttm_tt
  drm/i915/gem: Extra pages in ttm_tt for ccs data
  drm/i915/migrate: Evict and restore the flatccs capable lmem obj

 drivers/gpu/drm/drm_gem_vram_helper.c        |   2 +-
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c      |  23 +-
 drivers/gpu/drm/i915/gt/intel_gpu_commands.h |  15 +
 drivers/gpu/drm/i915/gt/intel_migrate.c      | 327 +++++++++++++++++--
 drivers/gpu/drm/qxl/qxl_ttm.c                |   2 +-
 drivers/gpu/drm/ttm/ttm_agp_backend.c        |   2 +-
 drivers/gpu/drm/ttm/ttm_tt.c                 |  12 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c   |   2 +-
 include/drm/ttm/ttm_tt.h                     |   4 +-
 9 files changed, 357 insertions(+), 32 deletions(-)
From: Ayaz A Siddiqui <ayaz.siddiqui@intel.com>
Xe-HP and later devices support Flat CCS, which reserves a portion of device memory to store compression metadata. While clearing a device memory buffer object, we therefore also need to clear the associated CCS buffer.

Flat CCS memory cannot be directly accessed by S/W. The address of the CCS buffer associated with a main BO is calculated automatically by the device itself; KMD/UMD can only access this buffer indirectly, using the XY_CTRL_SURF_COPY_BLT cmd via the address of the device memory buffer.
v2: Fixed issues with platform naming [Lucas]
v3: Rebased [Ram]
    Used the round_up funcs [Bob]
v4: Fixed ccs blk calculation [Ram]
    Added Kdoc on flat-ccs.
v5: GENMASK is used [Matt]
    mocs fix [Matt]
    Comments Fix [Matt]
    Flush address programming [Ram]
v6: FLUSH_DW is fixed
    Few coding style fix
Signed-off-by: Ayaz A Siddiqui <ayaz.siddiqui@intel.com>
Signed-off-by: Ramalingam C <ramalingam.c@intel.com>
---
 drivers/gpu/drm/i915/gt/intel_gpu_commands.h |  15 ++
 drivers/gpu/drm/i915/gt/intel_migrate.c      | 143 ++++++++++++++++++-
 2 files changed, 154 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/intel_gpu_commands.h b/drivers/gpu/drm/i915/gt/intel_gpu_commands.h
index f8253012d166..237c1baccc64 100644
--- a/drivers/gpu/drm/i915/gt/intel_gpu_commands.h
+++ b/drivers/gpu/drm/i915/gt/intel_gpu_commands.h
@@ -203,6 +203,21 @@
 #define GFX_OP_DRAWRECT_INFO ((0x3<<29)|(0x1d<<24)|(0x80<<16)|(0x3))
 #define GFX_OP_DRAWRECT_INFO_I965 ((0x7900<<16)|0x2)
 
+#define XY_CTRL_SURF_INSTR_SIZE 5
+#define MI_FLUSH_DW_SIZE 3
+#define XY_CTRL_SURF_COPY_BLT ((2 << 29) | (0x48 << 22) | 3)
+#define SRC_ACCESS_TYPE_SHIFT 21
+#define DST_ACCESS_TYPE_SHIFT 20
+#define CCS_SIZE_MASK GENMASK(17, 8)
+#define XY_CTRL_SURF_MOCS_MASK GENMASK(31, 25)
+#define NUM_CCS_BYTES_PER_BLOCK 256
+#define NUM_BYTES_PER_CCS_BYTE 256
+#define NUM_CCS_BLKS_PER_XFER 1024
+#define INDIRECT_ACCESS 0
+#define DIRECT_ACCESS 1
+#define MI_FLUSH_LLC BIT(9)
+#define MI_FLUSH_CCS BIT(16)
+
 #define COLOR_BLT_CMD (2 << 29 | 0x40 << 22 | (5 - 2))
 #define XY_COLOR_BLT_CMD (2 << 29 | 0x50 << 22)
 #define SRC_COPY_BLT_CMD (2 << 29 | 0x43 << 22)
diff --git a/drivers/gpu/drm/i915/gt/intel_migrate.c b/drivers/gpu/drm/i915/gt/intel_migrate.c
index 20444d6ceb3c..330fcdc3e0cf 100644
--- a/drivers/gpu/drm/i915/gt/intel_migrate.c
+++ b/drivers/gpu/drm/i915/gt/intel_migrate.c
@@ -16,6 +16,8 @@ struct insert_pte_data {
 };
 
 #define CHUNK_SZ SZ_8M /* ~1ms at 8GiB/s preemption delay */
+#define GET_CCS_BYTES(i915, size) (HAS_FLAT_CCS(i915) ? \
+				   DIV_ROUND_UP(size, NUM_BYTES_PER_CCS_BYTE) : 0)
 
 static bool engine_supports_migration(struct intel_engine_cs *engine)
 {
@@ -467,6 +469,110 @@ static bool wa_1209644611_applies(int ver, u32 size)
 	return height % 4 == 3 && height <= 8;
 }
 
+/**
+ * DOC: Flat-CCS - Memory compression for Local memory
+ *
+ * On Xe-HP and later devices, we use dedicated compression control state (CCS)
+ * stored in local memory for each surface, to support the 3D and media
+ * compression formats.
+ *
+ * The memory required for the CCS of the entire local memory is 1/256 of the
+ * local memory size. So before the kernel boot, the required memory is reserved
+ * for the CCS data and a secure register will be programmed with the CCS base
+ * address.
+ *
+ * Flat CCS data needs to be cleared when a lmem object is allocated.
+ * And CCS data can be copied in and out of CCS region through
+ * XY_CTRL_SURF_COPY_BLT. CPU can't access the CCS data directly.
+ *
+ * When we exhaust the lmem, if the object's placements support smem, then we can
+ * directly decompress the compressed lmem object into smem and start using it
+ * from smem itself.
+ *
+ * But when we need to swapout the compressed lmem object into a smem region
+ * though objects' placement doesn't support smem, then we copy the lmem content
+ * as it is into smem region along with ccs data (using XY_CTRL_SURF_COPY_BLT).
+ * When the object is referred, lmem content will be swapped in along with
+ * restoration of the CCS data (using XY_CTRL_SURF_COPY_BLT) at corresponding
+ * location.
+ */
+
+static inline u32 *i915_flush_dw(u32 *cmd, u32 flags)
+{
+	*cmd++ = MI_FLUSH_DW | flags;
+	*cmd++ = 0;
+	*cmd++ = 0;
+
+	return cmd;
+}
+
+static u32 calc_ctrl_surf_instr_size(struct drm_i915_private *i915, int size)
+{
+	u32 num_cmds, num_blks, total_size;
+
+	if (!GET_CCS_BYTES(i915, size))
+		return 0;
+
+	/*
+	 * XY_CTRL_SURF_COPY_BLT transfers CCS in 256 byte
+	 * blocks. One XY_CTRL_SURF_COPY_BLT command can
+	 * transfer up to 1024 blocks.
+	 */
+	num_blks = DIV_ROUND_UP(GET_CCS_BYTES(i915, size),
+				NUM_CCS_BYTES_PER_BLOCK);
+	num_cmds = DIV_ROUND_UP(num_blks, NUM_CCS_BLKS_PER_XFER);
+	total_size = XY_CTRL_SURF_INSTR_SIZE * num_cmds;
+
+	/*
+	 * Adding a flush before and after XY_CTRL_SURF_COPY_BLT
+	 */
+	total_size += 2 * MI_FLUSH_DW_SIZE;
+
+	return total_size;
+}
+
+static u32 *_i915_ctrl_surf_copy_blt(u32 *cmd, u64 src_addr, u64 dst_addr,
+				     u8 src_mem_access, u8 dst_mem_access,
+				     int src_mocs, int dst_mocs,
+				     u32 ccs_blocks)
+{
+	/*
+	 * The XY_CTRL_SURF_COPY_BLT instruction is used to copy the CCS
+	 * data in and out of the CCS region.
+	 *
+	 * We can copy at most 1024 blocks of 256 bytes using one
+	 * XY_CTRL_SURF_COPY_BLT instruction.
+	 *
+	 * In case we need to copy more than 1024 blocks, we need to add
+	 * another instruction to the same batch buffer.
+	 *
+	 * 1024 blocks of 256 bytes of CCS represent a total 256KB of CCS.
+	 *
+	 * 256 KB of CCS represents 256 * 256 KB = 64 MB of LMEM.
+	 */
+	do {
+		int blks_per_copy;
+
+		blks_per_copy = ccs_blocks >= NUM_CCS_BLKS_PER_XFER ?
				NUM_CCS_BLKS_PER_XFER : ccs_blocks;
+		*cmd++ = XY_CTRL_SURF_COPY_BLT |
+			 src_mem_access << SRC_ACCESS_TYPE_SHIFT |
+			 dst_mem_access << DST_ACCESS_TYPE_SHIFT |
+			 FIELD_PREP(CCS_SIZE_MASK, blks_per_copy - 1);
+		*cmd++ = lower_32_bits(src_addr);
+		*cmd++ = (upper_32_bits(src_addr) & 0xFFFF) |
+			 FIELD_PREP(XY_CTRL_SURF_MOCS_MASK, src_mocs);
+		*cmd++ = lower_32_bits(dst_addr);
+		*cmd++ = (upper_32_bits(dst_addr) & 0xFFFF) |
+			 FIELD_PREP(XY_CTRL_SURF_MOCS_MASK, dst_mocs);
+		src_addr += SZ_64M;
+		dst_addr += SZ_64M;
+		ccs_blocks -= blks_per_copy;
+	} while (ccs_blocks > 0);
+
+	return cmd;
+}
+
 static int emit_copy(struct i915_request *rq,
		     u32 dst_offset, u32 src_offset, int size)
 {
@@ -614,16 +720,24 @@ intel_context_migrate_copy(struct intel_context *ce,
 	return err;
 }
 
-static int emit_clear(struct i915_request *rq, u64 offset, int size, u32 value)
+static int emit_clear(struct i915_request *rq, u64 offset, int size,
+		      u32 value, bool is_lmem)
 {
-	const int ver = GRAPHICS_VER(rq->engine->i915);
+	struct drm_i915_private *i915 = rq->engine->i915;
+	const int ver = GRAPHICS_VER(i915);
+	u32 num_ccs_blks, ccs_ring_size;
+	int mocs = rq->engine->gt->mocs.uc_index << 1;
 	u32 *cs;
 
 	GEM_BUG_ON(size >> PAGE_SHIFT > S16_MAX);
 
 	offset += (u64)rq->engine->instance << 32;
 
-	cs = intel_ring_begin(rq, ver >= 8 ? 8 : 6);
+	/* Clear CCS only when value is 0 */
+	ccs_ring_size = (is_lmem && !value) ?
			calc_ctrl_surf_instr_size(i915, size) : 0;
+
+	cs = intel_ring_begin(rq, round_up(ver >= 8 ? 8 + ccs_ring_size : 6, 2));
 	if (IS_ERR(cs))
 		return PTR_ERR(cs);
 
@@ -646,6 +760,27 @@ static int emit_clear(struct i915_request *rq, u64 offset, int size, u32 value)
 		*cs++ = value;
 	}
 
+	if (is_lmem && HAS_FLAT_CCS(i915) && !value) {
+		num_ccs_blks = DIV_ROUND_UP(GET_CCS_BYTES(i915, size),
+					    NUM_CCS_BYTES_PER_BLOCK);
+
+		/*
+		 * Flat CCS surface can only be accessed via
+		 * XY_CTRL_SURF_COPY_BLT CMD and using indirect
+		 * mapping of associated LMEM.
+		 * We can clear ccs surface by writing all 0s,
+		 * so we will flush the previously cleared buffer
+		 * and use it as a source.
+		 */
+		cs = i915_flush_dw(cs, MI_FLUSH_LLC | MI_FLUSH_CCS);
+		cs = _i915_ctrl_surf_copy_blt(cs, offset, offset,
+					      DIRECT_ACCESS, INDIRECT_ACCESS,
+					      mocs, mocs, num_ccs_blks);
+		cs = i915_flush_dw(cs, MI_FLUSH_LLC | MI_FLUSH_CCS);
+
+		if (ccs_ring_size & 1)
+			*cs++ = MI_NOOP;
+	}
 	intel_ring_advance(rq, cs);
 	return 0;
 }
@@ -711,7 +846,7 @@ intel_context_migrate_clear(struct intel_context *ce,
 	if (err)
 		goto out_rq;
 
-	err = emit_clear(rq, offset, len, value);
+	err = emit_clear(rq, offset, len, value, is_lmem);
 
 	/* Arbitration is re-enabled between requests. */
 out_rq:
On Wed, 2022-03-02 at 03:23 +0530, Ramalingam C wrote:
[snip]
Since we should always interleave the ctrl_surf_copy_blt() on max CHUNK_SZ pieces of LMEM (See also patch 4/4), I figure we would never need to split the command since it can do 64M worth of LMEM in a single command vs a CHUNK_SZ of 8M. Instead perhaps an assert that CHUNK_SZ never exceeds the capability of the XY_CTRL_SURF_COPY_BLT?
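For illustration, such a guard could be a one-line compile-time check (a sketch built from the macros this patch adds; exact form and placement are of course up to the author):

	/*
	 * One XY_CTRL_SURF_COPY_BLT moves at most 1024 blocks of 256 bytes of
	 * CCS, i.e. the CCS for 64M of LMEM; a CHUNK_SZ (8M) chunk needs only
	 * 32K of CCS, so a single command per chunk always suffices.
	 */
	BUILD_BUG_ON(DIV_ROUND_UP(CHUNK_SZ, NUM_BYTES_PER_CCS_BYTE) >
		     NUM_CCS_BLKS_PER_XFER * NUM_CCS_BYTES_PER_BLOCK);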
Also I think it's important that we try to figure out whether we can use the XY_FAST_COLOR_BLT command to also clear CCS on DG2. That would save us a lot of code, and at least on DG1 (without CCS) it speeds up clearing significantly.
/Thomas
When a driver needs extra pages in ttm_tt, to facilitate such a requirement, a parameter called "extra_pages" is added for ttm_tt_init.
Signed-off-by: Ramalingam C <ramalingam.c@intel.com>
cc: Christian Koenig <christian.koenig@amd.com>
cc: Hellstrom Thomas <thomas.hellstrom@intel.com>
---
 drivers/gpu/drm/drm_gem_vram_helper.c      |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c    |  2 +-
 drivers/gpu/drm/qxl/qxl_ttm.c              |  2 +-
 drivers/gpu/drm/ttm/ttm_agp_backend.c      |  2 +-
 drivers/gpu/drm/ttm/ttm_tt.c               | 12 +++++++-----
 drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c |  2 +-
 include/drm/ttm/ttm_tt.h                   |  4 +++-
 7 files changed, 15 insertions(+), 11 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index dc7f938bfff2..123045b58fec 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -867,7 +867,7 @@ static struct ttm_tt *bo_driver_ttm_tt_create(struct ttm_buffer_object *bo,
 	if (!tt)
 		return NULL;
 
-	ret = ttm_tt_init(tt, bo, page_flags, ttm_cached);
+	ret = ttm_tt_init(tt, bo, page_flags, ttm_cached, 0);
 	if (ret < 0)
 		goto err_ttm_tt_init;
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index 45cc5837ce00..1a8262f5f692 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -283,7 +283,7 @@ static struct ttm_tt *i915_ttm_tt_create(struct ttm_buffer_object *bo,
 		i915_tt->is_shmem = true;
 	}
 
-	ret = ttm_tt_init(&i915_tt->ttm, bo, page_flags, caching);
+	ret = ttm_tt_init(&i915_tt->ttm, bo, page_flags, caching, 0);
 	if (ret)
 		goto err_free;
 
diff --git a/drivers/gpu/drm/qxl/qxl_ttm.c b/drivers/gpu/drm/qxl/qxl_ttm.c
index b2e33d5ba5d0..52156b54498f 100644
--- a/drivers/gpu/drm/qxl/qxl_ttm.c
+++ b/drivers/gpu/drm/qxl/qxl_ttm.c
@@ -113,7 +113,7 @@ static struct ttm_tt *qxl_ttm_tt_create(struct ttm_buffer_object *bo,
 	ttm = kzalloc(sizeof(struct ttm_tt), GFP_KERNEL);
 	if (ttm == NULL)
 		return NULL;
-	if (ttm_tt_init(ttm, bo, page_flags, ttm_cached)) {
+	if (ttm_tt_init(ttm, bo, page_flags, ttm_cached, 0)) {
 		kfree(ttm);
 		return NULL;
 	}
diff --git a/drivers/gpu/drm/ttm/ttm_agp_backend.c b/drivers/gpu/drm/ttm/ttm_agp_backend.c
index 6ddc16f0fe2b..d27691f2e451 100644
--- a/drivers/gpu/drm/ttm/ttm_agp_backend.c
+++ b/drivers/gpu/drm/ttm/ttm_agp_backend.c
@@ -134,7 +134,7 @@ struct ttm_tt *ttm_agp_tt_create(struct ttm_buffer_object *bo,
 	agp_be->mem = NULL;
 	agp_be->bridge = bridge;
 
-	if (ttm_tt_init(&agp_be->ttm, bo, page_flags, ttm_write_combined)) {
+	if (ttm_tt_init(&agp_be->ttm, bo, page_flags, ttm_write_combined, 0)) {
 		kfree(agp_be);
 		return NULL;
 	}
diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
index d234aab800a0..1a66d9fc589a 100644
--- a/drivers/gpu/drm/ttm/ttm_tt.c
+++ b/drivers/gpu/drm/ttm/ttm_tt.c
@@ -134,9 +134,10 @@ void ttm_tt_destroy(struct ttm_device *bdev, struct ttm_tt *ttm)
 static void ttm_tt_init_fields(struct ttm_tt *ttm,
			       struct ttm_buffer_object *bo,
			       uint32_t page_flags,
-			       enum ttm_caching caching)
+			       enum ttm_caching caching,
+			       unsigned long extra_pages)
 {
-	ttm->num_pages = PAGE_ALIGN(bo->base.size) >> PAGE_SHIFT;
+	ttm->num_pages = (PAGE_ALIGN(bo->base.size) >> PAGE_SHIFT) + extra_pages;
 	ttm->caching = ttm_cached;
 	ttm->page_flags = page_flags;
 	ttm->dma_address = NULL;
@@ -146,9 +147,10 @@ static void ttm_tt_init_fields(struct ttm_tt *ttm,
 }
 
 int ttm_tt_init(struct ttm_tt *ttm, struct ttm_buffer_object *bo,
-		uint32_t page_flags, enum ttm_caching caching)
+		uint32_t page_flags, enum ttm_caching caching,
+		unsigned long extra_pages)
 {
-	ttm_tt_init_fields(ttm, bo, page_flags, caching);
+	ttm_tt_init_fields(ttm, bo, page_flags, caching, extra_pages);
 
 	if (ttm_tt_alloc_page_directory(ttm)) {
 		pr_err("Failed allocating page table\n");
@@ -180,7 +182,7 @@ int ttm_sg_tt_init(struct ttm_tt *ttm, struct ttm_buffer_object *bo,
 {
 	int ret;
 
-	ttm_tt_init_fields(ttm, bo, page_flags, caching);
+	ttm_tt_init_fields(ttm, bo, page_flags, caching, 0);
 
 	if (page_flags & TTM_TT_FLAG_EXTERNAL)
 		ret = ttm_sg_tt_alloc_page_directory(ttm);
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
index b84ecc6d6611..4e3938e62c08 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
@@ -517,7 +517,7 @@ static struct ttm_tt *vmw_ttm_tt_create(struct ttm_buffer_object *bo,
				  ttm_cached);
 	else
 		ret = ttm_tt_init(&vmw_be->dma_ttm, bo, page_flags,
-				  ttm_cached);
+				  ttm_cached, 0);
 	if (unlikely(ret != 0))
 		goto out_no_init;
 
diff --git a/include/drm/ttm/ttm_tt.h b/include/drm/ttm/ttm_tt.h
index f20832139815..17a0310e8aaa 100644
--- a/include/drm/ttm/ttm_tt.h
+++ b/include/drm/ttm/ttm_tt.h
@@ -140,6 +140,7 @@ int ttm_tt_create(struct ttm_buffer_object *bo, bool zero_alloc);
  * @bo: The buffer object we create the ttm for.
  * @page_flags: Page flags as identified by TTM_TT_FLAG_XX flags.
  * @caching: the desired caching state of the pages
+ * @extra_pages: Extra pages needed for the driver.
  *
  * Create a struct ttm_tt to back data with system memory pages.
  * No pages are actually allocated.
@@ -147,7 +148,8 @@ int ttm_tt_create(struct ttm_buffer_object *bo, bool zero_alloc);
  * NULL: Out of memory.
  */
 int ttm_tt_init(struct ttm_tt *ttm, struct ttm_buffer_object *bo,
-		uint32_t page_flags, enum ttm_caching caching);
+		uint32_t page_flags, enum ttm_caching caching,
+		unsigned long extra_pages);
 int ttm_sg_tt_init(struct ttm_tt *ttm_dma, struct ttm_buffer_object *bo,
		   uint32_t page_flags, enum ttm_caching caching);
On Wed, 2022-03-02 at 03:23 +0530, Ramalingam C wrote:
When a driver needs extra pages in ttm_tt, to facilitate such a requirement, a parameter called "extra_pages" is added for ttm_tt_init.
nit: Please use imperative wording in commit title and description, "Add a parameter to add extra pages.."
Otherwise LGTM. Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
On 01.03.22 at 22:53, Ramalingam C wrote:
[snip]
With the nits pointed out by Thomas, the patch is Reviewed-by: Christian König <christian.koenig@amd.com> as well.
Let me know through which branch you want to push this upstream (i915 or drm-misc-next).
Thanks, Christian.
On Xe-HP and later devices, we use dedicated compression control state (CCS) stored in local memory for each surface, to support the 3D and media compression formats.
The memory required for the CCS of the entire local memory is 1/256 of the local memory size. So before the kernel boots, the required memory is reserved for the CCS data, and a secure register is programmed with the CCS base address.
So when we allocate an object in local memory, we don't need to explicitly allocate space for the CCS data. But when we evict the object into smem, we need smem space of obj_size + (obj_size/256) to hold the compression-related data along with the object.

Hence, when we create smem backing for an object that can also be placed in lmem, we create it with the extra space.
Signed-off-by: Ramalingam C <ramalingam.c@intel.com>
cc: Christian Koenig <christian.koenig@amd.com>
cc: Hellstrom Thomas <thomas.hellstrom@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index 1a8262f5f692..c7a36861c38d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -20,6 +20,7 @@
 #include "gem/i915_gem_ttm.h"
 #include "gem/i915_gem_ttm_move.h"
 #include "gem/i915_gem_ttm_pm.h"
+#include "gt/intel_gpu_commands.h"
 
 #define I915_TTM_PRIO_PURGE 0
 #define I915_TTM_PRIO_NO_PAGES 1
@@ -255,12 +256,27 @@ static const struct i915_refct_sgt_ops tt_rsgt_ops = {
 	.release = i915_ttm_tt_release
 };
 
+static inline bool
+i915_gem_object_has_lmem_placement(struct drm_i915_gem_object *obj)
+{
+	int i;
+
+	for (i = 0; i < obj->mm.n_placements; i++)
+		if (obj->mm.placements[i]->type == INTEL_MEMORY_LOCAL)
+			return true;
+
+	return false;
+}
+
 static struct ttm_tt *i915_ttm_tt_create(struct ttm_buffer_object *bo,
					 uint32_t page_flags)
 {
+	struct drm_i915_private *i915 = container_of(bo->bdev, typeof(*i915),
+						     bdev);
 	struct ttm_resource_manager *man =
		ttm_manager_type(bo->bdev, bo->resource->mem_type);
 	struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
+	unsigned long ccs_pages = 0;
 	enum ttm_caching caching;
 	struct i915_ttm_tt *i915_tt;
 	int ret;
@@ -283,7 +299,12 @@ static struct ttm_tt *i915_ttm_tt_create(struct ttm_buffer_object *bo,
 		i915_tt->is_shmem = true;
 	}
 
-	ret = ttm_tt_init(&i915_tt->ttm, bo, page_flags, caching, 0);
+	if (HAS_FLAT_CCS(i915) && i915_gem_object_has_lmem_placement(obj))
+		ccs_pages = DIV_ROUND_UP(DIV_ROUND_UP(bo->base.size,
+						      NUM_BYTES_PER_CCS_BYTE),
+					 PAGE_SIZE);
+
+	ret = ttm_tt_init(&i915_tt->ttm, bo, page_flags, caching, ccs_pages);
 	if (ret)
 		goto err_free;
On Wed, 2022-03-02 at 03:23 +0530, Ramalingam C wrote:
[snip]
Nit: Again, imperative wording.
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
When we swap out a local memory object on a flat-ccs capable platform, we need to capture the CCS data along with the main memory, and we need to restore it when we swap the content back in.

When an lmem object is swapped into a smem object, the smem object will have the extra pages required to hold the CCS data corresponding to the lmem main memory. So the main memory of the lmem object is copied into the initial pages of the smem object, and then the CCS data corresponding to that main memory is copied into the subsequent pages of smem. The CCS data is 1/256 of the lmem size.

Swapin happens in exactly the reverse order: first the main memory of the lmem object is restored from the smem object's initial pages, and then the CCS data is restored from the subsequent pages of smem.

Extracting and restoring the CCS data is done through a special cmd called XY_CTRL_SURF_COPY_BLT.
v2: Fixing the ccs handling
Signed-off-by: Ramalingam C <ramalingam.c@intel.com>
---
 drivers/gpu/drm/i915/gt/intel_migrate.c | 184 +++++++++++++++++++++---
 1 file changed, 167 insertions(+), 17 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/intel_migrate.c b/drivers/gpu/drm/i915/gt/intel_migrate.c
index 330fcdc3e0cf..73ac7382aeb6 100644
--- a/drivers/gpu/drm/i915/gt/intel_migrate.c
+++ b/drivers/gpu/drm/i915/gt/intel_migrate.c
@@ -341,12 +341,9 @@ static int emit_no_arbitration(struct i915_request *rq)
 	return 0;
 }
 
-static int emit_pte(struct i915_request *rq,
-		    struct sgt_dma *it,
+static int emit_pte(struct i915_request *rq, struct sgt_dma *it,
		    enum i915_cache_level cache_level,
-		    bool is_lmem,
-		    u64 offset,
-		    int length)
+		    bool is_lmem, u64 offset, int length)
 {
 	bool has_64K_pages = HAS_64K_PAGES(rq->engine->i915);
 	const u64 encode = rq->context->vm->pte_encode(0, cache_level,
@@ -573,14 +570,54 @@ static u32 *_i915_ctrl_surf_copy_blt(u32 *cmd, u64 src_addr, u64 dst_addr,
 	return cmd;
 }
 
+static int emit_ccs_copy(struct i915_request *rq,
+			 bool dst_is_lmem, u32 dst_offset,
+			 bool src_is_lmem, u32 src_offset, int size)
+{
+	struct drm_i915_private *i915 = rq->engine->i915;
+	int mocs = rq->engine->gt->mocs.uc_index << 1;
+	u32 num_ccs_blks, ccs_ring_size;
+	u8 src_access, dst_access;
+	u32 *cs;
+
+	GEM_BUG_ON(!(src_is_lmem ^ dst_is_lmem) || !HAS_FLAT_CCS(i915));
+
+	ccs_ring_size = calc_ctrl_surf_instr_size(i915, size);
+	WARN_ON(!ccs_ring_size);
+
+	cs = intel_ring_begin(rq, round_up(ccs_ring_size, 2));
+	if (IS_ERR(cs))
+		return PTR_ERR(cs);
+
+	num_ccs_blks = DIV_ROUND_UP(GET_CCS_BYTES(i915, size),
+				    NUM_CCS_BYTES_PER_BLOCK);
+
+	src_access = !src_is_lmem && dst_is_lmem;
+	dst_access = !src_access;
+
+	cs = i915_flush_dw(cs, MI_FLUSH_LLC | MI_FLUSH_CCS);
+	cs = _i915_ctrl_surf_copy_blt(cs, src_offset, dst_offset,
+				      src_access, dst_access,
+				      mocs, mocs, num_ccs_blks);
+	cs = i915_flush_dw(cs, MI_FLUSH_LLC | MI_FLUSH_CCS);
+	if (ccs_ring_size & 1)
+		*cs++ = MI_NOOP;
+
+	intel_ring_advance(rq, cs);
+
+	return 0;
+}
+
 static int emit_copy(struct i915_request *rq,
-		     u32 dst_offset, u32 src_offset, int size)
+		     bool dst_is_lmem, u32 dst_offset,
+		     bool src_is_lmem, u32 src_offset, int size)
 {
 	const int ver = GRAPHICS_VER(rq->engine->i915);
 	u32 instance = rq->engine->instance;
 	u32 *cs;
 
 	cs = intel_ring_begin(rq, ver >= 8 ? 10 : 6);
+
 	if (IS_ERR(cs))
 		return PTR_ERR(cs);
 
@@ -620,6 +657,18 @@ static int emit_copy(struct i915_request *rq,
 	return 0;
 }
 
+static int scatter_list_length(struct scatterlist *sg)
+{
+	int len = 0;
+
+	while (sg) {
+		len += sg_dma_len(sg);
+		sg = sg_next(sg);
+	};
+
+	return len;
+}
+
 int
 intel_context_migrate_copy(struct intel_context *ce,
			   const struct i915_deps *deps,
@@ -632,7 +681,10 @@ intel_context_migrate_copy(struct intel_context *ce,
			   struct i915_request **out)
 {
 	struct sgt_dma it_src = sg_sgt(src), it_dst = sg_sgt(dst);
+	struct drm_i915_private *i915 = ce->engine->i915;
+	u32 src_sz, dst_sz, ccs_bytes = 0, bytes_to_cpy;
 	struct i915_request *rq;
+	bool ccs_copy = false;
 	int err;
 
 	GEM_BUG_ON(ce->vm != ce->engine->gt->migrate.context->vm);
@@ -640,9 +692,28 @@ intel_context_migrate_copy(struct intel_context *ce,
 
 	GEM_BUG_ON(ce->ring->size < SZ_64K);
 
+	if (HAS_FLAT_CCS(i915) && src_is_lmem ^ dst_is_lmem) {
+		src_sz = scatter_list_length(src);
+		dst_sz = scatter_list_length(dst);
+
+		if (src_is_lmem)
+			bytes_to_cpy = src_sz;
+		else if (dst_is_lmem)
+			bytes_to_cpy = dst_sz;
+
+		/*
+		 * When there is an eviction of ccs needed, smem will have the
+		 * extra pages for the ccs data.
+		 *
+		 * TO-DO: Want to move the size mismatch check to a WARN_ON,
+		 * but still we have some requests of smem->lmem with same size.
+		 * Need to fix it.
+		 */
+		ccs_bytes = src_sz != dst_sz ? GET_CCS_BYTES(i915, bytes_to_cpy) : 0;
+	}
+
 	do {
-		u32 src_offset, dst_offset;
-		int len;
+		u32 src_offset, dst_offset, copy_sz;
 
 		rq = i915_request_create(ce);
 		if (IS_ERR(rq)) {
@@ -682,27 +753,82 @@ intel_context_migrate_copy(struct intel_context *ce,
			dst_offset = 2 * CHUNK_SZ;
		}
 
-		len = emit_pte(rq, &it_src, src_cache_level, src_is_lmem,
-			       src_offset, CHUNK_SZ);
-		if (len <= 0) {
-			err = len;
+		if (ccs_copy) {
+			/* Flat-CCS: CCS data copy */
+			if (!src_is_lmem) { /* src is smem */
+				/*
+				 * We can only copy the ccs data corresponding
+				 * to the CHUNK_SZ of lmem, which is
+				 * GET_CCS_BYTES(i915, CHUNK_SZ)
+				 */
+				src_sz = min_t(int, bytes_to_cpy,
+					       GET_CCS_BYTES(i915, CHUNK_SZ));
+				dst_sz = CHUNK_SZ;
+			} else {
+				src_sz = CHUNK_SZ;
+				dst_sz = min_t(int, bytes_to_cpy,
+					       GET_CCS_BYTES(i915, CHUNK_SZ));
+			}
+		} else if (!ccs_copy && ccs_bytes) {
+			/* Flat-CCS: Main memory copy */
+			if (!src_is_lmem) {
+				src_sz = min_t(int, bytes_to_cpy, CHUNK_SZ);
+				dst_sz = CHUNK_SZ;
+			} else {
+				dst_sz = min_t(int, bytes_to_cpy, CHUNK_SZ);
+				src_sz = CHUNK_SZ;
+			}
+		} else { /* ccs handling is not required */
+			src_sz = CHUNK_SZ;
+		}
+
+		src_sz = emit_pte(rq, &it_src, src_cache_level, src_is_lmem,
+				  src_offset, src_sz);
+		if (src_sz <= 0) {
+			err = src_sz;
 			goto out_rq;
 		}
 
+		if (!ccs_bytes)
+			dst_sz = src_sz;
+
 		err = emit_pte(rq, &it_dst, dst_cache_level, dst_is_lmem,
-			       dst_offset, len);
+			       dst_offset, dst_sz);
 		if (err < 0)
			goto out_rq;
-		if (err < len) {
+		if (err < dst_sz && !ccs_bytes) {
 			err = -EINVAL;
			goto out_rq;
		}
 
+		dst_sz = err;
+
 		err = rq->engine->emit_flush(rq, EMIT_INVALIDATE);
 		if (err)
			goto out_rq;
 
-		err = emit_copy(rq, dst_offset, src_offset, len);
+		if (ccs_copy) {
+			/*
+			 * Using max of src_sz and dst_sz, as we need to
+			 * pass the lmem size corresponding to the ccs
+			 * blocks we need to handle.
+			 */
+			copy_sz = max_t(int, src_sz, dst_sz);
+			err = emit_ccs_copy(rq, dst_is_lmem, dst_offset,
+					    src_is_lmem, src_offset,
+					    copy_sz);
+
+			/* Converting back to ccs bytes */
+			copy_sz = GET_CCS_BYTES(i915, copy_sz);
+		} else {
+			WARN(src_sz != dst_sz, "%d != %d", src_sz, dst_sz);
+			copy_sz = src_sz;
+			err = emit_copy(rq, dst_is_lmem, dst_offset,
+					src_is_lmem, src_offset, copy_sz);
+		}
+
+		if (!err && ccs_bytes)
+			bytes_to_cpy -= copy_sz;
 
 		/* Arbitration is re-enabled between requests. */
 out_rq:
@@ -710,9 +836,33 @@ intel_context_migrate_copy(struct intel_context *ce,
			i915_request_put(*out);
		*out = i915_request_get(rq);
		i915_request_add(rq);
-		if (err || !it_src.sg || !sg_dma_len(it_src.sg))
-			break;
+		if (err || !it_src.sg || !sg_dma_len(it_src.sg) ||
+		    !it_dst.sg || !sg_dma_len(it_src.sg)) {
+			if (err || !ccs_bytes)
+				break;
+
+			GEM_BUG_ON(bytes_to_cpy);
+			if (ccs_copy) {
+				break;
+			} else if (ccs_bytes) {
+				if (src_is_lmem) {
+					WARN_ON(it_src.sg && sg_dma_len(it_src.sg));
+					it_src = sg_sgt(src);
+				} else {
+					WARN_ON(it_dst.sg && sg_dma_len(it_dst.sg));
+					it_dst = sg_sgt(dst);
+				}
+				bytes_to_cpy = ccs_bytes;
+				ccs_copy = true;
+
+				continue;
+			} else {
+				DRM_ERROR("Invalid state\n");
+				err = -EINVAL;
+				break;
+			}
+		}
 		cond_resched();
 	} while (1);
On Wed, 2022-03-02 at 03:23 +0530, Ramalingam C wrote:
[snip]
diff --git a/drivers/gpu/drm/i915/gt/intel_migrate.c b/drivers/gpu/drm/i915/gt/intel_migrate.c
index 330fcdc3e0cf..73ac7382aeb6 100644
--- a/drivers/gpu/drm/i915/gt/intel_migrate.c
+++ b/drivers/gpu/drm/i915/gt/intel_migrate.c
@@ -341,12 +341,9 @@ static int emit_no_arbitration(struct i915_request *rq)
 	return 0;
 }
 
-static int emit_pte(struct i915_request *rq,
-		    struct sgt_dma *it,
+static int emit_pte(struct i915_request *rq, struct sgt_dma *it,
		    enum i915_cache_level cache_level,
-		    bool is_lmem,
-		    u64 offset,
-		    int length)
+		    bool is_lmem, u64 offset, int length)
Above change seems unrelated?
[snip]
 static int emit_copy(struct i915_request *rq,
-		     u32 dst_offset, u32 src_offset, int size)
+		     bool dst_is_lmem, u32 dst_offset,
+		     bool src_is_lmem, u32 src_offset, int size)
 {
 	const int ver = GRAPHICS_VER(rq->engine->i915);
 	u32 instance = rq->engine->instance;
 	u32 *cs;
 
 	cs = intel_ring_begin(rq, ver >= 8 ? 10 : 6);
+
 	if (IS_ERR(cs))
 		return PTR_ERR(cs);
Changes to emit_copy() above seem unrelated? Also for the verbatim copy we need to adjust the compression flags in the main copy blit.
@@ -620,6 +657,18 @@ static int emit_copy(struct i915_request *rq,
 	return 0;
 }
 
+static int scatter_list_length(struct scatterlist *sg)
+{
+	int len = 0;
+
+	while (sg) {
Terminate loop if (sg_dma_len() == 0) ?
+		len += sg_dma_len(sg);
+		sg = sg_next(sg);
+	};
+
+	return len;
+}
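For reference, a minimal sketch of scatter_list_length() with the suggested termination (assuming a dma-mapped scatterlist ends at the first entry with sg_dma_len() == 0):

static int scatter_list_length(struct scatterlist *sg)
{
	int len = 0;

	/* Stop at the end of the list or at the first unmapped entry. */
	while (sg && sg_dma_len(sg)) {
		len += sg_dma_len(sg);
		sg = sg_next(sg);
	}

	return len;
}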
 int
 intel_context_migrate_copy(struct intel_context *ce,
			    const struct i915_deps *deps,
@@ -632,7 +681,10 @@ intel_context_migrate_copy(struct intel_context *ce,
			    struct i915_request **out)
Perhaps add a parameter "verbatim" to indicate whether we want to do a verbatim copy or not. That way we can differentiate between eviction (verbatim) and migration (ordinary blit)?
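That is, roughly (sketching the signature from this patch; the new parameter name is hypothetical):

int
intel_context_migrate_copy(struct intel_context *ce,
			   const struct i915_deps *deps,
			   struct scatterlist *src,
			   enum i915_cache_level src_cache_level,
			   bool src_is_lmem,
			   struct scatterlist *dst,
			   enum i915_cache_level dst_cache_level,
			   bool dst_is_lmem,
			   bool verbatim, /* eviction: carry CCS data alongside */
			   struct i915_request **out);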
[snip]
This loop was hard to understand already before this patch. Could we try to break out some loop functionality into separate functions?
Also if I understand the flow correctly, we're first blitting all the chunks of the main surface, and after that the CCS data? However, for the control surface blit's indirect addressing of LMEM to work, I figure *all* main surface LMEM pages for which we blit control data need to be present in the CHUNK_SZ window VMA, which is only true for small buffers. Hence we need to interleave main surface and CCS copies when we need to split the main surface into chunks, perhaps something like:
for_each_chunk() {
	disable_preemption();
	emit_pte(lmem);
	emit_pte(system);
	xy_fast_copy_blt();
	emit_pte(system_ccs_region); // Still use the system window for this
	tlb_flush(); // Flush the updated system ptes
	xy_ctrl_surf_copy_blt();
	enable_preemption();
}
And also check whether we need to do the ctrl_surface blit first depending on blit direction (according to the docs).
Thanks, Thomas