Hi,
On Mon, Jan 11, 2021 at 09:53:56AM +0100, Christian König wrote:
> On 08.01.21 at 22:58, Jeremy Cline wrote:
> > dcn20_resource_construct() includes a number of kzalloc(GFP_KERNEL)
> > calls which can sleep, but kernel_fpu_begin() disables preemption, and
> > sleeping in that context is invalid.
> >
> > The only places the FPU appears to be required are in the
> > init_soc_bounding_box() function and when calculating the
> > {min,max}_fill_clk_mhz values. Narrow the scope to just these two parts
> > to avoid sleeping while the FPU is in use.
> >
> > Fixes: 7a8a3430be15 ("amdgpu: Wrap FPU dependent functions in dc20")
> > Cc: Timothy Pearson <tpearson@raptorengineering.com>
> > Signed-off-by: Jeremy Cline <jcline@redhat.com>
> Good catch, but I would rather replace the kzalloc(GFP_KERNEL) with a
> kzalloc(GFP_ATOMIC) for now.
>
> We have tons of problems with these DC_FP_START()/DC_FP_END()
> annotations and are even in the process of moving them out of the file,
> because the compiler tends to clobber FP registers even outside of the
> annotated ranges on some architectures.
Thanks for the review. Is it acceptable to move the DC_FP_END() annotation up to the last usage? Keeping it where it is is probably doable, but it covers things like the call to resource_construct(), which makes use of struct resource_create_funcs. I'm guessing only a subset of those implementations get called via this function, but having an interface that sometimes can't sleep doesn't sound appealing.
Happy to do it, but before I go down that road I just wanted to make sure that's what you had in mind.
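
Concretely, the shape I have in mind is roughly the following (a hand-written sketch with most of the function elided; names and exact placement are approximate rather than the literal dcn20_resource.c code):

	static bool dcn20_resource_construct(...)
	{
		...
		DC_FP_START();	/* unchanged, opened early in the function */
		...
		/* last place the FPU is actually needed */
		ranges.reader_wm_sets[i].max_fill_clk_mhz =
				loaded_bb->clock_limits[i].dram_speed_mts / 16;
		DC_FP_END();	/* moved up to right after the last FP use */

		/*
		 * Everything below may sleep again; in particular
		 * resource_construct() invokes the struct resource_create_funcs
		 * hooks, some of which allocate with GFP_KERNEL.
		 */
		if (!resource_construct(num_virtual_links, dc, &pool->base,
				&res_create_funcs))
			goto create_fail;
		...
		return true;

	create_fail:
		dcn20_resource_destruct(pool);
		return false;
	}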
Thanks,
Jeremy
 drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
index e04ecf0fc0db..a4fa5bf016c1 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
@@ -3622,6 +3622,7 @@ static bool init_soc_bounding_box(struct dc *dc,
 	if (bb && ASICREV_IS_NAVI12_P(dc->ctx->asic_id.hw_internal_rev)) {
 		int i;
 
+		DC_FP_START();
 		dcn2_0_nv12_soc.sr_exit_time_us =
 				fixed16_to_double_to_cpu(bb->sr_exit_time_us);
 		dcn2_0_nv12_soc.sr_enter_plus_exit_time_us =
@@ -3721,6 +3722,7 @@ static bool init_soc_bounding_box(struct dc *dc,
 			dcn2_0_nv12_soc.clock_limits[i].dram_speed_mts =
 				fixed16_to_double_to_cpu(bb->clock_limits[i].dram_speed_mts);
 		}
+		DC_FP_END();
 	}
 
 	if (pool->base.pp_smu) {
@@ -3777,8 +3779,6 @@ static bool dcn20_resource_construct(
 	enum dml_project dml_project_version =
 			get_dml_project_version(ctx->asic_id.hw_internal_rev);
 
-	DC_FP_START();
-
 	ctx->dc_bios->regs = &bios_regs;
 
 	pool->base.funcs = &dcn20_res_pool_funcs;
@@ -3959,8 +3959,10 @@ static bool dcn20_resource_construct(
 			ranges.reader_wm_sets[i].wm_inst = i;
 			ranges.reader_wm_sets[i].min_drain_clk_mhz = PP_SMU_WM_SET_RANGE_CLK_UNCONSTRAINED_MIN;
 			ranges.reader_wm_sets[i].max_drain_clk_mhz = PP_SMU_WM_SET_RANGE_CLK_UNCONSTRAINED_MAX;
+			DC_FP_START();
 			ranges.reader_wm_sets[i].min_fill_clk_mhz = (i > 0) ? (loaded_bb->clock_limits[i - 1].dram_speed_mts / 16) + 1 : 0;
 			ranges.reader_wm_sets[i].max_fill_clk_mhz = loaded_bb->clock_limits[i].dram_speed_mts / 16;
+			DC_FP_END();
 
 			ranges.num_reader_wm_sets = i + 1;
 		}
@@ -4125,12 +4127,10 @@ static bool dcn20_resource_construct(
 		pool->base.oem_device = NULL;
 	}
 
-	DC_FP_END();
 	return true;
 
 create_fail:
 
-	DC_FP_END();
 	dcn20_resource_destruct(pool);
 
 	return false;
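
For completeness, my understanding of Christian's GFP_ATOMIC alternative is that the wide DC_FP_START()/DC_FP_END() region in dcn20_resource_construct() would stay where it is, and each kzalloc() reached inside it would stop sleeping instead; roughly this kind of change at every affected call site (the variable name here is purely illustrative):

	-	foo = kzalloc(sizeof(*foo), GFP_KERNEL);
	+	foo = kzalloc(sizeof(*foo), GFP_ATOMIC);	/* must not sleep while preemption is disabled */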