Based on the analysis of the bug in [1], the best course of action seems to be moving off of DRM private objects and back to subclassing DRM atomic state instead.
This patch series implements that change, but not yet the other changes suggested in the threads on that bug - those will come later.
CCing dri-devel per Daniel's suggestion, since this issue surfaced some interesting misuse of private objects.
[1] https://bugzilla.kernel.org/show_bug.cgi?id=207383
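As a reference point, here is a minimal sketch of what subclassing DRM atomic state looks like, as opposed to hanging driver-wide data off a drm_private_obj. The names (dm_atomic_state, dm_atomic_state_alloc, the dc_state member) are illustrative and assume the usual container_of pattern rather than being lifted from the final patch:

struct dm_atomic_state {
	struct drm_atomic_state base;
	struct dc_state *context;
};

#define to_dm_atomic_state(x) container_of(x, struct dm_atomic_state, base)

static struct drm_atomic_state *dm_atomic_state_alloc(struct drm_device *dev)
{
	struct dm_atomic_state *state = kzalloc(sizeof(*state), GFP_KERNEL);

	if (!state)
		return NULL;

	if (drm_atomic_state_init(dev, &state->base) < 0) {
		kfree(state);
		return NULL;
	}

	return &state->base;
}

static void dm_atomic_state_free(struct drm_atomic_state *state)
{
	struct dm_atomic_state *dm_state = to_dm_atomic_state(state);

	/* Drop the driver-private DC context before releasing core state. */
	if (dm_state->context)
		dc_release_state(dm_state->context);

	drm_atomic_state_default_release(state);
	kfree(dm_state);
}

static const struct drm_mode_config_funcs dm_mode_config_funcs = {
	/* .fb_create, .atomic_check, .atomic_commit, .atomic_state_clear elided */
	.atomic_state_alloc = dm_atomic_state_alloc,
	.atomic_state_free = dm_atomic_state_free,
};

The driver-wide state then travels with the drm_atomic_state itself, so it is duplicated, committed and freed together with the rest of the atomic state rather than being managed through a private object.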
Nicholas Kazlauskas (7):
  drm/amd/display: Store tiling_flags and tmz_surface on dm_plane_state
  drm/amd/display: Reset plane when tiling flags change
  drm/amd/display: Avoid using unvalidated tiling_flags and tmz_surface in prepare_planes
  drm/amd/display: Use validated tiling_flags and tmz_surface in commit_tail
  drm/amd/display: Reset plane for anything that's not a FAST update
  drm/amd/display: Drop dm_determine_update_type_for_commit
  drm/amd/display: Replace DRM private objects with subclassed DRM atomic state
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 967 ++++++------------
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h |  13 +-
 2 files changed, 343 insertions(+), 637 deletions(-)
[Why] Store these in advance so we can reuse them later in commit_tail without having to reserve the fbo again.
These will also be used for checking for tiling changes when deciding to reset the plane or not.
[How] This change should mostly be a refactor. Only commit check is affected for now; the get_fb_info calls in prepare_planes and commit_tail will be dropped in follow-up patches.
This runs a prepass loop once we think all planes have been added to the context and replaces the get_fb_info calls there with reads of the values cached on the dm_plane_state instead.
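For illustration only, a rough sketch of the intended end state (dm_plane_cache_fb_flags is a made-up name; the helpers it calls are the ones already used in this file): the flags are sampled once under reservation during atomic check, and later code just reads the cached copies on dm_plane_state instead of reserving the BO again.

/* Sketch: sample the flags once in atomic check and cache them. */
static int dm_plane_cache_fb_flags(struct drm_plane_state *new_state)
{
	struct dm_plane_state *dm_state = to_dm_plane_state(new_state);
	struct amdgpu_bo *rbo;
	int r;

	if (!new_state->fb) {
		dm_state->tiling_flags = 0;
		dm_state->tmz_surface = false;
		return 0;
	}

	rbo = gem_to_amdgpu_bo(new_state->fb->obj[0]);
	r = amdgpu_bo_reserve(rbo, false);
	if (r)
		return r;

	amdgpu_bo_get_tiling_flags(rbo, &dm_state->tiling_flags);
	dm_state->tmz_surface = amdgpu_bo_encrypted(rbo);
	amdgpu_bo_unreserve(rbo);

	/*
	 * commit_tail/prepare_planes then read dm_state->tiling_flags and
	 * dm_state->tmz_surface directly, with no further BO reservation.
	 */
	return 0;
}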
Cc: Bhawanpreet Lakha <Bhawanpreet.Lakha@amd.com>
Cc: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
Signed-off-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
---
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 60 +++++++++++--------
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h |  2 +
 2 files changed, 37 insertions(+), 25 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 8d64f5fde7e2..7cc5ab90ce13 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -3700,8 +3700,17 @@ static int fill_dc_scaling_info(const struct drm_plane_state *state,
 static int get_fb_info(const struct amdgpu_framebuffer *amdgpu_fb,
 		       uint64_t *tiling_flags, bool *tmz_surface)
 {
-	struct amdgpu_bo *rbo = gem_to_amdgpu_bo(amdgpu_fb->base.obj[0]);
-	int r = amdgpu_bo_reserve(rbo, false);
+	struct amdgpu_bo *rbo;
+	int r;
+
+	if (!amdgpu_fb) {
+		*tiling_flags = 0;
+		*tmz_surface = false;
+		return 0;
+	}
+
+	rbo = gem_to_amdgpu_bo(amdgpu_fb->base.obj[0]);
+	r = amdgpu_bo_reserve(rbo, false);
 
 	if (unlikely(r)) {
 		/* Don't show error message when returning -ERESTARTSYS */
@@ -4124,13 +4133,10 @@ static int fill_dc_plane_attributes(struct amdgpu_device *adev,
 			    struct drm_crtc_state *crtc_state)
 {
 	struct dm_crtc_state *dm_crtc_state = to_dm_crtc_state(crtc_state);
-	const struct amdgpu_framebuffer *amdgpu_fb =
-		to_amdgpu_framebuffer(plane_state->fb);
+	struct dm_plane_state *dm_plane_state = to_dm_plane_state(plane_state);
 	struct dc_scaling_info scaling_info;
 	struct dc_plane_info plane_info;
-	uint64_t tiling_flags;
 	int ret;
-	bool tmz_surface = false;
 	bool force_disable_dcc = false;
 
 	ret = fill_dc_scaling_info(plane_state, &scaling_info);
@@ -4142,15 +4148,12 @@ static int fill_dc_plane_attributes(struct amdgpu_device *adev,
 	dc_plane_state->clip_rect = scaling_info.clip_rect;
 	dc_plane_state->scaling_quality = scaling_info.scaling_quality;
 
-	ret = get_fb_info(amdgpu_fb, &tiling_flags, &tmz_surface);
-	if (ret)
-		return ret;
-
 	force_disable_dcc = adev->asic_type == CHIP_RAVEN && adev->in_suspend;
-	ret = fill_dc_plane_info_and_addr(adev, plane_state, tiling_flags,
+	ret = fill_dc_plane_info_and_addr(adev, plane_state,
+					  dm_plane_state->tiling_flags,
 					  &plane_info,
 					  &dc_plane_state->address,
-					  tmz_surface,
+					  dm_plane_state->tmz_surface,
 					  force_disable_dcc);
 	if (ret)
 		return ret;
@@ -5753,6 +5756,10 @@ dm_drm_plane_duplicate_state(struct drm_plane *plane)
 		dc_plane_state_retain(dm_plane_state->dc_state);
 	}
 
+	/* Framebuffer hasn't been updated yet, so retain old flags. */
+	dm_plane_state->tiling_flags = old_dm_plane_state->tiling_flags;
+	dm_plane_state->tmz_surface = old_dm_plane_state->tmz_surface;
+
 	return &dm_plane_state->base;
 }
 
@@ -8557,13 +8564,9 @@ dm_determine_update_type_for_commit(struct amdgpu_display_manager *dm,
 			continue;
 
 		for_each_oldnew_plane_in_state(state, plane, old_plane_state, new_plane_state, j) {
-			const struct amdgpu_framebuffer *amdgpu_fb =
-				to_amdgpu_framebuffer(new_plane_state->fb);
 			struct dc_plane_info *plane_info = &bundle->plane_infos[num_plane];
 			struct dc_flip_addrs *flip_addr = &bundle->flip_addrs[num_plane];
 			struct dc_scaling_info *scaling_info = &bundle->scaling_infos[num_plane];
-			uint64_t tiling_flags;
-			bool tmz_surface = false;
 
 			new_plane_crtc = new_plane_state->crtc;
 			new_dm_plane_state = to_dm_plane_state(new_plane_state);
@@ -8610,16 +8613,12 @@ dm_determine_update_type_for_commit(struct amdgpu_display_manager *dm,
 
 			bundle->surface_updates[num_plane].scaling_info = scaling_info;
 
-			if (amdgpu_fb) {
-				ret = get_fb_info(amdgpu_fb, &tiling_flags, &tmz_surface);
-				if (ret)
-					goto cleanup;
-
+			if (new_plane_state->fb) {
 				ret = fill_dc_plane_info_and_addr(
-					dm->adev, new_plane_state, tiling_flags,
-					plane_info,
-					&flip_addr->address, tmz_surface,
-					false);
+					dm->adev, new_plane_state,
+					new_dm_plane_state->tiling_flags,
+					plane_info, &flip_addr->address,
+					new_dm_plane_state->tmz_surface, false);
 				if (ret)
 					goto cleanup;
 
@@ -8833,6 +8832,17 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
 		}
 	}
 
+	/* Prepass for updating tiling flags on new planes. */
+	for_each_new_plane_in_state(state, plane, new_plane_state, i) {
+		struct dm_plane_state *new_dm_plane_state = to_dm_plane_state(new_plane_state);
+		struct amdgpu_framebuffer *new_afb = to_amdgpu_framebuffer(new_plane_state->fb);
+
+		ret = get_fb_info(new_afb, &new_dm_plane_state->tiling_flags,
+				  &new_dm_plane_state->tmz_surface);
+		if (ret)
+			goto fail;
+	}
+
 	/* Remove exiting planes if they are modified */
 	for_each_oldnew_plane_in_state_reverse(state, plane, old_plane_state, new_plane_state, i) {
 		ret = dm_update_plane_state(dc, state, plane,
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
index 5b6f879a108c..ad025f5cfd3e 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
@@ -409,6 +409,8 @@ struct dc_plane_state;
 struct dm_plane_state {
	struct drm_plane_state base;
	struct dc_plane_state *dc_state;
+	uint64_t tiling_flags;
+	bool tmz_surface;
 };
 
 struct dm_crtc_state {
Reviewed-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
On Thu, Jul 30, 2020 at 04:36:36PM -0400, Nicholas Kazlauskas wrote:
[Why] Store these in advance so we can reuse them later in commit_tail without having to reserve the fbo again.
These will also be used for checking for tiling changes when deciding to reset the plane or not.
I've also dropped some comments on Bas' series for adding modifiers which might be relevant for shuffling all this. But yeah stuff this into plane state sounds like a good idea. -Daniel
[Why] Enabling or disabling DCC, or switching between tiled and linear formats, can require bandwidth updates.
These changes currently skip all DC validation because they're treated as purely surface updates.
[How] Treat tiling_flags changes (which encode DCC state) as a condition for resetting the plane.
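To spell out why a plain tiling_flags comparison also covers DCC, here is a sketch that is not part of the patch (AMDGPU_TILING_GET() and the DCC_OFFSET_256B field come from the amdgpu UAPI header; the helper name is made up): the DCC metadata offset is packed into the same 64-bit flags word, so enabling or disabling DCC always changes the value being compared.

/* Sketch: DCC state is encoded inside the legacy tiling_flags word. */
static bool dm_tiling_or_dcc_changed(uint64_t old_flags, uint64_t new_flags)
{
	/* A non-zero DCC offset means DCC is enabled for the surface. */
	uint64_t old_dcc = AMDGPU_TILING_GET(old_flags, DCC_OFFSET_256B);
	uint64_t new_dcc = AMDGPU_TILING_GET(new_flags, DCC_OFFSET_256B);

	if (old_dcc != new_dcc)
		return true;	/* DCC was toggled or relocated. */

	return old_flags != new_flags;	/* Any other tiling change. */
}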
Cc: Bhawanpreet Lakha <Bhawanpreet.Lakha@amd.com>
Cc: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
Cc: Hersen Wu <hersenxs.wu@amd.com>
Signed-off-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
---
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 7cc5ab90ce13..bf1881bd492c 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -8332,6 +8332,8 @@ static bool should_reset_plane(struct drm_atomic_state *state,
	 * TODO: Come up with a more elegant solution for this.
	 */
	for_each_oldnew_plane_in_state(state, other, old_other_state, new_other_state, i) {
+		struct dm_plane_state *old_dm_plane_state, *new_dm_plane_state;
+
		if (other->type == DRM_PLANE_TYPE_CURSOR)
			continue;
 
@@ -8342,9 +8344,20 @@ static bool should_reset_plane(struct drm_atomic_state *state,
		if (old_other_state->crtc != new_other_state->crtc)
			return true;
 
-		/* TODO: Remove this once we can handle fast format changes. */
-		if (old_other_state->fb && new_other_state->fb &&
-		    old_other_state->fb->format != new_other_state->fb->format)
+		/* Framebuffer checks fall at the end. */
+		if (!old_other_state->fb || !new_other_state->fb)
+			continue;
+
+		/* Pixel format changes can require bandwidth updates. */
+		if (old_other_state->fb->format != new_other_state->fb->format)
+			return true;
+
+		old_dm_plane_state = to_dm_plane_state(old_other_state);
+		new_dm_plane_state = to_dm_plane_state(new_other_state);
+
+		/* Tiling and DCC changes also require bandwidth updates. */
+		if (old_dm_plane_state->tiling_flags !=
+		    new_dm_plane_state->tiling_flags)
			return true;
	}
Reviewed-by: Hersen Wu <hersenxs.wu@amd.com>
On 07/30, Nicholas Kazlauskas wrote:
/* Tiling and DCC changes also require bandwidth updates. */
if (old_dm_plane_state->tiling_flags !=
new_dm_plane_state->tiling_flags)
Why not add a case when we move to a TMZ area?
Reviewed-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
On 2020-08-05 5:11 p.m., Rodrigo Siqueira wrote:
Why not add a case when we move to a TMZ area?
TMZ doesn't affect DML calculations or validation in this case so we can safely skip it.
Regards, Nicholas Kazlauskas
[Why] We're racing with userspace as the flags could potentially change from when we acquired and validated them in commit_check.
[How] We unfortunately can't drop this function in its entirety from prepare_planes since we don't know the afb->address at commit_check time yet.
So instead of querying new tiling_flags and tmz_surface use the ones from the plane_state directly.
While we're at it, also update the force_disable_dcc option based on the state from atomic check.
Cc: Bhawanpreet Lakha <Bhawanpreet.Lakha@amd.com>
Cc: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
Signed-off-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
---
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 36 ++++++++++---------
 1 file changed, 19 insertions(+), 17 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index bf1881bd492c..f78c09c9585e 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -5794,14 +5794,8 @@ static int dm_plane_helper_prepare_fb(struct drm_plane *plane,
	struct list_head list;
	struct ttm_validate_buffer tv;
	struct ww_acquire_ctx ticket;
-	uint64_t tiling_flags;
	uint32_t domain;
	int r;
-	bool tmz_surface = false;
-	bool force_disable_dcc = false;
-
-	dm_plane_state_old = to_dm_plane_state(plane->state);
-	dm_plane_state_new = to_dm_plane_state(new_state);
 
	if (!new_state->fb) {
		DRM_DEBUG_DRIVER("No FB bound\n");
@@ -5845,27 +5839,35 @@ static int dm_plane_helper_prepare_fb(struct drm_plane *plane,
			return r;
	}
 
-	amdgpu_bo_get_tiling_flags(rbo, &tiling_flags);
-
-	tmz_surface = amdgpu_bo_encrypted(rbo);
-
	ttm_eu_backoff_reservation(&ticket, &list);
 
	afb->address = amdgpu_bo_gpu_offset(rbo);
 
	amdgpu_bo_ref(rbo);
 
+	/**
+	 * We don't do surface updates on planes that have been newly created,
+	 * but we also don't have the afb->address during atomic check.
+	 *
+	 * Fill in buffer attributes depending on the address here, but only on
+	 * newly created planes since they're not being used by DC yet and this
+	 * won't modify global state.
+	 */
+	dm_plane_state_old = to_dm_plane_state(plane->state);
+	dm_plane_state_new = to_dm_plane_state(new_state);
+
	if (dm_plane_state_new->dc_state &&
-	    dm_plane_state_old->dc_state != dm_plane_state_new->dc_state) {
-		struct dc_plane_state *plane_state = dm_plane_state_new->dc_state;
+	    dm_plane_state_old->dc_state != dm_plane_state_new->dc_state) {
+		struct dc_plane_state *plane_state =
+			dm_plane_state_new->dc_state;
+		bool force_disable_dcc = !plane_state->dcc.enable;
 
-		force_disable_dcc = adev->asic_type == CHIP_RAVEN && adev->in_suspend;
		fill_plane_buffer_attributes(
			adev, afb, plane_state->format, plane_state->rotation,
-			tiling_flags, &plane_state->tiling_info,
-			&plane_state->plane_size, &plane_state->dcc,
-			&plane_state->address, tmz_surface,
-			force_disable_dcc);
+			dm_plane_state_new->tiling_flags,
+			&plane_state->tiling_info, &plane_state->plane_size,
+			&plane_state->dcc, &plane_state->address,
+			dm_plane_state_new->tmz_surface, force_disable_dcc);
	}
 
	return 0;
Reviewed-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
On Thu, Jul 30, 2020 at 04:36:38PM -0400, Nicholas Kazlauskas wrote:
[Why] We're racing with userspace as the flags could potentially change from when we acquired and validated them in commit_check.
Uh ... I didn't know these could change. I think my comments on Bas' series are even more relevant now. I think long term would be best to bake these flags in at addfb time when modifiers aren't set. And otherwise always use the modifiers flag, and completely ignore the legacy flags here. -Daniel
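Roughly what baking the flags in at addfb time could look like, as a sketch only (the tiling_flags/tmz_surface members on amdgpu_framebuffer and the helper name are assumptions made for illustration, not existing code):

/* Sketch: freeze tiling/TMZ state when the framebuffer is created. */
static int amdgpu_display_freeze_fb_flags(struct amdgpu_framebuffer *afb)
{
	struct amdgpu_bo *rbo = gem_to_amdgpu_bo(afb->base.obj[0]);
	int r = amdgpu_bo_reserve(rbo, false);

	if (r)
		return r;

	/* afb->tiling_flags and afb->tmz_surface are assumed fields here. */
	amdgpu_bo_get_tiling_flags(rbo, &afb->tiling_flags);
	afb->tmz_surface = amdgpu_bo_encrypted(rbo);
	amdgpu_bo_unreserve(rbo);

	return 0;
}

Display code would then only ever read the frozen copy on the framebuffer (or the modifier, when one is supplied), so a later set_tiling call on the BO could no longer race with a commit.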
On 2020-08-07 4:30 a.m., daniel@ffwll.ch wrote:
On Thu, Jul 30, 2020 at 04:36:38PM -0400, Nicholas Kazlauskas wrote:
[Why] We're racing with userspace as the flags could potentially change from when we acquired and validated them in commit_check.
Uh ... I didn't know these could change. I think my comments on Bas' series are even more relevant now. I think long term would be best to bake these flags in at addfb time when modifiers aren't set. And otherwise always use the modifiers flag, and completely ignore the legacy flags here. -Daniel
There's a set tiling/mod flags IOCTL that can be called after addfb happens, so unless there's some sort of driver magic preventing this from working when it's already been pinned for scanout then I don't see anything stopping this from happening.
I still need to review the modifiers series in a little more detail but that looks like a good approach to fixing these kind of issues.
Regards, Nicholas Kazlauskas
On Fri, Aug 07, 2020 at 10:29:09AM -0400, Kazlauskas, Nicholas wrote:
There's a set tiling/mod flags IOCTL that can be called after addfb happens, so unless there's some sort of driver magic preventing this from working when it's already been pinned for scanout then I don't see anything stopping this from happening.
I still need to review the modifiers series in a little more detail but that looks like a good approach to fixing these kind of issues.
Yeah we had the same model for i915, but it's awkward and can surprise compositors (since the client could change the tiling mode from underneath the compositor). So freezing the tiling mode at addfb time is the right thing to do, and it's anyway how things work with modifiers.
Ofc maybe good to audit the -amd driver, but hopefully it doesn't do anything silly with changed tiling. If it does, it's kinda sad day.
Btw for i915 we even went a step further, and made the set_tiling ioctl return an error if a framebuffer for that gem_bo existed. Just to make sure we don't ever accidentally break this.
Cheers, Daniel
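A sketch of that kind of guard translated into amdgpu terms, purely illustrative (the fb_ref counter is a made-up field and the helper name is hypothetical; nothing in this thread says amdgpu has such a check):

/* Sketch: refuse tiling changes once a framebuffer wraps the BO. */
static int amdgpu_bo_try_set_tiling(struct amdgpu_bo *bo, u64 tiling_flags)
{
	int r = amdgpu_bo_reserve(bo, false);

	if (r)
		return r;

	if (atomic_read(&bo->fb_ref)) {	/* hypothetical framebuffer refcount */
		amdgpu_bo_unreserve(bo);
		return -EBUSY;	/* tiling is frozen after addfb */
	}

	r = amdgpu_bo_set_tiling_flags(bo, tiling_flags);
	amdgpu_bo_unreserve(bo);
	return r;
}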
On 10.08.20 at 14:25, Daniel Vetter wrote:
Ofc maybe good to audit the -amd driver, but hopefully it doesn't do anything silly with changed tiling. If it does, it's kinda sad day.
Marek should know this right away, but I think we only set the tiling flags once while exporting the BO and then never change them.
Regards, Christian.
On Mon, Aug 10, 2020 at 02:30:05PM +0200, Christian König wrote:
Marek should know this right away, but I think we only set the tiling flags once while exporting the BO and then never change them.
Yeah it's the only reasonable model really, since set/get_tiling is not synchronized with any winsys protocol. So full of races by design.
The only thing I'd worry about is if you do set_tiling after addfb in your ddx. That one is safe, but would badly break if we sample modifiers from set_tiling flags only once at addfb time. -Daniel
There are a few cases where the flags can change; for example, DCC can be disabled due to a hw limitation in the 3D engine. Modifiers give the misleading impression that they help with that, but they don't. They don't really help with anything.
Marek
On Mon., Aug. 10, 2020, 08:30 Christian König, <ckoenig.leichtzumerken@gmail.com> wrote:
Am 10.08.20 um 14:25 schrieb Daniel Vetter:
On Fri, Aug 07, 2020 at 10:29:09AM -0400, Kazlauskas, Nicholas wrote:
On 2020-08-07 4:30 a.m., daniel@ffwll.ch wrote:
On Thu, Jul 30, 2020 at 04:36:38PM -0400, Nicholas Kazlauskas wrote:
[Why] We're racing with userspace as the flags could potentially change from when we acquired and validated them in commit_check.
Uh ... I didn't know these could change. I think my comments on Bas' series are even more relevant now. I think long term would be best to bake these flags in at addfb time when modifiers aren't set. And otherwise always use the modifiers flag, and completely ignore the legacy flags here. -Daniel
There's a set tiling/mod flags IOCTL that can be called after addfb happens, so unless there's some sort of driver magic preventing this from working when it's already been pinned for scanout then I don't see anything stopping this from happening.
I still need to review the modifiers series in a little more detail but that looks like a good approach to fixing these kind of issues.
Yeah we had the same model for i915, but it's awkward and can surprise compositors (since the client could change the tiling mode from underneath the compositor). So freezing the tiling mode at addfb time is the right thing to do, and anyway how things work with modifiers.
Ofc maybe good to audit the -amd driver, but hopefully it doesn't do anything silly with changed tiling. If it does, it's kinda sad day.
Marek should know this right away, but I think we only set the tiling flags once while exporting the BO and then never change them.
Regards, Christian.
Btw for i915 we even went a step further, and made the set_tiling ioctl return an error if a framebuffer for that gem_bo existed. Just to make sure we don't ever accidentally break this.
Cheers, Daniel
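As a minimal illustration of the guard described above (the struct and field names below are invented, not i915's actual code): once a framebuffer wraps the GEM object, the tiling ioctl path simply refuses to change the mode, so flags sampled at addfb time stay valid.

#include <errno.h>

/* Illustrative stand-in for a GEM object; not a real kernel structure. */
struct gem_object {
        int framebuffer_references;     /* bumped at addfb, dropped at rmfb */
        unsigned int tiling_mode;
};

/* Reject tiling changes once the object backs a framebuffer. */
static int set_tiling(struct gem_object *obj, unsigned int tiling_mode)
{
        if (obj->framebuffer_references)
                return -EBUSY;          /* flags are frozen for scanout users */

        obj->tiling_mode = tiling_mode;
        return 0;
}

The real driver check differs in detail; the point is only that addfb becomes the last moment the flags can change.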
Regards, Nicholas Kazlauskas
[How] We unfortunately can't drop this function in its entirety from prepare_planes since we don't know the afb->address at commit_check time yet.
So instead of querying new tiling_flags and tmz_surface use the ones from the plane_state directly.
While we're at it, also update the force_disable_dcc option based on the state from atomic check.
Cc: Bhawanpreet Lakha Bhawanpreet.Lakha@amd.com Cc: Rodrigo Siqueira Rodrigo.Siqueira@amd.com Signed-off-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com
On Tue, Aug 11, 2020 at 09:42:11AM -0400, Marek Olšák wrote:
There are a few cases when the flags can change, for example DCC can be disabled due to a hw limitation in the 3d engine. Modifiers give the misleading impression that they help with that, but they don't. They don't really help with anything.
But if that happens, how do you tell the other side that it needs to sample new flags? Does that just happen all the time?
Also, do the DCC state changes happen for shared buffers too? -Daniel
Marek
On Wed, Aug 12, 2020 at 9:54 AM Daniel Vetter daniel@ffwll.ch wrote:
On Tue, Aug 11, 2020 at 09:42:11AM -0400, Marek Olšák wrote:
There are a few cases when the flags can change, for example DCC can be disabled due to a hw limitation in the 3d engine. Modifiers give the misleading impression that they help with that, but they don't. They
don't
really help with anything.
But if that happens, how do you tell the other side that it needs to sample new flags? Does that just happen all the time?
Also, do the DCC state changes happen for shared buffers too?
I thought we were only talking about shared buffers.
If the other side is only a consumer and the producer must disable DCC, the producer decompresses DCC and then disables it and updates the BO flags. The consumer doesn't need the new flags, because even if DCC stays enabled in the consumer, it's in a decompressed state (it has no effect). Only the producer knows it's disabled, and any new consumer will also know it when it queries the latest BO flags.
It doesn't work if both sides use writes, because it's not communicated that DCC is disabled (BO flags are queried only once). This hasn't been a problem so far.
Is there a way to disable DCC correctly and safely across processes? Yes. So why don't we do it? Because it would add more GPU overhead.
Marek
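A rough sketch of the flow described above, using invented helper and field names rather than the real amdgpu interfaces: the producer decompresses in place before clearing the metadata, so a consumer holding stale "DCC enabled" flags still reads correct pixels, while any consumer that queries the flags later sees DCC disabled.

/* Illustrative shared-BO metadata; not the real amdgpu structures. */
struct shared_bo {
        int dcc_enabled;        /* sampled once by each consumer at import/addfb */
};

/* Stand-in for the GPU job that resolves DCC into plain pixel data. */
static void decompress_dcc_in_place(struct shared_bo *bo)
{
        (void)bo;
}

static void producer_disable_dcc(struct shared_bo *bo)
{
        /* Data stays readable for consumers that never re-query the flags. */
        decompress_dcc_in_place(bo);
        bo->dcc_enabled = 0;    /* only future flag queries observe this */
}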
On Mon, Aug 17, 2020 at 02:23:47AM -0400, Marek Olšák wrote:
On Wed, Aug 12, 2020 at 9:54 AM Daniel Vetter daniel@ffwll.ch wrote:
On Tue, Aug 11, 2020 at 09:42:11AM -0400, Marek Olšák wrote:
There are a few cases when the flags can change, for example DCC can be disabled due to a hw limitation in the 3d engine. Modifiers give the misleading impression that they help with that, but they don't. They
don't
really help with anything.
But if that happens, how do you tell the other side that it needs to sample new flags? Does that just happen all the time?
Also, do the DCC state changes happen for shared buffers too?
I thought we were only talking about shared buffers.
If the other side is only a consumer and the producer must disable DCC, the producer decompresses DCC and then disables it and updates the BO flags. The consumer doesn't need the new flags, because even if DCC stays enabled in the consumer, it's in a decompressed state (it has no effect). Only the producer knows it's disabled, and any new consumer will also know it when it queries the latest BO flags.
It doesn't work if both sides use writes, because it's not communicated that DCC is disabled (BO flags are queried only once). This hasn't been a problem so far.
Is there a way to disable DCC correctly and safely across processes? Yes. So why don't we do it? Because it would add more GPU overhead.
Yeah but in this case you can get away with just sampling the bo flags once (which is what you're doing), so doing that at addfb time should be perfectly fine. Ofc you might waste a bit of $something also scanning out the compression metadata (which tells the hw that it's all uncompressed), but that doesn't seem to be a problem for you.
So treating the legacy bo flags as invariant for shared buffers should be perfectly fine. -Daniel
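A minimal sketch of "sample the BO flags once at addfb time", with invented names (this series caches the flags on dm_plane_state instead, but the idea is the same): the framebuffer wrapper copies the flags when it is created, and display code only ever reads the cached copy.

/* Illustrative framebuffer wrapper; names are not the amdgpu ones. */
struct my_framebuffer {
        unsigned long long tiling_flags;        /* frozen at addfb */
        int tmz_surface;                        /* frozen at addfb */
};

static void my_framebuffer_init(struct my_framebuffer *fb,
                                unsigned long long bo_tiling_flags,
                                int bo_encrypted)
{
        /* Copy once; never re-read the BO after this point. */
        fb->tiling_flags = bo_tiling_flags;
        fb->tmz_surface = bo_encrypted;
}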
[Why] So we're not racing with userspace or deadlocking DM.
[How] These flags are now stored on dm_plane_state itself and acquired and validated during commit_check, so just use those instead.
Cc: Daniel Vetter daniel.vetter@ffwll.ch Cc: Bhawanpreet Lakha Bhawanpreet.Lakha@amd.com Cc: Rodrigo Siqueira Rodrigo.Siqueira@amd.com Signed-off-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com --- .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 18 ++++-------------- 1 file changed, 4 insertions(+), 14 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index f78c09c9585e..0d5f45742bb5 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -7094,8 +7094,6 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
 long r;
 unsigned long flags;
 struct amdgpu_bo *abo;
- uint64_t tiling_flags;
- bool tmz_surface = false;
 uint32_t target_vblank, last_flip_vblank;
 bool vrr_active = amdgpu_dm_vrr_active(acrtc_state);
 bool pflip_present = false;
@@ -7179,20 +7177,12 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
 if (unlikely(r <= 0))
 DRM_ERROR("Waiting for fences timed out!");
- /*
- * We cannot reserve buffers here, which means the normal flag
- * access functions don't work. Paper over this with READ_ONCE,
- * but maybe the flags are invariant enough that not even that
- * would be needed.
- */
- tiling_flags = READ_ONCE(abo->tiling_flags);
- tmz_surface = READ_ONCE(abo->flags) & AMDGPU_GEM_CREATE_ENCRYPTED;
 fill_dc_plane_info_and_addr(
- dm->adev, new_plane_state, tiling_flags,
+ dm->adev, new_plane_state,
+ dm_new_plane_state->tiling_flags,
 &bundle->plane_infos[planes_count],
- &bundle->flip_addrs[planes_count].address, tmz_surface,
- false);
+ &bundle->flip_addrs[planes_count].address,
+ dm_new_plane_state->tmz_surface, false);
 DRM_DEBUG_DRIVER("plane: id=%d dcc_en=%d\n",
 new_plane_state->plane->index,
Reviewed-by: Rodrigo Siqueira Rodrigo.Siqueira@amd.com
[Why] MEDIUM or FULL updates can require global validation or affect bandwidth. By treating these all simply as surface updates we aren't actually passing this through DC global validation.
[How] There's currently no way to pass surface updates through DC global validation, nor do I think it's a good idea to change the interface to accept these.
DC global validation itself is currently stateless, and we can move our update type checking to be stateless as well by duplicating DC surface checks in DM based on DRM properties.
We wanted to rely on DC automatically determining this since DC knows best, but DM is ultimately what fills in everything into DC plane state so it does need to know as well.
There are basically only three paths that we exercise in DM today:
1) Cursor (async update)
2) Pageflip (fast update)
3) Full pipe programming (medium/full updates)
Which means that anything that's more than a pageflip really needs to go down path #3.
So this change duplicates all the surface update checks based on DRM state instead inside of should_reset_plane().
Next step is dropping dm_determine_update_type_for_commit and we no longer require the old DC state at all for global validation.
Optimization can come later so we don't reset DC planes at all for MEDIUM updates and avoid validation, but we might require some extra checks in DM to achieve this.
Cc: Bhawanpreet Lakha Bhawanpreet.Lakha@amd.com Cc: Hersen Wu hersenxs.wu@amd.com Signed-off-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com --- .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 25 +++++++++++++++++++ 1 file changed, 25 insertions(+)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index 0d5f45742bb5..2cbb29199e61 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -8336,6 +8336,31 @@ static bool should_reset_plane(struct drm_atomic_state *state,
 if (old_other_state->crtc != new_other_state->crtc)
 return true;
+ /* Src/dst size and scaling updates. */
+ if (old_other_state->src_w != new_other_state->src_w ||
+     old_other_state->src_h != new_other_state->src_h ||
+     old_other_state->crtc_w != new_other_state->crtc_w ||
+     old_other_state->crtc_h != new_other_state->crtc_h)
+ return true;
+ /* Rotation / mirroring updates. */
+ if (old_other_state->rotation != new_other_state->rotation)
+ return true;
+ /* Blending updates. */
+ if (old_other_state->pixel_blend_mode !=
+     new_other_state->pixel_blend_mode)
+ return true;
+ /* Alpha updates. */
+ if (old_other_state->alpha != new_other_state->alpha)
+ return true;
+ /* Colorspace changes. */
+ if (old_other_state->color_range != new_other_state->color_range ||
+     old_other_state->color_encoding != new_other_state->color_encoding)
+ return true;
 /* Framebuffer checks fall at the end. */
 if (!old_other_state->fb || !new_other_state->fb)
 continue;
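As a rough sketch of the three DM paths listed in the commit message above (the names and helpers are illustrative, not DM's real interface): a cursor-only commit goes async, a commit where only the framebuffer changed is a fast pageflip, and everything else takes the full pipe-programming path.

/* Illustrative classification of an atomic commit; not DM's real API. */
enum dm_update_path {
        DM_PATH_ASYNC_CURSOR,
        DM_PATH_FAST_PAGEFLIP,
        DM_PATH_FULL_PROGRAMMING,
};

static enum dm_update_path classify_commit(int cursor_only, int only_fb_changed)
{
        if (cursor_only)
                return DM_PATH_ASYNC_CURSOR;
        if (only_fb_changed)
                return DM_PATH_FAST_PAGEFLIP;   /* plane is not reset */
        return DM_PATH_FULL_PROGRAMMING;        /* plane reset, DC revalidates */
}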
Reviewed-by: Hersen Wu hersenxs.wu@amd.com
-----Original Message----- From: Nicholas Kazlauskas nicholas.kazlauskas@amd.com Sent: Thursday, July 30, 2020 4:37 PM To: amd-gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org Cc: Kazlauskas, Nicholas Nicholas.Kazlauskas@amd.com; Lakha, Bhawanpreet Bhawanpreet.Lakha@amd.com; Wu, Hersen hersenxs.wu@amd.com Subject: [PATCH 5/7] drm/amd/display: Reset plane for anything that's not a FAST update
On 07/30, Nicholas Kazlauskas wrote:
[Why] MEDIUM or FULL updates can require global validation or affect bandwidth. By treating these all simply as surface updates we aren't actually passing this through DC global validation.
[How] There's currently no way to pass surface updates through DC global validation, nor do I think it's a good idea to change the interface to accept these.
DC global validation itself is currently stateless, and we can move our update type checking to be stateless as well by duplicating DC surface checks in DM based on DRM properties.
We wanted to rely on DC automatically determining this since DC knows best, but DM is ultimately what fills in everything into DC plane state so it does need to know as well.
There are basically only three paths that we exercise in DM today:
- Cursor (async update)
- Pageflip (fast update)
- Full pipe programming (medium/full updates)
Which means that anything that's more than a pageflip really needs to go down path #3.
So this change duplicates all the surface update checks based on DRM state instead inside of should_reset_plane().
Next step is dropping dm_determine_update_type_for_commit and we no longer require the old DC state at all for global validation.
Optimization can come later so we don't reset DC planes at all for MEDIUM updates and avoid validation, but we might require some extra checks in DM to achieve this.
How about adding this optimization description in our TODO list under-display folder?
Reviewed-by: Rodrigo Siqueira Rodrigo.Siqueira@amd.com
On 2020-08-05 4:45 p.m., Rodrigo Siqueira wrote:
On 07/30, Nicholas Kazlauskas wrote:
[Why] MEDIUM or FULL updates can require global validation or affect bandwidth. By treating these all simply as surface updates we aren't actually passing this through DC global validation.
[How] There's currently no way to pass surface updates through DC global validation, nor do I think it's a good idea to change the interface to accept these.
DC global validation itself is currently stateless, and we can move our update type checking to be stateless as well by duplicating DC surface checks in DM based on DRM properties.
We wanted to rely on DC automatically determining this since DC knows best, but DM is ultimately what fills in everything into DC plane state so it does need to know as well.
There are basically only three paths that we exercise in DM today:
- Cursor (async update)
- Pageflip (fast update)
- Full pipe programming (medium/full updates)
Which means that anything that's more than a pageflip really needs to go down path #3.
So this change duplicates all the surface update checks based on DRM state instead inside of should_reset_plane().
Next step is dropping dm_determine_update_type_for_commit and we no longer require the old DC state at all for global validation.
Optimization can come later so we don't reset DC planes at all for MEDIUM updates and avoid validation, but we might require some extra checks in DM to achieve this.
How about adding this optimization description in our TODO list under-display folder?
Reviewed-by: Rodrigo Siqueira Rodrigo.Siqueira@amd.com
Sure, I'll make another patch to clean up some of the TODO items in the text file.
Regards, Nicholas Kazlauskas
On Thu, Jul 30, 2020 at 04:36:40PM -0400, Nicholas Kazlauskas wrote:
[Why] MEDIUM or FULL updates can require global validation or affect bandwidth. By treating these all simply as surface updates we aren't actually passing this through DC global validation.
[How] There's currently no way to pass surface updates through DC global validation, nor do I think it's a good idea to change the interface to accept these.
DC global validation itself is currently stateless, and we can move our update type checking to be stateless as well by duplicating DC surface checks in DM based on DRM properties.
We wanted to rely on DC automatically determining this since DC knows best, but DM is ultimately what fills in everything into DC plane state so it does need to know as well.
There are basically only three paths that we exercise in DM today:
- Cursor (async update)
- Pageflip (fast update)
- Full pipe programming (medium/full updates)
Which means that anything that's more than a pageflip really needs to go down path #3.
So this change duplicates all the surface update checks based on DRM state instead inside of should_reset_plane().
Next step is dropping dm_determine_update_type_for_commit and we no longer require the old DC state at all for global validation.
I think we do something similar in i915, where we have a "nothing except base address changed" fast path, but for anything else we fully compute a new state. Obviously you should try to keep global state synchronization to a minimum for this step, so it's not entirely only 2 options.
Once we have the states, we compare them and figure out whether we can get away with a fast modeset operation (maybe what you guys call medium update). Anyway I think being slightly more aggressive with computing full state, and then falling back to more optimized update again is a good approach. Only risk is if you have too much synchronization in your locking (e.g. modern compositors do like to change tiling and stuff, especially once you have modifiers enabled, so this shouldn't cause a sync across crtc except when absolutely needed). -Daniel
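A small sketch of the kind of fast path described here, with invented struct fields rather than i915's or DM's real state: if nothing but the scanout buffer differs between the old and new plane configuration, the driver can skip recomputing global state.

/* Illustrative plane configuration snapshot. */
struct plane_cfg {
        int src_w, src_h, crtc_w, crtc_h;
        unsigned int rotation;
        const void *fb;         /* scanout buffer, expected to change every frame */
};

/* True when the update can take the "base address only" fast path. */
static int only_fb_changed(const struct plane_cfg *old_cfg,
                           const struct plane_cfg *new_cfg)
{
        return old_cfg->src_w == new_cfg->src_w &&
               old_cfg->src_h == new_cfg->src_h &&
               old_cfg->crtc_w == new_cfg->crtc_w &&
               old_cfg->crtc_h == new_cfg->crtc_h &&
               old_cfg->rotation == new_cfg->rotation;
}

The should_reset_plane() checks added in this patch are the DRM-state counterpart of this comparison.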
On 2020-08-07 4:34 a.m., daniel@ffwll.ch wrote:
On Thu, Jul 30, 2020 at 04:36:40PM -0400, Nicholas Kazlauskas wrote:
[Why] MEDIUM or FULL updates can require global validation or affect bandwidth. By treating these all simply as surface updates we aren't actually passing this through DC global validation.
[How] There's currently no way to pass surface updates through DC global validation, nor do I think it's a good idea to change the interface to accept these.
DC global validation itself is currently stateless, and we can move our update type checking to be stateless as well by duplicating DC surface checks in DM based on DRM properties.
We wanted to rely on DC automatically determining this since DC knows best, but DM is ultimately what fills in everything into DC plane state so it does need to know as well.
There are basically only three paths that we exercise in DM today:
- Cursor (async update)
- Pageflip (fast update)
- Full pipe programming (medium/full updates)
Which means that anything that's more than a pageflip really needs to go down path #3.
So this change duplicates all the surface update checks based on DRM state instead inside of should_reset_plane().
Next step is dropping dm_determine_update_type_for_commit and we no longer require the old DC state at all for global validation.
I think we do something similar in i915, where we have a "nothing except base address changed" fast path, but for anything else we fully compute a new state. Obviously you should try to keep global state synchronization to a minimum for this step, so it's not entirely only 2 options.
Once we have the states, we compare them and figure out whether we can get away with a fast modeset operation (maybe what you guys call medium update). Anyway I think being slightly more aggressive with computing full state, and then falling back to more optimized update again is a good approach. Only risk is if you have too much synchronization in your locking (e.g. modern compositors do like to change tiling and stuff, especially once you have modifiers enabled, so this shouldn't cause a sync across crtc except when absolutely needed). -Daniel
Sounds like the right approach then.
We can support tiling changes in the fast path, but the more optimized version of that last check is really linear <-> tiled. That requires global validation with DC to revalidate bandwidth and calculate requestor parameters for HW. So we'll have to stall for some of these changes unfortunately since we need the full HW state for validation.
Regards, Nicholas Kazlauskas
On Fri, Aug 07, 2020 at 10:26:51AM -0400, Kazlauskas, Nicholas wrote:
On 2020-08-07 4:34 a.m., daniel@ffwll.ch wrote:
On Thu, Jul 30, 2020 at 04:36:40PM -0400, Nicholas Kazlauskas wrote:
[Why] MEDIUM or FULL updates can require global validation or affect bandwidth. By treating these all simply as surface updates we aren't actually passing this through DC global validation.
[How] There's currently no way to pass surface updates through DC global validation, nor do I think it's a good idea to change the interface to accept these.
DC global validation itself is currently stateless, and we can move our update type checking to be stateless as well by duplicating DC surface checks in DM based on DRM properties.
We wanted to rely on DC automatically determining this since DC knows best, but DM is ultimately what fills in everything into DC plane state so it does need to know as well.
There are basically only three paths that we exercise in DM today:
- Cursor (async update)
- Pageflip (fast update)
- Full pipe programming (medium/full updates)
Which means that anything that's more than a pageflip really needs to go down path #3.
So this change duplicates all the surface update checks based on DRM state instead inside of should_reset_plane().
Next step is dropping dm_determine_update_type_for_commit and we no longer require the old DC state at all for global validation.
I think we do something similar in i915, where we have a "nothing except base address changed" fast path, but for anything else we fully compute a new state. Obviously you should try to keep global state synchronization to a minimum for this step, so it's not entirely only 2 options.
Once we have the states, we compare them and figure out whether we can get away with a fast modeset operation (maybe what you guys call medium update). Anyway I think being slightly more aggressive with computing full state, and then falling back to more optimized update again is a good approach. Only risk is if you have too much synchronization in your locking (e.g. modern compositors do like to change tiling and stuff, especially once you have modifiers enabled, so this shouldn't cause a sync across crtc except when absolutely needed). -Daniel
Sounds like the right approach then.
We can support tiling changes in the fast path, but the more optimized version of that last check is really linear <-> tiled. That requires global validation with DC to revalidate bandwidth and calculate requestor parameters for HW. So we'll have to stall for some of these changes unfortunately since we need the full HW state for validation.
Yeah I think that's perfectly ok, and probably worth it to still optimize for tiled->tiled changes. If you can. tiled<->untiled only happens for boot-splash, so no one cares, but with modifiers the idea is that tiling changes (especially once we get into the different compression modes amdgpu support, and which Bas' series tries to enable) are fairly common and should be doable without any stalls anywhere.
Cheers, Daniel
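As a sketch of the distinction discussed here (the swizzle-mode decoding below is an assumption modeled on GFX9-style tiling flags, not a quoted kernel helper): only a transition across the linear/tiled boundary would need the stalling, fully validated path, while tiled-to-tiled changes could stay on the fast path.

/* Assumption: the low bits of the tiling flags hold the swizzle mode,
 * and swizzle mode 0 means linear, as on GFX9-style layouts. */
static int is_linear(unsigned long long tiling_flags)
{
        return (tiling_flags & 0x1f) == 0;
}

/* Only crossing the linear/tiled boundary needs DC global validation,
 * since that is what changes bandwidth and requestor parameters. */
static int tiling_change_needs_global_validation(unsigned long long old_flags,
                                                 unsigned long long new_flags)
{
        return is_linear(old_flags) != is_linear(new_flags);
}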
[Why] This was added in the past to solve the issue of not knowing when to stall for medium and full updates in DM.
Since DC ultimately decides what requires bandwidth changes, we wanted to make use of it directly to determine this.
The problem is that we can't actually pass any of the stream or surface updates into DC global validation, so we don't actually check if the new configuration is valid - we just validate the old existing config instead and stall for outstanding commits to finish.
There's also the problem of grabbing the DRM private object for pageflips which can lead to page faults in the case where commits execute out of order and free a DRM private object state that was still required for commit tail.
[How] Now that we reset the plane in DM with the same conditions DC checks we can have planes go through DC validation and we know when we need to check and stall based on whether the stream or planes changed.
We mark lock_and_validation_needed whenever we've done this, so just go back to using that instead of dm_determine_update_type_for_commit.
Since we'll skip resetting the plane for a pageflip, we will no longer grab the DRM private object for pageflips either, avoiding the page fault issues caused by pageflipping under load with commits executing out of order.
Cc: Harry Wentland harry.wentland@amd.com Cc: Bhawanpreet Lakha Bhawanpreet.Lakha@amd.com Signed-off-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com --- .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 199 ++---------------- 1 file changed, 17 insertions(+), 182 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index 2cbb29199e61..59829ec81694 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -8542,161 +8542,6 @@ static int dm_update_plane_state(struct dc *dc, return ret; }
-static int -dm_determine_update_type_for_commit(struct amdgpu_display_manager *dm, - struct drm_atomic_state *state, - enum surface_update_type *out_type) -{ - struct dc *dc = dm->dc; - struct dm_atomic_state *dm_state = NULL, *old_dm_state = NULL; - int i, j, num_plane, ret = 0; - struct drm_plane_state *old_plane_state, *new_plane_state; - struct dm_plane_state *new_dm_plane_state, *old_dm_plane_state; - struct drm_crtc *new_plane_crtc; - struct drm_plane *plane; - - struct drm_crtc *crtc; - struct drm_crtc_state *new_crtc_state, *old_crtc_state; - struct dm_crtc_state *new_dm_crtc_state, *old_dm_crtc_state; - struct dc_stream_status *status = NULL; - enum surface_update_type update_type = UPDATE_TYPE_FAST; - struct surface_info_bundle { - struct dc_surface_update surface_updates[MAX_SURFACES]; - struct dc_plane_info plane_infos[MAX_SURFACES]; - struct dc_scaling_info scaling_infos[MAX_SURFACES]; - struct dc_flip_addrs flip_addrs[MAX_SURFACES]; - struct dc_stream_update stream_update; - } *bundle; - - bundle = kzalloc(sizeof(*bundle), GFP_KERNEL); - - if (!bundle) { - DRM_ERROR("Failed to allocate update bundle\n"); - /* Set type to FULL to avoid crashing in DC*/ - update_type = UPDATE_TYPE_FULL; - goto cleanup; - } - - for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) { - - memset(bundle, 0, sizeof(struct surface_info_bundle)); - - new_dm_crtc_state = to_dm_crtc_state(new_crtc_state); - old_dm_crtc_state = to_dm_crtc_state(old_crtc_state); - num_plane = 0; - - if (new_dm_crtc_state->stream != old_dm_crtc_state->stream) { - update_type = UPDATE_TYPE_FULL; - goto cleanup; - } - - if (!new_dm_crtc_state->stream) - continue; - - for_each_oldnew_plane_in_state(state, plane, old_plane_state, new_plane_state, j) { - struct dc_plane_info *plane_info = &bundle->plane_infos[num_plane]; - struct dc_flip_addrs *flip_addr = &bundle->flip_addrs[num_plane]; - struct dc_scaling_info *scaling_info = &bundle->scaling_infos[num_plane]; - - new_plane_crtc = new_plane_state->crtc; - new_dm_plane_state = to_dm_plane_state(new_plane_state); - old_dm_plane_state = to_dm_plane_state(old_plane_state); - - if (plane->type == DRM_PLANE_TYPE_CURSOR) - continue; - - if (new_dm_plane_state->dc_state != old_dm_plane_state->dc_state) { - update_type = UPDATE_TYPE_FULL; - goto cleanup; - } - - if (crtc != new_plane_crtc) - continue; - - bundle->surface_updates[num_plane].surface = - new_dm_plane_state->dc_state; - - if (new_crtc_state->mode_changed) { - bundle->stream_update.dst = new_dm_crtc_state->stream->dst; - bundle->stream_update.src = new_dm_crtc_state->stream->src; - } - - if (new_crtc_state->color_mgmt_changed) { - bundle->surface_updates[num_plane].gamma = - new_dm_plane_state->dc_state->gamma_correction; - bundle->surface_updates[num_plane].in_transfer_func = - new_dm_plane_state->dc_state->in_transfer_func; - bundle->surface_updates[num_plane].gamut_remap_matrix = - &new_dm_plane_state->dc_state->gamut_remap_matrix; - bundle->stream_update.gamut_remap = - &new_dm_crtc_state->stream->gamut_remap_matrix; - bundle->stream_update.output_csc_transform = - &new_dm_crtc_state->stream->csc_color_matrix; - bundle->stream_update.out_transfer_func = - new_dm_crtc_state->stream->out_transfer_func; - } - - ret = fill_dc_scaling_info(new_plane_state, - scaling_info); - if (ret) - goto cleanup; - - bundle->surface_updates[num_plane].scaling_info = scaling_info; - - if (new_plane_state->fb) { - ret = fill_dc_plane_info_and_addr( - dm->adev, new_plane_state, - new_dm_plane_state->tiling_flags, - 
plane_info, &flip_addr->address, - new_dm_plane_state->tmz_surface, false); - if (ret) - goto cleanup; - - bundle->surface_updates[num_plane].plane_info = plane_info; - bundle->surface_updates[num_plane].flip_addr = flip_addr; - } - - num_plane++; - } - - if (num_plane == 0) - continue; - - ret = dm_atomic_get_state(state, &dm_state); - if (ret) - goto cleanup; - - old_dm_state = dm_atomic_get_old_state(state); - if (!old_dm_state) { - ret = -EINVAL; - goto cleanup; - } - - status = dc_stream_get_status_from_state(old_dm_state->context, - new_dm_crtc_state->stream); - bundle->stream_update.stream = new_dm_crtc_state->stream; - /* - * TODO: DC modifies the surface during this call so we need - * to lock here - find a way to do this without locking. - */ - mutex_lock(&dm->dc_lock); - update_type = dc_check_update_surfaces_for_stream( - dc, bundle->surface_updates, num_plane, - &bundle->stream_update, status); - mutex_unlock(&dm->dc_lock); - - if (update_type > UPDATE_TYPE_MED) { - update_type = UPDATE_TYPE_FULL; - goto cleanup; - } - } - -cleanup: - kfree(bundle); - - *out_type = update_type; - return ret; -} #if defined(CONFIG_DRM_AMD_DC_DCN) static int add_affected_mst_dsc_crtcs(struct drm_atomic_state *state, struct drm_crtc *crtc) { @@ -8737,8 +8582,7 @@ static int add_affected_mst_dsc_crtcs(struct drm_atomic_state *state, struct drm * acquired. For full updates case which removes/adds/updates streams on one * CRTC while flipping on another CRTC, acquiring global lock will guarantee * that any such full update commit will wait for completion of any outstanding - * flip using DRMs synchronization events. See - * dm_determine_update_type_for_commit() + * flip using DRMs synchronization events. * * Note that DM adds the affected connectors for all CRTCs in state, when that * might not seem necessary. This is because DC stream creation requires the @@ -8759,15 +8603,8 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev, struct drm_crtc_state *old_crtc_state, *new_crtc_state; struct drm_plane *plane; struct drm_plane_state *old_plane_state, *new_plane_state; - enum surface_update_type update_type = UPDATE_TYPE_FAST; - enum surface_update_type overall_update_type = UPDATE_TYPE_FAST; enum dc_status status; int ret, i; - - /* - * This bool will be set for true for any modeset/reset - * or plane update which implies non fast surface update. - */ bool lock_and_validation_needed = false;
 ret = drm_atomic_helper_check_modeset(dev, state);
@@ -8961,27 +8798,23 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
 if (!is_scaling_state_different(dm_new_con_state, dm_old_con_state))
 continue;
- overall_update_type = UPDATE_TYPE_FULL;
 lock_and_validation_needed = true;
 }
- ret = dm_determine_update_type_for_commit(&adev->dm, state, &update_type);
- if (ret)
- goto fail;
- if (overall_update_type < update_type)
- overall_update_type = update_type;
- /*
- * lock_and_validation_needed was an old way to determine if we need to set
- * the global lock. Leaving it in to check if we broke any corner cases
- * lock_and_validation_needed true = UPDATE_TYPE_FULL or UPDATE_TYPE_MED
- * lock_and_validation_needed false = UPDATE_TYPE_FAST
+ /**
+ * Streams and planes are reset when there are changes that affect
+ * bandwidth. Anything that affects bandwidth needs to go through
+ * DC global validation to ensure that the configuration can be applied
+ * to hardware.
+ *
+ * We have to currently stall out here in atomic_check for outstanding
+ * commits to finish in this case because our IRQ handlers reference
+ * DRM state directly - we can end up disabling interrupts too early
+ * if we don't.
+ *
+ * TODO: Remove this stall and drop DM state private objects.
 */
- if (lock_and_validation_needed && overall_update_type <= UPDATE_TYPE_FAST)
- WARN(1, "Global lock should be Set, overall_update_type should be UPDATE_TYPE_MED or UPDATE_TYPE_FULL");
- if (overall_update_type > UPDATE_TYPE_FAST) {
+ if (lock_and_validation_needed) {
 ret = dm_atomic_get_state(state, &dm_state);
 if (ret)
 goto fail;
@@ -9063,7 +8896,9 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
 struct dm_crtc_state *dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
- dm_new_crtc_state->update_type = (int)overall_update_type;
+ dm_new_crtc_state->update_type = lock_and_validation_needed ?
+ UPDATE_TYPE_FULL :
+ UPDATE_TYPE_FAST;
 }
/* Must be success */
Reviewed-by: Rodrigo Siqueira Rodrigo.Siqueira@amd.com
On 07/30, Nicholas Kazlauskas wrote:
[Why] This was added in the past to solve the issue of not knowing when to stall for medium and full updates in DM.
Since DC ultimately decides what requires bandwidth changes, we wanted to make use of it directly to determine this.
The problem is that we can't actually pass any of the stream or surface updates into DC global validation, so we don't actually check if the new configuration is valid - we just validate the old existing config instead and stall for outstanding commits to finish.
There's also the problem of grabbing the DRM private object for pageflips which can lead to page faults in the case where commits execute out of order and free a DRM private object state that was still required for commit tail.
[How] Now that we reset the plane in DM with the same conditions DC checks we can have planes go through DC validation and we know when we need to check and stall based on whether the stream or planes changed.
We mark lock_and_validation_needed whenever we've done this, so just go back to using that instead of dm_determine_update_type_for_commit.
Since we'll skip resetting the plane for a pageflip, we will no longer grab the DRM private object for pageflips either, avoiding the page fault issues caused by pageflipping under load with commits executing out of order.
Cc: Harry Wentland harry.wentland@amd.com Cc: Bhawanpreet Lakha Bhawanpreet.Lakha@amd.com Signed-off-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com
.../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 199 ++---------------- 1 file changed, 17 insertions(+), 182 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index 2cbb29199e61..59829ec81694 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -8542,161 +8542,6 @@ static int dm_update_plane_state(struct dc *dc, return ret; }
-static int -dm_determine_update_type_for_commit(struct amdgpu_display_manager *dm,
struct drm_atomic_state *state,
enum surface_update_type *out_type)
-{
- struct dc *dc = dm->dc;
- struct dm_atomic_state *dm_state = NULL, *old_dm_state = NULL;
- int i, j, num_plane, ret = 0;
- struct drm_plane_state *old_plane_state, *new_plane_state;
- struct dm_plane_state *new_dm_plane_state, *old_dm_plane_state;
- struct drm_crtc *new_plane_crtc;
- struct drm_plane *plane;
- struct drm_crtc *crtc;
- struct drm_crtc_state *new_crtc_state, *old_crtc_state;
- struct dm_crtc_state *new_dm_crtc_state, *old_dm_crtc_state;
- struct dc_stream_status *status = NULL;
- enum surface_update_type update_type = UPDATE_TYPE_FAST;
- struct surface_info_bundle {
struct dc_surface_update surface_updates[MAX_SURFACES];
struct dc_plane_info plane_infos[MAX_SURFACES];
struct dc_scaling_info scaling_infos[MAX_SURFACES];
struct dc_flip_addrs flip_addrs[MAX_SURFACES];
struct dc_stream_update stream_update;
- } *bundle;
- bundle = kzalloc(sizeof(*bundle), GFP_KERNEL);
- if (!bundle) {
DRM_ERROR("Failed to allocate update bundle\n");
/* Set type to FULL to avoid crashing in DC*/
update_type = UPDATE_TYPE_FULL;
goto cleanup;
- }
- for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
memset(bundle, 0, sizeof(struct surface_info_bundle));
new_dm_crtc_state = to_dm_crtc_state(new_crtc_state);
old_dm_crtc_state = to_dm_crtc_state(old_crtc_state);
num_plane = 0;
if (new_dm_crtc_state->stream != old_dm_crtc_state->stream) {
update_type = UPDATE_TYPE_FULL;
goto cleanup;
}
if (!new_dm_crtc_state->stream)
continue;
for_each_oldnew_plane_in_state(state, plane, old_plane_state, new_plane_state, j) {
struct dc_plane_info *plane_info = &bundle->plane_infos[num_plane];
struct dc_flip_addrs *flip_addr = &bundle->flip_addrs[num_plane];
struct dc_scaling_info *scaling_info = &bundle->scaling_infos[num_plane];
new_plane_crtc = new_plane_state->crtc;
new_dm_plane_state = to_dm_plane_state(new_plane_state);
old_dm_plane_state = to_dm_plane_state(old_plane_state);
if (plane->type == DRM_PLANE_TYPE_CURSOR)
continue;
if (new_dm_plane_state->dc_state != old_dm_plane_state->dc_state) {
update_type = UPDATE_TYPE_FULL;
goto cleanup;
}
if (crtc != new_plane_crtc)
continue;
bundle->surface_updates[num_plane].surface =
new_dm_plane_state->dc_state;
if (new_crtc_state->mode_changed) {
bundle->stream_update.dst = new_dm_crtc_state->stream->dst;
bundle->stream_update.src = new_dm_crtc_state->stream->src;
}
if (new_crtc_state->color_mgmt_changed) {
bundle->surface_updates[num_plane].gamma =
new_dm_plane_state->dc_state->gamma_correction;
bundle->surface_updates[num_plane].in_transfer_func =
new_dm_plane_state->dc_state->in_transfer_func;
bundle->surface_updates[num_plane].gamut_remap_matrix =
&new_dm_plane_state->dc_state->gamut_remap_matrix;
bundle->stream_update.gamut_remap =
&new_dm_crtc_state->stream->gamut_remap_matrix;
bundle->stream_update.output_csc_transform =
&new_dm_crtc_state->stream->csc_color_matrix;
bundle->stream_update.out_transfer_func =
new_dm_crtc_state->stream->out_transfer_func;
}
ret = fill_dc_scaling_info(new_plane_state,
scaling_info);
if (ret)
goto cleanup;
bundle->surface_updates[num_plane].scaling_info = scaling_info;
if (new_plane_state->fb) {
ret = fill_dc_plane_info_and_addr(
dm->adev, new_plane_state,
new_dm_plane_state->tiling_flags,
plane_info, &flip_addr->address,
new_dm_plane_state->tmz_surface, false);
if (ret)
goto cleanup;
bundle->surface_updates[num_plane].plane_info = plane_info;
bundle->surface_updates[num_plane].flip_addr = flip_addr;
}
num_plane++;
}
if (num_plane == 0)
continue;
ret = dm_atomic_get_state(state, &dm_state);
if (ret)
goto cleanup;
old_dm_state = dm_atomic_get_old_state(state);
if (!old_dm_state) {
ret = -EINVAL;
goto cleanup;
}
status = dc_stream_get_status_from_state(old_dm_state->context,
new_dm_crtc_state->stream);
bundle->stream_update.stream = new_dm_crtc_state->stream;
/*
* TODO: DC modifies the surface during this call so we need
* to lock here - find a way to do this without locking.
*/
mutex_lock(&dm->dc_lock);
update_type = dc_check_update_surfaces_for_stream(
dc, bundle->surface_updates, num_plane,
&bundle->stream_update, status);
mutex_unlock(&dm->dc_lock);
if (update_type > UPDATE_TYPE_MED) {
update_type = UPDATE_TYPE_FULL;
goto cleanup;
}
- }
-cleanup:
- kfree(bundle);
- *out_type = update_type;
- return ret;
-} #if defined(CONFIG_DRM_AMD_DC_DCN) static int add_affected_mst_dsc_crtcs(struct drm_atomic_state *state, struct drm_crtc *crtc) { @@ -8737,8 +8582,7 @@ static int add_affected_mst_dsc_crtcs(struct drm_atomic_state *state, struct drm
 * acquired. For full updates case which removes/adds/updates streams on one
 * CRTC while flipping on another CRTC, acquiring global lock will guarantee
 * that any such full update commit will wait for completion of any outstanding
- * flip using DRMs synchronization events. See
- * dm_determine_update_type_for_commit()
+ * flip using DRMs synchronization events.
 *
 * Note that DM adds the affected connectors for all CRTCs in state, when that
 * might not seem necessary. This is because DC stream creation requires the
@@ -8759,15 +8603,8 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev, struct drm_crtc_state *old_crtc_state, *new_crtc_state; struct drm_plane *plane; struct drm_plane_state *old_plane_state, *new_plane_state;
-       enum surface_update_type update_type = UPDATE_TYPE_FAST;
-       enum surface_update_type overall_update_type = UPDATE_TYPE_FAST;
        enum dc_status status;
        int ret, i;
-
-       /*
-        * This bool will be set for true for any modeset/reset
-        * or plane update which implies non fast surface update.
-        */
        bool lock_and_validation_needed = false;
ret = drm_atomic_helper_check_modeset(dev, state);
@@ -8961,27 +8798,23 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev, if (!is_scaling_state_different(dm_new_con_state, dm_old_con_state)) continue;
-               overall_update_type = UPDATE_TYPE_FULL;
                lock_and_validation_needed = true;
        }

-       ret = dm_determine_update_type_for_commit(&adev->dm, state, &update_type);
-       if (ret)
-               goto fail;
-
-       if (overall_update_type < update_type)
-               overall_update_type = update_type;
-
-       /*
-        * lock_and_validation_needed was an old way to determine if we need to set
-        * the global lock. Leaving it in to check if we broke any corner cases
-        * lock_and_validation_needed true = UPDATE_TYPE_FULL or UPDATE_TYPE_MED
-        * lock_and_validation_needed false = UPDATE_TYPE_FAST
+       /**
+        * Streams and planes are reset when there are changes that affect
+        * bandwidth. Anything that affects bandwidth needs to go through
+        * DC global validation to ensure that the configuration can be applied
+        * to hardware.
+        *
+        * We have to currently stall out here in atomic_check for outstanding
+        * commits to finish in this case because our IRQ handlers reference
+        * DRM state directly - we can end up disabling interrupts too early
+        * if we don't.
+        *
+        * TODO: Remove this stall and drop DM state private objects.
         */
-       if (lock_and_validation_needed && overall_update_type <= UPDATE_TYPE_FAST)
-               WARN(1, "Global lock should be Set, overall_update_type should be UPDATE_TYPE_MED or UPDATE_TYPE_FULL");
-
-       if (overall_update_type > UPDATE_TYPE_FAST) {
+       if (lock_and_validation_needed) {
                ret = dm_atomic_get_state(state, &dm_state);
                if (ret)
                        goto fail;
@@ -9063,7 +8896,9 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev, struct dm_crtc_state *dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
-               dm_new_crtc_state->update_type = (int)overall_update_type;
+               dm_new_crtc_state->update_type = lock_and_validation_needed ?
+                                                UPDATE_TYPE_FULL :
+                                                UPDATE_TYPE_FAST;
}
/* Must be success */
-- 2.25.1
[Why] DM atomic check was structured in a way that we required old DC state in order to dynamically add and remove planes and streams from the context to build the DC state context for validation.
DRM private objects were used to carry over the last DC state and were added to the context on nearly every commit - regardless of fast or full so we could check whether or not the new state could affect bandwidth.
The problem with this model is that DRM private objects do not implicitly stall out other commits.
So if you have two commits touching separate DRM objects they could run concurrently and potentially execute out of order - leading to a use-after-free.
If we want this to be safe we have two options:

1. Stall out concurrent commits since they touch the same private object
2. Refactor DM to not require old DC state and drop private object usage
[How] This implements approach #2 since it still allows for judder free updates in multi-display scenarios.
I'll list the big changes in order at a high level:
1. Subclass DRM atomic state instead of using DRM private objects.
DC relied on the old state to determine which changes cause bandwidth updates but now we have DM perform similar checks based on DRM state instead - dropping the requirement for old state to exist at all.
This means that we can now build a new DC context from scratch whenever we have something that DM thinks could affect bandwidth.
Whenever we need to rebuild bandwidth we now add all CRTCs and planes to the DRM state in order to get the absolute set of DC streams and DC planes (see the sketch after this change list).
This introduces a stall on other commits, but this stall already exists because of the lock_and_validation logic and it's necessary since updates may move around pipes and require full reprogramming.
2. Drop workarounds to add planes to maintain z-order early in atomic check. This is no longer needed because of the changes for (1).
This also involves fixing up should_plane_reset checks since we can just avoid resetting streams and planes when they haven't actually changed.
3. Rework dm_update_crtc_state and dm_update_plane_state to be single pass instead of two pass.
This is necessary since we no longer have a dc_state in which to add and remove planes during atomic check, and we want to defer its creation to the end of commit_check.
As an added bonus, it also makes the logic a lot simpler to follow.
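A condensed sketch of the context rebuild described under (1), mirroring dm_atomic_state_init_context() from the diff below; local variable declarations and the DRM_WARN diagnostics are trimmed, so treat this as illustrative rather than complete:

        /* Build a fresh DC context: DC validation needs the absolute set of
         * streams and planes, so pull every CRTC and its planes into the DRM
         * state first. This is what introduces the stall on other commits. */
        dm_state->context = dc_create_state(adev->dm.dc);
        if (!dm_state->context)
                return -ENOMEM;

        drm_for_each_crtc(crtc, dev) {
                new_crtc_state = drm_atomic_get_crtc_state(state, crtc);
                if (IS_ERR(new_crtc_state))
                        return PTR_ERR(new_crtc_state);

                ret = drm_atomic_add_affected_planes(state, crtc);
                if (ret)
                        return ret;
        }

        /* Streams for every enabled CRTC... */
        for_each_new_crtc_in_state(state, crtc, new_crtc_state, i) {
                new_dm_crtc_state = to_dm_crtc_state(new_crtc_state);

                if (!new_dm_crtc_state->stream)
                        continue;

                if (dc_add_stream_to_ctx(adev->dm.dc, dm_state->context,
                                         new_dm_crtc_state->stream) != DC_OK)
                        return -EINVAL;
        }

        /* ...then planes, in reverse so DC sees the correct blending order. */
        for_each_oldnew_plane_in_state_reverse(state, plane, old_plane_state,
                                               new_plane_state, i) {
                if (plane->type == DRM_PLANE_TYPE_CURSOR || !new_plane_state->crtc)
                        continue;

                new_crtc_state = drm_atomic_get_new_crtc_state(state,
                                                               new_plane_state->crtc);
                new_dm_crtc_state = to_dm_crtc_state(new_crtc_state);
                new_dm_plane_state = to_dm_plane_state(new_plane_state);

                /* Skip planes on disabled streams; a missing DC plane state is a bug. */
                if (!new_dm_crtc_state->stream)
                        continue;
                if (!new_dm_plane_state->dc_state)
                        return -EINVAL;

                if (!dc_add_plane_to_context(adev->dm.dc,
                                             new_dm_crtc_state->stream,
                                             new_dm_plane_state->dc_state,
                                             dm_state->context))
                        return -EINVAL;
        }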
Cc: Bhawanpreet Lakha Bhawanpreet.Lakha@amd.com Cc: Harry Wentland harry.wentland@amd.com Cc: Leo Li sunpeng.li@amd.com Cc: Daniel Vetter daniel@ffwll.ch Signed-off-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com --- .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 720 +++++++----------- .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h | 11 +- 2 files changed, 280 insertions(+), 451 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index 59829ec81694..97a7dfc620e8 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -1839,7 +1839,6 @@ static int dm_resume(void *handle) struct drm_plane *plane; struct drm_plane_state *new_plane_state; struct dm_plane_state *dm_new_plane_state; - struct dm_atomic_state *dm_state = to_dm_atomic_state(dm->atomic_obj.state); enum dc_connection_type new_connection_type = dc_connection_none; struct dc_state *dc_state; int i, r, j; @@ -1879,11 +1878,6 @@ static int dm_resume(void *handle)
return 0; } - /* Recreate dc_state - DC invalidates it when setting power state to S3. */ - dc_release_state(dm_state->context); - dm_state->context = dc_create_state(dm->dc); - /* TODO: Remove dc_state->dccg, use dc->dccg directly. */ - dc_resource_state_construct(dm->dc, dm_state->context);
/* Before powering on DC we need to re-initialize DMUB. */ r = dm_dmub_hw_init(adev); @@ -2019,11 +2013,51 @@ const struct amdgpu_ip_block_version dm_ip_block = * *WIP* */
+struct drm_atomic_state *dm_atomic_state_alloc(struct drm_device *dev) +{ + struct dm_atomic_state *dm_state; + + dm_state = kzalloc(sizeof(*dm_state), GFP_KERNEL); + + if (!dm_state) + return NULL; + + if (drm_atomic_state_init(dev, &dm_state->base) < 0) { + kfree(dm_state); + return NULL; + } + + return &dm_state->base; +} + +void dm_atomic_state_free(struct drm_atomic_state *state) +{ + struct dm_atomic_state *dm_state = to_dm_atomic_state(state); + + if (dm_state->context) { + dc_release_state(dm_state->context); + dm_state->context = NULL; + } + + drm_atomic_state_default_release(state); + kfree(state); +} + +void dm_atomic_state_clear(struct drm_atomic_state *state) +{ + struct dm_atomic_state *dm_state = to_dm_atomic_state(state); + + drm_atomic_state_default_clear(&dm_state->base); +} + static const struct drm_mode_config_funcs amdgpu_dm_mode_funcs = { .fb_create = amdgpu_display_user_framebuffer_create, .output_poll_changed = drm_fb_helper_output_poll_changed, .atomic_check = amdgpu_dm_atomic_check, .atomic_commit = amdgpu_dm_atomic_commit, + .atomic_state_alloc = dm_atomic_state_alloc, + .atomic_state_free = dm_atomic_state_free, + .atomic_state_clear = dm_atomic_state_clear, };
static struct drm_mode_config_helper_funcs amdgpu_dm_mode_config_helperfuncs = { @@ -2782,108 +2816,6 @@ static int dcn10_register_irq_handlers(struct amdgpu_device *adev) } #endif
-/* - * Acquires the lock for the atomic state object and returns - * the new atomic state. - * - * This should only be called during atomic check. - */ -static int dm_atomic_get_state(struct drm_atomic_state *state, - struct dm_atomic_state **dm_state) -{ - struct drm_device *dev = state->dev; - struct amdgpu_device *adev = dev->dev_private; - struct amdgpu_display_manager *dm = &adev->dm; - struct drm_private_state *priv_state; - - if (*dm_state) - return 0; - - priv_state = drm_atomic_get_private_obj_state(state, &dm->atomic_obj); - if (IS_ERR(priv_state)) - return PTR_ERR(priv_state); - - *dm_state = to_dm_atomic_state(priv_state); - - return 0; -} - -static struct dm_atomic_state * -dm_atomic_get_new_state(struct drm_atomic_state *state) -{ - struct drm_device *dev = state->dev; - struct amdgpu_device *adev = dev->dev_private; - struct amdgpu_display_manager *dm = &adev->dm; - struct drm_private_obj *obj; - struct drm_private_state *new_obj_state; - int i; - - for_each_new_private_obj_in_state(state, obj, new_obj_state, i) { - if (obj->funcs == dm->atomic_obj.funcs) - return to_dm_atomic_state(new_obj_state); - } - - return NULL; -} - -static struct dm_atomic_state * -dm_atomic_get_old_state(struct drm_atomic_state *state) -{ - struct drm_device *dev = state->dev; - struct amdgpu_device *adev = dev->dev_private; - struct amdgpu_display_manager *dm = &adev->dm; - struct drm_private_obj *obj; - struct drm_private_state *old_obj_state; - int i; - - for_each_old_private_obj_in_state(state, obj, old_obj_state, i) { - if (obj->funcs == dm->atomic_obj.funcs) - return to_dm_atomic_state(old_obj_state); - } - - return NULL; -} - -static struct drm_private_state * -dm_atomic_duplicate_state(struct drm_private_obj *obj) -{ - struct dm_atomic_state *old_state, *new_state; - - new_state = kzalloc(sizeof(*new_state), GFP_KERNEL); - if (!new_state) - return NULL; - - __drm_atomic_helper_private_obj_duplicate_state(obj, &new_state->base); - - old_state = to_dm_atomic_state(obj->state); - - if (old_state && old_state->context) - new_state->context = dc_copy_state(old_state->context); - - if (!new_state->context) { - kfree(new_state); - return NULL; - } - - return &new_state->base; -} - -static void dm_atomic_destroy_state(struct drm_private_obj *obj, - struct drm_private_state *state) -{ - struct dm_atomic_state *dm_state = to_dm_atomic_state(state); - - if (dm_state && dm_state->context) - dc_release_state(dm_state->context); - - kfree(dm_state); -} - -static struct drm_private_state_funcs dm_atomic_state_funcs = { - .atomic_duplicate_state = dm_atomic_duplicate_state, - .atomic_destroy_state = dm_atomic_destroy_state, -}; - static int amdgpu_dm_mode_config_init(struct amdgpu_device *adev) { struct dm_atomic_state *state; @@ -2916,11 +2848,6 @@ static int amdgpu_dm_mode_config_init(struct amdgpu_device *adev)
dc_resource_state_copy_construct_current(adev->dm.dc, state->context);
- drm_atomic_private_obj_init(adev->ddev, - &adev->dm.atomic_obj, - &state->base, - &dm_atomic_state_funcs); - r = amdgpu_display_modeset_create_props(adev); if (r) return r; @@ -3360,7 +3287,6 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev) static void amdgpu_dm_destroy_drm_device(struct amdgpu_display_manager *dm) { drm_mode_config_cleanup(dm->ddev); - drm_atomic_private_obj_fini(&dm->atomic_obj); return; }
@@ -7533,7 +7459,7 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state) struct drm_device *dev = state->dev; struct amdgpu_device *adev = dev->dev_private; struct amdgpu_display_manager *dm = &adev->dm; - struct dm_atomic_state *dm_state; + struct dm_atomic_state *dm_state = to_dm_atomic_state(state); struct dc_state *dc_state = NULL, *dc_state_temp = NULL; uint32_t i, j; struct drm_crtc *crtc; @@ -7547,8 +7473,7 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
drm_atomic_helper_update_legacy_modeset_state(dev, state);
- dm_state = dm_atomic_get_new_state(state); - if (dm_state && dm_state->context) { + if (dm_state->context) { dc_state = dm_state->context; } else { /* No state changes, retain current state. */ @@ -8052,10 +7977,8 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm, struct drm_crtc *crtc, struct drm_crtc_state *old_crtc_state, struct drm_crtc_state *new_crtc_state, - bool enable, bool *lock_and_validation_needed) { - struct dm_atomic_state *dm_state = NULL; struct dm_crtc_state *dm_old_crtc_state, *dm_new_crtc_state; struct dc_stream_state *new_stream; int ret = 0; @@ -8077,7 +8000,7 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm, aconnector = amdgpu_dm_find_first_crtc_matching_connector(state, crtc);
/* TODO This hack should go away */ - if (aconnector && enable) { + if (aconnector) { /* Make sure fake sink is created in plug-in scenario */ drm_new_conn_state = drm_atomic_get_new_connector_state(state, &aconnector->base); @@ -8155,36 +8078,20 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm, new_crtc_state->active_changed, new_crtc_state->connectors_changed);
- /* Remove stream for any changed/disabled CRTC */ - if (!enable) { - - if (!dm_old_crtc_state->stream) - goto skip_modeset; - - ret = dm_atomic_get_state(state, &dm_state); - if (ret) - goto fail; - - DRM_DEBUG_DRIVER("Disabling DRM crtc: %d\n", - crtc->base.id); + /* Remove stream for changed CRTC - can't reuse it for validation. */ + if (dm_new_crtc_state->stream) { + DRM_DEBUG_DRIVER("Disabling DRM crtc: %d\n", crtc->base.id);
- /* i.e. reset mode */ - if (dc_remove_stream_from_ctx( - dm->dc, - dm_state->context, - dm_old_crtc_state->stream) != DC_OK) { - ret = -EINVAL; - goto fail; - } - - dc_stream_release(dm_old_crtc_state->stream); + dc_stream_release(dm_new_crtc_state->stream); dm_new_crtc_state->stream = NULL;
reset_freesync_config_for_crtc(dm_new_crtc_state);
*lock_and_validation_needed = true; + }
- } else {/* Add stream for any updated/enabled CRTC */ + /* Add stream for any updated/enabled CRTC - active implies enabled. */ + if (new_crtc_state->active) { /* * Quick fix to prevent NULL pointer on new_stream when * added MST connectors not found in existing crtc_state in the chained mode @@ -8193,35 +8100,13 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm, if (!aconnector || (!aconnector->dc_sink && aconnector->mst_port)) goto skip_modeset;
- if (modereset_required(new_crtc_state)) - goto skip_modeset; - - if (modeset_required(new_crtc_state, new_stream, - dm_old_crtc_state->stream)) { - - WARN_ON(dm_new_crtc_state->stream); - - ret = dm_atomic_get_state(state, &dm_state); - if (ret) - goto fail; - - dm_new_crtc_state->stream = new_stream; + WARN_ON(dm_new_crtc_state->stream); + dm_new_crtc_state->stream = new_stream; + dc_stream_retain(new_stream);
- dc_stream_retain(new_stream); + DRM_DEBUG_DRIVER("Enabling DRM crtc: %d\n", crtc->base.id);
- DRM_DEBUG_DRIVER("Enabling DRM crtc: %d\n", - crtc->base.id); - - if (dc_add_stream_to_ctx( - dm->dc, - dm_state->context, - dm_new_crtc_state->stream) != DC_OK) { - ret = -EINVAL; - goto fail; - } - - *lock_and_validation_needed = true; - } + *lock_and_validation_needed = true; }
skip_modeset: @@ -8233,7 +8118,7 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm, * We want to do dc stream updates that do not require a * full modeset below. */ - if (!(enable && aconnector && new_crtc_state->active)) + if (!(aconnector && new_crtc_state->enable && new_crtc_state->active)) return 0; /* * Given above conditions, the dc state cannot be NULL because: @@ -8281,10 +8166,12 @@ static bool should_reset_plane(struct drm_atomic_state *state, struct drm_plane_state *old_plane_state, struct drm_plane_state *new_plane_state) { - struct drm_plane *other; - struct drm_plane_state *old_other_state, *new_other_state; struct drm_crtc_state *new_crtc_state; - int i; + struct dm_plane_state *old_dm_plane_state, *new_dm_plane_state; + + /* Cursor planes don't affect bandwidth. */ + if (plane->type == DRM_PLANE_TYPE_CURSOR) + return false;
/* * TODO: Remove this hack once the checks below are sufficient @@ -8312,71 +8199,50 @@ static bool should_reset_plane(struct drm_atomic_state *state, if (new_crtc_state->color_mgmt_changed) return true;
+ /* Plane scaling can change with a modeset, so reset. */ if (drm_atomic_crtc_needs_modeset(new_crtc_state)) return true;
- /* - * If there are any new primary or overlay planes being added or - * removed then the z-order can potentially change. To ensure - * correct z-order and pipe acquisition the current DC architecture - * requires us to remove and recreate all existing planes. - * - * TODO: Come up with a more elegant solution for this. - */ - for_each_oldnew_plane_in_state(state, other, old_other_state, new_other_state, i) { - struct dm_plane_state *old_dm_plane_state, *new_dm_plane_state; - - if (other->type == DRM_PLANE_TYPE_CURSOR) - continue; - - if (old_other_state->crtc != new_plane_state->crtc && - new_other_state->crtc != new_plane_state->crtc) - continue; - - if (old_other_state->crtc != new_other_state->crtc) - return true; - - /* Src/dst size and scaling updates. */ - if (old_other_state->src_w != new_other_state->src_w || - old_other_state->src_h != new_other_state->src_h || - old_other_state->crtc_w != new_other_state->crtc_w || - old_other_state->crtc_h != new_other_state->crtc_h) - return true; + /* Src/dst size and scaling updates. */ + if (old_plane_state->src_w != new_plane_state->src_w || + old_plane_state->src_h != new_plane_state->src_h || + old_plane_state->crtc_w != new_plane_state->crtc_w || + old_plane_state->crtc_h != new_plane_state->crtc_h) + return true;
- /* Rotation / mirroring updates. */ - if (old_other_state->rotation != new_other_state->rotation) - return true; + /* Rotation / mirroring updates. */ + if (old_plane_state->rotation != new_plane_state->rotation) + return true;
- /* Blending updates. */ - if (old_other_state->pixel_blend_mode != - new_other_state->pixel_blend_mode) - return true; + /* Blending updates. */ + if (old_plane_state->pixel_blend_mode != + new_plane_state->pixel_blend_mode) + return true;
- /* Alpha updates. */ - if (old_other_state->alpha != new_other_state->alpha) - return true; + /* Alpha updates. */ + if (old_plane_state->alpha != new_plane_state->alpha) + return true;
- /* Colorspace changes. */ - if (old_other_state->color_range != new_other_state->color_range || - old_other_state->color_encoding != new_other_state->color_encoding) - return true; + /* Colorspace changes. */ + if (old_plane_state->color_range != new_plane_state->color_range || + old_plane_state->color_encoding != new_plane_state->color_encoding) + return true;
- /* Framebuffer checks fall at the end. */ - if (!old_other_state->fb || !new_other_state->fb) - continue; + /* Framebuffer checks fall at the end. */ + if (!old_plane_state->fb || !new_plane_state->fb) + return false;
- /* Pixel format changes can require bandwidth updates. */ - if (old_other_state->fb->format != new_other_state->fb->format) - return true; + /* Pixel format changes can require bandwidth updates. */ + if (old_plane_state->fb->format != new_plane_state->fb->format) + return true;
- old_dm_plane_state = to_dm_plane_state(old_other_state); - new_dm_plane_state = to_dm_plane_state(new_other_state); + old_dm_plane_state = to_dm_plane_state(old_plane_state); + new_dm_plane_state = to_dm_plane_state(new_plane_state);
- /* Tiling and DCC changes also require bandwidth updates. */ - if (old_dm_plane_state->tiling_flags != - new_dm_plane_state->tiling_flags) - return true; - } + /* Tiling and DCC changes also require bandwidth updates. */ + if (old_dm_plane_state->tiling_flags != + new_dm_plane_state->tiling_flags) + return true;
return false; } @@ -8386,15 +8252,12 @@ static int dm_update_plane_state(struct dc *dc, struct drm_plane *plane, struct drm_plane_state *old_plane_state, struct drm_plane_state *new_plane_state, - bool enable, bool *lock_and_validation_needed) { - - struct dm_atomic_state *dm_state = NULL; struct drm_crtc *new_plane_crtc, *old_plane_crtc; - struct drm_crtc_state *old_crtc_state, *new_crtc_state; - struct dm_crtc_state *dm_new_crtc_state, *dm_old_crtc_state; - struct dm_plane_state *dm_new_plane_state, *dm_old_plane_state; + struct drm_crtc_state *new_crtc_state; + struct dm_crtc_state *dm_new_crtc_state; + struct dm_plane_state *dm_new_plane_state; struct amdgpu_crtc *new_acrtc; bool needs_reset; int ret = 0; @@ -8403,12 +8266,12 @@ static int dm_update_plane_state(struct dc *dc, new_plane_crtc = new_plane_state->crtc; old_plane_crtc = old_plane_state->crtc; dm_new_plane_state = to_dm_plane_state(new_plane_state); - dm_old_plane_state = to_dm_plane_state(old_plane_state);
/*TODO Implement better atomic check for cursor plane */ if (plane->type == DRM_PLANE_TYPE_CURSOR) { - if (!enable || !new_plane_crtc || - drm_atomic_plane_disabling(plane->state, new_plane_state)) + /* Cursor disabled is always OK. */ + if (!new_plane_crtc || + drm_atomic_plane_disabling(plane->state, new_plane_state)) return 0;
new_acrtc = to_amdgpu_crtc(new_plane_crtc); @@ -8423,123 +8286,72 @@ static int dm_update_plane_state(struct dc *dc, return 0; }
+ /* Check if the plane requires a reset for bandwidth validation. */ needs_reset = should_reset_plane(state, plane, old_plane_state, new_plane_state);
- /* Remove any changed/removed planes */ - if (!enable) { - if (!needs_reset) - return 0; - - if (!old_plane_crtc) - return 0; - - old_crtc_state = drm_atomic_get_old_crtc_state( - state, old_plane_crtc); - dm_old_crtc_state = to_dm_crtc_state(old_crtc_state); - - if (!dm_old_crtc_state->stream) - return 0; + /* Exit early if the plane hasn't been trivially modified. */ + if (!needs_reset) + return 0;
+ /** + * Remove exisiting plane, if any - we can't reuse it for validation + * because we'd be modifying the current state applied to HW. + */ + if (dm_new_plane_state->dc_state) { DRM_DEBUG_ATOMIC("Disabling DRM plane: %d on DRM crtc %d\n", - plane->base.id, old_plane_crtc->base.id); - - ret = dm_atomic_get_state(state, &dm_state); - if (ret) - return ret; + plane->base.id, old_plane_crtc->base.id);
- if (!dc_remove_plane_from_context( - dc, - dm_old_crtc_state->stream, - dm_old_plane_state->dc_state, - dm_state->context)) { - - ret = EINVAL; - return ret; - } - - - dc_plane_state_release(dm_old_plane_state->dc_state); + dc_plane_state_release(dm_new_plane_state->dc_state); dm_new_plane_state->dc_state = NULL;
*lock_and_validation_needed = true; + }
- } else { /* Add new planes */ - struct dc_plane_state *dc_new_plane_state; - - if (drm_atomic_plane_disabling(plane->state, new_plane_state)) - return 0; - - if (!new_plane_crtc) - return 0; - - new_crtc_state = drm_atomic_get_new_crtc_state(state, new_plane_crtc); - dm_new_crtc_state = to_dm_crtc_state(new_crtc_state); - - if (!dm_new_crtc_state->stream) - return 0; - - if (!needs_reset) - return 0; - - ret = dm_plane_helper_check_state(new_plane_state, new_crtc_state); - if (ret) - return ret; - - WARN_ON(dm_new_plane_state->dc_state); - - dc_new_plane_state = dc_create_plane_state(dc); - if (!dc_new_plane_state) - return -ENOMEM; - - DRM_DEBUG_DRIVER("Enabling DRM plane: %d on DRM crtc %d\n", - plane->base.id, new_plane_crtc->base.id); + /** + * If the plane is disabling exit early. Also serves as a guarantee + * that we have a framebuffer below if we do have a CRTC. + */ + if (drm_atomic_plane_disabling(plane->state, new_plane_state)) + return 0;
- ret = fill_dc_plane_attributes( - new_plane_crtc->dev->dev_private, - dc_new_plane_state, - new_plane_state, - new_crtc_state); - if (ret) { - dc_plane_state_release(dc_new_plane_state); - return ret; - } + /* If we don't have a CRTC then the plane is disabled. */ + if (!new_plane_crtc) + return 0;
- ret = dm_atomic_get_state(state, &dm_state); - if (ret) { - dc_plane_state_release(dc_new_plane_state); - return ret; - } + new_crtc_state = drm_atomic_get_new_crtc_state(state, new_plane_crtc); + dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
- /* - * Any atomic check errors that occur after this will - * not need a release. The plane state will be attached - * to the stream, and therefore part of the atomic - * state. It'll be released when the atomic state is - * cleaned. - */ - if (!dc_add_plane_to_context( - dc, - dm_new_crtc_state->stream, - dc_new_plane_state, - dm_state->context)) { + /* Don't enable the plane if there's no stream for output. */ + if (!dm_new_crtc_state->stream) + return 0;
- dc_plane_state_release(dc_new_plane_state); - return -EINVAL; - } + /** + * Freeing the plane will take care of freeing the dc_state + * on failure, so we don't need to explicitly release below. + */ + dm_new_plane_state->dc_state = dc_create_plane_state(dc); + if (!dm_new_plane_state->dc_state) + return -ENOMEM;
- dm_new_plane_state->dc_state = dc_new_plane_state; + DRM_DEBUG_DRIVER("Enabling DRM plane: %d on DRM crtc %d\n", + plane->base.id, new_plane_crtc->base.id);
- /* Tell DC to do a full surface update every time there - * is a plane change. Inefficient, but works for now. - */ - dm_new_plane_state->dc_state->update_flags.bits.full_update = 1; + ret = fill_dc_plane_attributes(new_plane_crtc->dev->dev_private, + dm_new_plane_state->dc_state, + new_plane_state, new_crtc_state); + if (ret) + return ret;
- *lock_and_validation_needed = true; - } + /** + * Tell DC to do a full surface update every time there + * is a plane change. Inefficient, but works for now. + */ + dm_new_plane_state->dc_state->update_flags.bits.full_update = 1;
+ *lock_and_validation_needed = true;
- return ret; + return 0; }
#if defined(CONFIG_DRM_AMD_DC_DCN) @@ -8567,6 +8379,113 @@ static int add_affected_mst_dsc_crtcs(struct drm_atomic_state *state, struct drm } #endif
+static int dm_atomic_state_init_context(struct drm_device *dev, + struct drm_atomic_state *state) +{ + struct dm_atomic_state *dm_state = to_dm_atomic_state(state); + struct amdgpu_device *adev = dev->dev_private; + struct drm_crtc *crtc; + struct drm_crtc_state *new_crtc_state; + struct dm_crtc_state *new_dm_crtc_state; + struct drm_plane *plane; + struct drm_plane_state *new_plane_state, *old_plane_state; + struct dm_plane_state *new_dm_plane_state; + int ret, i; + + dm_state->context = dc_create_state(adev->dm.dc); + if (!dm_state->context) + return -ENOMEM; + + /** + * DC validation requires an absolute set of streams and planes to work + * with so add all planes and CRTCs to the state to make this work. + * Pipe allocation can change so there's no easy way to work around + * this constraint. + * + * Unfortunately this also means that whenever userspace requests a + * change that only needs to modify one CRTC's planes we still have to + * stall out fast updates affecting other CRTCs - introducing judder + * in multi-monitor scenarios. + * + * Userspace should avoid frequent updates to properties that can + * require bandwidth changes. + */ + drm_for_each_crtc(crtc, dev) { + new_crtc_state = drm_atomic_get_crtc_state(state, crtc); + if (IS_ERR(new_crtc_state)) + return PTR_ERR(new_crtc_state); + + /** + * Cursor planes may not strictly be necessary here + * but it's probably safer to add them. + */ + ret = drm_atomic_add_affected_planes(state, crtc); + if (ret) + return ret; + } + + for_each_new_crtc_in_state(state, crtc, new_crtc_state, i) { + new_dm_crtc_state = to_dm_crtc_state(new_crtc_state); + + if (!new_dm_crtc_state->stream) + continue; + + if (dc_add_stream_to_ctx(adev->dm.dc, dm_state->context, + new_dm_crtc_state->stream) != DC_OK) + return -EINVAL; + } + + /** + * Planes are added in reverse order to ensure correct blending order + * in DC - planes with higher priority go first, and the cursor and + * MPO planes are at the very end of the list. + */ + for_each_oldnew_plane_in_state_reverse(state, plane, old_plane_state, new_plane_state, i) { + /* Cursors aren't real hardware planes. */ + if (plane->type == DRM_PLANE_TYPE_CURSOR) + continue; + + if (!new_plane_state->crtc) + continue; + + new_crtc_state = drm_atomic_get_new_crtc_state( + state, new_plane_state->crtc); + + if (!new_crtc_state) { + DRM_WARN("No CRTC state found: plane=%d crtc=%d\n", + plane->base.id, + new_plane_state->crtc->base.id); + return -EINVAL; + } + + new_dm_crtc_state = to_dm_crtc_state(new_crtc_state); + + /* Skip planes for disabled streams. */ + if (!new_dm_crtc_state->stream) + continue; + + new_dm_plane_state = to_dm_plane_state(new_plane_state); + + if (!new_dm_plane_state->dc_state) { + DRM_WARN("No DC state found: plane=%d crtc=%d\n", + plane->base.id, + new_plane_state->crtc->base.id); + return -EINVAL; + } + + if (!dc_add_plane_to_context( + adev->dm.dc, new_dm_crtc_state->stream, + new_dm_plane_state->dc_state, dm_state->context)) { + DRM_DEBUG_KMS( + "Couldn't add plane to context: plane=%d\n", + plane->base.id); + return -EINVAL; + } + } + + return 0; +} + /** * amdgpu_dm_atomic_check() - Atomic check implementation for AMDgpu DM. 
* @dev: The DRM device @@ -8595,7 +8514,7 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev, struct drm_atomic_state *state) { struct amdgpu_device *adev = dev->dev_private; - struct dm_atomic_state *dm_state = NULL; + struct dm_atomic_state *dm_state = to_dm_atomic_state(state); struct dc *dc = adev->dm.dc; struct drm_connector *connector; struct drm_connector_state *old_con_state, *new_con_state; @@ -8607,6 +8526,11 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev, int ret, i; bool lock_and_validation_needed = false;
+ /** + * Check for modesets on CRTCs in the new state. DRM internally checks + * that the mode has actually changed for us as well in here, so we can + * avoid modesets. + */ ret = drm_atomic_helper_check_modeset(dev, state); if (ret) goto fail; @@ -8634,6 +8558,18 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev, new_crtc_state->connectors_changed = true; }
+ /** + * Add all required objects for performing the commit and stalling out + * other commits that may be touching the same resources. + */ + + /** + * Pass one: Add all affected CRTCs on a MST DSC topology that has a + * CRTC undergoing a modeset and mark mode_changed = true for each one. + * + * Optimization: DSC can only be supported on DCN2 onwards, so we can + * skip on earlier ASIC. + */ #if defined(CONFIG_DRM_AMD_DC_DCN) if (adev->asic_type >= CHIP_NAVI10) { for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) { @@ -8645,6 +8581,8 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev, } } #endif + + /* Pass two: Add connectors and planes for CRTCs as needed. */ for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) { if (!drm_atomic_crtc_needs_modeset(new_crtc_state) && !new_crtc_state->color_mgmt_changed && @@ -8663,42 +8601,6 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev, goto fail; }
- /* - * Add all primary and overlay planes on the CRTC to the state - * whenever a plane is enabled to maintain correct z-ordering - * and to enable fast surface updates. - */ - drm_for_each_crtc(crtc, dev) { - bool modified = false; - - for_each_oldnew_plane_in_state(state, plane, old_plane_state, new_plane_state, i) { - if (plane->type == DRM_PLANE_TYPE_CURSOR) - continue; - - if (new_plane_state->crtc == crtc || - old_plane_state->crtc == crtc) { - modified = true; - break; - } - } - - if (!modified) - continue; - - drm_for_each_plane_mask(plane, state->dev, crtc->state->plane_mask) { - if (plane->type == DRM_PLANE_TYPE_CURSOR) - continue; - - new_plane_state = - drm_atomic_get_plane_state(state, plane); - - if (IS_ERR(new_plane_state)) { - ret = PTR_ERR(new_plane_state); - goto fail; - } - } - } - /* Prepass for updating tiling flags on new planes. */ for_each_new_plane_in_state(state, plane, new_plane_state, i) { struct dm_plane_state *new_dm_plane_state = to_dm_plane_state(new_plane_state); @@ -8710,45 +8612,21 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev, goto fail; }
- /* Remove exiting planes if they are modified */ - for_each_oldnew_plane_in_state_reverse(state, plane, old_plane_state, new_plane_state, i) { - ret = dm_update_plane_state(dc, state, plane, - old_plane_state, - new_plane_state, - false, - &lock_and_validation_needed); - if (ret) - goto fail; - } - - /* Disable all crtcs which require disable */ + /* Update modified CRTCs. */ for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) { ret = dm_update_crtc_state(&adev->dm, state, crtc, old_crtc_state, new_crtc_state, - false, &lock_and_validation_needed); if (ret) goto fail; }
- /* Enable all crtcs which require enable */ - for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) { - ret = dm_update_crtc_state(&adev->dm, state, crtc, - old_crtc_state, - new_crtc_state, - true, - &lock_and_validation_needed); - if (ret) - goto fail; - } - - /* Add new/modified planes */ + /* Update modified planes. */ for_each_oldnew_plane_in_state_reverse(state, plane, old_plane_state, new_plane_state, i) { ret = dm_update_plane_state(dc, state, plane, old_plane_state, new_plane_state, - true, &lock_and_validation_needed); if (ret) goto fail; @@ -8812,10 +8690,11 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev, * DRM state directly - we can end up disabling interrupts too early * if we don't. * - * TODO: Remove this stall and drop DM state private objects. + * TODO: Remove this stall. */ if (lock_and_validation_needed) { - ret = dm_atomic_get_state(state, &dm_state); + /* Create a new DC context to validate. */ + ret = dm_atomic_state_init_context(dev, state); if (ret) goto fail;
@@ -8848,47 +8727,6 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev, ret = -EINVAL; goto fail; } - } else { - /* - * The commit is a fast update. Fast updates shouldn't change - * the DC context, affect global validation, and can have their - * commit work done in parallel with other commits not touching - * the same resource. If we have a new DC context as part of - * the DM atomic state from validation we need to free it and - * retain the existing one instead. - * - * Furthermore, since the DM atomic state only contains the DC - * context and can safely be annulled, we can free the state - * and clear the associated private object now to free - * some memory and avoid a possible use-after-free later. - */ - - for (i = 0; i < state->num_private_objs; i++) { - struct drm_private_obj *obj = state->private_objs[i].ptr; - - if (obj->funcs == adev->dm.atomic_obj.funcs) { - int j = state->num_private_objs-1; - - dm_atomic_destroy_state(obj, - state->private_objs[i].state); - - /* If i is not at the end of the array then the - * last element needs to be moved to where i was - * before the array can safely be truncated. - */ - if (i != j) - state->private_objs[i] = - state->private_objs[j]; - - state->private_objs[j].ptr = NULL; - state->private_objs[j].state = NULL; - state->private_objs[j].old_state = NULL; - state->private_objs[j].new_state = NULL; - - state->num_private_objs = j; - break; - } - } }
/* Store the overall update type for use later in atomic check. */ diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h index ad025f5cfd3e..1c3aa7cb83b9 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h @@ -217,15 +217,6 @@ struct amdgpu_display_manager { struct drm_device *ddev; u16 display_indexes_num;
- /** - * @atomic_obj: - * - * In combination with &dm_atomic_state it helps manage - * global atomic state that doesn't map cleanly into existing - * drm resources, like &dc_context. - */ - struct drm_private_obj atomic_obj; - /** * @dc_lock: * @@ -440,7 +431,7 @@ struct dm_crtc_state { #define to_dm_crtc_state(x) container_of(x, struct dm_crtc_state, base)
struct dm_atomic_state { - struct drm_private_state base; + struct drm_atomic_state base;
struct dc_state *context; };
Hi,
I have some minor inline comments, but everything looks fine when I tested it on Raven; feel free to add
Tested-by: Rodrigo Siqueira Rodrigo.Siqueira@amd.com
in the whole series.
On 07/30, Nicholas Kazlauskas wrote:
[Why] DM atomic check was structured in a way that we required old DC state in order to dynamically add and remove planes and streams from the context to build the DC state context for validation.
DRM private objects were used to carry over the last DC state and were added to the context on nearly every commit - regardless of fast or full so we could check whether or not the new state could affect bandwidth.
The problem with this model is that DRM private objects do not implicitly stall out other commits.
So if you have two commits touching separate DRM objects they could run concurrently and potentially execute out of order - leading to a use-after-free.
If we want this to be safe we have two options:
- Stall out concurrent commits since they touch the same private object
- Refactor DM to not require old DC state and drop private object usage
[How] This implements approach #2 since it still allows for judder free updates in multi-display scenarios.
I'll list the big changes in order at a high level:
- Subclass DRM atomic state instead of using DRM private objects.
DC relied on the old state to determine which changes cause bandwidth updates but now we have DM perform similar checks based on DRM state instead - dropping the requirement for old state to exist at all.
This means that we can now build a new DC context from scratch whenever we have something that DM thinks could affect bandwidth.
Whenever we need to rebuild bandwidth we now add all CRTCs and planes to the DRM state in order to get the absolute set of DC streams and DC planes.
This introduces a stall on other commits, but this stall already exists because of the lock_and_validation logic and it's necessary since updates may move around pipes and require full reprogramming.
- Drop workarounds to add planes to maintain z-order early in atomic
check. This is no longer needed because of the changes for (1).
This also involves fixing up should_plane_reset checks since we can just avoid resetting streams and planes when they haven't actually changed.
- Rework dm_update_crtc_state and dm_update_plane_state to be single
pass instead of two pass.
This is necessary since we no longer have a dc_state in which to add and remove planes during atomic check, and we want to defer its creation to the end of commit_check.
As an added bonus, it also makes the logic a lot simpler to follow.
Cc: Bhawanpreet Lakha Bhawanpreet.Lakha@amd.com Cc: Harry Wentland harry.wentland@amd.com Cc: Leo Li sunpeng.li@amd.com Cc: Daniel Vetter daniel@ffwll.ch Signed-off-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com
.../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 720 +++++++----------- .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h | 11 +- 2 files changed, 280 insertions(+), 451 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index 59829ec81694..97a7dfc620e8 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -1839,7 +1839,6 @@ static int dm_resume(void *handle) struct drm_plane *plane; struct drm_plane_state *new_plane_state; struct dm_plane_state *dm_new_plane_state;
- struct dm_atomic_state *dm_state = to_dm_atomic_state(dm->atomic_obj.state); enum dc_connection_type new_connection_type = dc_connection_none; struct dc_state *dc_state; int i, r, j;
@@ -1879,11 +1878,6 @@ static int dm_resume(void *handle)
return 0;
}
/* Recreate dc_state - DC invalidates it when setting power state to S3. */
dc_release_state(dm_state->context);
dm_state->context = dc_create_state(dm->dc);
/* TODO: Remove dc_state->dccg, use dc->dccg directly. */
dc_resource_state_construct(dm->dc, dm_state->context);
/* Before powering on DC we need to re-initialize DMUB. */ r = dm_dmub_hw_init(adev);
@@ -2019,11 +2013,51 @@ const struct amdgpu_ip_block_version dm_ip_block =
- *WIP*
*/
+struct drm_atomic_state *dm_atomic_state_alloc(struct drm_device *dev) +{
- struct dm_atomic_state *dm_state;
- dm_state = kzalloc(sizeof(*dm_state), GFP_KERNEL);
How about using GFP_ATOMIC here?
- if (!dm_state)
return NULL;
- if (drm_atomic_state_init(dev, &dm_state->base) < 0) {
kfree(dm_state);
return NULL;
- }
- return &dm_state->base;
+}
+void dm_atomic_state_free(struct drm_atomic_state *state) +{
- struct dm_atomic_state *dm_state = to_dm_atomic_state(state);
- if (dm_state->context) {
dc_release_state(dm_state->context);
dm_state->context = NULL;
- }
- drm_atomic_state_default_release(state);
- kfree(state);
+}
+void dm_atomic_state_clear(struct drm_atomic_state *state) +{
- struct dm_atomic_state *dm_state = to_dm_atomic_state(state);
- drm_atomic_state_default_clear(&dm_state->base);
+}
static const struct drm_mode_config_funcs amdgpu_dm_mode_funcs = { .fb_create = amdgpu_display_user_framebuffer_create, .output_poll_changed = drm_fb_helper_output_poll_changed, .atomic_check = amdgpu_dm_atomic_check, .atomic_commit = amdgpu_dm_atomic_commit,
- .atomic_state_alloc = dm_atomic_state_alloc,
Nit: the other hooks above use the amdgpu_dm prefix; it may be a good idea to keep that pattern for the new functions.
- .atomic_state_free = dm_atomic_state_free,
- .atomic_state_clear = dm_atomic_state_clear,
It looks like the scope of the functions above is restricted to this file, and they are only used in the data struct above. How about making all of the new functions static?
};
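For reference, a minimal sketch of what those two suggestions look like combined (hypothetical names; the function bodies stay exactly as in the patch):

/* Hypothetical rename following the review comments above: static scope plus
 * the amdgpu_dm prefix used by the neighbouring hooks. Bodies unchanged. */
static struct drm_atomic_state *
amdgpu_dm_atomic_state_alloc(struct drm_device *dev);
static void amdgpu_dm_atomic_state_free(struct drm_atomic_state *state);
static void amdgpu_dm_atomic_state_clear(struct drm_atomic_state *state);

static const struct drm_mode_config_funcs amdgpu_dm_mode_funcs = {
        .fb_create = amdgpu_display_user_framebuffer_create,
        .output_poll_changed = drm_fb_helper_output_poll_changed,
        .atomic_check = amdgpu_dm_atomic_check,
        .atomic_commit = amdgpu_dm_atomic_commit,
        .atomic_state_alloc = amdgpu_dm_atomic_state_alloc,
        .atomic_state_free = amdgpu_dm_atomic_state_free,
        .atomic_state_clear = amdgpu_dm_atomic_state_clear,
};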
static struct drm_mode_config_helper_funcs amdgpu_dm_mode_config_helperfuncs = { @@ -2782,108 +2816,6 @@ static int dcn10_register_irq_handlers(struct amdgpu_device *adev) } #endif
-/*
- Acquires the lock for the atomic state object and returns
- the new atomic state.
- This should only be called during atomic check.
- */
-static int dm_atomic_get_state(struct drm_atomic_state *state,
struct dm_atomic_state **dm_state)
-{
- struct drm_device *dev = state->dev;
- struct amdgpu_device *adev = dev->dev_private;
- struct amdgpu_display_manager *dm = &adev->dm;
- struct drm_private_state *priv_state;
- if (*dm_state)
return 0;
- priv_state = drm_atomic_get_private_obj_state(state, &dm->atomic_obj);
- if (IS_ERR(priv_state))
return PTR_ERR(priv_state);
- *dm_state = to_dm_atomic_state(priv_state);
- return 0;
-}
-static struct dm_atomic_state * -dm_atomic_get_new_state(struct drm_atomic_state *state) -{
- struct drm_device *dev = state->dev;
- struct amdgpu_device *adev = dev->dev_private;
- struct amdgpu_display_manager *dm = &adev->dm;
- struct drm_private_obj *obj;
- struct drm_private_state *new_obj_state;
- int i;
- for_each_new_private_obj_in_state(state, obj, new_obj_state, i) {
if (obj->funcs == dm->atomic_obj.funcs)
return to_dm_atomic_state(new_obj_state);
- }
- return NULL;
-}
-static struct dm_atomic_state * -dm_atomic_get_old_state(struct drm_atomic_state *state) -{
- struct drm_device *dev = state->dev;
- struct amdgpu_device *adev = dev->dev_private;
- struct amdgpu_display_manager *dm = &adev->dm;
- struct drm_private_obj *obj;
- struct drm_private_state *old_obj_state;
- int i;
- for_each_old_private_obj_in_state(state, obj, old_obj_state, i) {
if (obj->funcs == dm->atomic_obj.funcs)
return to_dm_atomic_state(old_obj_state);
- }
- return NULL;
-}
-static struct drm_private_state * -dm_atomic_duplicate_state(struct drm_private_obj *obj) -{
- struct dm_atomic_state *old_state, *new_state;
- new_state = kzalloc(sizeof(*new_state), GFP_KERNEL);
- if (!new_state)
return NULL;
- __drm_atomic_helper_private_obj_duplicate_state(obj, &new_state->base);
- old_state = to_dm_atomic_state(obj->state);
- if (old_state && old_state->context)
new_state->context = dc_copy_state(old_state->context);
- if (!new_state->context) {
kfree(new_state);
return NULL;
- }
- return &new_state->base;
-}
-static void dm_atomic_destroy_state(struct drm_private_obj *obj,
struct drm_private_state *state)
-{
- struct dm_atomic_state *dm_state = to_dm_atomic_state(state);
- if (dm_state && dm_state->context)
dc_release_state(dm_state->context);
- kfree(dm_state);
-}
-static struct drm_private_state_funcs dm_atomic_state_funcs = {
- .atomic_duplicate_state = dm_atomic_duplicate_state,
- .atomic_destroy_state = dm_atomic_destroy_state,
-};
static int amdgpu_dm_mode_config_init(struct amdgpu_device *adev) { struct dm_atomic_state *state; @@ -2916,11 +2848,6 @@ static int amdgpu_dm_mode_config_init(struct amdgpu_device *adev)
dc_resource_state_copy_construct_current(adev->dm.dc, state->context);
- drm_atomic_private_obj_init(adev->ddev,
&adev->dm.atomic_obj,
&state->base,
&dm_atomic_state_funcs);
- r = amdgpu_display_modeset_create_props(adev); if (r) return r;
@@ -3360,7 +3287,6 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev) static void amdgpu_dm_destroy_drm_device(struct amdgpu_display_manager *dm) { drm_mode_config_cleanup(dm->ddev);
- drm_atomic_private_obj_fini(&dm->atomic_obj); return;
}
@@ -7533,7 +7459,7 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state) struct drm_device *dev = state->dev; struct amdgpu_device *adev = dev->dev_private; struct amdgpu_display_manager *dm = &adev->dm;
- struct dm_atomic_state *dm_state;
- struct dm_atomic_state *dm_state = to_dm_atomic_state(state); struct dc_state *dc_state = NULL, *dc_state_temp = NULL; uint32_t i, j; struct drm_crtc *crtc;
@@ -7547,8 +7473,7 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
drm_atomic_helper_update_legacy_modeset_state(dev, state);
- dm_state = dm_atomic_get_new_state(state);
- if (dm_state && dm_state->context) {
- if (dm_state->context) { dc_state = dm_state->context; } else { /* No state changes, retain current state. */
@@ -8052,10 +7977,8 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm, struct drm_crtc *crtc, struct drm_crtc_state *old_crtc_state, struct drm_crtc_state *new_crtc_state,
bool enable, bool *lock_and_validation_needed)
{
- struct dm_atomic_state *dm_state = NULL; struct dm_crtc_state *dm_old_crtc_state, *dm_new_crtc_state; struct dc_stream_state *new_stream; int ret = 0;
@@ -8077,7 +8000,7 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm, aconnector = amdgpu_dm_find_first_crtc_matching_connector(state, crtc);
/* TODO This hack should go away */
- if (aconnector && enable) {
- if (aconnector) { /* Make sure fake sink is created in plug-in scenario */ drm_new_conn_state = drm_atomic_get_new_connector_state(state, &aconnector->base);
@@ -8155,36 +8078,20 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm, new_crtc_state->active_changed, new_crtc_state->connectors_changed);
- /* Remove stream for any changed/disabled CRTC */
- if (!enable) {
if (!dm_old_crtc_state->stream)
goto skip_modeset;
ret = dm_atomic_get_state(state, &dm_state);
if (ret)
goto fail;
DRM_DEBUG_DRIVER("Disabling DRM crtc: %d\n",
crtc->base.id);
- /* Remove stream for changed CRTC - can't reuse it for validation. */
- if (dm_new_crtc_state->stream) {
DRM_DEBUG_DRIVER("Disabling DRM crtc: %d\n", crtc->base.id);
/* i.e. reset mode */
if (dc_remove_stream_from_ctx(
dm->dc,
dm_state->context,
dm_old_crtc_state->stream) != DC_OK) {
ret = -EINVAL;
goto fail;
}
dc_stream_release(dm_old_crtc_state->stream);
dc_stream_release(dm_new_crtc_state->stream);
dm_new_crtc_state->stream = NULL;
reset_freesync_config_for_crtc(dm_new_crtc_state);
*lock_and_validation_needed = true;
}
- } else {/* Add stream for any updated/enabled CRTC */
- /* Add stream for any updated/enabled CRTC - active implies enabled. */
- if (new_crtc_state->active) { /*
- Quick fix to prevent NULL pointer on new_stream when
- added MST connectors not found in existing crtc_state in the chained mode
@@ -8193,35 +8100,13 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm, if (!aconnector || (!aconnector->dc_sink && aconnector->mst_port)) goto skip_modeset;
if (modereset_required(new_crtc_state))
goto skip_modeset;
if (modeset_required(new_crtc_state, new_stream,
dm_old_crtc_state->stream)) {
WARN_ON(dm_new_crtc_state->stream);
ret = dm_atomic_get_state(state, &dm_state);
if (ret)
goto fail;
dm_new_crtc_state->stream = new_stream;
WARN_ON(dm_new_crtc_state->stream);
dm_new_crtc_state->stream = new_stream;
dc_stream_retain(new_stream);
dc_stream_retain(new_stream);
DRM_DEBUG_DRIVER("Enabling DRM crtc: %d\n", crtc->base.id);
DRM_DEBUG_DRIVER("Enabling DRM crtc: %d\n",
crtc->base.id);
if (dc_add_stream_to_ctx(
dm->dc,
dm_state->context,
dm_new_crtc_state->stream) != DC_OK) {
ret = -EINVAL;
goto fail;
}
*lock_and_validation_needed = true;
}
}*lock_and_validation_needed = true;
skip_modeset: @@ -8233,7 +8118,7 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm, * We want to do dc stream updates that do not require a * full modeset below. */
- if (!(enable && aconnector && new_crtc_state->active))
- if (!(aconnector && new_crtc_state->enable && new_crtc_state->active)) return 0; /*
- Given above conditions, the dc state cannot be NULL because:
@@ -8281,10 +8166,12 @@ static bool should_reset_plane(struct drm_atomic_state *state, struct drm_plane_state *old_plane_state, struct drm_plane_state *new_plane_state) {
- struct drm_plane *other;
- struct drm_plane_state *old_other_state, *new_other_state; struct drm_crtc_state *new_crtc_state;
- int i;
struct dm_plane_state *old_dm_plane_state, *new_dm_plane_state;
/* Cursor planes don't affect bandwidth. */
if (plane->type == DRM_PLANE_TYPE_CURSOR)
return false;
/*
- TODO: Remove this hack once the checks below are sufficient
@@ -8312,71 +8199,50 @@ static bool should_reset_plane(struct drm_atomic_state *state, if (new_crtc_state->color_mgmt_changed) return true;
- /* Plane scaling can change with a modeset, so reset. */ if (drm_atomic_crtc_needs_modeset(new_crtc_state)) return true;
- /*
* If there are any new primary or overlay planes being added or
* removed then the z-order can potentially change. To ensure
* correct z-order and pipe acquisition the current DC architecture
* requires us to remove and recreate all existing planes.
*
* TODO: Come up with a more elegant solution for this.
*/
- for_each_oldnew_plane_in_state(state, other, old_other_state, new_other_state, i) {
struct dm_plane_state *old_dm_plane_state, *new_dm_plane_state;
if (other->type == DRM_PLANE_TYPE_CURSOR)
continue;
if (old_other_state->crtc != new_plane_state->crtc &&
new_other_state->crtc != new_plane_state->crtc)
continue;
if (old_other_state->crtc != new_other_state->crtc)
return true;
/* Src/dst size and scaling updates. */
if (old_other_state->src_w != new_other_state->src_w ||
old_other_state->src_h != new_other_state->src_h ||
old_other_state->crtc_w != new_other_state->crtc_w ||
old_other_state->crtc_h != new_other_state->crtc_h)
return true;
- /* Src/dst size and scaling updates. */
- if (old_plane_state->src_w != new_plane_state->src_w ||
old_plane_state->src_h != new_plane_state->src_h ||
old_plane_state->crtc_w != new_plane_state->crtc_w ||
old_plane_state->crtc_h != new_plane_state->crtc_h)
return true;
/* Rotation / mirroring updates. */
if (old_other_state->rotation != new_other_state->rotation)
return true;
- /* Rotation / mirroring updates. */
- if (old_plane_state->rotation != new_plane_state->rotation)
return true;
/* Blending updates. */
if (old_other_state->pixel_blend_mode !=
new_other_state->pixel_blend_mode)
return true;
- /* Blending updates. */
- if (old_plane_state->pixel_blend_mode !=
new_plane_state->pixel_blend_mode)
return true;
/* Alpha updates. */
if (old_other_state->alpha != new_other_state->alpha)
return true;
- /* Alpha updates. */
- if (old_plane_state->alpha != new_plane_state->alpha)
return true;
/* Colorspace changes. */
if (old_other_state->color_range != new_other_state->color_range ||
old_other_state->color_encoding != new_other_state->color_encoding)
return true;
- /* Colorspace changes. */
- if (old_plane_state->color_range != new_plane_state->color_range ||
old_plane_state->color_encoding != new_plane_state->color_encoding)
return true;
/* Framebuffer checks fall at the end. */
if (!old_other_state->fb || !new_other_state->fb)
continue;
- /* Framebuffer checks fall at the end. */
- if (!old_plane_state->fb || !new_plane_state->fb)
return false;
/* Pixel format changes can require bandwidth updates. */
if (old_other_state->fb->format != new_other_state->fb->format)
return true;
- /* Pixel format changes can require bandwidth updates. */
- if (old_plane_state->fb->format != new_plane_state->fb->format)
return true;
old_dm_plane_state = to_dm_plane_state(old_other_state);
new_dm_plane_state = to_dm_plane_state(new_other_state);
- old_dm_plane_state = to_dm_plane_state(old_plane_state);
- new_dm_plane_state = to_dm_plane_state(new_plane_state);
/* Tiling and DCC changes also require bandwidth updates. */
if (old_dm_plane_state->tiling_flags !=
new_dm_plane_state->tiling_flags)
return true;
- }
/* Tiling and DCC changes also require bandwidth updates. */
if (old_dm_plane_state->tiling_flags !=
new_dm_plane_state->tiling_flags)
return true;
return false;
} @@ -8386,15 +8252,12 @@ static int dm_update_plane_state(struct dc *dc, struct drm_plane *plane, struct drm_plane_state *old_plane_state, struct drm_plane_state *new_plane_state,
bool enable, bool *lock_and_validation_needed)
{
-	struct dm_atomic_state *dm_state = NULL;
	struct drm_crtc *new_plane_crtc, *old_plane_crtc;
- struct drm_crtc_state *old_crtc_state, *new_crtc_state;
- struct dm_crtc_state *dm_new_crtc_state, *dm_old_crtc_state;
- struct dm_plane_state *dm_new_plane_state, *dm_old_plane_state;
- struct drm_crtc_state *new_crtc_state;
- struct dm_crtc_state *dm_new_crtc_state;
-	struct dm_plane_state *dm_new_plane_state;
	struct amdgpu_crtc *new_acrtc;
	bool needs_reset;
	int ret = 0;
@@ -8403,12 +8266,12 @@ static int dm_update_plane_state(struct dc *dc,
	new_plane_crtc = new_plane_state->crtc;
	old_plane_crtc = old_plane_state->crtc;
	dm_new_plane_state = to_dm_plane_state(new_plane_state);
	dm_old_plane_state = to_dm_plane_state(old_plane_state);

	/* TODO Implement better atomic check for cursor plane */
	if (plane->type == DRM_PLANE_TYPE_CURSOR) {
if (!enable || !new_plane_crtc ||
drm_atomic_plane_disabling(plane->state, new_plane_state))
/* Cursor disabled is always OK. */
if (!new_plane_crtc ||
drm_atomic_plane_disabling(plane->state, new_plane_state)) return 0;
new_acrtc = to_amdgpu_crtc(new_plane_crtc);
@@ -8423,123 +8286,72 @@ static int dm_update_plane_state(struct dc *dc,
		return 0;
	}

-	/* Check if the plane requires a reset for bandwidth validation. */
	needs_reset = should_reset_plane(state, plane, old_plane_state,
					 new_plane_state);
- /* Remove any changed/removed planes */
- if (!enable) {
if (!needs_reset)
return 0;
if (!old_plane_crtc)
return 0;
old_crtc_state = drm_atomic_get_old_crtc_state(
state, old_plane_crtc);
dm_old_crtc_state = to_dm_crtc_state(old_crtc_state);
if (!dm_old_crtc_state->stream)
return 0;
/* Exit early if the plane hasn't been trivially modified. */
if (!needs_reset)
return 0;
/**
* Remove exisiting plane, if any - we can't reuse it for validation
Nit: exisiting -> existing
* because we'd be modifying the current state applied to HW.
*/
-	if (dm_new_plane_state->dc_state) {
		DRM_DEBUG_ATOMIC("Disabling DRM plane: %d on DRM crtc %d\n",
				 plane->base.id, old_plane_crtc->base.id);
ret = dm_atomic_get_state(state, &dm_state);
if (ret)
return ret;
plane->base.id, old_plane_crtc->base.id);
if (!dc_remove_plane_from_context(
dc,
dm_old_crtc_state->stream,
dm_old_plane_state->dc_state,
dm_state->context)) {
ret = EINVAL;
return ret;
}
dc_plane_state_release(dm_old_plane_state->dc_state);
dc_plane_state_release(dm_new_plane_state->dc_state);
dm_new_plane_state->dc_state = NULL;
*lock_and_validation_needed = true;
}
- } else { /* Add new planes */
struct dc_plane_state *dc_new_plane_state;
if (drm_atomic_plane_disabling(plane->state, new_plane_state))
return 0;
if (!new_plane_crtc)
return 0;
new_crtc_state = drm_atomic_get_new_crtc_state(state, new_plane_crtc);
dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
if (!dm_new_crtc_state->stream)
return 0;
if (!needs_reset)
return 0;
ret = dm_plane_helper_check_state(new_plane_state, new_crtc_state);
if (ret)
return ret;
WARN_ON(dm_new_plane_state->dc_state);
dc_new_plane_state = dc_create_plane_state(dc);
if (!dc_new_plane_state)
return -ENOMEM;
DRM_DEBUG_DRIVER("Enabling DRM plane: %d on DRM crtc %d\n",
plane->base.id, new_plane_crtc->base.id);
- /**
* If the plane is disabling exit early. Also serves as a guarantee
* that we have a framebuffer below if we do have a CRTC.
*/
- if (drm_atomic_plane_disabling(plane->state, new_plane_state))
return 0;
ret = fill_dc_plane_attributes(
new_plane_crtc->dev->dev_private,
dc_new_plane_state,
new_plane_state,
new_crtc_state);
if (ret) {
dc_plane_state_release(dc_new_plane_state);
return ret;
}
- /* If we don't have a CRTC then the plane is disabled. */
- if (!new_plane_crtc)
return 0;
ret = dm_atomic_get_state(state, &dm_state);
if (ret) {
dc_plane_state_release(dc_new_plane_state);
return ret;
}
- new_crtc_state = drm_atomic_get_new_crtc_state(state, new_plane_crtc);
- dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
/*
* Any atomic check errors that occur after this will
* not need a release. The plane state will be attached
* to the stream, and therefore part of the atomic
* state. It'll be released when the atomic state is
* cleaned.
*/
if (!dc_add_plane_to_context(
dc,
dm_new_crtc_state->stream,
dc_new_plane_state,
dm_state->context)) {
- /* Don't enable the plane if there's no stream for output. */
- if (!dm_new_crtc_state->stream)
return 0;
dc_plane_state_release(dc_new_plane_state);
return -EINVAL;
}
- /**
* Freeing the plane will take care of freeing the dc_state
* on failure, so we don't need to explicitly release below.
*/
- dm_new_plane_state->dc_state = dc_create_plane_state(dc);
- if (!dm_new_plane_state->dc_state)
return -ENOMEM;
dm_new_plane_state->dc_state = dc_new_plane_state;
- DRM_DEBUG_DRIVER("Enabling DRM plane: %d on DRM crtc %d\n",
plane->base.id, new_plane_crtc->base.id);
/* Tell DC to do a full surface update every time there
* is a plane change. Inefficient, but works for now.
*/
dm_new_plane_state->dc_state->update_flags.bits.full_update = 1;
- ret = fill_dc_plane_attributes(new_plane_crtc->dev->dev_private,
dm_new_plane_state->dc_state,
new_plane_state, new_crtc_state);
- if (ret)
return ret;
*lock_and_validation_needed = true;
- }
/**
* Tell DC to do a full surface update every time there
* is a plane change. Inefficient, but works for now.
*/
dm_new_plane_state->dc_state->update_flags.bits.full_update = 1;
*lock_and_validation_needed = true;
- return ret;
- return 0;
}
#if defined(CONFIG_DRM_AMD_DC_DCN)
@@ -8567,6 +8379,113 @@ static int add_affected_mst_dsc_crtcs(struct drm_atomic_state *state, struct drm
}
#endif
+static int dm_atomic_state_init_context(struct drm_device *dev,
struct drm_atomic_state *state)
+{
- struct dm_atomic_state *dm_state = to_dm_atomic_state(state);
- struct amdgpu_device *adev = dev->dev_private;
- struct drm_crtc *crtc;
- struct drm_crtc_state *new_crtc_state;
- struct dm_crtc_state *new_dm_crtc_state;
- struct drm_plane *plane;
- struct drm_plane_state *new_plane_state, *old_plane_state;
- struct dm_plane_state *new_dm_plane_state;
- int ret, i;
- dm_state->context = dc_create_state(adev->dm.dc);
- if (!dm_state->context)
return -ENOMEM;
- /**
* DC validation requires an absolute set of streams and planes to work
* with so add all planes and CRTCs to the state to make this work.
* Pipe allocation can change so there's no easy way to work around
* this constraint.
*
* Unfortunately this also means that whenever userspace requests a
* change that only needs to modify one CRTC's planes we still have to
* stall out fast updates affecting other CRTCs - introducing judder
* in multi-monitor scenarios.
*
* Userspace should avoid frequent updates to properties that can
* require bandwidth changes.
*/
- drm_for_each_crtc(crtc, dev) {
new_crtc_state = drm_atomic_get_crtc_state(state, crtc);
if (IS_ERR(new_crtc_state))
return PTR_ERR(new_crtc_state);
/**
* Cursor planes may not strictly be necessary here
* but it's probably safer to add them.
*/
ret = drm_atomic_add_affected_planes(state, crtc);
if (ret)
return ret;
- }
- for_each_new_crtc_in_state(state, crtc, new_crtc_state, i) {
new_dm_crtc_state = to_dm_crtc_state(new_crtc_state);
if (!new_dm_crtc_state->stream)
continue;
if (dc_add_stream_to_ctx(adev->dm.dc, dm_state->context,
new_dm_crtc_state->stream) != DC_OK)
return -EINVAL;
- }
- /**
* Planes are added in reverse order to ensure correct blending order
* in DC - planes with higher priority go first, and the cursor and
* MPO planes are at the very end of the list.
*/
- for_each_oldnew_plane_in_state_reverse(state, plane, old_plane_state, new_plane_state, i) {
/* Cursors aren't real hardware planes. */
if (plane->type == DRM_PLANE_TYPE_CURSOR)
continue;
if (!new_plane_state->crtc)
continue;
new_crtc_state = drm_atomic_get_new_crtc_state(
state, new_plane_state->crtc);
if (!new_crtc_state) {
DRM_WARN("No CRTC state found: plane=%d crtc=%d\n",
plane->base.id,
new_plane_state->crtc->base.id);
return -EINVAL;
}
new_dm_crtc_state = to_dm_crtc_state(new_crtc_state);
/* Skip planes for disabled streams. */
if (!new_dm_crtc_state->stream)
continue;
new_dm_plane_state = to_dm_plane_state(new_plane_state);
if (!new_dm_plane_state->dc_state) {
DRM_WARN("No DC state found: plane=%d crtc=%d\n",
plane->base.id,
new_plane_state->crtc->base.id);
return -EINVAL;
}
if (!dc_add_plane_to_context(
adev->dm.dc, new_dm_crtc_state->stream,
new_dm_plane_state->dc_state, dm_state->context)) {
DRM_DEBUG_KMS(
"Couldn't add plane to context: plane=%d\n",
plane->base.id);
return -EINVAL;
}
- }
- return 0;
+}
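For readers skimming the diff: the helper above is only invoked from the full-validation path of amdgpu_dm_atomic_check(). Condensed from the hunk further down in this patch, the call site looks roughly like this:

	if (lock_and_validation_needed) {
		/* Create a new DC context to validate. */
		ret = dm_atomic_state_init_context(dev, state);
		if (ret)
			goto fail;
		...
	}

Fast updates skip this entirely and commit against the current DC context.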
/**
- amdgpu_dm_atomic_check() - Atomic check implementation for AMDgpu DM.
- @dev: The DRM device
@@ -8595,7 +8514,7 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev, struct drm_atomic_state *state) { struct amdgpu_device *adev = dev->dev_private;
- struct dm_atomic_state *dm_state = NULL;
- struct dm_atomic_state *dm_state = to_dm_atomic_state(state); struct dc *dc = adev->dm.dc; struct drm_connector *connector; struct drm_connector_state *old_con_state, *new_con_state;
@@ -8607,6 +8526,11 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev, int ret, i; bool lock_and_validation_needed = false;
-	/**
	 * Check for modesets on CRTCs in the new state. DRM internally checks
	 * that the mode has actually changed for us as well in here, so we can
	 * avoid modesets.
	 */
	ret = drm_atomic_helper_check_modeset(dev, state);
	if (ret)
		goto fail;
@@ -8634,6 +8558,18 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev, new_crtc_state->connectors_changed = true; }
- /**
* Add all required objects for performing the commit and stalling out
* other commits that may be touching the same resources.
*/
Nit: Join the above and below comment in a single one, and also use '/*' instead of '/**'.
Reviewed-by: Rodrigo Siqueira Rodrigo.Siqueira@amd.com
Best Regards
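For clarity, the joined comment Rodrigo is asking for would read something like the following; the wording is taken from the two comments in the patch, only the style changes:

	/*
	 * Add all required objects for performing the commit and stalling out
	 * other commits that may be touching the same resources.
	 *
	 * Pass one: Add all affected CRTCs on a MST DSC topology that has a
	 * CRTC undergoing a modeset and mark mode_changed = true for each one.
	 */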
- /**
* Pass one: Add all affected CRTCs on a MST DSC topology that has a
* CRTC undergoing a modeset and mark mode_changed = true for each one.
*
* Optimization: DSC can only be supported on DCN2 onwards, so we can
* skip on earlier ASIC.
*/
#if defined(CONFIG_DRM_AMD_DC_DCN)
	if (adev->asic_type >= CHIP_NAVI10) {
		for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
@@ -8645,6 +8581,8 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
		}
	}
#endif
-	/* Pass two: Add connectors and planes for CRTCs as needed. */
	for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
		if (!drm_atomic_crtc_needs_modeset(new_crtc_state) &&
		    !new_crtc_state->color_mgmt_changed &&
@@ -8663,42 +8601,6 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
			goto fail;
	}
- /*
* Add all primary and overlay planes on the CRTC to the state
* whenever a plane is enabled to maintain correct z-ordering
* and to enable fast surface updates.
*/
- drm_for_each_crtc(crtc, dev) {
bool modified = false;
for_each_oldnew_plane_in_state(state, plane, old_plane_state, new_plane_state, i) {
if (plane->type == DRM_PLANE_TYPE_CURSOR)
continue;
if (new_plane_state->crtc == crtc ||
old_plane_state->crtc == crtc) {
modified = true;
break;
}
}
if (!modified)
continue;
drm_for_each_plane_mask(plane, state->dev, crtc->state->plane_mask) {
if (plane->type == DRM_PLANE_TYPE_CURSOR)
continue;
new_plane_state =
drm_atomic_get_plane_state(state, plane);
if (IS_ERR(new_plane_state)) {
ret = PTR_ERR(new_plane_state);
goto fail;
}
}
- }
-	/* Prepass for updating tiling flags on new planes. */
	for_each_new_plane_in_state(state, plane, new_plane_state, i) {
		struct dm_plane_state *new_dm_plane_state =
			to_dm_plane_state(new_plane_state);
@@ -8710,45 +8612,21 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
			goto fail;
	}
- /* Remove exiting planes if they are modified */
- for_each_oldnew_plane_in_state_reverse(state, plane, old_plane_state, new_plane_state, i) {
ret = dm_update_plane_state(dc, state, plane,
old_plane_state,
new_plane_state,
false,
&lock_and_validation_needed);
if (ret)
goto fail;
- }
- /* Disable all crtcs which require disable */
-	/* Update modified CRTCs. */
	for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
		ret = dm_update_crtc_state(&adev->dm, state, crtc,
					   old_crtc_state, new_crtc_state,
					   false, &lock_and_validation_needed);
		if (ret)
			goto fail;
	}
/* Enable all crtcs which require enable */
for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
ret = dm_update_crtc_state(&adev->dm, state, crtc,
old_crtc_state,
new_crtc_state,
true,
&lock_and_validation_needed);
if (ret)
goto fail;
}
/* Add new/modified planes */
-	/* Update modified planes. */
	for_each_oldnew_plane_in_state_reverse(state, plane, old_plane_state, new_plane_state, i) {
		ret = dm_update_plane_state(dc, state, plane,
					    old_plane_state, new_plane_state,
					    true, &lock_and_validation_needed);
		if (ret)
			goto fail;
@@ -8812,10 +8690,11 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
	 * DRM state directly - we can end up disabling interrupts too early
	 * if we don't.
	 *
	 * TODO: Remove this stall and drop DM state private objects.
	 * TODO: Remove this stall.
	 */
	if (lock_and_validation_needed) {
		ret = dm_atomic_get_state(state, &dm_state);
		/* Create a new DC context to validate. */
		ret = dm_atomic_state_init_context(dev, state);
		if (ret)
			goto fail;
@@ -8848,47 +8727,6 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
		ret = -EINVAL;
		goto fail;
	}
} else {
/*
* The commit is a fast update. Fast updates shouldn't change
* the DC context, affect global validation, and can have their
* commit work done in parallel with other commits not touching
* the same resource. If we have a new DC context as part of
* the DM atomic state from validation we need to free it and
* retain the existing one instead.
*
* Furthermore, since the DM atomic state only contains the DC
* context and can safely be annulled, we can free the state
* and clear the associated private object now to free
* some memory and avoid a possible use-after-free later.
*/
for (i = 0; i < state->num_private_objs; i++) {
struct drm_private_obj *obj = state->private_objs[i].ptr;
if (obj->funcs == adev->dm.atomic_obj.funcs) {
int j = state->num_private_objs-1;
dm_atomic_destroy_state(obj,
state->private_objs[i].state);
/* If i is not at the end of the array then the
* last element needs to be moved to where i was
* before the array can safely be truncated.
*/
if (i != j)
state->private_objs[i] =
state->private_objs[j];
state->private_objs[j].ptr = NULL;
state->private_objs[j].state = NULL;
state->private_objs[j].old_state = NULL;
state->private_objs[j].new_state = NULL;
state->num_private_objs = j;
break;
}
}
}
/* Store the overall update type for use later in atomic check. */
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
index ad025f5cfd3e..1c3aa7cb83b9 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
@@ -217,15 +217,6 @@ struct amdgpu_display_manager {
	struct drm_device *ddev;
	u16 display_indexes_num;
- /**
* @atomic_obj:
*
* In combination with &dm_atomic_state it helps manage
* global atomic state that doesn't map cleanly into existing
* drm resources, like &dc_context.
*/
- struct drm_private_obj atomic_obj;
- /**
- @dc_lock:
@@ -440,7 +431,7 @@ struct dm_crtc_state {
#define to_dm_crtc_state(x) container_of(x, struct dm_crtc_state, base)
struct dm_atomic_state {
- struct drm_private_state base;
struct drm_atomic_state base;
struct dc_state *context;
};
2.25.1
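To summarize the mechanism at the heart of this patch: the dc_state now travels in a subclass of drm_atomic_state instead of a DRM private object, wired up through the atomic_state_alloc/free/clear hooks of drm_mode_config_funcs. Condensed from the patch itself (the clear hook simply calls drm_atomic_state_default_clear() and is omitted here):

	static struct drm_atomic_state *dm_atomic_state_alloc(struct drm_device *dev)
	{
		struct dm_atomic_state *dm_state;

		dm_state = kzalloc(sizeof(*dm_state), GFP_KERNEL);
		if (!dm_state)
			return NULL;

		if (drm_atomic_state_init(dev, &dm_state->base) < 0) {
			kfree(dm_state);
			return NULL;
		}

		/* dm_state->context is only created later, in atomic check. */
		return &dm_state->base;
	}

	static void dm_atomic_state_free(struct drm_atomic_state *state)
	{
		struct dm_atomic_state *dm_state = to_dm_atomic_state(state);

		if (dm_state->context) {
			dc_release_state(dm_state->context);
			dm_state->context = NULL;
		}

		drm_atomic_state_default_release(state);
		kfree(state);
	}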
On 2020-08-05 4:37 p.m., Rodrigo Siqueira wrote:
Hi,
I have some minor inline comments, but everything looks fine when I tested it on Raven; feel free to add
Tested-by: Rodrigo Siqueira Rodrigo.Siqueira@amd.com
in the whole series.
Thanks for the reviews!
I can clean up the nitpicks for this patch and make a v2.
Regards, Nicholas Kazlauskas
On Thu, Jul 30, 2020 at 04:36:42PM -0400, Nicholas Kazlauskas wrote:
[Why] DM atomic check was structured in a way that we required old DC state in order to dynamically add and remove planes and streams from the context to build the DC state context for validation.
DRM private objects were used to carry over the last DC state and were added to the context on nearly every commit - regardless of fast or full so we could check whether or not the new state could affect bandwidth.
The problem with this model is that DRM private objects do not implicitly stall out other commits.
So if you have two commits touching separate DRM objects they could run concurrently and potentially execute out of order - leading to a use-after-free.
If we want this to be safe we have two options:
- Stall out concurrent commits since they touch the same private object
- Refactor DM to not require old DC state and drop private object usage
[How] This implements approach #2 since it still allows for judder free updates in multi-display scenarios.
I'll list the big changes in order at a high level:
- Subclass DRM atomic state instead of using DRM private objects.
DC relied on the old state to determine which changes cause bandwidth updates but now we have DM perform similar checks based on DRM state instead - dropping the requirement for old state to exist at all.
This means that we can now build a new DC context from scratch whenever we have something that DM thinks could affect bandwidth.
Whenever we need to rebuild bandwidth we now add all CRTCs and planes to the DRM state in order to get the absolute set of DC streams and DC planes.
This introduces a stall on other commits, but this stall already exists because of the lock_and_validation logic and it's necessary since updates may move around pipes and require full reprogramming.
Hm, I think long term it would be neat if you could first just add the DC streams for the current planes (if you convert them individually to private state objects), and ask DC to recompute specifics just for that. If DC then says "yes, I need an even bigger recompute", only then do you grab all the streams and everything else.
The idea is that essentially you treat individual stream objects as read-locks on that part of the overall global state, and only when you need to do a write do you grab all the "read locks" to do the update.
But might not actually help for your hw. Just highlighting this as a pattern we've done. -Daniel
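A rough sketch of the pattern Daniel describes, purely for illustration - dm_stream_obj, dm_stream_state, dm_try_local_validation() and dc_change_is_local() are made-up names, not part of this series or of DC:

#include <drm/drm_atomic.h>
#include <linux/err.h>

/* Hypothetical: one private object per stream/CRTC. */
struct dm_stream_obj {
	struct drm_private_obj base;
};

struct dm_stream_state {
	struct drm_private_state base;
	struct dc_stream_state *stream;
};

/* Stand-in for "ask DC whether this change stays local to one stream". */
static bool dc_change_is_local(struct drm_private_state *priv);

static int dm_try_local_validation(struct drm_atomic_state *state,
				   struct dm_stream_obj *obj)
{
	struct drm_private_state *priv;

	/* Fast path: only "read-lock" this one stream's state. */
	priv = drm_atomic_get_private_obj_state(state, &obj->base);
	if (IS_ERR(priv))
		return PTR_ERR(priv);

	/*
	 * If DC says the change is not local, the caller escalates and pulls
	 * every stream (and plane) into the state - which is what the patch
	 * does unconditionally today.
	 */
	if (!dc_change_is_local(priv))
		return -EAGAIN;

	return 0;
}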
- Drop workarounds to add planes to maintain z-order early in atomic
check. This is no longer needed because of the changes for (1).
This also involves fixing up should_plane_reset checks since we can just avoid resetting streams and planes when they haven't actually changed.
- Rework dm_update_crtc_state and dm_update_plane_state to be single
pass instead of two pass.
This is necessary since we no longer have the dc_state to add and remove planes to the context in and we want to defer creation to the end of commit_check.
It also makes the logic a lot simpler to follow since as an added bonus.
Cc: Bhawanpreet Lakha Bhawanpreet.Lakha@amd.com Cc: Harry Wentland harry.wentland@amd.com Cc: Leo Li sunpeng.li@amd.com Cc: Daniel Vetter daniel@ffwll.ch Signed-off-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com
.../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 720 +++++++----------- .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h | 11 +- 2 files changed, 280 insertions(+), 451 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index 59829ec81694..97a7dfc620e8 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -1839,7 +1839,6 @@ static int dm_resume(void *handle) struct drm_plane *plane; struct drm_plane_state *new_plane_state; struct dm_plane_state *dm_new_plane_state;
- struct dm_atomic_state *dm_state = to_dm_atomic_state(dm->atomic_obj.state); enum dc_connection_type new_connection_type = dc_connection_none; struct dc_state *dc_state; int i, r, j;
@@ -1879,11 +1878,6 @@ static int dm_resume(void *handle)
return 0;
}
/* Recreate dc_state - DC invalidates it when setting power state to S3. */
dc_release_state(dm_state->context);
dm_state->context = dc_create_state(dm->dc);
/* TODO: Remove dc_state->dccg, use dc->dccg directly. */
dc_resource_state_construct(dm->dc, dm_state->context);
/* Before powering on DC we need to re-initialize DMUB. */ r = dm_dmub_hw_init(adev);
@@ -2019,11 +2013,51 @@ const struct amdgpu_ip_block_version dm_ip_block =
- *WIP*
*/
+struct drm_atomic_state *dm_atomic_state_alloc(struct drm_device *dev)
+{
- struct dm_atomic_state *dm_state;
- dm_state = kzalloc(sizeof(*dm_state), GFP_KERNEL);
- if (!dm_state)
return NULL;
- if (drm_atomic_state_init(dev, &dm_state->base) < 0) {
kfree(dm_state);
return NULL;
- }
- return &dm_state->base;
+}
+void dm_atomic_state_free(struct drm_atomic_state *state)
+{
- struct dm_atomic_state *dm_state = to_dm_atomic_state(state);
- if (dm_state->context) {
dc_release_state(dm_state->context);
dm_state->context = NULL;
- }
- drm_atomic_state_default_release(state);
- kfree(state);
+}
+void dm_atomic_state_clear(struct drm_atomic_state *state)
+{
- struct dm_atomic_state *dm_state = to_dm_atomic_state(state);
- drm_atomic_state_default_clear(&dm_state->base);
+}
static const struct drm_mode_config_funcs amdgpu_dm_mode_funcs = { .fb_create = amdgpu_display_user_framebuffer_create, .output_poll_changed = drm_fb_helper_output_poll_changed, .atomic_check = amdgpu_dm_atomic_check, .atomic_commit = amdgpu_dm_atomic_commit,
- .atomic_state_alloc = dm_atomic_state_alloc,
- .atomic_state_free = dm_atomic_state_free,
- .atomic_state_clear = dm_atomic_state_clear,
};
 static struct drm_mode_config_helper_funcs amdgpu_dm_mode_config_helperfuncs = {
@@ -2782,108 +2816,6 @@ static int dcn10_register_irq_handlers(struct amdgpu_device *adev)
 }
 #endif
 
-/*
- * Acquires the lock for the atomic state object and returns
- * the new atomic state.
- *
- * This should only be called during atomic check.
- */
-static int dm_atomic_get_state(struct drm_atomic_state *state,
-                               struct dm_atomic_state **dm_state)
-{
-    struct drm_device *dev = state->dev;
-    struct amdgpu_device *adev = dev->dev_private;
-    struct amdgpu_display_manager *dm = &adev->dm;
-    struct drm_private_state *priv_state;
-
-    if (*dm_state)
-        return 0;
-
-    priv_state = drm_atomic_get_private_obj_state(state, &dm->atomic_obj);
-    if (IS_ERR(priv_state))
-        return PTR_ERR(priv_state);
-
-    *dm_state = to_dm_atomic_state(priv_state);
-
-    return 0;
-}
-
-static struct dm_atomic_state *
-dm_atomic_get_new_state(struct drm_atomic_state *state)
-{
-    struct drm_device *dev = state->dev;
-    struct amdgpu_device *adev = dev->dev_private;
-    struct amdgpu_display_manager *dm = &adev->dm;
-    struct drm_private_obj *obj;
-    struct drm_private_state *new_obj_state;
-    int i;
-
-    for_each_new_private_obj_in_state(state, obj, new_obj_state, i) {
-        if (obj->funcs == dm->atomic_obj.funcs)
-            return to_dm_atomic_state(new_obj_state);
-    }
-
-    return NULL;
-}
-
-static struct dm_atomic_state *
-dm_atomic_get_old_state(struct drm_atomic_state *state)
-{
-    struct drm_device *dev = state->dev;
-    struct amdgpu_device *adev = dev->dev_private;
-    struct amdgpu_display_manager *dm = &adev->dm;
-    struct drm_private_obj *obj;
-    struct drm_private_state *old_obj_state;
-    int i;
-
-    for_each_old_private_obj_in_state(state, obj, old_obj_state, i) {
-        if (obj->funcs == dm->atomic_obj.funcs)
-            return to_dm_atomic_state(old_obj_state);
-    }
-
-    return NULL;
-}
-
-static struct drm_private_state *
-dm_atomic_duplicate_state(struct drm_private_obj *obj)
-{
-    struct dm_atomic_state *old_state, *new_state;
-
-    new_state = kzalloc(sizeof(*new_state), GFP_KERNEL);
-    if (!new_state)
-        return NULL;
-
-    __drm_atomic_helper_private_obj_duplicate_state(obj, &new_state->base);
-
-    old_state = to_dm_atomic_state(obj->state);
-    if (old_state && old_state->context)
-        new_state->context = dc_copy_state(old_state->context);
-
-    if (!new_state->context) {
-        kfree(new_state);
-        return NULL;
-    }
-
-    return &new_state->base;
-}
-
-static void dm_atomic_destroy_state(struct drm_private_obj *obj,
-                                    struct drm_private_state *state)
-{
-    struct dm_atomic_state *dm_state = to_dm_atomic_state(state);
-
-    if (dm_state && dm_state->context)
-        dc_release_state(dm_state->context);
-
-    kfree(dm_state);
-}
-
-static struct drm_private_state_funcs dm_atomic_state_funcs = {
-    .atomic_duplicate_state = dm_atomic_duplicate_state,
-    .atomic_destroy_state = dm_atomic_destroy_state,
-};
-
 static int amdgpu_dm_mode_config_init(struct amdgpu_device *adev)
 {
     struct dm_atomic_state *state;
@@ -2916,11 +2848,6 @@ static int amdgpu_dm_mode_config_init(struct amdgpu_device *adev)
 
     dc_resource_state_copy_construct_current(adev->dm.dc, state->context);
 
-    drm_atomic_private_obj_init(adev->ddev,
-                                &adev->dm.atomic_obj,
-                                &state->base,
-                                &dm_atomic_state_funcs);
-
     r = amdgpu_display_modeset_create_props(adev);
     if (r)
         return r;
 
@@ -3360,7 +3287,6 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
 static void amdgpu_dm_destroy_drm_device(struct amdgpu_display_manager *dm)
 {
     drm_mode_config_cleanup(dm->ddev);
-    drm_atomic_private_obj_fini(&dm->atomic_obj);
     return;
 }
@@ -7533,7 +7459,7 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
     struct drm_device *dev = state->dev;
     struct amdgpu_device *adev = dev->dev_private;
     struct amdgpu_display_manager *dm = &adev->dm;
-    struct dm_atomic_state *dm_state;
+    struct dm_atomic_state *dm_state = to_dm_atomic_state(state);
     struct dc_state *dc_state = NULL, *dc_state_temp = NULL;
     uint32_t i, j;
     struct drm_crtc *crtc;
@@ -7547,8 +7473,7 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
 
     drm_atomic_helper_update_legacy_modeset_state(dev, state);
 
-    dm_state = dm_atomic_get_new_state(state);
-    if (dm_state && dm_state->context) {
+    if (dm_state->context) {
         dc_state = dm_state->context;
     } else {
         /* No state changes, retain current state. */
@@ -8052,10 +7977,8 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm,
                                struct drm_crtc *crtc,
                                struct drm_crtc_state *old_crtc_state,
                                struct drm_crtc_state *new_crtc_state,
-                               bool enable,
                                bool *lock_and_validation_needed)
 {
-    struct dm_atomic_state *dm_state = NULL;
     struct dm_crtc_state *dm_old_crtc_state, *dm_new_crtc_state;
     struct dc_stream_state *new_stream;
     int ret = 0;
@@ -8077,7 +8000,7 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm,
     aconnector = amdgpu_dm_find_first_crtc_matching_connector(state, crtc);
 
     /* TODO This hack should go away */
-    if (aconnector && enable) {
+    if (aconnector) {
         /* Make sure fake sink is created in plug-in scenario */
         drm_new_conn_state = drm_atomic_get_new_connector_state(state,
                                                                 &aconnector->base);
@@ -8155,36 +8078,20 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm,
             new_crtc_state->active_changed,
             new_crtc_state->connectors_changed);
 
-    /* Remove stream for any changed/disabled CRTC */
-    if (!enable) {
-
-        if (!dm_old_crtc_state->stream)
-            goto skip_modeset;
-
-        ret = dm_atomic_get_state(state, &dm_state);
-        if (ret)
-            goto fail;
-
-        DRM_DEBUG_DRIVER("Disabling DRM crtc: %d\n",
-                         crtc->base.id);
-
-        /* i.e. reset mode */
-        if (dc_remove_stream_from_ctx(
-                dm->dc,
-                dm_state->context,
-                dm_old_crtc_state->stream) != DC_OK) {
-            ret = -EINVAL;
-            goto fail;
-        }
-
-        dc_stream_release(dm_old_crtc_state->stream);
-        dm_new_crtc_state->stream = NULL;
-
-        reset_freesync_config_for_crtc(dm_new_crtc_state);
-
-        *lock_and_validation_needed = true;
-
-    } else {/* Add stream for any updated/enabled CRTC */
+    /* Remove stream for changed CRTC - can't reuse it for validation. */
+    if (dm_new_crtc_state->stream) {
+        DRM_DEBUG_DRIVER("Disabling DRM crtc: %d\n", crtc->base.id);
+
+        dc_stream_release(dm_new_crtc_state->stream);
+        dm_new_crtc_state->stream = NULL;
+
+        reset_freesync_config_for_crtc(dm_new_crtc_state);
+
+        *lock_and_validation_needed = true;
+    }
+
+    /* Add stream for any updated/enabled CRTC - active implies enabled. */
+    if (new_crtc_state->active) {
         /*
          * Quick fix to prevent NULL pointer on new_stream when
          * added MST connectors not found in existing crtc_state in the chained mode
@@ -8193,35 +8100,13 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm,
         if (!aconnector || (!aconnector->dc_sink && aconnector->mst_port))
             goto skip_modeset;
 
         if (modereset_required(new_crtc_state))
             goto skip_modeset;
 
         if (modeset_required(new_crtc_state, new_stream,
                              dm_old_crtc_state->stream)) {
-
             WARN_ON(dm_new_crtc_state->stream);
 
-            ret = dm_atomic_get_state(state, &dm_state);
-            if (ret)
-                goto fail;
-
             dm_new_crtc_state->stream = new_stream;
-
             dc_stream_retain(new_stream);
 
-            DRM_DEBUG_DRIVER("Enabling DRM crtc: %d\n",
-                             crtc->base.id);
-
-            if (dc_add_stream_to_ctx(
-                    dm->dc,
-                    dm_state->context,
-                    dm_new_crtc_state->stream) != DC_OK) {
-                ret = -EINVAL;
-                goto fail;
-            }
+            DRM_DEBUG_DRIVER("Enabling DRM crtc: %d\n", crtc->base.id);
 
             *lock_and_validation_needed = true;
         }
     }
 
 skip_modeset:
@@ -8233,7 +8118,7 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm,
      * We want to do dc stream updates that do not require a
      * full modeset below.
      */
-    if (!(enable && aconnector && new_crtc_state->active))
+    if (!(aconnector && new_crtc_state->enable && new_crtc_state->active))
         return 0;
     /*
      * Given above conditions, the dc state cannot be NULL because:
@@ -8281,10 +8166,12 @@ static bool should_reset_plane(struct drm_atomic_state *state,
                               struct drm_plane_state *old_plane_state,
                               struct drm_plane_state *new_plane_state)
 {
-    struct drm_plane *other;
-    struct drm_plane_state *old_other_state, *new_other_state;
     struct drm_crtc_state *new_crtc_state;
-    int i;
+    struct dm_plane_state *old_dm_plane_state, *new_dm_plane_state;
+
+    /* Cursor planes don't affect bandwidth. */
+    if (plane->type == DRM_PLANE_TYPE_CURSOR)
+        return false;
 
     /*
      * TODO: Remove this hack once the checks below are sufficient
@@ -8312,71 +8199,50 @@ static bool should_reset_plane(struct drm_atomic_state *state,
     if (new_crtc_state->color_mgmt_changed)
         return true;
 
+    /* Plane scaling can change with a modeset, so reset. */
     if (drm_atomic_crtc_needs_modeset(new_crtc_state))
         return true;
 
-    /*
-     * If there are any new primary or overlay planes being added or
-     * removed then the z-order can potentially change. To ensure
-     * correct z-order and pipe acquisition the current DC architecture
-     * requires us to remove and recreate all existing planes.
-     *
-     * TODO: Come up with a more elegant solution for this.
-     */
-    for_each_oldnew_plane_in_state(state, other, old_other_state, new_other_state, i) {
-        struct dm_plane_state *old_dm_plane_state, *new_dm_plane_state;
-
-        if (other->type == DRM_PLANE_TYPE_CURSOR)
-            continue;
-
-        if (old_other_state->crtc != new_plane_state->crtc &&
-            new_other_state->crtc != new_plane_state->crtc)
-            continue;
-
-        if (old_other_state->crtc != new_other_state->crtc)
-            return true;
-
-        /* Src/dst size and scaling updates. */
-        if (old_other_state->src_w != new_other_state->src_w ||
-            old_other_state->src_h != new_other_state->src_h ||
-            old_other_state->crtc_w != new_other_state->crtc_w ||
-            old_other_state->crtc_h != new_other_state->crtc_h)
-            return true;
+    /* Src/dst size and scaling updates. */
+    if (old_plane_state->src_w != new_plane_state->src_w ||
+        old_plane_state->src_h != new_plane_state->src_h ||
+        old_plane_state->crtc_w != new_plane_state->crtc_w ||
+        old_plane_state->crtc_h != new_plane_state->crtc_h)
+        return true;
 
-        /* Rotation / mirroring updates. */
-        if (old_other_state->rotation != new_other_state->rotation)
-            return true;
+    /* Rotation / mirroring updates. */
+    if (old_plane_state->rotation != new_plane_state->rotation)
+        return true;
 
-        /* Blending updates. */
-        if (old_other_state->pixel_blend_mode !=
-            new_other_state->pixel_blend_mode)
-            return true;
+    /* Blending updates. */
+    if (old_plane_state->pixel_blend_mode !=
+        new_plane_state->pixel_blend_mode)
+        return true;
 
-        /* Alpha updates. */
-        if (old_other_state->alpha != new_other_state->alpha)
-            return true;
+    /* Alpha updates. */
+    if (old_plane_state->alpha != new_plane_state->alpha)
+        return true;
 
-        /* Colorspace changes. */
-        if (old_other_state->color_range != new_other_state->color_range ||
-            old_other_state->color_encoding != new_other_state->color_encoding)
-            return true;
+    /* Colorspace changes. */
+    if (old_plane_state->color_range != new_plane_state->color_range ||
+        old_plane_state->color_encoding != new_plane_state->color_encoding)
+        return true;
 
-        /* Framebuffer checks fall at the end. */
-        if (!old_other_state->fb || !new_other_state->fb)
-            continue;
+    /* Framebuffer checks fall at the end. */
+    if (!old_plane_state->fb || !new_plane_state->fb)
+        return false;
 
-        /* Pixel format changes can require bandwidth updates. */
-        if (old_other_state->fb->format != new_other_state->fb->format)
-            return true;
+    /* Pixel format changes can require bandwidth updates. */
+    if (old_plane_state->fb->format != new_plane_state->fb->format)
+        return true;
 
-        old_dm_plane_state = to_dm_plane_state(old_other_state);
-        new_dm_plane_state = to_dm_plane_state(new_other_state);
+    old_dm_plane_state = to_dm_plane_state(old_plane_state);
+    new_dm_plane_state = to_dm_plane_state(new_plane_state);
 
-        /* Tiling and DCC changes also require bandwidth updates. */
-        if (old_dm_plane_state->tiling_flags !=
-            new_dm_plane_state->tiling_flags)
-            return true;
-    }
+    /* Tiling and DCC changes also require bandwidth updates. */
+    if (old_dm_plane_state->tiling_flags !=
+        new_dm_plane_state->tiling_flags)
+        return true;
 
     return false;
 }
 
@@ -8386,15 +8252,12 @@ static int dm_update_plane_state(struct dc *dc,
                                 struct drm_plane *plane,
                                 struct drm_plane_state *old_plane_state,
                                 struct drm_plane_state *new_plane_state,
-                                bool enable,
                                 bool *lock_and_validation_needed)
 {
-    struct dm_atomic_state *dm_state = NULL;
     struct drm_crtc *new_plane_crtc, *old_plane_crtc;
-    struct drm_crtc_state *old_crtc_state, *new_crtc_state;
-    struct dm_crtc_state *dm_new_crtc_state, *dm_old_crtc_state;
-    struct dm_plane_state *dm_new_plane_state, *dm_old_plane_state;
+    struct drm_crtc_state *new_crtc_state;
+    struct dm_crtc_state *dm_new_crtc_state;
+    struct dm_plane_state *dm_new_plane_state;
     struct amdgpu_crtc *new_acrtc;
     bool needs_reset;
     int ret = 0;
@@ -8403,12 +8266,12 @@ static int dm_update_plane_state(struct dc *dc,
     new_plane_crtc = new_plane_state->crtc;
     old_plane_crtc = old_plane_state->crtc;
     dm_new_plane_state = to_dm_plane_state(new_plane_state);
-    dm_old_plane_state = to_dm_plane_state(old_plane_state);
 
     /*TODO Implement better atomic check for cursor plane */
     if (plane->type == DRM_PLANE_TYPE_CURSOR) {
-        if (!enable || !new_plane_crtc ||
-            drm_atomic_plane_disabling(plane->state, new_plane_state))
+        /* Cursor disabled is always OK. */
+        if (!new_plane_crtc ||
+            drm_atomic_plane_disabling(plane->state, new_plane_state))
             return 0;
 
         new_acrtc = to_amdgpu_crtc(new_plane_crtc);
@@ -8423,123 +8286,72 @@ static int dm_update_plane_state(struct dc *dc,
         return 0;
     }
 
-    /* Check if the plane requires a reset for bandwidth validation. */
     needs_reset = should_reset_plane(state, plane, old_plane_state,
                                      new_plane_state);
 
-    /* Remove any changed/removed planes */
-    if (!enable) {
-        if (!needs_reset)
-            return 0;
-
-        if (!old_plane_crtc)
-            return 0;
-
-        old_crtc_state = drm_atomic_get_old_crtc_state(
-                state, old_plane_crtc);
-        dm_old_crtc_state = to_dm_crtc_state(old_crtc_state);
-
-        if (!dm_old_crtc_state->stream)
-            return 0;
-
-        DRM_DEBUG_ATOMIC("Disabling DRM plane: %d on DRM crtc %d\n",
-                         plane->base.id, old_plane_crtc->base.id);
-
-        ret = dm_atomic_get_state(state, &dm_state);
-        if (ret)
-            return ret;
-
-        if (!dc_remove_plane_from_context(
-                dc,
-                dm_old_crtc_state->stream,
-                dm_old_plane_state->dc_state,
-                dm_state->context)) {
-
-            ret = EINVAL;
-            return ret;
-        }
-
-        dc_plane_state_release(dm_old_plane_state->dc_state);
-        dm_new_plane_state->dc_state = NULL;
-
-        *lock_and_validation_needed = true;
-
-    } else { /* Add new planes */
-        struct dc_plane_state *dc_new_plane_state;
-
-        if (drm_atomic_plane_disabling(plane->state, new_plane_state))
-            return 0;
-
-        if (!new_plane_crtc)
-            return 0;
-
-        new_crtc_state = drm_atomic_get_new_crtc_state(state, new_plane_crtc);
-        dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
-
-        if (!dm_new_crtc_state->stream)
-            return 0;
-
-        if (!needs_reset)
-            return 0;
-
-        ret = dm_plane_helper_check_state(new_plane_state, new_crtc_state);
-        if (ret)
-            return ret;
-
-        WARN_ON(dm_new_plane_state->dc_state);
-
-        dc_new_plane_state = dc_create_plane_state(dc);
-        if (!dc_new_plane_state)
-            return -ENOMEM;
-
-        DRM_DEBUG_DRIVER("Enabling DRM plane: %d on DRM crtc %d\n",
-                         plane->base.id, new_plane_crtc->base.id);
-
-        ret = fill_dc_plane_attributes(
-            new_plane_crtc->dev->dev_private,
-            dc_new_plane_state,
-            new_plane_state,
-            new_crtc_state);
-        if (ret) {
-            dc_plane_state_release(dc_new_plane_state);
-            return ret;
-        }
-
-        ret = dm_atomic_get_state(state, &dm_state);
-        if (ret) {
-            dc_plane_state_release(dc_new_plane_state);
-            return ret;
-        }
-
-        /*
-         * Any atomic check errors that occur after this will
-         * not need a release. The plane state will be attached
-         * to the stream, and therefore part of the atomic
-         * state. It'll be released when the atomic state is
-         * cleaned.
-         */
-        if (!dc_add_plane_to_context(
-                dc,
-                dm_new_crtc_state->stream,
-                dc_new_plane_state,
-                dm_state->context)) {
-
-            dc_plane_state_release(dc_new_plane_state);
-            return -EINVAL;
-        }
-
-        dm_new_plane_state->dc_state = dc_new_plane_state;
-
-        /* Tell DC to do a full surface update every time there
-         * is a plane change. Inefficient, but works for now.
-         */
-        dm_new_plane_state->dc_state->update_flags.bits.full_update = 1;
-
-        *lock_and_validation_needed = true;
-    }
+    /* Exit early if the plane hasn't been trivially modified. */
+    if (!needs_reset)
+        return 0;
+
+    /**
+     * Remove existing plane, if any - we can't reuse it for validation
+     * because we'd be modifying the current state applied to HW.
+     */
+    if (dm_new_plane_state->dc_state) {
+        DRM_DEBUG_ATOMIC("Disabling DRM plane: %d on DRM crtc %d\n",
+                         plane->base.id, old_plane_crtc->base.id);
+
+        dc_plane_state_release(dm_new_plane_state->dc_state);
+        dm_new_plane_state->dc_state = NULL;
+
+        *lock_and_validation_needed = true;
+    }
+
+    /**
+     * If the plane is disabling exit early. Also serves as a guarantee
+     * that we have a framebuffer below if we do have a CRTC.
+     */
+    if (drm_atomic_plane_disabling(plane->state, new_plane_state))
+        return 0;
+
+    /* If we don't have a CRTC then the plane is disabled. */
+    if (!new_plane_crtc)
+        return 0;
+
+    new_crtc_state = drm_atomic_get_new_crtc_state(state, new_plane_crtc);
+    dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
+
+    /* Don't enable the plane if there's no stream for output. */
+    if (!dm_new_crtc_state->stream)
+        return 0;
+
+    /**
+     * Freeing the plane will take care of freeing the dc_state
+     * on failure, so we don't need to explicitly release below.
+     */
+    dm_new_plane_state->dc_state = dc_create_plane_state(dc);
+    if (!dm_new_plane_state->dc_state)
+        return -ENOMEM;
+
+    DRM_DEBUG_DRIVER("Enabling DRM plane: %d on DRM crtc %d\n",
+                     plane->base.id, new_plane_crtc->base.id);
+
+    ret = fill_dc_plane_attributes(new_plane_crtc->dev->dev_private,
+                                   dm_new_plane_state->dc_state,
+                                   new_plane_state, new_crtc_state);
+    if (ret)
+        return ret;
+
+    /**
+     * Tell DC to do a full surface update every time there
+     * is a plane change. Inefficient, but works for now.
+     */
+    dm_new_plane_state->dc_state->update_flags.bits.full_update = 1;
+
+    *lock_and_validation_needed = true;
 
-    return ret;
+    return 0;
 }
 #if defined(CONFIG_DRM_AMD_DC_DCN)
@@ -8567,6 +8379,113 @@ static int add_affected_mst_dsc_crtcs(struct drm_atomic_state *state, struct drm
 }
 #endif
 
+static int dm_atomic_state_init_context(struct drm_device *dev,
+                                        struct drm_atomic_state *state)
+{
+    struct dm_atomic_state *dm_state = to_dm_atomic_state(state);
+    struct amdgpu_device *adev = dev->dev_private;
+    struct drm_crtc *crtc;
+    struct drm_crtc_state *new_crtc_state;
+    struct dm_crtc_state *new_dm_crtc_state;
+    struct drm_plane *plane;
+    struct drm_plane_state *new_plane_state, *old_plane_state;
+    struct dm_plane_state *new_dm_plane_state;
+    int ret, i;
+
+    dm_state->context = dc_create_state(adev->dm.dc);
+    if (!dm_state->context)
+        return -ENOMEM;
+
+    /**
+     * DC validation requires an absolute set of streams and planes to work
+     * with so add all planes and CRTCs to the state to make this work.
+     * Pipe allocation can change so there's no easy way to work around
+     * this constraint.
+     *
+     * Unfortunately this also means that whenever userspace requests a
+     * change that only needs to modify one CRTC's planes we still have to
+     * stall out fast updates affecting other CRTCs - introducing judder
+     * in multi-monitor scenarios.
+     *
+     * Userspace should avoid frequent updates to properties that can
+     * require bandwidth changes.
+     */
+    drm_for_each_crtc(crtc, dev) {
+        new_crtc_state = drm_atomic_get_crtc_state(state, crtc);
+        if (IS_ERR(new_crtc_state))
+            return PTR_ERR(new_crtc_state);
+
+        /**
+         * Cursor planes may not strictly be necessary here
+         * but it's probably safer to add them.
+         */
+        ret = drm_atomic_add_affected_planes(state, crtc);
+        if (ret)
+            return ret;
+    }
+
+    for_each_new_crtc_in_state(state, crtc, new_crtc_state, i) {
+        new_dm_crtc_state = to_dm_crtc_state(new_crtc_state);
+
+        if (!new_dm_crtc_state->stream)
+            continue;
+
+        if (dc_add_stream_to_ctx(adev->dm.dc, dm_state->context,
+                                 new_dm_crtc_state->stream) != DC_OK)
+            return -EINVAL;
+    }
+
+    /**
+     * Planes are added in reverse order to ensure correct blending order
+     * in DC - planes with higher priority go first, and the cursor and
+     * MPO planes are at the very end of the list.
+     */
+    for_each_oldnew_plane_in_state_reverse(state, plane, old_plane_state,
+                                           new_plane_state, i) {
+        /* Cursors aren't real hardware planes. */
+        if (plane->type == DRM_PLANE_TYPE_CURSOR)
+            continue;
+
+        if (!new_plane_state->crtc)
+            continue;
+
+        new_crtc_state = drm_atomic_get_new_crtc_state(
+            state, new_plane_state->crtc);
+
+        if (!new_crtc_state) {
+            DRM_WARN("No CRTC state found: plane=%d crtc=%d\n",
+                     plane->base.id,
+                     new_plane_state->crtc->base.id);
+            return -EINVAL;
+        }
+
+        new_dm_crtc_state = to_dm_crtc_state(new_crtc_state);
+
+        /* Skip planes for disabled streams. */
+        if (!new_dm_crtc_state->stream)
+            continue;
+
+        new_dm_plane_state = to_dm_plane_state(new_plane_state);
+
+        if (!new_dm_plane_state->dc_state) {
+            DRM_WARN("No DC state found: plane=%d crtc=%d\n",
+                     plane->base.id,
+                     new_plane_state->crtc->base.id);
+            return -EINVAL;
+        }
+
+        if (!dc_add_plane_to_context(
+                adev->dm.dc, new_dm_crtc_state->stream,
+                new_dm_plane_state->dc_state, dm_state->context)) {
+            DRM_DEBUG_KMS(
+                "Couldn't add plane to context: plane=%d\n",
+                plane->base.id);
+            return -EINVAL;
+        }
+    }
+
+    return 0;
+}
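A rough usage sketch, not part of the hunks themselves: in amdgpu_dm_atomic_check() this helper would be invoked only when a full validation is required, feeding the freshly built context into the existing dc_validate_global_state() API (its current signature is assumed here; the surrounding dc, ret and fail label follow the driver's usual conventions, and error paths are trimmed):

    struct dm_atomic_state *dm_state = to_dm_atomic_state(state);

    if (lock_and_validation_needed) {
        /* Build a fresh DC context and validate the full topology. */
        ret = dm_atomic_state_init_context(dev, state);
        if (ret)
            goto fail;

        if (dc_validate_global_state(dc, dm_state->context, false) != DC_OK) {
            ret = -EINVAL;
            goto fail;
        }
    }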
 /**
  * amdgpu_dm_atomic_check() - Atomic check implementation for AMDgpu DM.
  * @dev: The DRM device
@@ -8595,7 +8514,7 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
                                   struct drm_atomic_state *state)
 {
     struct amdgpu_device *adev = dev->dev_private;
-    struct dm_atomic_state *dm_state = NULL;
+    struct dm_atomic_state *dm_state = to_dm_atomic_state(state);
     struct dc *dc = adev->dm.dc;
     struct drm_connector *connector;
     struct drm_connector_state *old_con_state, *new_con_state;
@@ -8607,6 +8526,11 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
     int ret, i;
     bool lock_and_validation_needed = false;
 
+    /**
+     * Check for modesets on CRTCs in the new state. DRM internally checks
+     * that the mode has actually changed for us as well in here, so we can
+     * avoid modesets.
+     */
     ret = drm_atomic_helper_check_modeset(dev, state);
     if (ret)
         goto fail;
@@ -8634,6 +8558,18 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
             new_crtc_state->connectors_changed = true;
     }
 
+    /**
+     * Add all required objects for performing the commit and stalling out
+     * other commits that may be touching the same resources.
+     */
+
+    /**
+     * Pass one: Add all affected CRTCs on a MST DSC topology that has a
+     * CRTC undergoing a modeset and mark mode_changed = true for each one.
+     *
+     * Optimization: DSC can only be supported on DCN2 onwards, so we can
+     * skip on earlier ASIC.
+     */
 #if defined(CONFIG_DRM_AMD_DC_DCN)
     if (adev->asic_type >= CHIP_NAVI10) {
         for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
@@ -8645,6 +8581,8 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
         }
     }
 #endif
+
+    /* Pass two: Add connectors and planes for CRTCs as needed. */
     for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
         if (!drm_atomic_crtc_needs_modeset(new_crtc_state) &&
             !new_crtc_state->color_mgmt_changed &&
@@ -8663,42 +8601,6 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
             goto fail;
     }
-    /*
-     * Add all primary and overlay planes on the CRTC to the state
-     * whenever a plane is enabled to maintain correct z-ordering
-     * and to enable fast surface updates.
-     */
-    drm_for_each_crtc(crtc, dev) {
-        bool modified = false;
-
-        for_each_oldnew_plane_in_state(state, plane, old_plane_state, new_plane_state, i) {
-            if (plane->type == DRM_PLANE_TYPE_CURSOR)
-                continue;
-
-            if (new_plane_state->crtc == crtc ||
-                old_plane_state->crtc == crtc) {
-                modified = true;
-                break;
-            }
-        }
-
-        if (!modified)
-            continue;
-
-        drm_for_each_plane_mask(plane, state->dev, crtc->state->plane_mask) {
-            if (plane->type == DRM_PLANE_TYPE_CURSOR)
-                continue;
-
-            new_plane_state =
-                drm_atomic_get_plane_state(state, plane);
-
-            if (IS_ERR(new_plane_state)) {
-                ret = PTR_ERR(new_plane_state);
-                goto fail;
-            }
-        }
-    }
-
     /* Prepass for updating tiling flags on new planes. */
     for_each_new_plane_in_state(state, plane, new_plane_state, i) {
         struct dm_plane_state *new_dm_plane_state = to_dm_plane_state(new_plane_state);
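The body of this prepass is truncated at the hunk boundary above; going by the cover-letter description it presumably just refreshes the cached flags from the new framebuffer. A sketch of what that body is assumed to look like, reusing the driver's get_fb_info() helper and the usual ret/fail convention:

    for_each_new_plane_in_state(state, plane, new_plane_state, i) {
        struct dm_plane_state *new_dm_plane_state =
            to_dm_plane_state(new_plane_state);
        struct amdgpu_framebuffer *afb = new_plane_state->fb ?
            to_amdgpu_framebuffer(new_plane_state->fb) : NULL;

        /* Cache tiling/TMZ info on the plane state for later reuse. */
        ret = get_fb_info(afb, &new_dm_plane_state->tiling_flags,
                          &new_dm_plane_state->tmz_surface);
        if (ret)
            goto fail;
    }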
@@ -8710,45 +8612,21 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
             goto fail;
     }
 
-    /* Remove exiting planes if they are modified */
-    for_each_oldnew_plane_in_state_reverse(state, plane, old_plane_state, new_plane_state, i) {
-        ret = dm_update_plane_state(dc, state, plane,
-                                    old_plane_state,
-                                    new_plane_state,
-                                    false,
-                                    &lock_and_validation_needed);
-        if (ret)
-            goto fail;
-    }
-
-    /* Disable all crtcs which require disable */
+    /* Update modified CRTCs. */
     for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
         ret = dm_update_crtc_state(&adev->dm, state, crtc,
                                    old_crtc_state,
                                    new_crtc_state,
-                                   false,
                                    &lock_and_validation_needed);
         if (ret)
             goto fail;
     }
 
-    /* Enable all crtcs which require enable */
-    for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
-        ret = dm_update_crtc_state(&adev->dm, state, crtc,
-                                   old_crtc_state,
-                                   new_crtc_state,
-                                   true,
-                                   &lock_and_validation_needed);
-        if (ret)
-            goto fail;
-    }
-
-    /* Add new/modified planes */
+    /* Update modified planes. */
     for_each_oldnew_plane_in_state_reverse(state, plane, old_plane_state, new_plane_state, i) {
         ret = dm_update_plane_state(dc, state, plane,
                                     old_plane_state,
                                     new_plane_state,
-                                    true,
                                     &lock_and_validation_needed);
         if (ret)
             goto fail;
     }
@@ -8812,10 +8690,11 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
      * DRM state directly - we can end up disabling interrupts too early
      * if we don't.
      *
-     * TODO: Remove this stall and drop DM state private objects.
+     * TODO: Remove this stall.
      */
     if (lock_and_validation_needed) {
-        ret = dm_atomic_get_state(state, &dm_state);
+        /* Create a new DC context to validate. */
+        ret = dm_atomic_state_init_context(dev, state);
         if (ret)
             goto fail;
@@ -8848,47 +8727,6 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
             ret = -EINVAL;
             goto fail;
         }
-    } else {
-        /*
-         * The commit is a fast update. Fast updates shouldn't change
-         * the DC context, affect global validation, and can have their
-         * commit work done in parallel with other commits not touching
-         * the same resource. If we have a new DC context as part of
-         * the DM atomic state from validation we need to free it and
-         * retain the existing one instead.
-         *
-         * Furthermore, since the DM atomic state only contains the DC
-         * context and can safely be annulled, we can free the state
-         * and clear the associated private object now to free
-         * some memory and avoid a possible use-after-free later.
-         */
-
-        for (i = 0; i < state->num_private_objs; i++) {
-            struct drm_private_obj *obj = state->private_objs[i].ptr;
-
-            if (obj->funcs == adev->dm.atomic_obj.funcs) {
-                int j = state->num_private_objs-1;
-
-                dm_atomic_destroy_state(obj,
-                        state->private_objs[i].state);
-
-                /* If i is not at the end of the array then the
-                 * last element needs to be moved to where i was
-                 * before the array can safely be truncated.
-                 */
-                if (i != j)
-                    state->private_objs[i] =
-                        state->private_objs[j];
-
-                state->private_objs[j].ptr = NULL;
-                state->private_objs[j].state = NULL;
-                state->private_objs[j].old_state = NULL;
-                state->private_objs[j].new_state = NULL;
-
-                state->num_private_objs = j;
-                break;
-            }
-        }
-    }
/* Store the overall update type for use later in atomic check. */
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
index ad025f5cfd3e..1c3aa7cb83b9 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
@@ -217,15 +217,6 @@ struct amdgpu_display_manager {
     struct drm_device *ddev;
     u16 display_indexes_num;
 
-    /**
-     * @atomic_obj:
-     *
-     * In combination with &dm_atomic_state it helps manage
-     * global atomic state that doesn't map cleanly into existing
-     * drm resources, like &dc_context.
-     */
-    struct drm_private_obj atomic_obj;
-
     /**
      * @dc_lock:
@@ -440,7 +431,7 @@ struct dm_crtc_state {
 #define to_dm_crtc_state(x) container_of(x, struct dm_crtc_state, base)
 
 struct dm_atomic_state {
-    struct drm_private_state base;
+    struct drm_atomic_state base;
 
     struct dc_state *context;
 };
2.25.1
On Thu, Jul 30, 2020 at 04:36:42PM -0400, Nicholas Kazlauskas wrote:
@@ -440,7 +431,7 @@ struct dm_crtc_state {
 #define to_dm_crtc_state(x) container_of(x, struct dm_crtc_state, base)
 
 struct dm_atomic_state {
-    struct drm_private_state base;
+    struct drm_atomic_state base;
 
     struct dc_state *context;
Also curiosity: Can't we just embed dc_state here, instead of a pointer? Then it would become a lot more obvious that mostly this is a state object container like drm_atomic_state, but for the DC specific state structures. And we could look into moving the actual DC states into drm private states perhaps (if that helps with the code structure and overall flow).
Maybe as next steps. -Daniel
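For illustration only, the embedding suggested here would look something like the following. This is hypothetical, not part of the series, and it glosses over the refcounting concern raised in the reply below:

struct dm_atomic_state {
    struct drm_atomic_state base;

    /* Embedded directly instead of holding a refcounted pointer. */
    struct dc_state context;
};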
On 2020-08-07 4:52 a.m., daniel@ffwll.ch wrote:
On Thu, Jul 30, 2020 at 04:36:42PM -0400, Nicholas Kazlauskas wrote:
@@ -440,7 +431,7 @@ struct dm_crtc_state {
 #define to_dm_crtc_state(x) container_of(x, struct dm_crtc_state, base)
 
 struct dm_atomic_state {
-    struct drm_private_state base;
+    struct drm_atomic_state base;
 
     struct dc_state *context;
Also curiosity: Can't we just embed dc_state here, instead of a pointer? Then it would become a lot more obvious that mostly this is a state object container like drm_atomic_state, but for the DC specific state structures. And we could look into moving the actual DC states into drm private states perhaps (if that helps with the code structure and overall flow).
Maybe as next steps. -Daniel
It's the refcounting that's the problem with this stuff. I'd like to move DC to a model where it does no memory allocation or ownership of its own, but that's more of a long-term plan at this point.
The same goes for dc_plane_state and dc_stream_state - these could live directly on the DRM objects as long as they're not refcounted.
Regards, Nicholas Kazlauskas
On Fri, Aug 07, 2020 at 10:32:11AM -0400, Kazlauskas, Nicholas wrote:
On 2020-08-07 4:52 a.m., daniel@ffwll.ch wrote:
On Thu, Jul 30, 2020 at 04:36:42PM -0400, Nicholas Kazlauskas wrote:
@@ -440,7 +431,7 @@ struct dm_crtc_state {
 #define to_dm_crtc_state(x) container_of(x, struct dm_crtc_state, base)
 
 struct dm_atomic_state {
-    struct drm_private_state base;
+    struct drm_atomic_state base;
 
     struct dc_state *context;
Also curiosity: Can't we just embed dc_state here, instead of a pointer? Then it would become a lot more obvious that mostly this is a state object container like drm_atomic_state, but for the DC specific state structures. And we could look into moving the actual DC states into drm private states perhaps (if that helps with the code structure and overall flow).
Maybe as next steps. -Daniel
It's the refcounting that's the problem with this stuff. I'd like to move DC to a model where it does no memory allocation or ownership of its own, but that's more of a long-term plan at this point.
The same goes for dc_plane_state and dc_stream_state - these could live directly on the DRM objects as long as they're not refcounted.
Hm what's the refcounting problem you're having? Or is it the lack of refcounting, and dc having different ideas about lifetimes than atomic? -Daniel
On Thu, Jul 30, 2020 at 04:36:35PM -0400, Nicholas Kazlauskas wrote:
Based on the analysis of the bug from [1] the best course of action seems to be swapping off of DRM private objects back to subclassing DRM atomic state instead.
This patch series implements this change, but not yet the other changes suggested in the threads from that bug - these will come later.
CCing dri-devel per Daniel's suggestion since this issue brought some interesting misuse of private objects.
I ended up reading around a bit, and it feels like the sub-objects might make reasonable private state structures - dc_stream_state, for example, at least going by e.g. dc_remove_stream_from_ctx.
But we would need to come up with a plan for how to integrate this on the other OS side of DC, I guess :-)
Anyway I'd say more evidence that dc_state needs to subclass drm_atomic_state.
Another thing I wondered is whether we should rename drm_atomic_state to drm_atomic_state_update, so it's clear it's the container holding the updated states rather than a real state object itself. -Daniel
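A purely hypothetical sketch of that idea - wrapping a DC sub-object such as dc_stream_state in a DRM private state. All names here are invented for illustration; the retain/release calls assume DC streams stay refcounted for now:

struct dm_stream_state {
    struct drm_private_state base;
    struct dc_stream_state *stream;
};

static struct drm_private_state *
dm_stream_duplicate_state(struct drm_private_obj *obj)
{
    struct dm_stream_state *old_state =
        container_of(obj->state, struct dm_stream_state, base);
    struct dm_stream_state *new_state;

    new_state = kzalloc(sizeof(*new_state), GFP_KERNEL);
    if (!new_state)
        return NULL;

    __drm_atomic_helper_private_obj_duplicate_state(obj, &new_state->base);

    /* Still leans on DC refcounting until streams stop being refcounted. */
    new_state->stream = old_state->stream;
    if (new_state->stream)
        dc_stream_retain(new_state->stream);

    return &new_state->base;
}

static void dm_stream_destroy_state(struct drm_private_obj *obj,
                                    struct drm_private_state *state)
{
    struct dm_stream_state *dm_stream_state =
        container_of(state, struct dm_stream_state, base);

    if (dm_stream_state->stream)
        dc_stream_release(dm_stream_state->stream);

    kfree(dm_stream_state);
}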
[1] https://bugzilla.kernel.org/show_bug.cgi?id=207383
Nicholas Kazlauskas (7): drm/amd/display: Store tiling_flags and tmz_surface on dm_plane_state drm/amd/display: Reset plane when tiling flags change drm/amd/display: Avoid using unvalidated tiling_flags and tmz_surface in prepare_planes drm/amd/display: Use validated tiling_flags and tmz_surface in commit_tail drm/amd/display: Reset plane for anything that's not a FAST update drm/amd/display: Drop dm_determine_update_type_for_commit drm/amd/display: Replace DRM private objects with subclassed DRM atomic state
.../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 967 ++++++------------ .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h | 13 +- 2 files changed, 343 insertions(+), 637 deletions(-)
-- 2.25.1
dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel