This series sets up runtime PM for MDP5-based SoCs.
We have a top-level MDSS device, which is the parent of devices like MDP5, DSI, HDMI etc. The parent-child relationship between them establishes a similar relationship between their power domains. In the HW (and the DT bindings), only MDSS has control over the power domain (i.e., the MDSS GDSC). In software, we mirror this by assigning the GDSC to MDSS; the children end up voting for the GDSC by calling the pm_runtime_get/put() API.
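In practice, a child's "vote" is nothing more than a runtime PM get/put on its own device; the PM core propagates the usage count to the MDSS parent, which is the only device actually attached to the GDSC power domain. A minimal sketch of what each child driver ends up doing (illustrative only, not any specific driver's code):

	/* in the child's probe(): opt in to runtime PM */
	pm_runtime_enable(&pdev->dev);

	/* around register accesses: this also powers up the parent (MDSS) GDSC */
	pm_runtime_get_sync(&pdev->dev);
	/* ... program the hardware ... */
	pm_runtime_put_autosuspend(&pdev->dev);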
For this to work, all the child drivers need to be converted to runtime PM as well. This patchset converts MDP5, DSI and HDMI to use runtime PM. The first two patches and the last patch are minor fixes not directly related to runtime PM.
Tested on DB410c and DB820c.
Archit Taneja (10):
  drm/msm/mdp5: Fix typo in encoder_enable path
  drm/msm/mdp5: Drop clock names with "_clk" suffix
  drm/msm/mdp5: Use runtime PM get/put API instead of toggling clocks
  drm/msm/hdmi: Set up runtime PM for HDMI
  drm/msm/dsi: Set up runtime PM for DSI
  drm/msm/dsi: Implement RPM suspend/resume callbacks
  drm/msm/mdp5: Don't use mode_set helper funcs for encoders and CRTCs
  drm/msm/mdp5: Write to SMP registers even if allocations don't change
  drm/msm/mdp5: Set up runtime PM for MDSS
  drm/msm/adreno: Prevent unclocked access when retrieving timestamps
 drivers/gpu/drm/msm/adreno/adreno_gpu.c         | 11 ++-
 drivers/gpu/drm/msm/dsi/dsi.c                   |  5 ++
 drivers/gpu/drm/msm/dsi/dsi.h                   |  2 +
 drivers/gpu/drm/msm/dsi/dsi_host.c              | 94 ++++++++++++++-----------
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.c           |  2 +-
 drivers/gpu/drm/msm/hdmi/hdmi.c                 |  2 +
 drivers/gpu/drm/msm/hdmi/hdmi_bridge.c          |  4 ++
 drivers/gpu/drm/msm/hdmi/hdmi_connector.c       | 63 ++++++++++++-----
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c |  7 +-
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c        | 25 ++++---
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c     | 14 ++--
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_irq.c         | 27 ++++---
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c         | 63 ++++++++++++-----
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h         |  3 -
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_mdss.c        | 63 ++++++++++++++---
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_smp.c         | 59 +++++++++++++---
 drivers/gpu/drm/msm/msm_drv.c                   | 29 ++++++++
 drivers/gpu/drm/msm/msm_kms.h                   |  2 +
 18 files changed, 350 insertions(+), 125 deletions(-)
mdp5_cmd_encoder_disable() is accidentally called in the encoder enable path. We haven't seen any problems because we haven't tested with command mode panels in a while. Fix the copy-paste error.
Signed-off-by: Archit Taneja <architt@codeaurora.org>
---
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c
index 97f3294fbfc6..70bef51245af 100644
--- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c
+++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c
@@ -299,7 +299,7 @@ static void mdp5_encoder_enable(struct drm_encoder *encoder)
 	struct mdp5_interface *intf = mdp5_encoder->intf;
 	if (intf->mode == MDP5_INTF_DSI_MODE_COMMAND)
-		mdp5_cmd_encoder_disable(encoder);
+		mdp5_cmd_encoder_enable(encoder);
 	else
 		mdp5_vid_encoder_enable(encoder);
 }
We have upstream bindings (msm8916) that have the "_clk" suffix in the clock names. The downstream bindings also require it.
We want to drop the "_clk" suffix and at the same time support existing bindings. Update the MDP5 code to use the msm_clk_get() helper, which supports both old and new clock names.
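For reference, the helper just tries the new name first and falls back to the legacy "_clk"-suffixed name. A rough sketch of the fallback logic (hypothetical code, not the actual msm_drv.c implementation):

	static struct clk *clk_get_with_legacy_fallback(struct platform_device *pdev,
							const char *name)
	{
		/* hypothetical helper mirroring what msm_clk_get() does */
		struct clk *clk = devm_clk_get(&pdev->dev, name);
		char legacy[32];

		if (!IS_ERR(clk) || PTR_ERR(clk) == -EPROBE_DEFER)
			return clk;

		/* fall back to the old "<name>_clk" binding */
		snprintf(legacy, sizeof(legacy), "%s_clk", name);
		return devm_clk_get(&pdev->dev, legacy);
	}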
Signed-off-by: Archit Taneja <architt@codeaurora.org>
---
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c
index 5694344db762..efde2d69ec09 100644
--- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c
+++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c
@@ -511,7 +511,7 @@ static int get_clk(struct platform_device *pdev, struct clk **clkp,
 		const char *name, bool mandatory)
 {
 	struct device *dev = &pdev->dev;
-	struct clk *clk = devm_clk_get(dev, name);
+	struct clk *clk = msm_clk_get(pdev, name);
 	if (IS_ERR(clk) && mandatory) {
 		dev_err(dev, "failed to get %s (%ld)\n", name, PTR_ERR(clk));
 		return PTR_ERR(clk);
@@ -896,22 +896,22 @@ static int mdp5_init(struct platform_device *pdev, struct drm_device *dev)
 	}
 	/* mandatory clocks: */
-	ret = get_clk(pdev, &mdp5_kms->axi_clk, "bus_clk", true);
+	ret = get_clk(pdev, &mdp5_kms->axi_clk, "bus", true);
 	if (ret)
 		goto fail;
-	ret = get_clk(pdev, &mdp5_kms->ahb_clk, "iface_clk", true);
+	ret = get_clk(pdev, &mdp5_kms->ahb_clk, "iface", true);
 	if (ret)
 		goto fail;
-	ret = get_clk(pdev, &mdp5_kms->core_clk, "core_clk", true);
+	ret = get_clk(pdev, &mdp5_kms->core_clk, "core", true);
 	if (ret)
 		goto fail;
-	ret = get_clk(pdev, &mdp5_kms->vsync_clk, "vsync_clk", true);
+	ret = get_clk(pdev, &mdp5_kms->vsync_clk, "vsync", true);
 	if (ret)
 		goto fail;
 	/* optional clocks: */
-	get_clk(pdev, &mdp5_kms->lut_clk, "lut_clk", false);
-	get_clk(pdev, &mdp5_kms->iommu_clk, "iommu_clk", false);
+	get_clk(pdev, &mdp5_kms->lut_clk, "lut", false);
+	get_clk(pdev, &mdp5_kms->iommu_clk, "iommu", false);
 	/* we need to set a default rate before enabling. Set a safe
 	 * rate first, then figure out hw revision, and then set a
mdp5_enable/disable calls are scattered all over the MDP5 code. Use pm_runtime_get/put calls instead, and populate the runtime PM suspend/resume ops to manage the clocks.
About the overall design: MDP5 is a child of the top-level MDSS device. MDSS is also the parent of DSI, HDMI and other interfaces. When we enable MDP5's power domain, we end up enabling MDSS's PD too. It is only MDSS's PD that actually controls the GDSC HW. Therefore, calling runtime_get/put on the MDP5 device effectively just casts a vote to enable/disable the GDSC.
Functionally, replacing the clock enable/disable calls with the RPM API can result in the power domain (GDSC) state being toggled if no other child is powered on. This can result in the register context being lost. We make sure (in later commits) that code paths don't end up configuring registers and then losing that state later, which would leave the HW in a bad state.
For now, we've replaced each mdp5_enable/disable with the runtime_get/put API. We could optimize things later by removing runtime_get/put calls that don't really need to be there. This would prevent unnecessary toggling of the power domain and clocks.
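One knob that pairs naturally with the pm_runtime_put_autosuspend() calls is an autosuspend delay, so the power domain isn't dropped immediately between back-to-back register accesses. A sketch of what enabling it would look like, e.g. somewhere in the MDP5 probe path (not part of this series, delay value purely illustrative):

	/* keep the device powered for a short while after the last put() */
	pm_runtime_set_autosuspend_delay(&pdev->dev, 100); /* ms */
	pm_runtime_use_autosuspend(&pdev->dev);
	pm_runtime_enable(&pdev->dev);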
Signed-off-by: Archit Taneja <architt@codeaurora.org>
---
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c |  7 +++-
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c        | 21 +++++++----
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c     |  7 +++-
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_irq.c         | 27 +++++++-----
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c         | 49 +++++++++++++++++++------
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h         |  3 --
 6 files changed, 78 insertions(+), 36 deletions(-)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c index aa7402e03f67..60790df91bfa 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c @@ -192,6 +192,7 @@ int mdp5_cmd_encoder_set_split_display(struct drm_encoder *encoder, { struct mdp5_encoder *mdp5_cmd_enc = to_mdp5_encoder(encoder); struct mdp5_kms *mdp5_kms; + struct device *dev; int intf_num; u32 data = 0;
@@ -214,14 +215,16 @@ int mdp5_cmd_encoder_set_split_display(struct drm_encoder *encoder, /* Smart Panel, Sync mode */ data |= MDP5_SPLIT_DPL_UPPER_SMART_PANEL;
+ dev = &mdp5_kms->pdev->dev; + /* Make sure clocks are on when connectors calling this function. */ - mdp5_enable(mdp5_kms); + pm_runtime_get_sync(dev); mdp5_write(mdp5_kms, REG_MDP5_SPLIT_DPL_UPPER, data);
mdp5_write(mdp5_kms, REG_MDP5_SPLIT_DPL_LOWER, MDP5_SPLIT_DPL_LOWER_SMART_PANEL); mdp5_write(mdp5_kms, REG_MDP5_SPLIT_DPL_EN, 1); - mdp5_disable(mdp5_kms); + pm_runtime_put_autosuspend(dev);
return 0; } diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c index 9d01656d853e..fb78425db52c 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c @@ -414,6 +414,7 @@ static void mdp5_crtc_disable(struct drm_crtc *crtc) struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state); struct mdp5_kms *mdp5_kms = get_kms(crtc); + struct device *dev = &mdp5_kms->pdev->dev;
DBG("%s", crtc->name);
@@ -424,7 +425,7 @@ static void mdp5_crtc_disable(struct drm_crtc *crtc) mdp_irq_unregister(&mdp5_kms->base, &mdp5_crtc->pp_done);
mdp_irq_unregister(&mdp5_kms->base, &mdp5_crtc->err); - mdp5_disable(mdp5_kms); + pm_runtime_put_autosuspend(dev);
mdp5_crtc->enabled = false; } @@ -434,13 +435,14 @@ static void mdp5_crtc_enable(struct drm_crtc *crtc) struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state); struct mdp5_kms *mdp5_kms = get_kms(crtc); + struct device *dev = &mdp5_kms->pdev->dev;
DBG("%s", crtc->name);
if (WARN_ON(mdp5_crtc->enabled)) return;
- mdp5_enable(mdp5_kms); + pm_runtime_get_sync(dev); mdp_irq_register(&mdp5_kms->base, &mdp5_crtc->err);
if (mdp5_cstate->cmd_mode) @@ -725,6 +727,7 @@ static int mdp5_crtc_cursor_set(struct drm_crtc *crtc, struct mdp5_pipeline *pipeline = &mdp5_cstate->pipeline; struct drm_device *dev = crtc->dev; struct mdp5_kms *mdp5_kms = get_kms(crtc); + struct platform_device *pdev = mdp5_kms->pdev; struct msm_kms *kms = &mdp5_kms->base.base; struct drm_gem_object *cursor_bo, *old_bo = NULL; uint32_t blendcfg, stride; @@ -753,7 +756,7 @@ static int mdp5_crtc_cursor_set(struct drm_crtc *crtc, if (!handle) { DBG("Cursor off"); cursor_enable = false; - mdp5_enable(mdp5_kms); + pm_runtime_get_sync(&pdev->dev); goto set_cursor; }
@@ -768,6 +771,8 @@ static int mdp5_crtc_cursor_set(struct drm_crtc *crtc, lm = mdp5_cstate->pipeline.mixer->lm; stride = width * drm_format_plane_cpp(DRM_FORMAT_ARGB8888, 0);
+ pm_runtime_get_sync(&pdev->dev); + spin_lock_irqsave(&mdp5_crtc->cursor.lock, flags); old_bo = mdp5_crtc->cursor.scanout_bo;
@@ -777,8 +782,6 @@ static int mdp5_crtc_cursor_set(struct drm_crtc *crtc,
get_roi(crtc, &roi_w, &roi_h);
- mdp5_enable(mdp5_kms); - mdp5_write(mdp5_kms, REG_MDP5_LM_CURSOR_STRIDE(lm), stride); mdp5_write(mdp5_kms, REG_MDP5_LM_CURSOR_FORMAT(lm), MDP5_LM_CURSOR_FORMAT_FORMAT(CURSOR_FMT_ARGB8888)); @@ -796,6 +799,8 @@ static int mdp5_crtc_cursor_set(struct drm_crtc *crtc,
spin_unlock_irqrestore(&mdp5_crtc->cursor.lock, flags);
+ pm_runtime_put_autosuspend(&pdev->dev); + set_cursor: ret = mdp5_ctl_set_cursor(ctl, pipeline, 0, cursor_enable); if (ret) { @@ -807,7 +812,7 @@ static int mdp5_crtc_cursor_set(struct drm_crtc *crtc, crtc_flush(crtc, flush_mask);
end: - mdp5_disable(mdp5_kms); + pm_runtime_put_autosuspend(&pdev->dev); if (old_bo) { drm_flip_work_queue(&mdp5_crtc->unref_cursor_work, old_bo); /* enable vblank to complete cursor work: */ @@ -840,7 +845,7 @@ static int mdp5_crtc_cursor_move(struct drm_crtc *crtc, int x, int y)
get_roi(crtc, &roi_w, &roi_h);
- mdp5_enable(mdp5_kms); + pm_runtime_get_sync(&mdp5_kms->pdev->dev);
spin_lock_irqsave(&mdp5_crtc->cursor.lock, flags); mdp5_write(mdp5_kms, REG_MDP5_LM_CURSOR_SIZE(lm), @@ -853,7 +858,7 @@ static int mdp5_crtc_cursor_move(struct drm_crtc *crtc, int x, int y)
crtc_flush(crtc, flush_mask);
- mdp5_disable(mdp5_kms); + pm_runtime_put_autosuspend(&mdp5_kms->pdev->dev);
return 0; } diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c index 70bef51245af..0ca9e4033bb6 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c @@ -350,6 +350,7 @@ int mdp5_vid_encoder_set_split_display(struct drm_encoder *encoder, struct mdp5_encoder *mdp5_encoder = to_mdp5_encoder(encoder); struct mdp5_encoder *mdp5_slave_enc = to_mdp5_encoder(slave_encoder); struct mdp5_kms *mdp5_kms; + struct device *dev; int intf_num; u32 data = 0;
@@ -369,8 +370,10 @@ int mdp5_vid_encoder_set_split_display(struct drm_encoder *encoder, else return -EINVAL;
+ dev = &mdp5_kms->pdev->dev; /* Make sure clocks are on when connectors calling this function. */ - mdp5_enable(mdp5_kms); + pm_runtime_get_sync(dev); + /* Dumb Panel, Sync mode */ mdp5_write(mdp5_kms, REG_MDP5_SPLIT_DPL_UPPER, 0); mdp5_write(mdp5_kms, REG_MDP5_SPLIT_DPL_LOWER, data); @@ -378,7 +381,7 @@ int mdp5_vid_encoder_set_split_display(struct drm_encoder *encoder,
mdp5_ctl_pair(mdp5_encoder->ctl, mdp5_slave_enc->ctl, true);
- mdp5_disable(mdp5_kms); + pm_runtime_put_autosuspend(dev);
return 0; } diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_irq.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_irq.c index 3ce8b9dec9c1..bb5deb00c899 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_irq.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_irq.c @@ -49,16 +49,19 @@ static void mdp5_irq_error_handler(struct mdp_irq *irq, uint32_t irqstatus) void mdp5_irq_preinstall(struct msm_kms *kms) { struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms)); - mdp5_enable(mdp5_kms); + struct device *dev = &mdp5_kms->pdev->dev; + + pm_runtime_get_sync(dev); mdp5_write(mdp5_kms, REG_MDP5_INTR_CLEAR, 0xffffffff); mdp5_write(mdp5_kms, REG_MDP5_INTR_EN, 0x00000000); - mdp5_disable(mdp5_kms); + pm_runtime_put_autosuspend(dev); }
int mdp5_irq_postinstall(struct msm_kms *kms) { struct mdp_kms *mdp_kms = to_mdp_kms(kms); struct mdp5_kms *mdp5_kms = to_mdp5_kms(mdp_kms); + struct device *dev = &mdp5_kms->pdev->dev; struct mdp_irq *error_handler = &mdp5_kms->error_handler;
error_handler->irq = mdp5_irq_error_handler; @@ -67,9 +70,9 @@ int mdp5_irq_postinstall(struct msm_kms *kms) MDP5_IRQ_INTF2_UNDER_RUN | MDP5_IRQ_INTF3_UNDER_RUN;
- mdp5_enable(mdp5_kms); + pm_runtime_get_sync(dev); mdp_irq_register(mdp_kms, error_handler); - mdp5_disable(mdp5_kms); + pm_runtime_put_autosuspend(dev);
return 0; } @@ -77,9 +80,11 @@ int mdp5_irq_postinstall(struct msm_kms *kms) void mdp5_irq_uninstall(struct msm_kms *kms) { struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms)); - mdp5_enable(mdp5_kms); + struct device *dev = &mdp5_kms->pdev->dev; + + pm_runtime_get_sync(dev); mdp5_write(mdp5_kms, REG_MDP5_INTR_EN, 0x00000000); - mdp5_disable(mdp5_kms); + pm_runtime_put_autosuspend(dev); }
irqreturn_t mdp5_irq(struct msm_kms *kms) @@ -109,11 +114,12 @@ irqreturn_t mdp5_irq(struct msm_kms *kms) int mdp5_enable_vblank(struct msm_kms *kms, struct drm_crtc *crtc) { struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms)); + struct device *dev = &mdp5_kms->pdev->dev;
- mdp5_enable(mdp5_kms); + pm_runtime_get_sync(dev); mdp_update_vblank_mask(to_mdp_kms(kms), mdp5_crtc_vblank(crtc), true); - mdp5_disable(mdp5_kms); + pm_runtime_put_autosuspend(dev);
return 0; } @@ -121,9 +127,10 @@ int mdp5_enable_vblank(struct msm_kms *kms, struct drm_crtc *crtc) void mdp5_disable_vblank(struct msm_kms *kms, struct drm_crtc *crtc) { struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms)); + struct device *dev = &mdp5_kms->pdev->dev;
- mdp5_enable(mdp5_kms); + pm_runtime_get_sync(dev); mdp_update_vblank_mask(to_mdp_kms(kms), mdp5_crtc_vblank(crtc), false); - mdp5_disable(mdp5_kms); + pm_runtime_put_autosuspend(dev); } diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c index efde2d69ec09..16bd54cf486a 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c @@ -30,11 +30,10 @@ static const char *iommu_ports[] = { static int mdp5_hw_init(struct msm_kms *kms) { struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms)); - struct platform_device *pdev = mdp5_kms->pdev; + struct device *dev = &mdp5_kms->pdev->dev; unsigned long flags;
- pm_runtime_get_sync(&pdev->dev); - mdp5_enable(mdp5_kms); + pm_runtime_get_sync(dev);
/* Magic unknown register writes: * @@ -66,8 +65,7 @@ static int mdp5_hw_init(struct msm_kms *kms)
mdp5_ctlm_hw_reset(mdp5_kms->ctlm);
- mdp5_disable(mdp5_kms); - pm_runtime_put_sync(&pdev->dev); + pm_runtime_put_sync(dev);
return 0; } @@ -111,8 +109,9 @@ static void mdp5_swap_state(struct msm_kms *kms, struct drm_atomic_state *state) static void mdp5_prepare_commit(struct msm_kms *kms, struct drm_atomic_state *state) { struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms)); + struct device *dev = &mdp5_kms->pdev->dev;
- mdp5_enable(mdp5_kms); + pm_runtime_get_sync(dev);
if (mdp5_kms->smp) mdp5_smp_prepare_commit(mdp5_kms->smp, &mdp5_kms->state->smp); @@ -121,11 +120,12 @@ static void mdp5_prepare_commit(struct msm_kms *kms, struct drm_atomic_state *st static void mdp5_complete_commit(struct msm_kms *kms, struct drm_atomic_state *state) { struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms)); + struct device *dev = &mdp5_kms->pdev->dev;
if (mdp5_kms->smp) mdp5_smp_complete_commit(mdp5_kms->smp, &mdp5_kms->state->smp);
- mdp5_disable(mdp5_kms); + pm_runtime_put_autosuspend(dev); }
static void mdp5_wait_for_crtc_commit_done(struct msm_kms *kms, @@ -495,11 +495,12 @@ static int modeset_init(struct mdp5_kms *mdp5_kms) static void read_mdp_hw_revision(struct mdp5_kms *mdp5_kms, u32 *major, u32 *minor) { + struct device *dev = &mdp5_kms->pdev->dev; u32 version;
- mdp5_enable(mdp5_kms); + pm_runtime_get_sync(dev); version = mdp5_read(mdp5_kms, REG_MDP5_HW_VERSION); - mdp5_disable(mdp5_kms); + pm_runtime_put_autosuspend(dev);
*major = FIELD(version, MDP5_HW_VERSION_MAJOR); *minor = FIELD(version, MDP5_HW_VERSION_MINOR); @@ -652,7 +653,7 @@ struct msm_kms *mdp5_kms_init(struct drm_device *dev) * have left things on, in which case we'll start getting faults if * we don't disable): */ - mdp5_enable(mdp5_kms); + pm_runtime_get_sync(&pdev->dev); for (i = 0; i < MDP5_INTF_NUM_MAX; i++) { if (mdp5_cfg_intf_is_virtual(config->hw->intf.connect[i]) || !config->hw->intf.base[i]) @@ -661,7 +662,6 @@ struct msm_kms *mdp5_kms_init(struct drm_device *dev)
mdp5_write(mdp5_kms, REG_MDP5_INTF_FRAME_LINE_COUNT_EN(i), 0x3); } - mdp5_disable(mdp5_kms); mdelay(16);
if (config->platform.iommu) { @@ -687,6 +687,8 @@ struct msm_kms *mdp5_kms_init(struct drm_device *dev) aspace = NULL;; }
+ pm_runtime_put_autosuspend(&pdev->dev); + ret = modeset_init(mdp5_kms); if (ret) { dev_err(&pdev->dev, "modeset_init failed: %d\n", ret); @@ -1015,6 +1017,30 @@ static int mdp5_dev_remove(struct platform_device *pdev) return 0; }
+static int mdp5_runtime_suspend(struct device *dev) +{ + struct platform_device *pdev = to_platform_device(dev); + struct mdp5_kms *mdp5_kms = platform_get_drvdata(pdev); + + DBG(""); + + return mdp5_disable(mdp5_kms); +} + +static int mdp5_runtime_resume(struct device *dev) +{ + struct platform_device *pdev = to_platform_device(dev); + struct mdp5_kms *mdp5_kms = platform_get_drvdata(pdev); + + DBG(""); + + return mdp5_enable(mdp5_kms); +} + +static const struct dev_pm_ops mdp5_pm_ops = { + SET_RUNTIME_PM_OPS(mdp5_runtime_suspend, mdp5_runtime_resume, NULL) +}; + static const struct of_device_id mdp5_dt_match[] = { { .compatible = "qcom,mdp5", }, /* to support downstream DT files */ @@ -1029,6 +1055,7 @@ static struct platform_driver mdp5_driver = { .driver = { .name = "msm_mdp", .of_match_table = mdp5_dt_match, + .pm = &mdp5_pm_ops, }, };
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h index 91cd31161087..cf46b00b2d14 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h @@ -260,9 +260,6 @@ static inline uint32_t lm2ppdone(struct mdp5_hw_mixer *mixer) return MDP5_IRQ_PING_PONG_0_DONE << mixer->pp; }
-int mdp5_disable(struct mdp5_kms *mdp5_kms); -int mdp5_enable(struct mdp5_kms *mdp5_kms); - void mdp5_set_irqmask(struct mdp_kms *mdp_kms, uint32_t irqmask, uint32_t old_irqmask); void mdp5_irq_preinstall(struct msm_kms *kms);
Enable rudimentary runtime PM in the HDMI driver. We can't really do aggressive PM toggling at the moment because we need to leave the hpd clocks enabled all the time. There isn't much benefit in creating suspend/resume ops to toggle clocks either.
We just make sure that we configure the power domain in the HDMI bridge's enable/disable paths, and the HDMI connector's detect() op.
Signed-off-by: Archit Taneja <architt@codeaurora.org>
---
 drivers/gpu/drm/msm/hdmi/hdmi.c           |  2 +
 drivers/gpu/drm/msm/hdmi/hdmi_bridge.c    |  4 ++
 drivers/gpu/drm/msm/hdmi/hdmi_connector.c | 63 +++++++++++++++++++++----------
 3 files changed, 50 insertions(+), 19 deletions(-)
diff --git a/drivers/gpu/drm/msm/hdmi/hdmi.c b/drivers/gpu/drm/msm/hdmi/hdmi.c index a968cad509c2..17e069a133a4 100644 --- a/drivers/gpu/drm/msm/hdmi/hdmi.c +++ b/drivers/gpu/drm/msm/hdmi/hdmi.c @@ -239,6 +239,8 @@ static struct hdmi *msm_hdmi_init(struct platform_device *pdev) hdmi->pwr_clks[i] = clk; }
+ pm_runtime_enable(&pdev->dev); + hdmi->workq = alloc_ordered_workqueue("msm_hdmi", 0);
hdmi->i2c = msm_hdmi_i2c_init(hdmi); diff --git a/drivers/gpu/drm/msm/hdmi/hdmi_bridge.c b/drivers/gpu/drm/msm/hdmi/hdmi_bridge.c index ae40e7179d4f..e9b5e886e747 100644 --- a/drivers/gpu/drm/msm/hdmi/hdmi_bridge.c +++ b/drivers/gpu/drm/msm/hdmi/hdmi_bridge.c @@ -35,6 +35,8 @@ static void msm_hdmi_power_on(struct drm_bridge *bridge) const struct hdmi_platform_config *config = hdmi->config; int i, ret;
+ pm_runtime_get_sync(&hdmi->pdev->dev); + for (i = 0; i < config->pwr_reg_cnt; i++) { ret = regulator_enable(hdmi->pwr_regs[i]); if (ret) { @@ -84,6 +86,8 @@ static void power_off(struct drm_bridge *bridge) config->pwr_reg_names[i], ret); } } + + pm_runtime_put_autosuspend(&hdmi->pdev->dev); }
#define AVI_IFRAME_LINE_NUMBER 1 diff --git a/drivers/gpu/drm/msm/hdmi/hdmi_connector.c b/drivers/gpu/drm/msm/hdmi/hdmi_connector.c index 2a23b92f585f..5a57a81665b9 100644 --- a/drivers/gpu/drm/msm/hdmi/hdmi_connector.c +++ b/drivers/gpu/drm/msm/hdmi/hdmi_connector.c @@ -137,6 +137,36 @@ static int gpio_config(struct hdmi *hdmi, bool on) return ret; }
+static void enable_hpd_clocks(struct hdmi *hdmi, bool enable) +{ + const struct hdmi_platform_config *config = hdmi->config; + struct device *dev = &hdmi->pdev->dev; + int i, ret; + + if (enable) { + for (i = 0; i < config->hpd_clk_cnt; i++) { + if (config->hpd_freq && config->hpd_freq[i]) { + ret = clk_set_rate(hdmi->hpd_clks[i], + config->hpd_freq[i]); + if (ret) + dev_warn(dev, + "failed to set clk %s (%d)\n", + config->hpd_clk_names[i], ret); + } + + ret = clk_prepare_enable(hdmi->hpd_clks[i]); + if (ret) { + dev_err(dev, + "failed to enable hpd clk: %s (%d)\n", + config->hpd_clk_names[i], ret); + } + } + } else { + for (i = config->hpd_clk_cnt - 1; i >= 0; i--) + clk_disable_unprepare(hdmi->hpd_clks[i]); + } +} + static int hpd_enable(struct hdmi_connector *hdmi_connector) { struct hdmi *hdmi = hdmi_connector->hdmi; @@ -167,22 +197,8 @@ static int hpd_enable(struct hdmi_connector *hdmi_connector) goto fail; }
- for (i = 0; i < config->hpd_clk_cnt; i++) { - if (config->hpd_freq && config->hpd_freq[i]) { - ret = clk_set_rate(hdmi->hpd_clks[i], - config->hpd_freq[i]); - if (ret) - dev_warn(dev, "failed to set clk %s (%d)\n", - config->hpd_clk_names[i], ret); - } - - ret = clk_prepare_enable(hdmi->hpd_clks[i]); - if (ret) { - dev_err(dev, "failed to enable hpd clk: %s (%d)\n", - config->hpd_clk_names[i], ret); - goto fail; - } - } + pm_runtime_get_sync(dev); + enable_hpd_clocks(hdmi, true);
msm_hdmi_set_mode(hdmi, false); msm_hdmi_phy_reset(hdmi); @@ -232,8 +248,8 @@ static void hdp_disable(struct hdmi_connector *hdmi_connector)
msm_hdmi_set_mode(hdmi, false);
- for (i = 0; i < config->hpd_clk_cnt; i++) - clk_disable_unprepare(hdmi->hpd_clks[i]); + enable_hpd_clocks(hdmi, false); + pm_runtime_put_autosuspend(dev);
ret = gpio_config(hdmi, false); if (ret) @@ -292,7 +308,16 @@ void msm_hdmi_connector_irq(struct drm_connector *connector)
static enum drm_connector_status detect_reg(struct hdmi *hdmi) { - uint32_t hpd_int_status = hdmi_read(hdmi, REG_HDMI_HPD_INT_STATUS); + uint32_t hpd_int_status; + + pm_runtime_get_sync(&hdmi->pdev->dev); + enable_hpd_clocks(hdmi, true); + + hpd_int_status = hdmi_read(hdmi, REG_HDMI_HPD_INT_STATUS); + + enable_hpd_clocks(hdmi, false); + pm_runtime_put_autosuspend(&hdmi->pdev->dev); + return (hpd_int_status & HDMI_HPD_INT_STATUS_CABLE_DETECTED) ? connector_status_connected : connector_status_disconnected; }
Call the pm_runtime_get/put API where we need the clocks enabled.
The main entry/exit points are 1) enabling/disabling the DSI bridge and 2) sending commands from the DSI host to the device.
Signed-off-by: Archit Taneja <architt@codeaurora.org>
---
 drivers/gpu/drm/msm/dsi/dsi_host.c    | 11 +++++++++++
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.c |  2 +-
 2 files changed, 12 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c index c7b612c3d771..da0913a33ca0 100644 --- a/drivers/gpu/drm/msm/dsi/dsi_host.c +++ b/drivers/gpu/drm/msm/dsi/dsi_host.c @@ -221,6 +221,8 @@ static const struct msm_dsi_cfg_handler *dsi_get_config( goto put_gdsc; }
+ pm_runtime_get_sync(dev); + ret = regulator_enable(gdsc_reg); if (ret) { pr_err("%s: unable to enable gdsc\n", __func__); @@ -247,6 +249,7 @@ static const struct msm_dsi_cfg_handler *dsi_get_config( clk_disable_unprepare(ahb_clk); disable_gdsc: regulator_disable(gdsc_reg); + pm_runtime_put_autosuspend(dev); put_clk: clk_put(ahb_clk); put_gdsc: @@ -1713,6 +1716,8 @@ int msm_dsi_host_init(struct msm_dsi *msm_dsi) goto fail; }
+ pm_runtime_enable(&pdev->dev); + msm_host->cfg_hnd = dsi_get_config(msm_host); if (!msm_host->cfg_hnd) { ret = -EINVAL; @@ -1786,6 +1791,8 @@ void msm_dsi_host_destroy(struct mipi_dsi_host *host) mutex_destroy(&msm_host->clk_mutex); mutex_destroy(&msm_host->cmd_mutex); mutex_destroy(&msm_host->dev_mutex); + + pm_runtime_disable(&msm_host->pdev->dev); }
int msm_dsi_host_modeset_init(struct mipi_dsi_host *host, @@ -1881,6 +1888,7 @@ int msm_dsi_host_xfer_prepare(struct mipi_dsi_host *host, * mdss interrupt is generated in mdp core clock domain * mdp clock need to be enabled to receive dsi interrupt */ + pm_runtime_get_sync(&msm_host->pdev->dev); dsi_clk_ctrl(msm_host, 1);
/* TODO: vote for bus bandwidth */ @@ -1912,6 +1920,7 @@ void msm_dsi_host_xfer_restore(struct mipi_dsi_host *host, /* TODO: unvote for bus bandwidth */
dsi_clk_ctrl(msm_host, 0); + pm_runtime_put_autosuspend(&msm_host->pdev->dev); }
int msm_dsi_host_cmd_tx(struct mipi_dsi_host *host, @@ -2217,6 +2226,7 @@ int msm_dsi_host_power_on(struct mipi_dsi_host *host, goto unlock_ret; }
+ pm_runtime_get_sync(&msm_host->pdev->dev); ret = dsi_clk_ctrl(msm_host, 1); if (ret) { pr_err("%s: failed to enable clocks. ret=%d\n", __func__, ret); @@ -2269,6 +2279,7 @@ int msm_dsi_host_power_off(struct mipi_dsi_host *host) pinctrl_pm_select_sleep_state(&msm_host->pdev->dev);
dsi_clk_ctrl(msm_host, 0); + pm_runtime_put_autosuspend(&msm_host->pdev->dev);
dsi_host_regulator_disable(msm_host);
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c index 0c2eb9c9a1fc..7c9bf91bc22b 100644 --- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c +++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c @@ -373,7 +373,7 @@ static int dsi_phy_enable_resource(struct msm_dsi_phy *phy) static void dsi_phy_disable_resource(struct msm_dsi_phy *phy) { clk_disable_unprepare(phy->ahb_clk); - pm_runtime_put_sync(&phy->pdev->dev); + pm_runtime_put_autosuspend(&phy->pdev->dev); }
static const struct of_device_id dsi_phy_dt_match[] = {
The bus clocks are always enabled/disabled along with the power domain, so move them to the runtime suspend/resume ops. This cleans up the clock code a bit. Get rid of the clk_mutex since it isn't needed anymore.
Signed-off-by: Archit Taneja <architt@codeaurora.org>
---
 drivers/gpu/drm/msm/dsi/dsi.c      |  5 +++
 drivers/gpu/drm/msm/dsi/dsi.h      |  2 +
 drivers/gpu/drm/msm/dsi/dsi_host.c | 83 +++++++++++++++++++-------------------
 3 files changed, 49 insertions(+), 41 deletions(-)
diff --git a/drivers/gpu/drm/msm/dsi/dsi.c b/drivers/gpu/drm/msm/dsi/dsi.c index 311c1c1e7d6c..98742d7af6dc 100644 --- a/drivers/gpu/drm/msm/dsi/dsi.c +++ b/drivers/gpu/drm/msm/dsi/dsi.c @@ -161,12 +161,17 @@ static const struct of_device_id dt_match[] = { {} };
+static const struct dev_pm_ops dsi_pm_ops = { + SET_RUNTIME_PM_OPS(msm_dsi_runtime_suspend, msm_dsi_runtime_resume, NULL) +}; + static struct platform_driver dsi_driver = { .probe = dsi_dev_probe, .remove = dsi_dev_remove, .driver = { .name = "msm_dsi", .of_match_table = dt_match, + .pm = &dsi_pm_ops, }, };
diff --git a/drivers/gpu/drm/msm/dsi/dsi.h b/drivers/gpu/drm/msm/dsi/dsi.h index 9e6017387efb..2302046197a8 100644 --- a/drivers/gpu/drm/msm/dsi/dsi.h +++ b/drivers/gpu/drm/msm/dsi/dsi.h @@ -179,6 +179,8 @@ void msm_dsi_host_destroy(struct mipi_dsi_host *host); int msm_dsi_host_modeset_init(struct mipi_dsi_host *host, struct drm_device *dev); int msm_dsi_host_init(struct msm_dsi *msm_dsi); +int msm_dsi_runtime_suspend(struct device *dev); +int msm_dsi_runtime_resume(struct device *dev);
/* dsi phy */ struct msm_dsi_phy; diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c index da0913a33ca0..dbb31a014419 100644 --- a/drivers/gpu/drm/msm/dsi/dsi_host.c +++ b/drivers/gpu/drm/msm/dsi/dsi_host.c @@ -135,7 +135,6 @@ struct msm_dsi_host { struct completion video_comp; struct mutex dev_mutex; struct mutex cmd_mutex; - struct mutex clk_mutex; spinlock_t intr_lock; /* Protect interrupt ctrl register */
u32 err_work_state; @@ -458,6 +457,34 @@ static void dsi_bus_clk_disable(struct msm_dsi_host *msm_host) clk_disable_unprepare(msm_host->bus_clks[i]); }
+int msm_dsi_runtime_suspend(struct device *dev) +{ + struct platform_device *pdev = to_platform_device(dev); + struct msm_dsi *msm_dsi = platform_get_drvdata(pdev); + struct mipi_dsi_host *host = msm_dsi->host; + struct msm_dsi_host *msm_host = to_msm_dsi_host(host); + + if (!msm_host->cfg_hnd) + return 0; + + dsi_bus_clk_disable(msm_host); + + return 0; +} + +int msm_dsi_runtime_resume(struct device *dev) +{ + struct platform_device *pdev = to_platform_device(dev); + struct msm_dsi *msm_dsi = platform_get_drvdata(pdev); + struct mipi_dsi_host *host = msm_dsi->host; + struct msm_dsi_host *msm_host = to_msm_dsi_host(host); + + if (!msm_host->cfg_hnd) + return 0; + + return dsi_bus_clk_enable(msm_host); +} + static int dsi_link_clk_enable_6g(struct msm_dsi_host *msm_host) { int ret; @@ -599,35 +626,6 @@ static void dsi_link_clk_disable(struct msm_dsi_host *msm_host) } }
-static int dsi_clk_ctrl(struct msm_dsi_host *msm_host, bool enable) -{ - int ret = 0; - - mutex_lock(&msm_host->clk_mutex); - if (enable) { - ret = dsi_bus_clk_enable(msm_host); - if (ret) { - pr_err("%s: Can not enable bus clk, %d\n", - __func__, ret); - goto unlock_ret; - } - ret = dsi_link_clk_enable(msm_host); - if (ret) { - pr_err("%s: Can not enable link clk, %d\n", - __func__, ret); - dsi_bus_clk_disable(msm_host); - goto unlock_ret; - } - } else { - dsi_link_clk_disable(msm_host); - dsi_bus_clk_disable(msm_host); - } - -unlock_ret: - mutex_unlock(&msm_host->clk_mutex); - return ret; -} - static int dsi_calc_clk_rate(struct msm_dsi_host *msm_host) { struct drm_display_mode *mode = msm_host->mode; @@ -1702,6 +1700,7 @@ int msm_dsi_host_init(struct msm_dsi *msm_dsi) }
msm_host->pdev = pdev; + msm_dsi->host = &msm_host->base;
ret = dsi_host_parse_dt(msm_host); if (ret) { @@ -1758,7 +1757,6 @@ int msm_dsi_host_init(struct msm_dsi *msm_dsi) init_completion(&msm_host->video_comp); mutex_init(&msm_host->dev_mutex); mutex_init(&msm_host->cmd_mutex); - mutex_init(&msm_host->clk_mutex); spin_lock_init(&msm_host->intr_lock);
/* setup workqueue */ @@ -1766,7 +1764,6 @@ int msm_dsi_host_init(struct msm_dsi *msm_dsi) INIT_WORK(&msm_host->err_work, dsi_err_worker); INIT_WORK(&msm_host->hpd_work, dsi_hpd_worker);
- msm_dsi->host = &msm_host->base; msm_dsi->id = msm_host->id;
DBG("Dsi Host %d initialized", msm_host->id); @@ -1788,7 +1785,6 @@ void msm_dsi_host_destroy(struct mipi_dsi_host *host) msm_host->workqueue = NULL; }
- mutex_destroy(&msm_host->clk_mutex); mutex_destroy(&msm_host->cmd_mutex); mutex_destroy(&msm_host->dev_mutex);
@@ -1889,7 +1885,7 @@ int msm_dsi_host_xfer_prepare(struct mipi_dsi_host *host, * mdp clock need to be enabled to receive dsi interrupt */ pm_runtime_get_sync(&msm_host->pdev->dev); - dsi_clk_ctrl(msm_host, 1); + dsi_link_clk_enable(msm_host);
/* TODO: vote for bus bandwidth */
@@ -1919,7 +1915,7 @@ void msm_dsi_host_xfer_restore(struct mipi_dsi_host *host,
/* TODO: unvote for bus bandwidth */
- dsi_clk_ctrl(msm_host, 0); + dsi_link_clk_disable(msm_host); pm_runtime_put_autosuspend(&msm_host->pdev->dev); }
@@ -2169,8 +2165,11 @@ int msm_dsi_host_enable(struct mipi_dsi_host *host) * and only turned on before MDP START. * This part of code should be enabled once mdp driver support it. */ - /* if (msm_panel->mode == MSM_DSI_CMD_MODE) - dsi_clk_ctrl(msm_host, 0); */ + /* if (msm_panel->mode == MSM_DSI_CMD_MODE) { + * dsi_link_clk_disable(msm_host); + * pm_runtime_put_autosuspend(&msm_host->pdev->dev); + * } + */
return 0; } @@ -2227,9 +2226,10 @@ int msm_dsi_host_power_on(struct mipi_dsi_host *host, }
pm_runtime_get_sync(&msm_host->pdev->dev); - ret = dsi_clk_ctrl(msm_host, 1); + ret = dsi_link_clk_enable(msm_host); if (ret) { - pr_err("%s: failed to enable clocks. ret=%d\n", __func__, ret); + pr_err("%s: failed to enable link clocks. ret=%d\n", + __func__, ret); goto fail_disable_reg; }
@@ -2253,7 +2253,8 @@ int msm_dsi_host_power_on(struct mipi_dsi_host *host, return 0;
fail_disable_clk: - dsi_clk_ctrl(msm_host, 0); + dsi_link_clk_disable(msm_host); + pm_runtime_put_autosuspend(&msm_host->pdev->dev); fail_disable_reg: dsi_host_regulator_disable(msm_host); unlock_ret: @@ -2278,7 +2279,7 @@ int msm_dsi_host_power_off(struct mipi_dsi_host *host)
pinctrl_pm_select_sleep_state(&msm_host->pdev->dev);
- dsi_clk_ctrl(msm_host, 0); + dsi_link_clk_disable(msm_host); pm_runtime_put_autosuspend(&msm_host->pdev->dev);
dsi_host_regulator_disable(msm_host);
We shouldn't use the mode_set/mode_set_nofb helpers when we use runtime PM: the power domain can be turned off between mode_set and enable, so the registers configured in these funcs lose their state before we eventually enable the display pipeline.
Don't set these vfuncs in the helper funcs; instead, call the mode_set functions in the crtc_enable/encoder_enable paths, after the power domain has been enabled.
Signed-off-by: Archit Taneja <architt@codeaurora.org>
---
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c    | 4 +++-
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c | 5 ++++-
 2 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c index fb78425db52c..0de6e3436ac8 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c @@ -443,6 +443,9 @@ static void mdp5_crtc_enable(struct drm_crtc *crtc) return;
pm_runtime_get_sync(dev); + + mdp5_crtc_mode_set_nofb(crtc); + mdp_irq_register(&mdp5_kms->base, &mdp5_crtc->err);
if (mdp5_cstate->cmd_mode) @@ -951,7 +954,6 @@ static const struct drm_crtc_funcs mdp5_crtc_no_lm_cursor_funcs = { };
static const struct drm_crtc_helper_funcs mdp5_crtc_helper_funcs = { - .mode_set_nofb = mdp5_crtc_mode_set_nofb, .disable = mdp5_crtc_disable, .enable = mdp5_crtc_enable, .atomic_check = mdp5_crtc_atomic_check, diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c index 0ca9e4033bb6..5b851380d3f2 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c @@ -297,6 +297,10 @@ static void mdp5_encoder_enable(struct drm_encoder *encoder) { struct mdp5_encoder *mdp5_encoder = to_mdp5_encoder(encoder); struct mdp5_interface *intf = mdp5_encoder->intf; + /* this isn't right I think */ + struct drm_crtc_state *cstate = encoder->crtc->state; + + mdp5_encoder_mode_set(encoder, &cstate->mode, &cstate->adjusted_mode);
if (intf->mode == MDP5_INTF_DSI_MODE_COMMAND) mdp5_cmd_encoder_enable(encoder); @@ -320,7 +324,6 @@ static int mdp5_encoder_atomic_check(struct drm_encoder *encoder, }
static const struct drm_encoder_helper_funcs mdp5_encoder_helper_funcs = { - .mode_set = mdp5_encoder_mode_set, .disable = mdp5_encoder_disable, .enable = mdp5_encoder_enable, .atomic_check = mdp5_encoder_atomic_check,
Requests for assigning/freeing SMP blocks by planes are collected during the atomic check phase, and represented by mdp5_smp_state's 'assigned' and 'released' members.
Once the atomic state is committed, these members are reset to 0, indicating that the existing configuration satisfies all the planes. Future atomic commits will copy the old mdp5_smp_state, and the 'assigned' and 'released' members will be updated only if there is a change in the plane configuration.
When we disable and re-enable the display, we lose the values we wrote to the SMP registers, but the code doesn't reprogram the registers because there isn't any change in mdp5_smp_state.
Fix this by writing to the registers irrespective of whether there was a change in SMP state or not. We do this by keeping a cache of the register values and writing them out every time we commit a state.
Signed-off-by: Archit Taneja <architt@codeaurora.org>
---
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_smp.c | 59 ++++++++++++++++++++++++++++-----
 1 file changed, 51 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_smp.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_smp.c index 58f712d37e7f..ae4983d9d0a5 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_smp.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_smp.c @@ -28,6 +28,13 @@ struct mdp5_smp {
int blk_cnt; int blk_size; + + /* register cache */ + u32 alloc_w[22]; + u32 alloc_r[22]; + u32 pipe_reqprio_fifo_wm0[SSPP_MAX]; + u32 pipe_reqprio_fifo_wm1[SSPP_MAX]; + u32 pipe_reqprio_fifo_wm2[SSPP_MAX]; };
static inline @@ -98,16 +105,15 @@ static int smp_request_block(struct mdp5_smp *smp, static void set_fifo_thresholds(struct mdp5_smp *smp, enum mdp5_pipe pipe, int nblks) { - struct mdp5_kms *mdp5_kms = get_kms(smp); u32 smp_entries_per_blk = smp->blk_size / (128 / BITS_PER_BYTE); u32 val;
/* 1/4 of SMP pool that is being fetched */ val = (nblks * smp_entries_per_blk) / 4;
- mdp5_write(mdp5_kms, REG_MDP5_PIPE_REQPRIO_FIFO_WM_0(pipe), val * 1); - mdp5_write(mdp5_kms, REG_MDP5_PIPE_REQPRIO_FIFO_WM_1(pipe), val * 2); - mdp5_write(mdp5_kms, REG_MDP5_PIPE_REQPRIO_FIFO_WM_2(pipe), val * 3); + smp->pipe_reqprio_fifo_wm0[pipe] = val * 1; + smp->pipe_reqprio_fifo_wm1[pipe] = val * 2; + smp->pipe_reqprio_fifo_wm2[pipe] = val * 3; }
/* @@ -222,7 +228,6 @@ void mdp5_smp_release(struct mdp5_smp *smp, struct mdp5_smp_state *state, static unsigned update_smp_state(struct mdp5_smp *smp, u32 cid, mdp5_smp_state_t *assigned) { - struct mdp5_kms *mdp5_kms = get_kms(smp); int cnt = smp->blk_cnt; unsigned nblks = 0; u32 blk, val; @@ -231,7 +236,7 @@ static unsigned update_smp_state(struct mdp5_smp *smp, int idx = blk / 3; int fld = blk % 3;
- val = mdp5_read(mdp5_kms, REG_MDP5_SMP_ALLOC_W_REG(idx)); + val = smp->alloc_w[idx];
switch (fld) { case 0: @@ -248,8 +253,8 @@ static unsigned update_smp_state(struct mdp5_smp *smp, break; }
- mdp5_write(mdp5_kms, REG_MDP5_SMP_ALLOC_W_REG(idx), val); - mdp5_write(mdp5_kms, REG_MDP5_SMP_ALLOC_R_REG(idx), val); + smp->alloc_w[idx] = val; + smp->alloc_r[idx] = val;
nblks++; } @@ -257,6 +262,39 @@ static unsigned update_smp_state(struct mdp5_smp *smp, return nblks; }
+static void write_smp_alloc_regs(struct mdp5_smp *smp) +{ + struct mdp5_kms *mdp5_kms = get_kms(smp); + int i, num_regs; + + num_regs = smp->blk_cnt / 3 + 1; + + for (i = 0; i < num_regs; i++) { + mdp5_write(mdp5_kms, REG_MDP5_SMP_ALLOC_W_REG(i), + smp->alloc_w[i]); + mdp5_write(mdp5_kms, REG_MDP5_SMP_ALLOC_R_REG(i), + smp->alloc_r[i]); + } +} + +static void write_smp_fifo_regs(struct mdp5_smp *smp) +{ + struct mdp5_kms *mdp5_kms = get_kms(smp); + int i; + + for (i = 0; i < mdp5_kms->num_hwpipes; i++) { + struct mdp5_hw_pipe *hwpipe = mdp5_kms->hwpipes[i]; + enum mdp5_pipe pipe = hwpipe->pipe; + + mdp5_write(mdp5_kms, REG_MDP5_PIPE_REQPRIO_FIFO_WM_0(pipe), + smp->pipe_reqprio_fifo_wm0[pipe]); + mdp5_write(mdp5_kms, REG_MDP5_PIPE_REQPRIO_FIFO_WM_1(pipe), + smp->pipe_reqprio_fifo_wm1[pipe]); + mdp5_write(mdp5_kms, REG_MDP5_PIPE_REQPRIO_FIFO_WM_2(pipe), + smp->pipe_reqprio_fifo_wm2[pipe]); + } +} + void mdp5_smp_prepare_commit(struct mdp5_smp *smp, struct mdp5_smp_state *state) { enum mdp5_pipe pipe; @@ -277,6 +315,9 @@ void mdp5_smp_prepare_commit(struct mdp5_smp *smp, struct mdp5_smp_state *state) set_fifo_thresholds(smp, pipe, nblks); }
+ write_smp_alloc_regs(smp); + write_smp_fifo_regs(smp); + state->assigned = 0; }
@@ -289,6 +330,8 @@ void mdp5_smp_complete_commit(struct mdp5_smp *smp, struct mdp5_smp_state *state set_fifo_thresholds(smp, pipe, 0); }
+ write_smp_fifo_regs(smp); + state->released = 0; }
MDSS is the top-level wrapper that contains MDP5, DSI, HDMI and other sub-blocks. W.r.t. the device hierarchy, it's the parent of all these devices. The power domain of this device is the one actually tied to the GDSC HW. When any sub-device enables its PD, MDSS's PD is also enabled.
The runtime suspend/resume ops toggle the top-level clocks that end at the MDSS boundary. For now, we treat them all as optional, since the child devices hold references to these clocks anyway.
Until now, we called a runtime_get() during probe, which ensured that the GDSC was always on. Now that we've set up runtime PM for the child devices, we can get rid of this hack.
Note that the MDSS device is the platform_device in msm_drv.c. The msm_runtime_suspend/resume ops call the funcs that enable/disable the top-level MDSS clocks. This is different from MDP4, where the platform device created in msm_drv.c represents MDP4 itself. It would have been nicer to hide these differences by adding new kms funcs, but runtime PM needs to be enabled before kms is set up (i.e., before msm_kms_init is called).
Signed-off-by: Archit Taneja <architt@codeaurora.org>
---
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_mdss.c | 63 ++++++++++++++++++++++++++++----
 drivers/gpu/drm/msm/msm_drv.c            | 29 +++++++++++++++
 drivers/gpu/drm/msm/msm_kms.h            |  2 +
 3 files changed, 86 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mdss.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mdss.c index 9c34d7824988..f2a0db7a8a03 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mdss.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mdss.c @@ -31,6 +31,10 @@ struct msm_mdss {
struct regulator *vdd;
+ struct clk *ahb_clk; + struct clk *axi_clk; + struct clk *vsync_clk; + struct { volatile unsigned long enabled_mask; struct irq_domain *domain; @@ -140,6 +144,51 @@ static int mdss_irq_domain_init(struct msm_mdss *mdss) return 0; }
+int msm_mdss_enable(struct msm_mdss *mdss) +{ + DBG(""); + + clk_prepare_enable(mdss->ahb_clk); + if (mdss->axi_clk) + clk_prepare_enable(mdss->axi_clk); + if (mdss->vsync_clk) + clk_prepare_enable(mdss->vsync_clk); + + return 0; +} + +int msm_mdss_disable(struct msm_mdss *mdss) +{ + DBG(""); + + if (mdss->vsync_clk) + clk_disable_unprepare(mdss->vsync_clk); + if (mdss->axi_clk) + clk_disable_unprepare(mdss->axi_clk); + clk_disable_unprepare(mdss->ahb_clk); + + return 0; +} + +static int msm_mdss_get_clocks(struct msm_mdss *mdss) +{ + struct platform_device *pdev = to_platform_device(mdss->dev->dev); + + mdss->ahb_clk = msm_clk_get(pdev, "iface"); + if (IS_ERR(mdss->ahb_clk)) + mdss->ahb_clk = NULL; + + mdss->axi_clk = msm_clk_get(pdev, "bus"); + if (IS_ERR(mdss->axi_clk)) + mdss->axi_clk = NULL; + + mdss->vsync_clk = msm_clk_get(pdev, "vsync"); + if (IS_ERR(mdss->vsync_clk)) + mdss->vsync_clk = NULL; + + return 0; +} + void msm_mdss_destroy(struct drm_device *dev) { struct msm_drm_private *priv = dev->dev_private; @@ -153,8 +202,6 @@ void msm_mdss_destroy(struct drm_device *dev)
regulator_disable(mdss->vdd);
- pm_runtime_put_sync(dev->dev); - pm_runtime_disable(dev->dev); }
@@ -190,6 +237,12 @@ int msm_mdss_init(struct drm_device *dev) goto fail; }
+ ret = msm_mdss_get_clocks(mdss); + if (ret) { + dev_err(dev->dev, "failed to get clocks: %d\n", ret); + goto fail; + } + /* Regulator to enable GDSCs in downstream kernels */ mdss->vdd = devm_regulator_get(dev->dev, "vdd"); if (IS_ERR(mdss->vdd)) { @@ -221,12 +274,6 @@ int msm_mdss_init(struct drm_device *dev)
pm_runtime_enable(dev->dev);
- /* - * TODO: This is needed as the MDSS GDSC is only tied to MDSS's power - * domain. Remove this once runtime PM is adapted for all the devices. - */ - pm_runtime_get_sync(dev->dev); - return 0; fail_irq: regulator_disable(mdss->vdd); diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index f49f6ac5585c..a19c393f7e45 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -879,8 +879,37 @@ static int msm_pm_resume(struct device *dev) } #endif
+#ifdef CONFIG_PM +static int msm_runtime_suspend(struct device *dev) +{ + struct drm_device *ddev = dev_get_drvdata(dev); + struct msm_drm_private *priv = ddev->dev_private; + + DBG(""); + + if (priv->mdss) + return msm_mdss_disable(priv->mdss); + + return 0; +} + +static int msm_runtime_resume(struct device *dev) +{ + struct drm_device *ddev = dev_get_drvdata(dev); + struct msm_drm_private *priv = ddev->dev_private; + + DBG(""); + + if (priv->mdss) + return msm_mdss_enable(priv->mdss); + + return 0; +} +#endif + static const struct dev_pm_ops msm_pm_ops = { SET_SYSTEM_SLEEP_PM_OPS(msm_pm_suspend, msm_pm_resume) + SET_RUNTIME_PM_OPS(msm_runtime_suspend, msm_runtime_resume, NULL) };
/* diff --git a/drivers/gpu/drm/msm/msm_kms.h b/drivers/gpu/drm/msm/msm_kms.h index a8f2ba5e5f07..17d5824417ad 100644 --- a/drivers/gpu/drm/msm/msm_kms.h +++ b/drivers/gpu/drm/msm/msm_kms.h @@ -99,5 +99,7 @@ struct msm_kms *mdp4_kms_init(struct drm_device *dev); struct msm_kms *mdp5_kms_init(struct drm_device *dev); int msm_mdss_init(struct drm_device *dev); void msm_mdss_destroy(struct drm_device *dev); +int msm_mdss_enable(struct msm_mdss *mdss); +int msm_mdss_disable(struct msm_mdss *mdss);
#endif /* __MSM_KMS_H__ */
msm_gpu's get_timestamp() op (called by the MSM_GET_PARAM ioctl) can result in register accesses. We need our power domain and clocks to be active for that. Make sure they are enabled here.
Signed-off-by: Archit Taneja <architt@codeaurora.org>
---
 drivers/gpu/drm/msm/adreno/adreno_gpu.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index f1ab2703674a..7414c6bbd582 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -48,8 +48,15 @@ int adreno_get_param(struct msm_gpu *gpu, uint32_t param, uint64_t *value)
 		*value = adreno_gpu->base.fast_rate;
 		return 0;
 	case MSM_PARAM_TIMESTAMP:
-		if (adreno_gpu->funcs->get_timestamp)
-			return adreno_gpu->funcs->get_timestamp(gpu, value);
+		if (adreno_gpu->funcs->get_timestamp) {
+			int ret;
+
+			pm_runtime_get_sync(&gpu->pdev->dev);
+			ret = adreno_gpu->funcs->get_timestamp(gpu, value);
+			pm_runtime_put_autosuspend(&gpu->pdev->dev);
+
+			return ret;
+		}
 		return -EINVAL;
 	default:
 		DBG("%s: invalid param: %u", gpu->name, param);
On Fri, Jul 28, 2017 at 04:17:08PM +0530, Archit Taneja wrote:
> msm_gpu's get_timestamp() op (called by the MSM_GET_PARAM ioctl) can
> result in register accesses. We need our power domain and clocks to
> be active for that. Make sure they are enabled here.
>
> Signed-off-by: Archit Taneja <architt@codeaurora.org>
> ---
>  drivers/gpu/drm/msm/adreno/adreno_gpu.c | 11 +++++++++--
>  1 file changed, 9 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> index f1ab2703674a..7414c6bbd582 100644
> --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> @@ -48,8 +48,15 @@ int adreno_get_param(struct msm_gpu *gpu, uint32_t param, uint64_t *value)
>  		*value = adreno_gpu->base.fast_rate;
>  		return 0;
>  	case MSM_PARAM_TIMESTAMP:
> -		if (adreno_gpu->funcs->get_timestamp)
> -			return adreno_gpu->funcs->get_timestamp(gpu, value);
> +		if (adreno_gpu->funcs->get_timestamp) {
> +			int ret;
> +
> +			pm_runtime_get_sync(&gpu->pdev->dev);
> +			ret = adreno_gpu->funcs->get_timestamp(gpu, value);
> +			pm_runtime_put_autosuspend(&gpu->pdev->dev);
> +
> +			return ret;
> +		}
>  		return -EINVAL;
>  	default:
>  		DBG("%s: invalid param: %u", gpu->name, param);
This is clearly correct from a stability standpoint but it does raise a few side questions.
When the system is idle, it is not interesting to go through all the work of bringing up the system just to read a timestamp that will be 0 (or very close to it) and then take the system down again. Furthermore, when the system is going up and down repeatedly between submits, the counter will keep getting reset to zero, which makes it less useful over the long term.
Downstream had these problems too and has an elaborate mechanism to save the value of the counters at power down and (on 4XX and newer) reload the counters with the previous value when power comes back. This has the advantage of consistent numbers, with the downside of a reasonable amount of work to maintain the database of allocated counters and to take the time to save them on power down and restore them on power up.
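For illustration, the idea boils down to snapshotting the counter in the runtime suspend path and folding the saved portion back in on reads, roughly like this (all names here are made up, this isn't the downstream code):

	static u64 saved_ticks;	/* accumulated across power collapse */

	static int gpu_runtime_suspend(struct device *dev)
	{
		saved_ticks += read_hw_counter();	/* hypothetical register read */
		return 0;
	}

	static u64 gpu_read_timestamp(void)
	{
		/* the HW counter restarts from 0 after resume; add the saved part */
		return saved_ticks + read_hw_counter();
	}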
This patch obviously should be merged because system hangs are bad, but I just wanted to toss out some more food for thought as we get more aggressive about power and more interested in performance monitoring.
Jordan