This patchset enables dynamic assignment of Layer Mixer blocks to CRTCs in order to support wide resolution modes and planes.
The approach is similar to what was done to decouple MDP5 planes from the hwpipes (SSPPs). We go further by letting a CRTC comprise 2 Layer Mixers, one scanning out the left half of the image and the other the right half. The same is then done for drm_planes, i.e., a plane can be tied to 2 hwpipes.
Doing this enables us to support wide planes and wider modes (like HDMI 4K modes).
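As an illustration of the source-split idea (this is only a sketch; the helper below is hypothetical and not part of the series), splitting a plane between a left and a right hwpipe essentially means halving the source and destination rectangles along the x axis:

#include <drm/drm_rect.h>

/*
 * Hypothetical helper, for illustration only: the left hwpipe gets the
 * left half of a rect, the right hwpipe the remaining half. Alignment
 * and odd-width handling are ignored here.
 */
static void split_rect_lr(const struct drm_rect *in,
			  struct drm_rect *left, struct drm_rect *right)
{
	*left = *in;
	*right = *in;
	left->x2 = in->x1 + drm_rect_width(in) / 2;
	right->x1 = left->x2;
}

The same split is applied to both the SRC and DST rects of a plane, and analogously a wide CRTC mode is split between the two Layer Mixers.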
Here's a quick description of what the patches do:
Patches #1 to #7 are mostly preparation/clean-ups to decouple CRTCs from Layer Mixers.
Patch #8 subclasses CRTC state so that we can add our custom state to our CRTCs.
Patch #9 adds the method to assign a mixer to a CRTC. We add a global atomic state object that tracks which mixers are assigned to which CRTCs (a rough sketch of such a state object follows this list).
Patches #10 to #13 move more things into CRTC state rather than keeping them in mdp5_crtc.
Patches #14 and #15 update the mdp5_crtc.c and mdp5_ctl.c code to work with 2 Layer Mixers, but without support for wide planes.
Patches #16 to #18 update mdp5_plane so that a drm_plane can comprise 2 hwpipes.
Patches #19 to #21 complete the CRTC code to let 2 LMs stage 2 hwpipes.
Patch #22 updates the mixer assignment code so that a CRTC can be assigned 2 LMs.
Patches #23 and #24 make some updates in mdp5_ctl to let a display pipeline comprise 2 LMs.
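For reference, the global state added in patch #9 boils down to a small object that records which CRTC currently owns each hardware mixer (the names below are an approximation for illustration, not necessarily the exact ones used in the patch):

#include <drm/drm_crtc.h>

/*
 * Sketch of the mixer assignment state: one slot per hw mixer, indexed
 * by mixer->idx. A NULL slot means the mixer is free; assigning a mixer
 * to a CRTC is a matter of finding a free slot with matching caps and
 * writing the CRTC pointer under the global state lock.
 */
struct mdp5_hw_mixer_state {
	struct drm_crtc *hwmixer_to_crtc[8];
};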
A 4.11 based branch with these changes, which can be tried out on a DB820c:
https://github.com/boddob/linux/tree/mixer_virt_v1_4.11
Archit Taneja (24):
  drm/msm/mdp5: Bring back pipe_lock to mdp5_plane struct
  drm/msm/mdp5: describe LM instances in mdp5_cfg
  drm/msm/mdp5: Add structs for hw Layer Mixers
  drm/msm/mdp5: Start using mdp5_hw_mixer
  drm/msm/mdp5: Simplify LM <-> PP mapping
  drm/msm/mdp5: Clean up interface assignment
  drm/msm/mdp5: Remove the pipeline stuff in mdp5_ctl
  drm/msm/mdp5: subclass CRTC state
  drm/msm/mdp5: Prepare for dynamic assignment of mixers
  drm/msm/mdp5: Assign INTF and CTL in encoder's atomic_check()
  drm/msm/mdp5: Add more stuff to CRTC state
  drm/msm/mdp5: Start using parameters from CRTC state
  drm/msm/mdp5: Remove mixer/intf pointers from mdp5_ctl
  drm/msm/mdp5: Add a CAP for Source Split
  drm/msm/mdp5: Add optional 'right' Layer Mixer in CRTC state
  drm/msm/mdp5: Create mdp5_hwpipe_mode_set
  drm/msm/mdp5: Assign a 'right hwpipe' to plane state
  drm/msm/mdp5: Configure 'right' hwpipe
  drm/msm/mdp5: Prepare Layer Mixers for source split
  drm/msm/mdp5: Stage right side hwpipes on Right-side Layer Mixer
  drm/msm/mdp5: Stage border out on base stage if CRTC has 2 LMs
  drm/msm/mdp5: Assign 'right' mixer to CRTC state
  drm/msm/mdp5: Reset CTL blend registers before configuring them
  drm/msm/mdp5: Enable 3D mux in mdp5_ctl
 drivers/gpu/drm/msm/Makefile                    |   1 +
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.c         |  81 +++++
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.h         |   8 +
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c |  30 +-
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c        | 454 +++++++++++++++++++-----
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c         | 192 ++++++----
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.h         |  21 +-
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c     |  66 ++--
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c         | 121 +++++--
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h         |  53 ++-
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.c       | 172 +++++++++
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.h       |  47 +++
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_pipe.c        |   2 -
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_pipe.h        |   1 -
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c       | 340 ++++++++++++------
 drivers/gpu/drm/msm/mdp/mdp_kms.h               |   6 +
 16 files changed, 1256 insertions(+), 339 deletions(-)
 create mode 100644 drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.c
 create mode 100644 drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.h
We'd previously moved the pipe_lock spinlock to the hwpipe struct. Bring it back to mdp5_plane. We will need this because an mdp5_plane could comprise 2 hw pipes in the future. It makes more sense to have a single lock protecting the registers of all the hw pipes used by a plane, rather than taking individual locks per hwpipe when committing a configuration.
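To illustrate the pattern this prepares for (the 'right' hwpipe and the helper below are hypothetical at this point in the series; the second pipe is only wired up in later patches), the plane takes its single lock once and programs both pipes under it:

/*
 * Sketch only: with two hwpipes per plane, one plane-level lock covers
 * the register writes for both, instead of nesting two per-hwpipe locks.
 * program_one_pipe() stands in for the actual register programming.
 */
static void plane_program_pipes(struct mdp5_plane *mdp5_plane,
				struct mdp5_hw_pipe *left,
				struct mdp5_hw_pipe *right)
{
	unsigned long flags;

	spin_lock_irqsave(&mdp5_plane->pipe_lock, flags);
	program_one_pipe(left);			/* left half of the plane */
	if (right)
		program_one_pipe(right);	/* right half, wide planes only */
	spin_unlock_irqrestore(&mdp5_plane->pipe_lock, flags);
}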
Signed-off-by: Archit Taneja <architt@codeaurora.org>
---
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_pipe.c  | 2 --
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_pipe.h  | 1 -
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c | 9 +++++++--
 3 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_pipe.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_pipe.c index 35c4dabb0c0c..2bfac3712685 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_pipe.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_pipe.c @@ -135,7 +135,5 @@ struct mdp5_hw_pipe *mdp5_pipe_init(enum mdp5_pipe pipe, hwpipe->caps = caps; hwpipe->flush_mask = mdp_ctl_flush_mask_pipe(pipe);
- spin_lock_init(&hwpipe->pipe_lock); - return hwpipe; } diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_pipe.h b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_pipe.h index 238901987e00..924c3e6f9517 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_pipe.h +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_pipe.h @@ -28,7 +28,6 @@ struct mdp5_hw_pipe { const char *name; enum mdp5_pipe pipe;
- spinlock_t pipe_lock; /* protect REG_MDP5_PIPE_* registers */ uint32_t reg_offset; uint32_t caps;
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c index 0ffb8affef35..964b2e3089c3 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c @@ -22,6 +22,8 @@ struct mdp5_plane { struct drm_plane base;
+ spinlock_t pipe_lock; /* protect REG_MDP5_PIPE_* registers */ + uint32_t nformats; uint32_t formats[32]; }; @@ -718,6 +720,7 @@ static int mdp5_plane_mode_set(struct drm_plane *plane, struct drm_crtc *crtc, struct drm_framebuffer *fb, struct drm_rect *src, struct drm_rect *dest) { + struct mdp5_plane *mdp5_plane = to_mdp5_plane(plane); struct drm_plane_state *pstate = plane->state; struct mdp5_hw_pipe *hwpipe = to_mdp5_plane_state(pstate)->hwpipe; struct mdp5_kms *mdp5_kms = get_kms(plane); @@ -797,7 +800,7 @@ static int mdp5_plane_mode_set(struct drm_plane *plane, hflip = !!(rotation & DRM_REFLECT_X); vflip = !!(rotation & DRM_REFLECT_Y);
- spin_lock_irqsave(&hwpipe->pipe_lock, flags); + spin_lock_irqsave(&mdp5_plane->pipe_lock, flags);
mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC_IMG_SIZE(pipe), MDP5_PIPE_SRC_IMG_SIZE_WIDTH(min(fb->width, src_w)) | @@ -876,7 +879,7 @@ static int mdp5_plane_mode_set(struct drm_plane *plane,
set_scanout_locked(plane, fb);
- spin_unlock_irqrestore(&hwpipe->pipe_lock, flags); + spin_unlock_irqrestore(&mdp5_plane->pipe_lock, flags);
return ret; } @@ -996,6 +999,8 @@ struct drm_plane *mdp5_plane_init(struct drm_device *dev, mdp5_plane->nformats = mdp_get_formats(mdp5_plane->formats, ARRAY_SIZE(mdp5_plane->formats), false);
+ spin_lock_init(&mdp5_plane->pipe_lock); + if (type == DRM_PLANE_TYPE_CURSOR) ret = drm_universal_plane_init(dev, plane, 0xff, &mdp5_cursor_plane_funcs,
The number of Layer Mixers, and the downstream blocks (DSPPs and PPs) connected to each LM, can vary across MDP5 revisions. These parameters are also static for a given revision.

Keep the per-instance LM data in mdp5_cfg. This avoids the need for macros that identify which PP or DSPP id an LM is connected to. We don't configure DSPPs at the moment, but keeping the DSPP instance # here might come in handy later.

Also add a 'caps' field that identifies the features supported by an LM instance. Introduce the caps MDP_LM_CAP_DISPLAY and MDP_LM_CAP_WB, which identify whether an LM instance can be used for display or writeback.
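As a usage sketch (not code from a later patch; the helper name is made up), the per-instance table lets callers pick a mixer by capability and read its PP/DSPP ids directly, instead of deriving them from the LM index:

/*
 * Sketch: walk the LM instances described in mdp5_cfg and return the
 * first one that can drive a display. Its ->pp and ->dspp come straight
 * from the table, so no GET_PING_PONG_ID()-style macros are needed.
 * 'count' would typically be hw_cfg->lm.count.
 */
static const struct mdp5_lm_instance *
find_display_lm(const struct mdp5_lm_block *lm_block, unsigned int count)
{
	unsigned int i;

	for (i = 0; i < count; i++) {
		const struct mdp5_lm_instance *lm = &lm_block->instances[i];

		if (lm->caps & MDP_LM_CAP_DISPLAY)
			return lm;
	}

	return NULL;
}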
Signed-off-by: Archit Taneja <architt@codeaurora.org>
---
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.c | 72 +++++++++++++++++++++++++++++++++
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.h |  8 ++++
 drivers/gpu/drm/msm/mdp/mdp_kms.h       |  4 ++
 3 files changed, 84 insertions(+)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.c index 34ab553f6897..e7b15846457c 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.c @@ -70,6 +70,18 @@ const struct mdp5_cfg_hw msm8x74v1_config = { .lm = { .count = 5, .base = { 0x03100, 0x03500, 0x03900, 0x03d00, 0x04100 }, + .instances = { + { .id = 0, .pp = 0, .dspp = 0, + .caps = MDP_LM_CAP_DISPLAY, }, + { .id = 1, .pp = 1, .dspp = 1, + .caps = MDP_LM_CAP_DISPLAY, }, + { .id = 2, .pp = 2, .dspp = 2, + .caps = MDP_LM_CAP_DISPLAY, }, + { .id = 3, .pp = -1, .dspp = -1, + .caps = MDP_LM_CAP_WB }, + { .id = 4, .pp = -1, .dspp = -1, + .caps = MDP_LM_CAP_WB }, + }, .nb_stages = 5, }, .dspp = { @@ -134,6 +146,18 @@ const struct mdp5_cfg_hw msm8x74v2_config = { .lm = { .count = 5, .base = { 0x03100, 0x03500, 0x03900, 0x03d00, 0x04100 }, + .instances = { + { .id = 0, .pp = 0, .dspp = 0, + .caps = MDP_LM_CAP_DISPLAY, }, + { .id = 1, .pp = 1, .dspp = 1, + .caps = MDP_LM_CAP_DISPLAY, }, + { .id = 2, .pp = 2, .dspp = 2, + .caps = MDP_LM_CAP_DISPLAY, }, + { .id = 3, .pp = -1, .dspp = -1, + .caps = MDP_LM_CAP_WB, }, + { .id = 4, .pp = -1, .dspp = -1, + .caps = MDP_LM_CAP_WB, }, + }, .nb_stages = 5, .max_width = 2048, .max_height = 0xFFFF, @@ -211,6 +235,20 @@ const struct mdp5_cfg_hw apq8084_config = { .lm = { .count = 6, .base = { 0x03900, 0x03d00, 0x04100, 0x04500, 0x04900, 0x04d00 }, + .instances = { + { .id = 0, .pp = 0, .dspp = 0, + .caps = MDP_LM_CAP_DISPLAY, }, + { .id = 1, .pp = 1, .dspp = 1, + .caps = MDP_LM_CAP_DISPLAY, }, + { .id = 2, .pp = 2, .dspp = 2, + .caps = MDP_LM_CAP_DISPLAY, }, + { .id = 3, .pp = -1, .dspp = -1, + .caps = MDP_LM_CAP_WB, }, + { .id = 4, .pp = -1, .dspp = -1, + .caps = MDP_LM_CAP_WB, }, + { .id = 5, .pp = 3, .dspp = 3, + .caps = MDP_LM_CAP_DISPLAY, }, + }, .nb_stages = 5, .max_width = 2048, .max_height = 0xFFFF, @@ -282,6 +320,12 @@ const struct mdp5_cfg_hw msm8x16_config = { .lm = { .count = 2, /* LM0 and LM3 */ .base = { 0x44000, 0x47000 }, + .instances = { + { .id = 0, .pp = 0, .dspp = 0, + .caps = MDP_LM_CAP_DISPLAY, }, + { .id = 3, .pp = -1, .dspp = -1, + .caps = MDP_LM_CAP_WB }, + }, .nb_stages = 8, .max_width = 2048, .max_height = 0xFFFF, @@ -350,6 +394,20 @@ const struct mdp5_cfg_hw msm8x94_config = { .lm = { .count = 6, .base = { 0x44000, 0x45000, 0x46000, 0x47000, 0x48000, 0x49000 }, + .instances = { + { .id = 0, .pp = 0, .dspp = 0, + .caps = MDP_LM_CAP_DISPLAY, }, + { .id = 1, .pp = 1, .dspp = 1, + .caps = MDP_LM_CAP_DISPLAY, }, + { .id = 2, .pp = 2, .dspp = 2, + .caps = MDP_LM_CAP_DISPLAY, }, + { .id = 3, .pp = -1, .dspp = -1, + .caps = MDP_LM_CAP_WB, }, + { .id = 4, .pp = -1, .dspp = -1, + .caps = MDP_LM_CAP_WB, }, + { .id = 5, .pp = 3, .dspp = 3, + .caps = MDP_LM_CAP_DISPLAY, }, + }, .nb_stages = 8, .max_width = 2048, .max_height = 0xFFFF, @@ -434,6 +492,20 @@ const struct mdp5_cfg_hw msm8x96_config = { .lm = { .count = 6, .base = { 0x44000, 0x45000, 0x46000, 0x47000, 0x48000, 0x49000 }, + .instances = { + { .id = 0, .pp = 0, .dspp = 0, + .caps = MDP_LM_CAP_DISPLAY }, + { .id = 1, .pp = 1, .dspp = 1, + .caps = MDP_LM_CAP_DISPLAY, }, + { .id = 2, .pp = 2, .dspp = -1, + .caps = MDP_LM_CAP_DISPLAY }, + { .id = 3, .pp = -1, .dspp = -1, + .caps = MDP_LM_CAP_WB, }, + { .id = 4, .pp = -1, .dspp = -1, + .caps = MDP_LM_CAP_WB, }, + { .id = 5, .pp = 3, .dspp = -1, + .caps = MDP_LM_CAP_DISPLAY, }, + }, .nb_stages = 8, .max_width = 2560, .max_height = 0xFFFF, diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.h 
b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.h index b1c7daaede86..75910d0f2f4c 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.h +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.h @@ -39,8 +39,16 @@ struct mdp5_sub_block { MDP5_SUB_BLOCK_DEFINITION; };
+struct mdp5_lm_instance { + int id; + int pp; + int dspp; + uint32_t caps; +}; + struct mdp5_lm_block { MDP5_SUB_BLOCK_DEFINITION; + struct mdp5_lm_instance instances[MAX_BASES]; uint32_t nb_stages; /* number of stages per blender */ uint32_t max_width; /* Maximum output resolution */ uint32_t max_height; diff --git a/drivers/gpu/drm/msm/mdp/mdp_kms.h b/drivers/gpu/drm/msm/mdp/mdp_kms.h index 7574cdfef418..bf4db664ee86 100644 --- a/drivers/gpu/drm/msm/mdp/mdp_kms.h +++ b/drivers/gpu/drm/msm/mdp/mdp_kms.h @@ -114,6 +114,10 @@ const struct msm_format *mdp_get_format(struct msm_kms *kms, uint32_t format); #define MDP_PIPE_CAP_SW_PIX_EXT BIT(5) #define MDP_PIPE_CAP_CURSOR BIT(6)
+/* MDP layer mixer caps */ +#define MDP_LM_CAP_DISPLAY BIT(0) +#define MDP_LM_CAP_WB BIT(1) + static inline bool pipe_supports_yuv(uint32_t pipe_caps) { return (pipe_caps & MDP_PIPE_CAP_SCALE) &&
Create a struct to represent MDP5 Layer Mixer instances. This will eventually allow us to detach CRTCs from the Layer Mixers, and generally clean things up a bit.
This is very similar to how hwpipes were previously abstracted away from drm planes.
Signed-off-by: Archit Taneja <architt@codeaurora.org>
---
 drivers/gpu/drm/msm/Makefile              |  1 +
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c   | 33 +++++++++++++++++++++++
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h   |  4 +++
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.c | 44 +++++++++++++++++++++++++++++++
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.h | 37 ++++++++++++++++++++++++++
 5 files changed, 119 insertions(+)
 create mode 100644 drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.c
 create mode 100644 drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.h
diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile index 39055362da95..5241ac8803ba 100644 --- a/drivers/gpu/drm/msm/Makefile +++ b/drivers/gpu/drm/msm/Makefile @@ -40,6 +40,7 @@ msm-y := \ mdp/mdp5/mdp5_mdss.o \ mdp/mdp5/mdp5_kms.o \ mdp/mdp5/mdp5_pipe.o \ + mdp/mdp5/mdp5_mixer.o \ mdp/mdp5/mdp5_plane.o \ mdp/mdp5/mdp5_smp.o \ msm_atomic.o \ diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c index 9d21317b2cc3..c55ba6e8387c 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c @@ -162,6 +162,9 @@ static void mdp5_kms_destroy(struct msm_kms *kms) struct msm_gem_address_space *aspace = mdp5_kms->aspace; int i;
+ for (i = 0; i < mdp5_kms->num_hwmixers; i++) + mdp5_mixer_destroy(mdp5_kms->hwmixers[i]); + for (i = 0; i < mdp5_kms->num_hwpipes; i++) mdp5_pipe_destroy(mdp5_kms->hwpipes[i]);
@@ -868,6 +871,32 @@ static int hwpipe_init(struct mdp5_kms *mdp5_kms) return 0; }
+static int hwmixer_init(struct mdp5_kms *mdp5_kms) +{ + struct drm_device *dev = mdp5_kms->dev; + const struct mdp5_cfg_hw *hw_cfg; + int i, ret; + + hw_cfg = mdp5_cfg_get_hw_config(mdp5_kms->cfg); + + for (i = 0; i < hw_cfg->lm.count; i++) { + struct mdp5_hw_mixer *mixer; + + mixer = mdp5_mixer_init(&hw_cfg->lm.instances[i]); + if (IS_ERR(mixer)) { + ret = PTR_ERR(mixer); + dev_err(dev->dev, "failed to construct LM%d (%d)\n", + i, ret); + return ret; + } + + mixer->idx = mdp5_kms->num_hwmixers; + mdp5_kms->hwmixers[mdp5_kms->num_hwmixers++] = mixer; + } + + return 0; +} + static int mdp5_init(struct platform_device *pdev, struct drm_device *dev) { struct msm_drm_private *priv = dev->dev_private; @@ -980,6 +1009,10 @@ static int mdp5_init(struct platform_device *pdev, struct drm_device *dev) if (ret) goto fail;
+ ret = hwmixer_init(mdp5_kms); + if (ret) + goto fail; + /* set uninit-ed kms */ priv->kms = &mdp5_kms->base.base;
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h index 9258871d879c..f3a3b767e35c 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h @@ -25,6 +25,7 @@ #include "mdp5.xml.h" #include "mdp5_ctl.h" #include "mdp5_pipe.h" +#include "mdp5_mixer.h" #include "mdp5_smp.h"
struct mdp5_state; @@ -39,6 +40,9 @@ struct mdp5_kms { unsigned num_hwpipes; struct mdp5_hw_pipe *hwpipes[SSPP_MAX];
+ unsigned num_hwmixers; + struct mdp5_hw_mixer *hwmixers[8]; + struct mdp5_cfg_handler *cfg; uint32_t caps; /* MDP capabilities (MDP_CAP_XXX bits) */
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.c new file mode 100644 index 000000000000..dd38f0bc2494 --- /dev/null +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.c @@ -0,0 +1,44 @@ +/* + * Copyright (C) 2017 The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published by + * the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see http://www.gnu.org/licenses/. + */ + +#include "mdp5_kms.h" + +void mdp5_mixer_destroy(struct mdp5_hw_mixer *mixer) +{ + kfree(mixer); +} + +static const char * const mixer_names[] = { + "LM0", "LM1", "LM2", "LM3", "LM4", "LM5", +}; + +struct mdp5_hw_mixer *mdp5_mixer_init(const struct mdp5_lm_instance *lm) +{ + struct mdp5_hw_mixer *mixer; + + mixer = kzalloc(sizeof(*mixer), GFP_KERNEL); + if (!mixer) + return ERR_PTR(-ENOMEM); + + mixer->name = mixer_names[lm->id]; + mixer->lm = lm->id; + mixer->caps = lm->caps; + mixer->pp = lm->pp; + mixer->dspp = lm->dspp; + mixer->flush_mask = mdp_ctl_flush_mask_lm(lm->id); + + return mixer; +} diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.h b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.h new file mode 100644 index 000000000000..08d2173703bf --- /dev/null +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.h @@ -0,0 +1,37 @@ +/* + * Copyright (C) 2017 The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published by + * the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see http://www.gnu.org/licenses/. + */ + +#ifndef __MDP5_LM_H__ +#define __MDP5_LM_H__ + +/* represents a hw Layer Mixer, one (or more) is dynamically assigned to a crtc */ +struct mdp5_hw_mixer { + int idx; + + const char *name; + + int lm; /* the LM instance # */ + uint32_t caps; + int pp; + int dspp; + + uint32_t flush_mask; /* used to commit LM registers */ +}; + +struct mdp5_hw_mixer *mdp5_mixer_init(const struct mdp5_lm_instance *lm); +void mdp5_mixer_destroy(struct mdp5_hw_mixer *lm); + +#endif /* __MDP5_LM_H__ */
Use the mdp5_hw_mixer struct in mdp5_crtc and mdp5_ctl instead of the raw LM index.

As before, the Layer Mixers are assigned statically to the CRTCs. The hwmixer(s) will be assigned to CRTCs dynamically later in the series.
For now, ignore the hwmixers that can only do WB.
Signed-off-by: Archit Taneja <architt@codeaurora.org>
---
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c |  9 ++++--
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c        | 42 +++++++++++++++----------
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c         | 38 +++++++++++-----------
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.h         |  2 +-
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c     |  4 +--
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c         |  6 +++-
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h         |  4 +--
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.c       |  4 +++
 8 files changed, 66 insertions(+), 43 deletions(-)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c index df1c8adec3f3..2e6ceafbd3e2 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c @@ -51,7 +51,8 @@ static int pingpong_tearcheck_setup(struct drm_encoder *encoder, struct device *dev = encoder->dev->dev; u32 total_lines_x100, vclks_line, cfg; long vsync_clk_speed; - int pp_id = GET_PING_PONG_ID(mdp5_crtc_get_lm(encoder->crtc)); + struct mdp5_hw_mixer *mixer = mdp5_crtc_get_mixer(encoder->crtc); + int pp_id = GET_PING_PONG_ID(mixer->lm);
if (IS_ERR_OR_NULL(mdp5_kms->vsync_clk)) { dev_err(dev, "vsync_clk is not initialized\n"); @@ -94,7 +95,8 @@ static int pingpong_tearcheck_setup(struct drm_encoder *encoder, static int pingpong_tearcheck_enable(struct drm_encoder *encoder) { struct mdp5_kms *mdp5_kms = get_kms(encoder); - int pp_id = GET_PING_PONG_ID(mdp5_crtc_get_lm(encoder->crtc)); + struct mdp5_hw_mixer *mixer = mdp5_crtc_get_mixer(encoder->crtc); + int pp_id = GET_PING_PONG_ID(mixer->lm); int ret;
ret = clk_set_rate(mdp5_kms->vsync_clk, @@ -119,7 +121,8 @@ static int pingpong_tearcheck_enable(struct drm_encoder *encoder) static void pingpong_tearcheck_disable(struct drm_encoder *encoder) { struct mdp5_kms *mdp5_kms = get_kms(encoder); - int pp_id = GET_PING_PONG_ID(mdp5_crtc_get_lm(encoder->crtc)); + struct mdp5_hw_mixer *mixer = mdp5_crtc_get_mixer(encoder->crtc); + int pp_id = GET_PING_PONG_ID(mixer->lm);
mdp5_write(mdp5_kms, REG_MDP5_PP_TEAR_CHECK_EN(pp_id), 0); clk_disable_unprepare(mdp5_kms->vsync_clk); diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c index d0c8b38b96ce..09f947fc81b9 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c @@ -32,10 +32,8 @@ struct mdp5_crtc { int id; bool enabled;
- /* layer mixer used for this CRTC (+ its lock): */ -#define GET_LM_ID(crtc_id) ((crtc_id == 3) ? 5 : crtc_id) - int lm; - spinlock_t lm_lock; /* protect REG_MDP5_LM_* registers */ + struct mdp5_hw_mixer *mixer; + spinlock_t lm_lock; /* protect REG_MDP5_LM_* registers */
/* CTL used for this CRTC: */ struct mdp5_ctl *ctl; @@ -111,6 +109,7 @@ static u32 crtc_flush(struct drm_crtc *crtc, u32 flush_mask) static u32 crtc_flush_all(struct drm_crtc *crtc) { struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); + struct mdp5_hw_mixer *mixer; struct drm_plane *plane; uint32_t flush_mask = 0;
@@ -122,7 +121,8 @@ static u32 crtc_flush_all(struct drm_crtc *crtc) flush_mask |= mdp5_plane_get_flush(plane); }
- flush_mask |= mdp_ctl_flush_mask_lm(mdp5_crtc->lm); + mixer = mdp5_crtc->mixer; + flush_mask |= mdp_ctl_flush_mask_lm(mixer->lm);
return crtc_flush(crtc, flush_mask); } @@ -207,7 +207,8 @@ static void blend_setup(struct drm_crtc *crtc) const struct mdp5_cfg_hw *hw_cfg; struct mdp5_plane_state *pstate, *pstates[STAGE_MAX + 1] = {NULL}; const struct mdp_format *format; - uint32_t lm = mdp5_crtc->lm; + struct mdp5_hw_mixer *mixer = mdp5_crtc->mixer; + uint32_t lm = mixer->lm; uint32_t blend_op, fg_alpha, bg_alpha, ctl_blend_flags = 0; unsigned long flags; enum mdp5_pipe stage[STAGE_MAX + 1] = { SSPP_NONE }; @@ -308,6 +309,8 @@ static void mdp5_crtc_mode_set_nofb(struct drm_crtc *crtc) { struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); struct mdp5_kms *mdp5_kms = get_kms(crtc); + struct mdp5_hw_mixer *mixer = mdp5_crtc->mixer; + uint32_t lm = mixer->lm; unsigned long flags; struct drm_display_mode *mode;
@@ -326,7 +329,7 @@ static void mdp5_crtc_mode_set_nofb(struct drm_crtc *crtc) mode->type, mode->flags);
spin_lock_irqsave(&mdp5_crtc->lm_lock, flags); - mdp5_write(mdp5_kms, REG_MDP5_LM_OUT_SIZE(mdp5_crtc->lm), + mdp5_write(mdp5_kms, REG_MDP5_LM_OUT_SIZE(lm), MDP5_LM_OUT_SIZE_WIDTH(mode->hdisplay) | MDP5_LM_OUT_SIZE_HEIGHT(mode->vdisplay)); spin_unlock_irqrestore(&mdp5_crtc->lm_lock, flags); @@ -561,7 +564,7 @@ static int mdp5_crtc_cursor_set(struct drm_crtc *crtc, if (ret) return -EINVAL;
- lm = mdp5_crtc->lm; + lm = mdp5_crtc->mixer->lm; stride = width * drm_format_plane_cpp(DRM_FORMAT_ARGB8888, 0);
spin_lock_irqsave(&mdp5_crtc->cursor.lock, flags); @@ -613,6 +616,7 @@ static int mdp5_crtc_cursor_move(struct drm_crtc *crtc, int x, int y) { struct mdp5_kms *mdp5_kms = get_kms(crtc); struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); + uint32_t lm = mdp5_crtc->mixer->lm; uint32_t flush_mask = mdp_ctl_flush_mask_cursor(0); uint32_t roi_w; uint32_t roi_h; @@ -628,10 +632,10 @@ static int mdp5_crtc_cursor_move(struct drm_crtc *crtc, int x, int y) get_roi(crtc, &roi_w, &roi_h);
spin_lock_irqsave(&mdp5_crtc->cursor.lock, flags); - mdp5_write(mdp5_kms, REG_MDP5_LM_CURSOR_SIZE(mdp5_crtc->lm), + mdp5_write(mdp5_kms, REG_MDP5_LM_CURSOR_SIZE(lm), MDP5_LM_CURSOR_SIZE_ROI_H(roi_h) | MDP5_LM_CURSOR_SIZE_ROI_W(roi_w)); - mdp5_write(mdp5_kms, REG_MDP5_LM_CURSOR_START_XY(mdp5_crtc->lm), + mdp5_write(mdp5_kms, REG_MDP5_LM_CURSOR_START_XY(lm), MDP5_LM_CURSOR_START_XY_Y_START(y) | MDP5_LM_CURSOR_START_XY_X_START(x)); spin_unlock_irqrestore(&mdp5_crtc->cursor.lock, flags); @@ -715,7 +719,8 @@ static void mdp5_crtc_wait_for_pp_done(struct drm_crtc *crtc) ret = wait_for_completion_timeout(&mdp5_crtc->pp_completion, msecs_to_jiffies(50)); if (ret == 0) - dev_warn(dev->dev, "pp done time out, lm=%d\n", mdp5_crtc->lm); + dev_warn(dev->dev, "pp done time out, lm=%d\n", + mdp5_crtc->mixer->lm); }
static void mdp5_crtc_wait_for_flush_done(struct drm_crtc *crtc) @@ -755,7 +760,8 @@ void mdp5_crtc_set_pipeline(struct drm_crtc *crtc, { struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); struct mdp5_kms *mdp5_kms = get_kms(crtc); - int lm = mdp5_crtc_get_lm(crtc); + struct mdp5_hw_mixer *mixer = mdp5_crtc->mixer; + uint32_t lm = mixer->lm;
/* now that we know what irq's we want: */ mdp5_crtc->err.irqmask = intf2err(intf->num); @@ -775,7 +781,7 @@ void mdp5_crtc_set_pipeline(struct drm_crtc *crtc, mdp_irq_update(&mdp5_kms->base);
mdp5_crtc->ctl = ctl; - mdp5_ctl_set_pipeline(ctl, intf, lm); + mdp5_ctl_set_pipeline(ctl, intf, mixer); }
struct mdp5_ctl *mdp5_crtc_get_ctl(struct drm_crtc *crtc) @@ -785,10 +791,11 @@ struct mdp5_ctl *mdp5_crtc_get_ctl(struct drm_crtc *crtc) return mdp5_crtc->ctl; }
-int mdp5_crtc_get_lm(struct drm_crtc *crtc) +struct mdp5_hw_mixer *mdp5_crtc_get_mixer(struct drm_crtc *crtc) { struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); - return WARN_ON(!crtc) ? -EINVAL : mdp5_crtc->lm; + return WARN_ON(!crtc) || WARN_ON(!mdp5_crtc->mixer) ? + ERR_PTR(-EINVAL) : mdp5_crtc->mixer; }
void mdp5_crtc_wait_for_commit_done(struct drm_crtc *crtc) @@ -808,6 +815,7 @@ struct drm_crtc *mdp5_crtc_init(struct drm_device *dev, { struct drm_crtc *crtc = NULL; struct mdp5_crtc *mdp5_crtc; + struct mdp5_kms *mdp5_kms;
mdp5_crtc = kzalloc(sizeof(*mdp5_crtc), GFP_KERNEL); if (!mdp5_crtc) @@ -816,7 +824,6 @@ struct drm_crtc *mdp5_crtc_init(struct drm_device *dev, crtc = &mdp5_crtc->base;
mdp5_crtc->id = id; - mdp5_crtc->lm = GET_LM_ID(id);
spin_lock_init(&mdp5_crtc->lm_lock); spin_lock_init(&mdp5_crtc->cursor.lock); @@ -838,5 +845,8 @@ struct drm_crtc *mdp5_crtc_init(struct drm_device *dev, drm_crtc_helper_add(crtc, &mdp5_crtc_helper_funcs); plane->crtc = crtc;
+ mdp5_kms = get_kms(crtc); + mdp5_crtc->mixer = mdp5_kms->hwmixers[id]; + return crtc; } diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c index 8b93f7e13200..fa4f27a1a551 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c @@ -43,7 +43,7 @@ struct mdp5_ctl { struct mdp5_ctl_manager *ctlm;
u32 id; - int lm; + struct mdp5_hw_mixer *mixer;
/* CTL status bitmask */ u32 status; @@ -174,8 +174,8 @@ static void set_ctl_op(struct mdp5_ctl *ctl, struct mdp5_interface *intf) spin_unlock_irqrestore(&ctl->hw_lock, flags); }
-int mdp5_ctl_set_pipeline(struct mdp5_ctl *ctl, - struct mdp5_interface *intf, int lm) +int mdp5_ctl_set_pipeline(struct mdp5_ctl *ctl, struct mdp5_interface *intf, + struct mdp5_hw_mixer *mixer) { struct mdp5_ctl_manager *ctl_mgr = ctl->ctlm; struct mdp5_kms *mdp5_kms = get_kms(ctl_mgr); @@ -187,11 +187,11 @@ int mdp5_ctl_set_pipeline(struct mdp5_ctl *ctl, return -EINVAL; }
- ctl->lm = lm; + ctl->mixer = mixer;
memcpy(&ctl->pipeline.intf, intf, sizeof(*intf));
- ctl->pipeline.start_mask = mdp_ctl_flush_mask_lm(ctl->lm) | + ctl->pipeline.start_mask = mdp_ctl_flush_mask_lm(mixer->lm) | mdp_ctl_flush_mask_encoder(intf);
/* Virtual interfaces need not set a display intf (e.g.: Writeback) */ @@ -241,7 +241,7 @@ static void refill_start_mask(struct mdp5_ctl *ctl) struct op_mode *pipeline = &ctl->pipeline; struct mdp5_interface *intf = &ctl->pipeline.intf;
- pipeline->start_mask = mdp_ctl_flush_mask_lm(ctl->lm); + pipeline->start_mask = mdp_ctl_flush_mask_lm(ctl->mixer->lm);
/* * Writeback encoder needs to program & flush @@ -285,24 +285,24 @@ int mdp5_ctl_set_cursor(struct mdp5_ctl *ctl, int cursor_id, bool enable) struct mdp5_ctl_manager *ctl_mgr = ctl->ctlm; unsigned long flags; u32 blend_cfg; - int lm = ctl->lm; + struct mdp5_hw_mixer *mixer = ctl->mixer;
- if (unlikely(WARN_ON(lm < 0))) { - dev_err(ctl_mgr->dev->dev, "CTL %d cannot find LM: %d", - ctl->id, lm); + if (unlikely(WARN_ON(!mixer))) { + dev_err(ctl_mgr->dev->dev, "CTL %d cannot find LM", + ctl->id); return -EINVAL; }
spin_lock_irqsave(&ctl->hw_lock, flags);
- blend_cfg = ctl_read(ctl, REG_MDP5_CTL_LAYER_REG(ctl->id, lm)); + blend_cfg = ctl_read(ctl, REG_MDP5_CTL_LAYER_REG(ctl->id, mixer->lm));
if (enable) blend_cfg |= MDP5_CTL_LAYER_REG_CURSOR_OUT; else blend_cfg &= ~MDP5_CTL_LAYER_REG_CURSOR_OUT;
- ctl_write(ctl, REG_MDP5_CTL_LAYER_REG(ctl->id, lm), blend_cfg); + ctl_write(ctl, REG_MDP5_CTL_LAYER_REG(ctl->id, mixer->lm), blend_cfg); ctl->cursor_on = enable;
spin_unlock_irqrestore(&ctl->hw_lock, flags); @@ -358,6 +358,7 @@ static u32 mdp_ctl_blend_ext_mask(enum mdp5_pipe pipe, int mdp5_ctl_blend(struct mdp5_ctl *ctl, enum mdp5_pipe *stage, u32 stage_cnt, u32 ctl_blend_op_flags) { + struct mdp5_hw_mixer *mixer = ctl->mixer; unsigned long flags; u32 blend_cfg = 0, blend_ext_cfg = 0; int i, start_stage; @@ -378,13 +379,14 @@ int mdp5_ctl_blend(struct mdp5_ctl *ctl, enum mdp5_pipe *stage, u32 stage_cnt, if (ctl->cursor_on) blend_cfg |= MDP5_CTL_LAYER_REG_CURSOR_OUT;
- ctl_write(ctl, REG_MDP5_CTL_LAYER_REG(ctl->id, ctl->lm), blend_cfg); - ctl_write(ctl, REG_MDP5_CTL_LAYER_EXT_REG(ctl->id, ctl->lm), blend_ext_cfg); + ctl_write(ctl, REG_MDP5_CTL_LAYER_REG(ctl->id, mixer->lm), blend_cfg); + ctl_write(ctl, REG_MDP5_CTL_LAYER_EXT_REG(ctl->id, mixer->lm), + blend_ext_cfg); spin_unlock_irqrestore(&ctl->hw_lock, flags);
- ctl->pending_ctl_trigger = mdp_ctl_flush_mask_lm(ctl->lm); + ctl->pending_ctl_trigger = mdp_ctl_flush_mask_lm(mixer->lm);
- DBG("lm%d: blend config = 0x%08x. ext_cfg = 0x%08x", ctl->lm, + DBG("lm%d: blend config = 0x%08x. ext_cfg = 0x%08x", mixer->lm, blend_cfg, blend_ext_cfg);
return 0; @@ -452,7 +454,7 @@ static u32 fix_sw_flush(struct mdp5_ctl *ctl, u32 flush_mask)
/* for some targets, cursor bit is the same as LM bit */ if (BIT_NEEDS_SW_FIX(MDP5_CTL_FLUSH_CURSOR_0)) - sw_mask |= mdp_ctl_flush_mask_lm(ctl->lm); + sw_mask |= mdp_ctl_flush_mask_lm(ctl->mixer->lm);
return sw_mask; } @@ -620,7 +622,7 @@ struct mdp5_ctl *mdp5_ctlm_request(struct mdp5_ctl_manager *ctl_mgr, found: ctl = &ctl_mgr->ctls[c]; ctl->pipeline.intf.num = intf_num; - ctl->lm = -1; + ctl->mixer = NULL; ctl->status |= CTL_STAT_BUSY; ctl->pending_ctl_trigger = 0; DBG("CTL %d allocated", ctl->id); diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.h b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.h index fda00d33e4db..882c9d2be365 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.h +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.h @@ -38,7 +38,7 @@ int mdp5_ctl_get_ctl_id(struct mdp5_ctl *ctl);
struct mdp5_interface; int mdp5_ctl_set_pipeline(struct mdp5_ctl *ctl, struct mdp5_interface *intf, - int lm); + struct mdp5_hw_mixer *lm); int mdp5_ctl_set_encoder_state(struct mdp5_ctl *ctl, bool enabled);
int mdp5_ctl_set_cursor(struct mdp5_ctl *ctl, int cursor_id, bool enable); diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c index 80fa482ae8ed..68d048f040f0 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c @@ -215,7 +215,7 @@ static void mdp5_vid_encoder_disable(struct drm_encoder *encoder) struct mdp5_encoder *mdp5_encoder = to_mdp5_encoder(encoder); struct mdp5_kms *mdp5_kms = get_kms(encoder); struct mdp5_ctl *ctl = mdp5_encoder->ctl; - int lm = mdp5_crtc_get_lm(encoder->crtc); + struct mdp5_hw_mixer *mixer = mdp5_crtc_get_mixer(encoder->crtc); struct mdp5_interface *intf = &mdp5_encoder->intf; int intfn = mdp5_encoder->intf.num; unsigned long flags; @@ -238,7 +238,7 @@ static void mdp5_vid_encoder_disable(struct drm_encoder *encoder) * the settings changes for the new modeset (like new * scanout buffer) don't latch properly.. */ - mdp_irq_wait(&mdp5_kms->base, intf2vblank(lm, intf)); + mdp_irq_wait(&mdp5_kms->base, intf2vblank(mixer->lm, intf));
bs_set(mdp5_encoder, 0);
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c index c55ba6e8387c..1c6fbe7bc0ae 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c @@ -465,7 +465,7 @@ static int modeset_init(struct mdp5_kms *mdp5_kms) * the MDP5 interfaces) than the number of layer mixers present in HW, * but let's be safe here anyway */ - num_crtcs = min(priv->num_encoders, mdp5_cfg->lm.count); + num_crtcs = min(priv->num_encoders, mdp5_kms->num_hwmixers);
/* * Construct planes equaling the number of hw pipes, and CRTCs for the @@ -890,6 +890,10 @@ static int hwmixer_init(struct mdp5_kms *mdp5_kms) return ret; }
+ /* Don't create LMs connected to WB for now */ + if (!mixer) + continue; + mixer->idx = mdp5_kms->num_hwmixers; mdp5_kms->hwmixers[mdp5_kms->num_hwmixers++] = mixer; } diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h index f3a3b767e35c..406e78390e74 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h @@ -23,9 +23,9 @@ #include "mdp/mdp_kms.h" #include "mdp5_cfg.h" /* must be included before mdp5.xml.h */ #include "mdp5.xml.h" -#include "mdp5_ctl.h" #include "mdp5_pipe.h" #include "mdp5_mixer.h" +#include "mdp5_ctl.h" #include "mdp5_smp.h"
struct mdp5_state; @@ -259,7 +259,7 @@ struct drm_plane *mdp5_plane_init(struct drm_device *dev, struct mdp5_ctl *mdp5_crtc_get_ctl(struct drm_crtc *crtc); uint32_t mdp5_crtc_vblank(struct drm_crtc *crtc);
-int mdp5_crtc_get_lm(struct drm_crtc *crtc); +struct mdp5_hw_mixer *mdp5_crtc_get_mixer(struct drm_crtc *crtc); void mdp5_crtc_set_pipeline(struct drm_crtc *crtc, struct mdp5_interface *intf, struct mdp5_ctl *ctl); void mdp5_crtc_wait_for_commit_done(struct drm_crtc *crtc); diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.c index dd38f0bc2494..032dc0a9638f 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.c @@ -29,6 +29,10 @@ struct mdp5_hw_mixer *mdp5_mixer_init(const struct mdp5_lm_instance *lm) { struct mdp5_hw_mixer *mixer;
+ /* ignore WB bound mixers for now */ + if (lm->caps & MDP_LM_CAP_WB) + return NULL; + mixer = kzalloc(sizeof(*mixer), GFP_KERNEL); if (!mixer) return ERR_PTR(-ENOMEM);
The PingPong ID for a Layer Mixer is already contained in mdp5_hw_mixer.

This avoids the need to retrieve the PP ID using macros.
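Concretely, the conversion in the users (visible in the hunks below) looks like:

/* before: derive the PP index from the LM index via a macro */
pp_id = GET_PING_PONG_ID(mixer->lm);

/* after: the LM -> PP mapping comes from mdp5_cfg via the hw mixer */
pp_id = mixer->pp;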
Signed-off-by: Archit Taneja <architt@codeaurora.org>
---
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c |  6 +++---
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c        |  5 ++---
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c     |  2 +-
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h         | 10 +++++-----
 4 files changed, 11 insertions(+), 12 deletions(-)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c index 2e6ceafbd3e2..fd3cb45976f3 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c @@ -52,7 +52,7 @@ static int pingpong_tearcheck_setup(struct drm_encoder *encoder, u32 total_lines_x100, vclks_line, cfg; long vsync_clk_speed; struct mdp5_hw_mixer *mixer = mdp5_crtc_get_mixer(encoder->crtc); - int pp_id = GET_PING_PONG_ID(mixer->lm); + int pp_id = mixer->pp;
if (IS_ERR_OR_NULL(mdp5_kms->vsync_clk)) { dev_err(dev, "vsync_clk is not initialized\n"); @@ -96,7 +96,7 @@ static int pingpong_tearcheck_enable(struct drm_encoder *encoder) { struct mdp5_kms *mdp5_kms = get_kms(encoder); struct mdp5_hw_mixer *mixer = mdp5_crtc_get_mixer(encoder->crtc); - int pp_id = GET_PING_PONG_ID(mixer->lm); + int pp_id = mixer->pp; int ret;
ret = clk_set_rate(mdp5_kms->vsync_clk, @@ -122,7 +122,7 @@ static void pingpong_tearcheck_disable(struct drm_encoder *encoder) { struct mdp5_kms *mdp5_kms = get_kms(encoder); struct mdp5_hw_mixer *mixer = mdp5_crtc_get_mixer(encoder->crtc); - int pp_id = GET_PING_PONG_ID(mixer->lm); + int pp_id = mixer->pp;
mdp5_write(mdp5_kms, REG_MDP5_PP_TEAR_CHECK_EN(pp_id), 0); clk_disable_unprepare(mdp5_kms->vsync_clk); diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c index 09f947fc81b9..47a880d71ee1 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c @@ -761,15 +761,14 @@ void mdp5_crtc_set_pipeline(struct drm_crtc *crtc, struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); struct mdp5_kms *mdp5_kms = get_kms(crtc); struct mdp5_hw_mixer *mixer = mdp5_crtc->mixer; - uint32_t lm = mixer->lm;
/* now that we know what irq's we want: */ mdp5_crtc->err.irqmask = intf2err(intf->num); - mdp5_crtc->vblank.irqmask = intf2vblank(lm, intf); + mdp5_crtc->vblank.irqmask = intf2vblank(mixer, intf);
if ((intf->type == INTF_DSI) && (intf->mode == MDP5_INTF_DSI_MODE_COMMAND)) { - mdp5_crtc->pp_done.irqmask = lm2ppdone(lm); + mdp5_crtc->pp_done.irqmask = lm2ppdone(mixer); mdp5_crtc->pp_done.irq = mdp5_crtc_pp_done_irq; mdp5_crtc->cmd_mode = true; } else { diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c index 68d048f040f0..c2c204e6ea3b 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c @@ -238,7 +238,7 @@ static void mdp5_vid_encoder_disable(struct drm_encoder *encoder) * the settings changes for the new modeset (like new * scanout buffer) don't latch properly.. */ - mdp_irq_wait(&mdp5_kms->base, intf2vblank(mixer->lm, intf)); + mdp_irq_wait(&mdp5_kms->base, intf2vblank(mixer, intf));
bs_set(mdp5_encoder, 0);
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h index 406e78390e74..041c131ae563 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h @@ -207,8 +207,8 @@ static inline uint32_t intf2err(int intf_num) } }
-#define GET_PING_PONG_ID(layer_mixer) ((layer_mixer == 5) ? 3 : layer_mixer) -static inline uint32_t intf2vblank(int lm, struct mdp5_interface *intf) +static inline uint32_t intf2vblank(struct mdp5_hw_mixer *mixer, + struct mdp5_interface *intf) { /* * In case of DSI Command Mode, the Ping Pong's read pointer IRQ @@ -218,7 +218,7 @@ static inline uint32_t intf2vblank(int lm, struct mdp5_interface *intf)
if ((intf->type == INTF_DSI) && (intf->mode == MDP5_INTF_DSI_MODE_COMMAND)) - return MDP5_IRQ_PING_PONG_0_RD_PTR << GET_PING_PONG_ID(lm); + return MDP5_IRQ_PING_PONG_0_RD_PTR << mixer->pp;
if (intf->type == INTF_WB) return MDP5_IRQ_WB_2_DONE; @@ -232,9 +232,9 @@ static inline uint32_t intf2vblank(int lm, struct mdp5_interface *intf) } }
-static inline uint32_t lm2ppdone(int lm) +static inline uint32_t lm2ppdone(struct mdp5_hw_mixer *mixer) { - return MDP5_IRQ_PING_PONG_0_DONE << GET_PING_PONG_ID(lm); + return MDP5_IRQ_PING_PONG_0_DONE << mixer->pp; }
int mdp5_disable(struct mdp5_kms *mdp5_kms);
The mdp5_interface struct contains data corresponding to an INTF instance in MDP5 hardware. This struct is memcpy'd to the mdp5_encoder struct, and then later to the mdp5_ctl struct.

Instead of copying around interface data, create mdp5_interface instances in mdp5_init, as is currently done for pipes and layer mixers. Pass around the interface pointers to mdp5_encoder and mdp5_ctl. This simplifies the code and allows us to decouple encoders from INTFs in the future if needed.
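The change essentially replaces the copies with a shared pointer (both forms are taken from the hunks below):

/* before: each consumer kept its own copy of the interface data */
memcpy(&mdp5_encoder->intf, intf, sizeof(mdp5_encoder->intf));
memcpy(&ctl->pipeline.intf, intf, sizeof(*intf));

/*
 * after: one mdp5_interface per INTF, allocated in mdp5_init(), and
 * every consumer just holds a pointer to it
 */
mdp5_encoder->intf = intf;
ctl->pipeline.intf = intf;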
Signed-off-by: Archit Taneja <architt@codeaurora.org>
---
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c |  8 +--
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c         | 21 ++----
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c     | 37 +++++------
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c         | 85 +++++++++++++++++--------
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h         |  6 +-
 5 files changed, 93 insertions(+), 64 deletions(-)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c index fd3cb45976f3..18c967107bc4 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c @@ -145,7 +145,7 @@ void mdp5_cmd_encoder_mode_set(struct drm_encoder *encoder, mode->vsync_end, mode->vtotal, mode->type, mode->flags); pingpong_tearcheck_setup(encoder, mode); - mdp5_crtc_set_pipeline(encoder->crtc, &mdp5_cmd_enc->intf, + mdp5_crtc_set_pipeline(encoder->crtc, mdp5_cmd_enc->intf, mdp5_cmd_enc->ctl); }
@@ -153,7 +153,7 @@ void mdp5_cmd_encoder_disable(struct drm_encoder *encoder) { struct mdp5_encoder *mdp5_cmd_enc = to_mdp5_encoder(encoder); struct mdp5_ctl *ctl = mdp5_cmd_enc->ctl; - struct mdp5_interface *intf = &mdp5_cmd_enc->intf; + struct mdp5_interface *intf = mdp5_cmd_enc->intf;
if (WARN_ON(!mdp5_cmd_enc->enabled)) return; @@ -172,7 +172,7 @@ void mdp5_cmd_encoder_enable(struct drm_encoder *encoder) { struct mdp5_encoder *mdp5_cmd_enc = to_mdp5_encoder(encoder); struct mdp5_ctl *ctl = mdp5_cmd_enc->ctl; - struct mdp5_interface *intf = &mdp5_cmd_enc->intf; + struct mdp5_interface *intf = mdp5_cmd_enc->intf;
if (WARN_ON(mdp5_cmd_enc->enabled)) return; @@ -200,7 +200,7 @@ int mdp5_cmd_encoder_set_split_display(struct drm_encoder *encoder, return -EINVAL;
mdp5_kms = get_kms(encoder); - intf_num = mdp5_cmd_enc->intf.num; + intf_num = mdp5_cmd_enc->intf->num;
/* Switch slave encoder's trigger MUX, to use the master's * start signal for the slave encoder diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c index fa4f27a1a551..ed184e5491b4 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c @@ -33,7 +33,7 @@ #define CTL_STAT_BOOKED 0x2
struct op_mode { - struct mdp5_interface intf; + struct mdp5_interface *intf;
bool encoder_enabled; uint32_t start_mask; @@ -180,16 +180,8 @@ int mdp5_ctl_set_pipeline(struct mdp5_ctl *ctl, struct mdp5_interface *intf, struct mdp5_ctl_manager *ctl_mgr = ctl->ctlm; struct mdp5_kms *mdp5_kms = get_kms(ctl_mgr);
- if (unlikely(WARN_ON(intf->num != ctl->pipeline.intf.num))) { - dev_err(mdp5_kms->dev->dev, - "CTL %d is allocated by INTF %d, but used by INTF %d\n", - ctl->id, ctl->pipeline.intf.num, intf->num); - return -EINVAL; - } - ctl->mixer = mixer; - - memcpy(&ctl->pipeline.intf, intf, sizeof(*intf)); + ctl->pipeline.intf = intf;
ctl->pipeline.start_mask = mdp_ctl_flush_mask_lm(mixer->lm) | mdp_ctl_flush_mask_encoder(intf); @@ -210,11 +202,11 @@ static bool start_signal_needed(struct mdp5_ctl *ctl) if (!pipeline->encoder_enabled || pipeline->start_mask != 0) return false;
- switch (pipeline->intf.type) { + switch (pipeline->intf->type) { case INTF_WB: return true; case INTF_DSI: - return pipeline->intf.mode == MDP5_INTF_DSI_MODE_COMMAND; + return pipeline->intf->mode == MDP5_INTF_DSI_MODE_COMMAND; default: return false; } @@ -239,7 +231,7 @@ static void send_start_signal(struct mdp5_ctl *ctl) static void refill_start_mask(struct mdp5_ctl *ctl) { struct op_mode *pipeline = &ctl->pipeline; - struct mdp5_interface *intf = &ctl->pipeline.intf; + struct mdp5_interface *intf = pipeline->intf;
pipeline->start_mask = mdp_ctl_flush_mask_lm(ctl->mixer->lm);
@@ -265,7 +257,7 @@ int mdp5_ctl_set_encoder_state(struct mdp5_ctl *ctl, bool enabled) return -EINVAL;
ctl->pipeline.encoder_enabled = enabled; - DBG("intf_%d: %s", ctl->pipeline.intf.num, enabled ? "on" : "off"); + DBG("intf_%d: %s", ctl->pipeline.intf->num, enabled ? "on" : "off");
if (start_signal_needed(ctl)) { send_start_signal(ctl); @@ -621,7 +613,6 @@ struct mdp5_ctl *mdp5_ctlm_request(struct mdp5_ctl_manager *ctl_mgr,
found: ctl = &ctl_mgr->ctls[c]; - ctl->pipeline.intf.num = intf_num; ctl->mixer = NULL; ctl->status |= CTL_STAT_BUSY; ctl->pending_ctl_trigger = 0; diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c index c2c204e6ea3b..2994841eb00c 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c @@ -109,7 +109,7 @@ static void mdp5_vid_encoder_mode_set(struct drm_encoder *encoder, struct mdp5_kms *mdp5_kms = get_kms(encoder); struct drm_device *dev = encoder->dev; struct drm_connector *connector; - int intf = mdp5_encoder->intf.num; + int intf = mdp5_encoder->intf->num; uint32_t dtv_hsync_skew, vsync_period, vsync_len, ctrl_pol; uint32_t display_v_start, display_v_end; uint32_t hsync_start_x, hsync_end_x; @@ -130,7 +130,7 @@ static void mdp5_vid_encoder_mode_set(struct drm_encoder *encoder, ctrl_pol = 0;
/* DSI controller cannot handle active-low sync signals. */ - if (mdp5_encoder->intf.type != INTF_DSI) { + if (mdp5_encoder->intf->type != INTF_DSI) { if (mode->flags & DRM_MODE_FLAG_NHSYNC) ctrl_pol |= MDP5_INTF_POLARITY_CTL_HSYNC_LOW; if (mode->flags & DRM_MODE_FLAG_NVSYNC) @@ -175,7 +175,7 @@ static void mdp5_vid_encoder_mode_set(struct drm_encoder *encoder, * DISPLAY_V_START = (VBP * HCYCLE) + HBP * DISPLAY_V_END = (VBP + VACTIVE) * HCYCLE - 1 - HFP */ - if (mdp5_encoder->intf.type == INTF_eDP) { + if (mdp5_encoder->intf->type == INTF_eDP) { display_v_start += mode->htotal - mode->hsync_start; display_v_end -= mode->hsync_start - mode->hdisplay; } @@ -206,8 +206,8 @@ static void mdp5_vid_encoder_mode_set(struct drm_encoder *encoder,
spin_unlock_irqrestore(&mdp5_encoder->intf_lock, flags);
- mdp5_crtc_set_pipeline(encoder->crtc, &mdp5_encoder->intf, - mdp5_encoder->ctl); + mdp5_crtc_set_pipeline(encoder->crtc, mdp5_encoder->intf, + mdp5_encoder->ctl); }
static void mdp5_vid_encoder_disable(struct drm_encoder *encoder) @@ -216,8 +216,8 @@ static void mdp5_vid_encoder_disable(struct drm_encoder *encoder) struct mdp5_kms *mdp5_kms = get_kms(encoder); struct mdp5_ctl *ctl = mdp5_encoder->ctl; struct mdp5_hw_mixer *mixer = mdp5_crtc_get_mixer(encoder->crtc); - struct mdp5_interface *intf = &mdp5_encoder->intf; - int intfn = mdp5_encoder->intf.num; + struct mdp5_interface *intf = mdp5_encoder->intf; + int intfn = mdp5_encoder->intf->num; unsigned long flags;
if (WARN_ON(!mdp5_encoder->enabled)) @@ -250,8 +250,8 @@ static void mdp5_vid_encoder_enable(struct drm_encoder *encoder) struct mdp5_encoder *mdp5_encoder = to_mdp5_encoder(encoder); struct mdp5_kms *mdp5_kms = get_kms(encoder); struct mdp5_ctl *ctl = mdp5_encoder->ctl; - struct mdp5_interface *intf = &mdp5_encoder->intf; - int intfn = mdp5_encoder->intf.num; + struct mdp5_interface *intf = mdp5_encoder->intf; + int intfn = intf->num; unsigned long flags;
if (WARN_ON(mdp5_encoder->enabled)) @@ -273,7 +273,7 @@ static void mdp5_encoder_mode_set(struct drm_encoder *encoder, struct drm_display_mode *adjusted_mode) { struct mdp5_encoder *mdp5_encoder = to_mdp5_encoder(encoder); - struct mdp5_interface *intf = &mdp5_encoder->intf; + struct mdp5_interface *intf = mdp5_encoder->intf;
if (intf->mode == MDP5_INTF_DSI_MODE_COMMAND) mdp5_cmd_encoder_mode_set(encoder, mode, adjusted_mode); @@ -284,7 +284,7 @@ static void mdp5_encoder_mode_set(struct drm_encoder *encoder, static void mdp5_encoder_disable(struct drm_encoder *encoder) { struct mdp5_encoder *mdp5_encoder = to_mdp5_encoder(encoder); - struct mdp5_interface *intf = &mdp5_encoder->intf; + struct mdp5_interface *intf = mdp5_encoder->intf;
if (intf->mode == MDP5_INTF_DSI_MODE_COMMAND) mdp5_cmd_encoder_disable(encoder); @@ -295,7 +295,7 @@ static void mdp5_encoder_disable(struct drm_encoder *encoder) static void mdp5_encoder_enable(struct drm_encoder *encoder) { struct mdp5_encoder *mdp5_encoder = to_mdp5_encoder(encoder); - struct mdp5_interface *intf = &mdp5_encoder->intf; + struct mdp5_interface *intf = mdp5_encoder->intf;
if (intf->mode == MDP5_INTF_DSI_MODE_COMMAND) mdp5_cmd_encoder_disable(encoder); @@ -313,7 +313,7 @@ int mdp5_encoder_get_linecount(struct drm_encoder *encoder) { struct mdp5_encoder *mdp5_encoder = to_mdp5_encoder(encoder); struct mdp5_kms *mdp5_kms = get_kms(encoder); - int intf = mdp5_encoder->intf.num; + int intf = mdp5_encoder->intf->num;
return mdp5_read(mdp5_kms, REG_MDP5_INTF_LINE_COUNT(intf)); } @@ -322,7 +322,7 @@ u32 mdp5_encoder_get_framecount(struct drm_encoder *encoder) { struct mdp5_encoder *mdp5_encoder = to_mdp5_encoder(encoder); struct mdp5_kms *mdp5_kms = get_kms(encoder); - int intf = mdp5_encoder->intf.num; + int intf = mdp5_encoder->intf->num;
return mdp5_read(mdp5_kms, REG_MDP5_INTF_FRAME_COUNT(intf)); } @@ -340,7 +340,7 @@ int mdp5_vid_encoder_set_split_display(struct drm_encoder *encoder, return -EINVAL;
mdp5_kms = get_kms(encoder); - intf_num = mdp5_encoder->intf.num; + intf_num = mdp5_encoder->intf->num;
/* Switch slave encoder's TimingGen Sync mode, * to use the master's enable signal for the slave encoder. @@ -369,7 +369,7 @@ int mdp5_vid_encoder_set_split_display(struct drm_encoder *encoder, void mdp5_encoder_set_intf_mode(struct drm_encoder *encoder, bool cmd_mode) { struct mdp5_encoder *mdp5_encoder = to_mdp5_encoder(encoder); - struct mdp5_interface *intf = &mdp5_encoder->intf; + struct mdp5_interface *intf = mdp5_encoder->intf;
/* TODO: Expand this to set writeback modes too */ if (cmd_mode) { @@ -385,7 +385,8 @@ void mdp5_encoder_set_intf_mode(struct drm_encoder *encoder, bool cmd_mode)
/* initialize encoder */ struct drm_encoder *mdp5_encoder_init(struct drm_device *dev, - struct mdp5_interface *intf, struct mdp5_ctl *ctl) + struct mdp5_interface *intf, + struct mdp5_ctl *ctl) { struct drm_encoder *encoder = NULL; struct mdp5_encoder *mdp5_encoder; @@ -399,9 +400,9 @@ struct drm_encoder *mdp5_encoder_init(struct drm_device *dev, goto fail; }
- memcpy(&mdp5_encoder->intf, intf, sizeof(mdp5_encoder->intf)); encoder = &mdp5_encoder->base; mdp5_encoder->ctl = ctl; + mdp5_encoder->intf = intf;
spin_lock_init(&mdp5_encoder->intf_lock);
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c index 1c6fbe7bc0ae..9dcd893a5c06 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c @@ -308,19 +308,14 @@ int mdp5_enable(struct mdp5_kms *mdp5_kms) }
static struct drm_encoder *construct_encoder(struct mdp5_kms *mdp5_kms, - enum mdp5_intf_type intf_type, int intf_num, - struct mdp5_ctl *ctl) + struct mdp5_interface *intf, + struct mdp5_ctl *ctl) { struct drm_device *dev = mdp5_kms->dev; struct msm_drm_private *priv = dev->dev_private; struct drm_encoder *encoder; - struct mdp5_interface intf = { - .num = intf_num, - .type = intf_type, - .mode = MDP5_INTF_MODE_NONE, - };
- encoder = mdp5_encoder_init(dev, &intf, ctl); + encoder = mdp5_encoder_init(dev, intf, ctl); if (IS_ERR(encoder)) { dev_err(dev->dev, "failed to construct encoder\n"); return encoder; @@ -349,32 +344,28 @@ static int get_dsi_id_from_intf(const struct mdp5_cfg_hw *hw_cfg, int intf_num) return -EINVAL; }
-static int modeset_init_intf(struct mdp5_kms *mdp5_kms, int intf_num) +static int modeset_init_intf(struct mdp5_kms *mdp5_kms, + struct mdp5_interface *intf) { struct drm_device *dev = mdp5_kms->dev; struct msm_drm_private *priv = dev->dev_private; - const struct mdp5_cfg_hw *hw_cfg = - mdp5_cfg_get_hw_config(mdp5_kms->cfg); - enum mdp5_intf_type intf_type = hw_cfg->intf.connect[intf_num]; struct mdp5_ctl_manager *ctlm = mdp5_kms->ctlm; struct mdp5_ctl *ctl; struct drm_encoder *encoder; int ret = 0;
- switch (intf_type) { - case INTF_DISABLED: - break; + switch (intf->type) { case INTF_eDP: if (!priv->edp) break;
- ctl = mdp5_ctlm_request(ctlm, intf_num); + ctl = mdp5_ctlm_request(ctlm, intf->num); if (!ctl) { ret = -EINVAL; break; }
- encoder = construct_encoder(mdp5_kms, INTF_eDP, intf_num, ctl); + encoder = construct_encoder(mdp5_kms, intf, ctl); if (IS_ERR(encoder)) { ret = PTR_ERR(encoder); break; @@ -386,13 +377,13 @@ static int modeset_init_intf(struct mdp5_kms *mdp5_kms, int intf_num) if (!priv->hdmi) break;
- ctl = mdp5_ctlm_request(ctlm, intf_num); + ctl = mdp5_ctlm_request(ctlm, intf->num); if (!ctl) { ret = -EINVAL; break; }
- encoder = construct_encoder(mdp5_kms, INTF_HDMI, intf_num, ctl); + encoder = construct_encoder(mdp5_kms, intf, ctl); if (IS_ERR(encoder)) { ret = PTR_ERR(encoder); break; @@ -402,11 +393,13 @@ static int modeset_init_intf(struct mdp5_kms *mdp5_kms, int intf_num) break; case INTF_DSI: { - int dsi_id = get_dsi_id_from_intf(hw_cfg, intf_num); + const struct mdp5_cfg_hw *hw_cfg = + mdp5_cfg_get_hw_config(mdp5_kms->cfg); + int dsi_id = get_dsi_id_from_intf(hw_cfg, intf->num);
if ((dsi_id >= ARRAY_SIZE(priv->dsi)) || (dsi_id < 0)) { dev_err(dev->dev, "failed to find dsi from intf %d\n", - intf_num); + intf->num); ret = -EINVAL; break; } @@ -414,13 +407,13 @@ static int modeset_init_intf(struct mdp5_kms *mdp5_kms, int intf_num) if (!priv->dsi[dsi_id]) break;
- ctl = mdp5_ctlm_request(ctlm, intf_num); + ctl = mdp5_ctlm_request(ctlm, intf->num); if (!ctl) { ret = -EINVAL; break; }
- encoder = construct_encoder(mdp5_kms, INTF_DSI, intf_num, ctl); + encoder = construct_encoder(mdp5_kms, intf, ctl); if (IS_ERR(encoder)) { ret = PTR_ERR(encoder); break; @@ -430,7 +423,7 @@ static int modeset_init_intf(struct mdp5_kms *mdp5_kms, int intf_num) break; } default: - dev_err(dev->dev, "unknown intf: %d\n", intf_type); + dev_err(dev->dev, "unknown intf: %d\n", intf->type); ret = -EINVAL; break; } @@ -454,8 +447,8 @@ static int modeset_init(struct mdp5_kms *mdp5_kms) * Construct encoders and modeset initialize connector devices * for each external display interface. */ - for (i = 0; i < ARRAY_SIZE(hw_cfg->intf.connect); i++) { - ret = modeset_init_intf(mdp5_kms, i); + for (i = 0; i < mdp5_kms->num_intfs; i++) { + ret = modeset_init_intf(mdp5_kms, mdp5_kms->intfs[i]); if (ret) goto fail; } @@ -786,6 +779,7 @@ struct msm_kms *mdp5_kms_init(struct drm_device *dev) static void mdp5_destroy(struct platform_device *pdev) { struct mdp5_kms *mdp5_kms = platform_get_drvdata(pdev); + int i;
if (mdp5_kms->ctlm) mdp5_ctlm_destroy(mdp5_kms->ctlm); @@ -794,6 +788,9 @@ static void mdp5_destroy(struct platform_device *pdev) if (mdp5_kms->cfg) mdp5_cfg_destroy(mdp5_kms->cfg);
+ for (i = 0; i < mdp5_kms->num_intfs; i++) + kfree(mdp5_kms->intfs[i]); + if (mdp5_kms->rpm_enabled) pm_runtime_disable(&pdev->dev);
@@ -901,6 +898,38 @@ static int hwmixer_init(struct mdp5_kms *mdp5_kms) return 0; }
+static int interface_init(struct mdp5_kms *mdp5_kms) +{ + struct drm_device *dev = mdp5_kms->dev; + const struct mdp5_cfg_hw *hw_cfg; + const enum mdp5_intf_type *intf_types; + int i; + + hw_cfg = mdp5_cfg_get_hw_config(mdp5_kms->cfg); + intf_types = hw_cfg->intf.connect; + + for (i = 0; i < ARRAY_SIZE(hw_cfg->intf.connect); i++) { + struct mdp5_interface *intf; + + if (intf_types[i] == INTF_DISABLED) + continue; + + intf = kzalloc(sizeof(*intf), GFP_KERNEL); + if (!intf) { + dev_err(dev->dev, "failed to construct INTF%d\n", i); + return -ENOMEM; + } + + intf->num = i; + intf->type = intf_types[i]; + intf->mode = MDP5_INTF_MODE_NONE; + intf->idx = mdp5_kms->num_intfs; + mdp5_kms->intfs[mdp5_kms->num_intfs++] = intf; + } + + return 0; +} + static int mdp5_init(struct platform_device *pdev, struct drm_device *dev) { struct msm_drm_private *priv = dev->dev_private; @@ -1017,6 +1046,10 @@ static int mdp5_init(struct platform_device *pdev, struct drm_device *dev) if (ret) goto fail;
+ ret = interface_init(mdp5_kms); + if (ret) + goto fail; + /* set uninit-ed kms */ priv->kms = &mdp5_kms->base.base;
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h index 041c131ae563..6452fc115030 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h @@ -43,6 +43,9 @@ struct mdp5_kms { unsigned num_hwmixers; struct mdp5_hw_mixer *hwmixers[8];
+ unsigned num_intfs; + struct mdp5_interface *intfs[5]; + struct mdp5_cfg_handler *cfg; uint32_t caps; /* MDP capabilities (MDP_CAP_XXX bits) */
@@ -131,6 +134,7 @@ enum mdp5_intf_mode { };
struct mdp5_interface { + int idx; int num; /* display interface number */ enum mdp5_intf_type type; enum mdp5_intf_mode mode; @@ -138,11 +142,11 @@ struct mdp5_interface {
struct mdp5_encoder { struct drm_encoder base; - struct mdp5_interface intf; spinlock_t intf_lock; /* protect REG_MDP5_INTF_* registers */ bool enabled; uint32_t bsc;
+ struct mdp5_interface *intf; struct mdp5_ctl *ctl; }; #define to_mdp5_encoder(x) container_of(x, struct mdp5_encoder, base)
The mdp5_ctl has an 'op_mode' struct which contains info on the downstream pipeline.
Grouping these params together in a struct doesn't serve much purpose in the code. Maybe there was a plan to expand this further that never happened.
Remove the op_mode struct, and place its members directly in mdp5_ctl. This will help avoid confusion later when I introduce my own version of an mdp5 pipeline :)
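For reference, a simplified sketch of the before/after layout this patch results in (unrelated members elided; see the diff below for the full change):

/* Before: the pipeline parameters sat in a separate wrapper struct */
struct op_mode {
	struct mdp5_interface *intf;
	bool encoder_enabled;
	uint32_t start_mask;
};

/* After: the same members live directly in mdp5_ctl */
struct mdp5_ctl {
	struct mdp5_ctl_manager *ctlm;
	u32 id;
	u32 status;

	struct mdp5_interface *intf;
	bool encoder_enabled;
	uint32_t start_mask;

	/* ... */
};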
Signed-off-by: Archit Taneja architt@codeaurora.org --- drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c | 42 +++++++++++++-------------------- 1 file changed, 17 insertions(+), 25 deletions(-)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c index ed184e5491b4..a86f1fd359c3 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c @@ -32,13 +32,6 @@ #define CTL_STAT_BUSY 0x1 #define CTL_STAT_BOOKED 0x2
-struct op_mode { - struct mdp5_interface *intf; - - bool encoder_enabled; - uint32_t start_mask; -}; - struct mdp5_ctl { struct mdp5_ctl_manager *ctlm;
@@ -49,7 +42,10 @@ struct mdp5_ctl { u32 status;
/* Operation Mode Configuration for the Pipeline */ - struct op_mode pipeline; + struct mdp5_interface *intf; + + bool encoder_enabled; + uint32_t start_mask;
/* REG_MDP5_CTL_*(<id>) registers access info + lock: */ spinlock_t hw_lock; @@ -181,10 +177,10 @@ int mdp5_ctl_set_pipeline(struct mdp5_ctl *ctl, struct mdp5_interface *intf, struct mdp5_kms *mdp5_kms = get_kms(ctl_mgr);
ctl->mixer = mixer; - ctl->pipeline.intf = intf; + ctl->intf = intf;
- ctl->pipeline.start_mask = mdp_ctl_flush_mask_lm(mixer->lm) | - mdp_ctl_flush_mask_encoder(intf); + ctl->start_mask = mdp_ctl_flush_mask_lm(mixer->lm) | + mdp_ctl_flush_mask_encoder(intf);
/* Virtual interfaces need not set a display intf (e.g.: Writeback) */ if (!mdp5_cfg_intf_is_virtual(intf->type)) @@ -197,16 +193,14 @@ int mdp5_ctl_set_pipeline(struct mdp5_ctl *ctl, struct mdp5_interface *intf,
static bool start_signal_needed(struct mdp5_ctl *ctl) { - struct op_mode *pipeline = &ctl->pipeline; - - if (!pipeline->encoder_enabled || pipeline->start_mask != 0) + if (!ctl->encoder_enabled || ctl->start_mask != 0) return false;
- switch (pipeline->intf->type) { + switch (ctl->intf->type) { case INTF_WB: return true; case INTF_DSI: - return pipeline->intf->mode == MDP5_INTF_DSI_MODE_COMMAND; + return ctl->intf->mode == MDP5_INTF_DSI_MODE_COMMAND; default: return false; } @@ -230,17 +224,16 @@ static void send_start_signal(struct mdp5_ctl *ctl)
static void refill_start_mask(struct mdp5_ctl *ctl) { - struct op_mode *pipeline = &ctl->pipeline; - struct mdp5_interface *intf = pipeline->intf; + struct mdp5_interface *intf = ctl->intf;
- pipeline->start_mask = mdp_ctl_flush_mask_lm(ctl->mixer->lm); + ctl->start_mask = mdp_ctl_flush_mask_lm(ctl->mixer->lm);
/* * Writeback encoder needs to program & flush * address registers for each page flip.. */ if (intf->type == INTF_WB) - pipeline->start_mask |= mdp_ctl_flush_mask_encoder(intf); + ctl->start_mask |= mdp_ctl_flush_mask_encoder(intf); }
/** @@ -256,8 +249,8 @@ int mdp5_ctl_set_encoder_state(struct mdp5_ctl *ctl, bool enabled) if (WARN_ON(!ctl)) return -EINVAL;
- ctl->pipeline.encoder_enabled = enabled; - DBG("intf_%d: %s", ctl->pipeline.intf->num, enabled ? "on" : "off"); + ctl->encoder_enabled = enabled; + DBG("intf_%d: %s", ctl->intf->num, enabled ? "on" : "off");
if (start_signal_needed(ctl)) { send_start_signal(ctl); @@ -495,15 +488,14 @@ static void fix_for_single_flush(struct mdp5_ctl *ctl, u32 *flush_mask, u32 mdp5_ctl_commit(struct mdp5_ctl *ctl, u32 flush_mask) { struct mdp5_ctl_manager *ctl_mgr = ctl->ctlm; - struct op_mode *pipeline = &ctl->pipeline; unsigned long flags; u32 flush_id = ctl->id; u32 curr_ctl_flush_mask;
- pipeline->start_mask &= ~flush_mask; + ctl->start_mask &= ~flush_mask;
VERB("flush_mask=%x, start_mask=%x, trigger=%x", flush_mask, - pipeline->start_mask, ctl->pending_ctl_trigger); + ctl->start_mask, ctl->pending_ctl_trigger);
if (ctl->pending_ctl_trigger & flush_mask) { flush_mask |= MDP5_CTL_FLUSH_CTL;
Subclass drm_crtc_state so that we can maintain additional state for our CRTCs.
Add mdp5_pipeline and mdp5_ctl pointers in the subclassed state. mdp5_pipeline is a grouping of the HW entities that forms the downstream pipeline for a particular CRTC. It currently contains pointers to mdp5_interface and mdp5_hw_mixer tied to this CRTC. Later, we will have 2 hwmixers in this struct. (We could also have 2 intfs if we want to support dual DSI with Source Split enabled. Implementing that feature isn't planned at the moment).
The mdp5_pipeline state isn't used at the moment. For now, we just introduce mdp5_crtc_state and the crtc funcs needed to manage the subclassed state.
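As a quick illustration (not part of the patch), this is how later code gets from a plain drm_crtc pointer to the subclassed state; the helper name here is made up, the cast macro comes from the diff below:

static struct mdp5_hw_mixer *example_get_mixer(struct drm_crtc *crtc)
{
	/* container_of() from the base drm_crtc_state to our subclass */
	struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state);

	return mdp5_cstate->pipeline.mixer;
}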
Signed-off-by: Archit Taneja architt@codeaurora.org --- drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c | 72 +++++++++++++++++++++++++++++--- drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h | 14 +++++++ 2 files changed, 80 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c index 47a880d71ee1..09ea7c3d8951 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c @@ -645,16 +645,75 @@ static int mdp5_crtc_cursor_move(struct drm_crtc *crtc, int x, int y) return 0; }
+static void +mdp5_crtc_atomic_print_state(struct drm_printer *p, + const struct drm_crtc_state *state) +{ + struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(state); + struct mdp5_pipeline *pipeline = &mdp5_cstate->pipeline; + + if (WARN_ON(!pipeline)) + return; + + drm_printf(p, "\thwmixer=%s\n", pipeline->mixer ? + pipeline->mixer->name : "(null)"); +} + +static void mdp5_crtc_reset(struct drm_crtc *crtc) +{ + struct mdp5_crtc_state *mdp5_cstate; + + if (crtc->state) { + __drm_atomic_helper_crtc_destroy_state(crtc->state); + kfree(to_mdp5_crtc_state(crtc->state)); + } + + mdp5_cstate = kzalloc(sizeof(*mdp5_cstate), GFP_KERNEL); + + if (mdp5_cstate) { + mdp5_cstate->base.crtc = crtc; + crtc->state = &mdp5_cstate->base; + } +} + +static struct drm_crtc_state * +mdp5_crtc_duplicate_state(struct drm_crtc *crtc) +{ + struct mdp5_crtc_state *mdp5_cstate; + + if (WARN_ON(!crtc->state)) + return NULL; + + mdp5_cstate = kmemdup(to_mdp5_crtc_state(crtc->state), + sizeof(*mdp5_cstate), GFP_KERNEL); + if (!mdp5_cstate) + return NULL; + + __drm_atomic_helper_crtc_duplicate_state(crtc, &mdp5_cstate->base); + + return &mdp5_cstate->base; +} + +static void mdp5_crtc_destroy_state(struct drm_crtc *crtc, struct drm_crtc_state *state) +{ + struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(state); + + __drm_atomic_helper_crtc_destroy_state(state); + + kfree(mdp5_cstate); +} + static const struct drm_crtc_funcs mdp5_crtc_funcs = { .set_config = drm_atomic_helper_set_config, .destroy = mdp5_crtc_destroy, .page_flip = drm_atomic_helper_page_flip, .set_property = drm_atomic_helper_crtc_set_property, - .reset = drm_atomic_helper_crtc_reset, - .atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state, - .atomic_destroy_state = drm_atomic_helper_crtc_destroy_state, + .reset = mdp5_crtc_reset, + .atomic_duplicate_state = mdp5_crtc_duplicate_state, + .atomic_destroy_state = mdp5_crtc_destroy_state, .cursor_set = mdp5_crtc_cursor_set, .cursor_move = mdp5_crtc_cursor_move, + .atomic_print_state = mdp5_crtc_atomic_print_state, };
static const struct drm_crtc_funcs mdp5_crtc_no_lm_cursor_funcs = { @@ -662,9 +721,10 @@ static const struct drm_crtc_funcs mdp5_crtc_no_lm_cursor_funcs = { .destroy = mdp5_crtc_destroy, .page_flip = drm_atomic_helper_page_flip, .set_property = drm_atomic_helper_crtc_set_property, - .reset = drm_atomic_helper_crtc_reset, - .atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state, - .atomic_destroy_state = drm_atomic_helper_crtc_destroy_state, + .reset = mdp5_crtc_reset, + .atomic_duplicate_state = mdp5_crtc_duplicate_state, + .atomic_destroy_state = mdp5_crtc_destroy_state, + .atomic_print_state = mdp5_crtc_atomic_print_state, };
static const struct drm_crtc_helper_funcs mdp5_crtc_helper_funcs = { diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h index 6452fc115030..b875f453b930 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h @@ -121,6 +121,20 @@ struct mdp5_plane_state { #define to_mdp5_plane_state(x) \ container_of(x, struct mdp5_plane_state, base)
+struct mdp5_pipeline { + struct mdp5_interface *intf; + struct mdp5_hw_mixer *mixer; +}; + +struct mdp5_crtc_state { + struct drm_crtc_state base; + + struct mdp5_ctl *ctl; + struct mdp5_pipeline pipeline; +}; +#define to_mdp5_crtc_state(x) \ + container_of(x, struct mdp5_crtc_state, base) + enum mdp5_intf_mode { MDP5_INTF_MODE_NONE = 0,
Add the stuff needed to allow dynamically assigning a mixer to a CRTC.
Since mixers are a resource that can be shared across multiple CRTCs, we need to maintain a 'hwmixer_to_crtc' map in the global atomic state, acquire the mdp5_kms.state_lock modeset lock and so on.
The mixer is assigned in the CRTC's atomic_check() func, so a failure there will result in the new state being cleanly rolled back.
The mixer assignment itself is straightforward, and almost identical to what we do for hwpipes. We don't need to grab the old hwmixer_to_crtc state like we do for hwpipes, since we don't need to compare anything with the old state at the moment.
The only LM capability we care about at the moment is whether the mixer instance can be used to display stuff (i.e., connect to an INTF downstream).
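Roughly, the check-time flow looks like the sketch below (hypothetical wrapper name; mdp5_mixer_assign() and MDP_LM_CAP_DISPLAY come from the diff that follows). Because the assignment is only recorded in the duplicated global state and the new CRTC state, dropping those states on a failed check undoes it for free:

static int example_assign_mixer(struct drm_crtc *crtc,
				struct drm_crtc_state *new_crtc_state)
{
	struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(new_crtc_state);
	struct mdp5_hw_mixer *mixer;

	/* records crtc in the global hwmixer_to_crtc map */
	mixer = mdp5_mixer_assign(new_crtc_state->state, crtc,
				  MDP_LM_CAP_DISPLAY);
	if (IS_ERR(mixer))
		return PTR_ERR(mixer);

	mdp5_cstate->pipeline.mixer = mixer;

	return 0;
}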
Signed-off-by: Archit Taneja architt@codeaurora.org --- drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c | 31 +++++++++++++++++ drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c | 1 + drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h | 1 + drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.c | 55 +++++++++++++++++++++++++++++++ drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.h | 9 +++++ 5 files changed, 97 insertions(+)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c index 09ea7c3d8951..0f6e545d74ba 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c @@ -373,6 +373,30 @@ static void mdp5_crtc_enable(struct drm_crtc *crtc) mdp5_crtc->enabled = true; }
+int mdp5_crtc_setup_pipeline(struct drm_crtc *crtc, + struct drm_crtc_state *new_crtc_state) +{ + struct mdp5_crtc_state *mdp5_cstate = + to_mdp5_crtc_state(new_crtc_state); + struct mdp5_pipeline *pipeline = &mdp5_cstate->pipeline; + bool new_mixer = false; + + new_mixer = !pipeline->mixer; + + if (new_mixer) { + struct mdp5_hw_mixer *old_mixer = pipeline->mixer; + + pipeline->mixer = mdp5_mixer_assign(new_crtc_state->state, crtc, + MDP_LM_CAP_DISPLAY); + if (IS_ERR(pipeline->mixer)) + return PTR_ERR(pipeline->mixer); + + mdp5_mixer_release(new_crtc_state->state, old_mixer); + } + + return 0; +} + struct plane_state { struct drm_plane *plane; struct mdp5_plane_state *state; @@ -405,6 +429,7 @@ static int mdp5_crtc_atomic_check(struct drm_crtc *crtc, const struct drm_plane_state *pstate; bool cursor_plane = false; int cnt = 0, base = 0, i; + int ret;
DBG("%s: check", crtc->name);
@@ -418,6 +443,12 @@ static int mdp5_crtc_atomic_check(struct drm_crtc *crtc, cursor_plane = true; }
+ ret = mdp5_crtc_setup_pipeline(crtc, state); + if (ret) { + dev_err(dev->dev, "couldn't assign mixers %d\n", ret); + return ret; + } + /* assign a stage based on sorted zpos property */ sort(pstates, cnt, sizeof(pstates[0]), pstate_cmp, NULL);
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c index 9dcd893a5c06..9525c990f2dc 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c @@ -90,6 +90,7 @@ struct mdp5_state *mdp5_get_state(struct drm_atomic_state *s)
/* Copy state: */ new_state->hwpipe = mdp5_kms->state->hwpipe; + new_state->hwmixer = mdp5_kms->state->hwmixer; if (mdp5_kms->smp) new_state->smp = mdp5_kms->state->smp;
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h index b875f453b930..535848e93f55 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h @@ -96,6 +96,7 @@ struct mdp5_kms { */ struct mdp5_state { struct mdp5_hw_pipe_state hwpipe; + struct mdp5_hw_mixer_state hwmixer; struct mdp5_smp_state smp; };
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.c index 032dc0a9638f..6f353c4886e4 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.c @@ -16,6 +16,61 @@
#include "mdp5_kms.h"
+struct mdp5_hw_mixer *mdp5_mixer_assign(struct drm_atomic_state *s, + struct drm_crtc *crtc, uint32_t caps) +{ + struct msm_drm_private *priv = s->dev->dev_private; + struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(priv->kms)); + struct mdp5_state *state = mdp5_get_state(s); + struct mdp5_hw_mixer_state *new_state; + struct mdp5_hw_mixer *mixer = NULL; + int i; + + if (IS_ERR(state)) + return ERR_CAST(state); + + new_state = &state->hwmixer; + + for (i = 0; i < mdp5_kms->num_hwmixers; i++) { + struct mdp5_hw_mixer *cur = mdp5_kms->hwmixers[i]; + + /* skip if already in-use */ + if (new_state->hwmixer_to_crtc[cur->idx]) + continue; + + /* skip if doesn't support some required caps: */ + if (caps & ~cur->caps) + continue; + + if (!mixer) + mixer = cur; + } + + if (!mixer) + return ERR_PTR(-ENOMEM); + + new_state->hwmixer_to_crtc[mixer->idx] = crtc; + + return mixer; +} + +void mdp5_mixer_release(struct drm_atomic_state *s, struct mdp5_hw_mixer *mixer) +{ + struct mdp5_state *state = mdp5_get_state(s); + struct mdp5_hw_mixer_state *new_state = &state->hwmixer; + + if (!mixer) + return; + + if (WARN_ON(!new_state->hwmixer_to_crtc[mixer->idx])) + return; + + DBG("%s: release from crtc %s", mixer->name, + new_state->hwmixer_to_crtc[mixer->idx]->name); + + new_state->hwmixer_to_crtc[mixer->idx] = NULL; +} + void mdp5_mixer_destroy(struct mdp5_hw_mixer *mixer) { kfree(mixer); diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.h b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.h index 08d2173703bf..18ed933ae574 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.h +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.h @@ -31,7 +31,16 @@ struct mdp5_hw_mixer { uint32_t flush_mask; /* used to commit LM registers */ };
+/* global atomic state of assignment between CRTCs and Layer Mixers: */ +struct mdp5_hw_mixer_state { + struct drm_crtc *hwmixer_to_crtc[8]; +}; + struct mdp5_hw_mixer *mdp5_mixer_init(const struct mdp5_lm_instance *lm); void mdp5_mixer_destroy(struct mdp5_hw_mixer *lm); +struct mdp5_hw_mixer *mdp5_mixer_assign(struct drm_atomic_state *s, + struct drm_crtc *crtc, uint32_t caps); +void mdp5_mixer_release(struct drm_atomic_state *s, + struct mdp5_hw_mixer *mixer);
#endif /* __MDP5_LM_H__ */
The INTF and CTL used in a display pipeline are going to be maintained as a part of the CRTC state (i.e, in mdp5_crtc_state).
These entities, however, are currently statically assigned to drm_encoders (i.e. mdp5_encoder). Since these aren't directly visible to the CRTC, we assign them to the CRTC state in the encoder's atomic_check() op.
With this approach, we assign portions of CRTC state in two different places: the layer mixer in CRTC's atomic_check(), and the INTF and CTL pieces in the encoder's atomic_check() op.
We'd have more options here if the drm core maintained encoder state too, but the current approach of clubbing everything in CRTC's state works just fine.
Unlike hwpipes and mixers, we don't need to keep track of INTF/CTL assignments in the global atomic state. This is because they're currently not sharable resources. For example, INTF0 and CTL0 will always be assigned to one drm_encoder. This can change later when we implement writeback and want a CRTC to use a CTL for a while, and then release it for others to use. Or, when a drm_encoder can switch between using a single INTF vs 2 INTFs.
Signed-off-by: Archit Taneja architt@codeaurora.org --- drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c | 16 ++++++++++++++++ 1 file changed, 16 insertions(+)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c index 2994841eb00c..6d535140ef21 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c @@ -303,10 +303,26 @@ static void mdp5_encoder_enable(struct drm_encoder *encoder) mdp5_vid_encoder_enable(encoder); }
+static int mdp5_encoder_atomic_check(struct drm_encoder *encoder, + struct drm_crtc_state *crtc_state, + struct drm_connector_state *conn_state) +{ + struct mdp5_encoder *mdp5_encoder = to_mdp5_encoder(encoder); + struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc_state); + struct mdp5_interface *intf = mdp5_encoder->intf; + struct mdp5_ctl *ctl = mdp5_encoder->ctl; + + mdp5_cstate->ctl = ctl; + mdp5_cstate->pipeline.intf = intf; + + return 0; +} + static const struct drm_encoder_helper_funcs mdp5_encoder_helper_funcs = { .mode_set = mdp5_encoder_mode_set, .disable = mdp5_encoder_disable, .enable = mdp5_encoder_enable, + .atomic_check = mdp5_encoder_atomic_check, };
int mdp5_encoder_get_linecount(struct drm_encoder *encoder)
Things like the vblank/err irq masks and the mode of operation (command mode or not) are derived from the interface and mixer state. Therefore, they need to be a part of the CRTC state too.
Add them to mdp5_crtc_state, and assign them in the CRTC's atomic_check() func, so that they can be rolled back to a clean state if the check fails.
Signed-off-by: Archit Taneja architt@codeaurora.org --- drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c | 19 +++++++++++++++++++ drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h | 7 +++++++ 2 files changed, 26 insertions(+)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c index 0f6e545d74ba..c58cf44c058d 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c @@ -379,6 +379,7 @@ int mdp5_crtc_setup_pipeline(struct drm_crtc *crtc, struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(new_crtc_state); struct mdp5_pipeline *pipeline = &mdp5_cstate->pipeline; + struct mdp5_interface *intf; bool new_mixer = false;
new_mixer = !pipeline->mixer; @@ -394,6 +395,24 @@ int mdp5_crtc_setup_pipeline(struct drm_crtc *crtc, mdp5_mixer_release(new_crtc_state->state, old_mixer); }
+ /* + * these should have been already set up in the encoder's atomic + * check (called by drm_atomic_helper_check_modeset) + */ + intf = pipeline->intf; + + mdp5_cstate->err_irqmask = intf2err(intf->num); + mdp5_cstate->vblank_irqmask = intf2vblank(pipeline->mixer, intf); + + if ((intf->type == INTF_DSI) && + (intf->mode == MDP5_INTF_DSI_MODE_COMMAND)) { + mdp5_cstate->pp_done_irqmask = lm2ppdone(pipeline->mixer); + mdp5_cstate->cmd_mode = true; + } else { + mdp5_cstate->pp_done_irqmask = 0; + mdp5_cstate->cmd_mode = false; + } + return 0; }
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h index 535848e93f55..c31c193e59f2 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h @@ -132,6 +132,13 @@ struct mdp5_crtc_state {
struct mdp5_ctl *ctl; struct mdp5_pipeline pipeline; + + /* these are derivatives of intf/mixer state in mdp5_pipeline */ + u32 vblank_irqmask; + u32 err_irqmask; + u32 pp_done_irqmask; + + bool cmd_mode; }; #define to_mdp5_crtc_state(x) \ container_of(x, struct mdp5_crtc_state, base)
In the last few commits, we've been adding params to mdp5_crtc_state, and assigning them in the atomic_check() funcs. Now it's time to actually start using them.
Remove the duplicated params from the mdp5_crtc struct, and start using them in the mdp5_crtc code. The majority of the references to these params are in code that executes after the atomic swap has occurred, so it's okay to use crtc->state in them. There are a couple of legacy LM cursor ops that may not use the updated state, but (I think) it's okay to live with that.
Now that we dynamically allocate a mixer to the CRTC, we can remove the static assignment in mdp5_crtc_init, and drop the code that skipped init-ing WB-bound mixers (those will now be rejected by mdp5_mixer_assign()).
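To make the 'after the atomic swap' point concrete, here is a minimal sketch (hypothetical function name) of the pattern the converted hooks follow; by the time commit-side code runs, drm_atomic_helper_swap_state() has already made the checked state current, so crtc->state is the new mdp5_crtc_state:

static void example_commit_side_hook(struct drm_crtc *crtc)
{
	struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state);

	/* cmd_mode was derived and stored during atomic_check() */
	if (mdp5_cstate->cmd_mode)
		request_pp_done_pending(crtc);
}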
Signed-off-by: Archit Taneja architt@codeaurora.org --- drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c | 122 ++++++++++++++++-------------- drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c | 4 - drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.c | 4 - 3 files changed, 64 insertions(+), 66 deletions(-)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c index c58cf44c058d..0c9fb602f93f 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c @@ -32,12 +32,8 @@ struct mdp5_crtc { int id; bool enabled;
- struct mdp5_hw_mixer *mixer; spinlock_t lm_lock; /* protect REG_MDP5_LM_* registers */
- /* CTL used for this CRTC: */ - struct mdp5_ctl *ctl; - /* if there is a pending flip, these will be non-null: */ struct drm_pending_vblank_event *event;
@@ -59,8 +55,6 @@ struct mdp5_crtc {
struct completion pp_completion;
- bool cmd_mode; - struct { /* protect REG_MDP5_LM_CURSOR* registers and cursor scanout_bo*/ spinlock_t lock; @@ -95,10 +89,11 @@ static void request_pp_done_pending(struct drm_crtc *crtc)
static u32 crtc_flush(struct drm_crtc *crtc, u32 flush_mask) { - struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); + struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state); + struct mdp5_ctl *ctl = mdp5_cstate->ctl;
DBG("%s: flush=%08x", crtc->name, flush_mask); - return mdp5_ctl_commit(mdp5_crtc->ctl, flush_mask); + return mdp5_ctl_commit(ctl, flush_mask); }
/* @@ -108,20 +103,20 @@ static u32 crtc_flush(struct drm_crtc *crtc, u32 flush_mask) */ static u32 crtc_flush_all(struct drm_crtc *crtc) { - struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); + struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state); struct mdp5_hw_mixer *mixer; struct drm_plane *plane; uint32_t flush_mask = 0;
/* this should not happen: */ - if (WARN_ON(!mdp5_crtc->ctl)) + if (WARN_ON(!mdp5_cstate->ctl)) return 0;
drm_atomic_crtc_for_each_plane(plane, crtc) { flush_mask |= mdp5_plane_get_flush(plane); }
- mixer = mdp5_crtc->mixer; + mixer = mdp5_cstate->pipeline.mixer; flush_mask |= mdp_ctl_flush_mask_lm(mixer->lm);
return crtc_flush(crtc, flush_mask); @@ -130,7 +125,9 @@ static u32 crtc_flush_all(struct drm_crtc *crtc) /* if file!=NULL, this is preclose potential cancel-flip path */ static void complete_flip(struct drm_crtc *crtc, struct drm_file *file) { + struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state); struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); + struct mdp5_ctl *ctl = mdp5_cstate->ctl; struct drm_device *dev = crtc->dev; struct drm_pending_vblank_event *event; unsigned long flags; @@ -150,10 +147,11 @@ static void complete_flip(struct drm_crtc *crtc, struct drm_file *file) } spin_unlock_irqrestore(&dev->event_lock, flags);
- if (mdp5_crtc->ctl && !crtc->state->enable) { + if (ctl && !crtc->state->enable) { /* set STAGE_UNUSED for all layers */ - mdp5_ctl_blend(mdp5_crtc->ctl, NULL, 0, 0); - mdp5_crtc->ctl = NULL; + mdp5_ctl_blend(ctl, NULL, 0, 0); + /* XXX: What to do here? */ + /* mdp5_crtc->ctl = NULL; */ } }
@@ -202,13 +200,15 @@ static inline u32 mdp5_lm_use_fg_alpha_mask(enum mdp_mixer_stage_id stage) static void blend_setup(struct drm_crtc *crtc) { struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); + struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state); struct mdp5_kms *mdp5_kms = get_kms(crtc); struct drm_plane *plane; const struct mdp5_cfg_hw *hw_cfg; struct mdp5_plane_state *pstate, *pstates[STAGE_MAX + 1] = {NULL}; const struct mdp_format *format; - struct mdp5_hw_mixer *mixer = mdp5_crtc->mixer; + struct mdp5_hw_mixer *mixer = mdp5_cstate->pipeline.mixer; uint32_t lm = mixer->lm; + struct mdp5_ctl *ctl = mdp5_cstate->ctl; uint32_t blend_op, fg_alpha, bg_alpha, ctl_blend_flags = 0; unsigned long flags; enum mdp5_pipe stage[STAGE_MAX + 1] = { SSPP_NONE }; @@ -222,7 +222,8 @@ static void blend_setup(struct drm_crtc *crtc) spin_lock_irqsave(&mdp5_crtc->lm_lock, flags);
/* ctl could be released already when we are shutting down: */ - if (!mdp5_crtc->ctl) + /* XXX: Can this happen now? */ + if (!ctl) goto out;
/* Collect all plane information */ @@ -299,7 +300,7 @@ static void blend_setup(struct drm_crtc *crtc)
mdp5_write(mdp5_kms, REG_MDP5_LM_BLEND_COLOR_OUT(lm), mixer_op_mode);
- mdp5_ctl_blend(mdp5_crtc->ctl, stage, plane_cnt, ctl_blend_flags); + mdp5_ctl_blend(ctl, stage, plane_cnt, ctl_blend_flags);
out: spin_unlock_irqrestore(&mdp5_crtc->lm_lock, flags); @@ -308,8 +309,9 @@ static void blend_setup(struct drm_crtc *crtc) static void mdp5_crtc_mode_set_nofb(struct drm_crtc *crtc) { struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); + struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state); struct mdp5_kms *mdp5_kms = get_kms(crtc); - struct mdp5_hw_mixer *mixer = mdp5_crtc->mixer; + struct mdp5_hw_mixer *mixer = mdp5_cstate->pipeline.mixer; uint32_t lm = mixer->lm; unsigned long flags; struct drm_display_mode *mode; @@ -338,6 +340,7 @@ static void mdp5_crtc_mode_set_nofb(struct drm_crtc *crtc) static void mdp5_crtc_disable(struct drm_crtc *crtc) { struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); + struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state); struct mdp5_kms *mdp5_kms = get_kms(crtc);
DBG("%s", crtc->name); @@ -345,7 +348,7 @@ static void mdp5_crtc_disable(struct drm_crtc *crtc) if (WARN_ON(!mdp5_crtc->enabled)) return;
- if (mdp5_crtc->cmd_mode) + if (mdp5_cstate->cmd_mode) mdp_irq_unregister(&mdp5_kms->base, &mdp5_crtc->pp_done);
mdp_irq_unregister(&mdp5_kms->base, &mdp5_crtc->err); @@ -357,6 +360,7 @@ static void mdp5_crtc_disable(struct drm_crtc *crtc) static void mdp5_crtc_enable(struct drm_crtc *crtc) { struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); + struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state); struct mdp5_kms *mdp5_kms = get_kms(crtc);
DBG("%s", crtc->name); @@ -367,7 +371,7 @@ static void mdp5_crtc_enable(struct drm_crtc *crtc) mdp5_enable(mdp5_kms); mdp_irq_register(&mdp5_kms->base, &mdp5_crtc->err);
- if (mdp5_crtc->cmd_mode) + if (mdp5_cstate->cmd_mode) mdp_irq_register(&mdp5_kms->base, &mdp5_crtc->pp_done);
mdp5_crtc->enabled = true; @@ -514,6 +518,7 @@ static void mdp5_crtc_atomic_flush(struct drm_crtc *crtc, struct drm_crtc_state *old_crtc_state) { struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); + struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state); struct drm_device *dev = crtc->dev; unsigned long flags;
@@ -530,7 +535,8 @@ static void mdp5_crtc_atomic_flush(struct drm_crtc *crtc, * it means we are trying to flush a CRTC whose state is disabled: * nothing else needs to be done. */ - if (unlikely(!mdp5_crtc->ctl)) + /* XXX: Can this happen now ? */ + if (unlikely(!mdp5_cstate->ctl)) return;
blend_setup(crtc); @@ -541,11 +547,16 @@ static void mdp5_crtc_atomic_flush(struct drm_crtc *crtc, * This is safe because no pp_done will happen before SW trigger * in command mode. */ - if (mdp5_crtc->cmd_mode) + if (mdp5_cstate->cmd_mode) request_pp_done_pending(crtc);
mdp5_crtc->flushed_mask = crtc_flush_all(crtc);
+ /* XXX are we leaking out state here? */ + mdp5_crtc->vblank.irqmask = mdp5_cstate->vblank_irqmask; + mdp5_crtc->err.irqmask = mdp5_cstate->err_irqmask; + mdp5_crtc->pp_done.irqmask = mdp5_cstate->pp_done_irqmask; + request_pending(crtc, PENDING_FLIP); }
@@ -580,11 +591,13 @@ static int mdp5_crtc_cursor_set(struct drm_crtc *crtc, uint32_t width, uint32_t height) { struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); + struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state); struct drm_device *dev = crtc->dev; struct mdp5_kms *mdp5_kms = get_kms(crtc); struct drm_gem_object *cursor_bo, *old_bo = NULL; uint32_t blendcfg, stride; uint64_t cursor_addr; + struct mdp5_ctl *ctl; int ret, lm; enum mdp5_cursor_alpha cur_alpha = CURSOR_ALPHA_PER_PIXEL; uint32_t flush_mask = mdp_ctl_flush_mask_cursor(0); @@ -597,7 +610,8 @@ static int mdp5_crtc_cursor_set(struct drm_crtc *crtc, return -EINVAL; }
- if (NULL == mdp5_crtc->ctl) + ctl = mdp5_cstate->ctl; + if (!ctl) return -EINVAL;
if (!handle) { @@ -614,7 +628,7 @@ static int mdp5_crtc_cursor_set(struct drm_crtc *crtc, if (ret) return -EINVAL;
- lm = mdp5_crtc->mixer->lm; + lm = mdp5_cstate->pipeline.mixer->lm; stride = width * drm_format_plane_cpp(DRM_FORMAT_ARGB8888, 0);
spin_lock_irqsave(&mdp5_crtc->cursor.lock, flags); @@ -644,7 +658,7 @@ static int mdp5_crtc_cursor_set(struct drm_crtc *crtc, spin_unlock_irqrestore(&mdp5_crtc->cursor.lock, flags);
set_cursor: - ret = mdp5_ctl_set_cursor(mdp5_crtc->ctl, 0, cursor_enable); + ret = mdp5_ctl_set_cursor(ctl, 0, cursor_enable); if (ret) { dev_err(dev->dev, "failed to %sable cursor: %d\n", cursor_enable ? "en" : "dis", ret); @@ -666,7 +680,8 @@ static int mdp5_crtc_cursor_move(struct drm_crtc *crtc, int x, int y) { struct mdp5_kms *mdp5_kms = get_kms(crtc); struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); - uint32_t lm = mdp5_crtc->mixer->lm; + struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state); + uint32_t lm = mdp5_cstate->pipeline.mixer->lm; uint32_t flush_mask = mdp_ctl_flush_mask_cursor(0); uint32_t roi_w; uint32_t roi_h; @@ -824,23 +839,26 @@ static void mdp5_crtc_wait_for_pp_done(struct drm_crtc *crtc) { struct drm_device *dev = crtc->dev; struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); + struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state); int ret;
ret = wait_for_completion_timeout(&mdp5_crtc->pp_completion, msecs_to_jiffies(50)); if (ret == 0) dev_warn(dev->dev, "pp done time out, lm=%d\n", - mdp5_crtc->mixer->lm); + mdp5_cstate->pipeline.mixer->lm); }
static void mdp5_crtc_wait_for_flush_done(struct drm_crtc *crtc) { struct drm_device *dev = crtc->dev; struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); + struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state); + struct mdp5_ctl *ctl = mdp5_cstate->ctl; int ret;
/* Should not call this function if crtc is disabled. */ - if (!mdp5_crtc->ctl) + if (!ctl) return;
ret = drm_crtc_vblank_get(crtc); @@ -848,7 +866,7 @@ static void mdp5_crtc_wait_for_flush_done(struct drm_crtc *crtc) return;
ret = wait_event_timeout(dev->vblank[drm_crtc_index(crtc)].queue, - ((mdp5_ctl_get_commit_status(mdp5_crtc->ctl) & + ((mdp5_ctl_get_commit_status(ctl) & mdp5_crtc->flushed_mask) == 0), msecs_to_jiffies(50)); if (ret <= 0) @@ -868,50 +886,41 @@ uint32_t mdp5_crtc_vblank(struct drm_crtc *crtc) void mdp5_crtc_set_pipeline(struct drm_crtc *crtc, struct mdp5_interface *intf, struct mdp5_ctl *ctl) { - struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); + struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state); + struct mdp5_hw_mixer *mixer = mdp5_cstate->pipeline.mixer; struct mdp5_kms *mdp5_kms = get_kms(crtc); - struct mdp5_hw_mixer *mixer = mdp5_crtc->mixer; - - /* now that we know what irq's we want: */ - mdp5_crtc->err.irqmask = intf2err(intf->num); - mdp5_crtc->vblank.irqmask = intf2vblank(mixer, intf); - - if ((intf->type == INTF_DSI) && - (intf->mode == MDP5_INTF_DSI_MODE_COMMAND)) { - mdp5_crtc->pp_done.irqmask = lm2ppdone(mixer); - mdp5_crtc->pp_done.irq = mdp5_crtc_pp_done_irq; - mdp5_crtc->cmd_mode = true; - } else { - mdp5_crtc->pp_done.irqmask = 0; - mdp5_crtc->pp_done.irq = NULL; - mdp5_crtc->cmd_mode = false; - }
+ /* should this be done elsewhere ? */ mdp_irq_update(&mdp5_kms->base);
- mdp5_crtc->ctl = ctl; mdp5_ctl_set_pipeline(ctl, intf, mixer); }
struct mdp5_ctl *mdp5_crtc_get_ctl(struct drm_crtc *crtc) { - struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); + struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state);
- return mdp5_crtc->ctl; + return mdp5_cstate->ctl; }
struct mdp5_hw_mixer *mdp5_crtc_get_mixer(struct drm_crtc *crtc) { - struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); - return WARN_ON(!crtc) || WARN_ON(!mdp5_crtc->mixer) ? - ERR_PTR(-EINVAL) : mdp5_crtc->mixer; + struct mdp5_crtc_state *mdp5_cstate; + + if (WARN_ON(!crtc)) + return ERR_PTR(-EINVAL); + + mdp5_cstate = to_mdp5_crtc_state(crtc->state); + + return WARN_ON(!mdp5_cstate->pipeline.mixer) ? + ERR_PTR(-EINVAL) : mdp5_cstate->pipeline.mixer; }
void mdp5_crtc_wait_for_commit_done(struct drm_crtc *crtc) { - struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); + struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state);
- if (mdp5_crtc->cmd_mode) + if (mdp5_cstate->cmd_mode) mdp5_crtc_wait_for_pp_done(crtc); else mdp5_crtc_wait_for_flush_done(crtc); @@ -924,7 +933,6 @@ struct drm_crtc *mdp5_crtc_init(struct drm_device *dev, { struct drm_crtc *crtc = NULL; struct mdp5_crtc *mdp5_crtc; - struct mdp5_kms *mdp5_kms;
mdp5_crtc = kzalloc(sizeof(*mdp5_crtc), GFP_KERNEL); if (!mdp5_crtc) @@ -940,6 +948,7 @@ struct drm_crtc *mdp5_crtc_init(struct drm_device *dev,
mdp5_crtc->vblank.irq = mdp5_crtc_vblank_irq; mdp5_crtc->err.irq = mdp5_crtc_err_irq; + mdp5_crtc->pp_done.irq = mdp5_crtc_pp_done_irq;
if (cursor_plane) drm_crtc_init_with_planes(dev, crtc, plane, cursor_plane, @@ -954,8 +963,5 @@ struct drm_crtc *mdp5_crtc_init(struct drm_device *dev, drm_crtc_helper_add(crtc, &mdp5_crtc_helper_funcs); plane->crtc = crtc;
- mdp5_kms = get_kms(crtc); - mdp5_crtc->mixer = mdp5_kms->hwmixers[id]; - return crtc; } diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c index 9525c990f2dc..13e6c9721c4c 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c @@ -888,10 +888,6 @@ static int hwmixer_init(struct mdp5_kms *mdp5_kms) return ret; }
- /* Don't create LMs connected to WB for now */ - if (!mixer) - continue; - mixer->idx = mdp5_kms->num_hwmixers; mdp5_kms->hwmixers[mdp5_kms->num_hwmixers++] = mixer; } diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.c index 6f353c4886e4..9bb1f824b2a9 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.c @@ -84,10 +84,6 @@ struct mdp5_hw_mixer *mdp5_mixer_init(const struct mdp5_lm_instance *lm) { struct mdp5_hw_mixer *mixer;
- /* ignore WB bound mixers for now */ - if (lm->caps & MDP_LM_CAP_WB) - return NULL; - mixer = kzalloc(sizeof(*mixer), GFP_KERNEL); if (!mixer) return ERR_PTR(-ENOMEM);
These are a part of CRTC state, so it doesn't feel nice to leave them hanging in the mdp5_ctl struct. Pass a mdp5_pipeline pointer instead wherever it is needed.
We still have some params in mdp5_ctl (start_mask etc.) which are derived from atomic state and should ideally be rolled back if a commit fails, but this doesn't seem to cause much trouble.
Signed-off-by: Archit Taneja architt@codeaurora.org --- drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c | 15 +++--- drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c | 32 ++++++++---- drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c | 68 ++++++++++++++----------- drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.h | 16 +++--- drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c | 13 ++--- drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h | 4 +- drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c | 3 +- 7 files changed, 88 insertions(+), 63 deletions(-)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c index 18c967107bc4..8dafc7bdba48 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c @@ -132,8 +132,6 @@ void mdp5_cmd_encoder_mode_set(struct drm_encoder *encoder, struct drm_display_mode *mode, struct drm_display_mode *adjusted_mode) { - struct mdp5_encoder *mdp5_cmd_enc = to_mdp5_encoder(encoder); - mode = adjusted_mode;
DBG("set mode: %d:"%s" %d %d %d %d %d %d %d %d %d %d 0x%x 0x%x", @@ -145,8 +143,7 @@ void mdp5_cmd_encoder_mode_set(struct drm_encoder *encoder, mode->vsync_end, mode->vtotal, mode->type, mode->flags); pingpong_tearcheck_setup(encoder, mode); - mdp5_crtc_set_pipeline(encoder->crtc, mdp5_cmd_enc->intf, - mdp5_cmd_enc->ctl); + mdp5_crtc_set_pipeline(encoder->crtc); }
void mdp5_cmd_encoder_disable(struct drm_encoder *encoder) @@ -154,14 +151,15 @@ void mdp5_cmd_encoder_disable(struct drm_encoder *encoder) struct mdp5_encoder *mdp5_cmd_enc = to_mdp5_encoder(encoder); struct mdp5_ctl *ctl = mdp5_cmd_enc->ctl; struct mdp5_interface *intf = mdp5_cmd_enc->intf; + struct mdp5_pipeline *pipeline = mdp5_crtc_get_pipeline(encoder->crtc);
if (WARN_ON(!mdp5_cmd_enc->enabled)) return;
pingpong_tearcheck_disable(encoder);
- mdp5_ctl_set_encoder_state(ctl, false); - mdp5_ctl_commit(ctl, mdp_ctl_flush_mask_encoder(intf)); + mdp5_ctl_set_encoder_state(ctl, pipeline, false); + mdp5_ctl_commit(ctl, pipeline, mdp_ctl_flush_mask_encoder(intf));
bs_set(mdp5_cmd_enc, 0);
@@ -173,6 +171,7 @@ void mdp5_cmd_encoder_enable(struct drm_encoder *encoder) struct mdp5_encoder *mdp5_cmd_enc = to_mdp5_encoder(encoder); struct mdp5_ctl *ctl = mdp5_cmd_enc->ctl; struct mdp5_interface *intf = mdp5_cmd_enc->intf; + struct mdp5_pipeline *pipeline = mdp5_crtc_get_pipeline(encoder->crtc);
if (WARN_ON(mdp5_cmd_enc->enabled)) return; @@ -181,9 +180,9 @@ void mdp5_cmd_encoder_enable(struct drm_encoder *encoder) if (pingpong_tearcheck_enable(encoder)) return;
- mdp5_ctl_commit(ctl, mdp_ctl_flush_mask_encoder(intf)); + mdp5_ctl_commit(ctl, pipeline, mdp_ctl_flush_mask_encoder(intf));
- mdp5_ctl_set_encoder_state(ctl, true); + mdp5_ctl_set_encoder_state(ctl, pipeline, true);
mdp5_cmd_enc->enabled = true; } diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c index 0c9fb602f93f..f382c3405786 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c @@ -91,9 +91,10 @@ static u32 crtc_flush(struct drm_crtc *crtc, u32 flush_mask) { struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state); struct mdp5_ctl *ctl = mdp5_cstate->ctl; + struct mdp5_pipeline *pipeline = &mdp5_cstate->pipeline;
DBG("%s: flush=%08x", crtc->name, flush_mask); - return mdp5_ctl_commit(ctl, flush_mask); + return mdp5_ctl_commit(ctl, pipeline, flush_mask); }
/* @@ -126,6 +127,7 @@ static u32 crtc_flush_all(struct drm_crtc *crtc) static void complete_flip(struct drm_crtc *crtc, struct drm_file *file) { struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state); + struct mdp5_pipeline *pipeline = &mdp5_cstate->pipeline; struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); struct mdp5_ctl *ctl = mdp5_cstate->ctl; struct drm_device *dev = crtc->dev; @@ -149,7 +151,7 @@ static void complete_flip(struct drm_crtc *crtc, struct drm_file *file)
if (ctl && !crtc->state->enable) { /* set STAGE_UNUSED for all layers */ - mdp5_ctl_blend(ctl, NULL, 0, 0); + mdp5_ctl_blend(ctl, pipeline, NULL, 0, 0); /* XXX: What to do here? */ /* mdp5_crtc->ctl = NULL; */ } @@ -201,12 +203,13 @@ static void blend_setup(struct drm_crtc *crtc) { struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state); + struct mdp5_pipeline *pipeline = &mdp5_cstate->pipeline; struct mdp5_kms *mdp5_kms = get_kms(crtc); struct drm_plane *plane; const struct mdp5_cfg_hw *hw_cfg; struct mdp5_plane_state *pstate, *pstates[STAGE_MAX + 1] = {NULL}; const struct mdp_format *format; - struct mdp5_hw_mixer *mixer = mdp5_cstate->pipeline.mixer; + struct mdp5_hw_mixer *mixer = pipeline->mixer; uint32_t lm = mixer->lm; struct mdp5_ctl *ctl = mdp5_cstate->ctl; uint32_t blend_op, fg_alpha, bg_alpha, ctl_blend_flags = 0; @@ -300,7 +303,7 @@ static void blend_setup(struct drm_crtc *crtc)
mdp5_write(mdp5_kms, REG_MDP5_LM_BLEND_COLOR_OUT(lm), mixer_op_mode);
- mdp5_ctl_blend(ctl, stage, plane_cnt, ctl_blend_flags); + mdp5_ctl_blend(ctl, pipeline, stage, plane_cnt, ctl_blend_flags);
out: spin_unlock_irqrestore(&mdp5_crtc->lm_lock, flags); @@ -592,6 +595,7 @@ static int mdp5_crtc_cursor_set(struct drm_crtc *crtc, { struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state); + struct mdp5_pipeline *pipeline = &mdp5_cstate->pipeline; struct drm_device *dev = crtc->dev; struct mdp5_kms *mdp5_kms = get_kms(crtc); struct drm_gem_object *cursor_bo, *old_bo = NULL; @@ -658,7 +662,7 @@ static int mdp5_crtc_cursor_set(struct drm_crtc *crtc, spin_unlock_irqrestore(&mdp5_crtc->cursor.lock, flags);
set_cursor: - ret = mdp5_ctl_set_cursor(ctl, 0, cursor_enable); + ret = mdp5_ctl_set_cursor(ctl, pipeline, 0, cursor_enable); if (ret) { dev_err(dev->dev, "failed to %sable cursor: %d\n", cursor_enable ? "en" : "dis", ret); @@ -883,17 +887,15 @@ uint32_t mdp5_crtc_vblank(struct drm_crtc *crtc) return mdp5_crtc->vblank.irqmask; }
-void mdp5_crtc_set_pipeline(struct drm_crtc *crtc, - struct mdp5_interface *intf, struct mdp5_ctl *ctl) +void mdp5_crtc_set_pipeline(struct drm_crtc *crtc) { struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state); - struct mdp5_hw_mixer *mixer = mdp5_cstate->pipeline.mixer; struct mdp5_kms *mdp5_kms = get_kms(crtc);
/* should this be done elsewhere ? */ mdp_irq_update(&mdp5_kms->base);
- mdp5_ctl_set_pipeline(ctl, intf, mixer); + mdp5_ctl_set_pipeline(mdp5_cstate->ctl, &mdp5_cstate->pipeline); }
struct mdp5_ctl *mdp5_crtc_get_ctl(struct drm_crtc *crtc) @@ -916,6 +918,18 @@ struct mdp5_hw_mixer *mdp5_crtc_get_mixer(struct drm_crtc *crtc) ERR_PTR(-EINVAL) : mdp5_cstate->pipeline.mixer; }
+struct mdp5_pipeline *mdp5_crtc_get_pipeline(struct drm_crtc *crtc) +{ + struct mdp5_crtc_state *mdp5_cstate; + + if (WARN_ON(!crtc)) + return ERR_PTR(-EINVAL); + + mdp5_cstate = to_mdp5_crtc_state(crtc->state); + + return &mdp5_cstate->pipeline; +} + void mdp5_crtc_wait_for_commit_done(struct drm_crtc *crtc) { struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state); diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c index a86f1fd359c3..9a0109410974 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c @@ -36,14 +36,10 @@ struct mdp5_ctl { struct mdp5_ctl_manager *ctlm;
u32 id; - struct mdp5_hw_mixer *mixer;
/* CTL status bitmask */ u32 status;
- /* Operation Mode Configuration for the Pipeline */ - struct mdp5_interface *intf; - bool encoder_enabled; uint32_t start_mask;
@@ -170,14 +166,12 @@ static void set_ctl_op(struct mdp5_ctl *ctl, struct mdp5_interface *intf) spin_unlock_irqrestore(&ctl->hw_lock, flags); }
-int mdp5_ctl_set_pipeline(struct mdp5_ctl *ctl, struct mdp5_interface *intf, - struct mdp5_hw_mixer *mixer) +int mdp5_ctl_set_pipeline(struct mdp5_ctl *ctl, struct mdp5_pipeline *pipeline) { struct mdp5_ctl_manager *ctl_mgr = ctl->ctlm; struct mdp5_kms *mdp5_kms = get_kms(ctl_mgr); - - ctl->mixer = mixer; - ctl->intf = intf; + struct mdp5_interface *intf = pipeline->intf; + struct mdp5_hw_mixer *mixer = pipeline->mixer;
ctl->start_mask = mdp_ctl_flush_mask_lm(mixer->lm) | mdp_ctl_flush_mask_encoder(intf); @@ -191,16 +185,19 @@ int mdp5_ctl_set_pipeline(struct mdp5_ctl *ctl, struct mdp5_interface *intf, return 0; }
-static bool start_signal_needed(struct mdp5_ctl *ctl) +static bool start_signal_needed(struct mdp5_ctl *ctl, + struct mdp5_pipeline *pipeline) { + struct mdp5_interface *intf = pipeline->intf; + if (!ctl->encoder_enabled || ctl->start_mask != 0) return false;
- switch (ctl->intf->type) { + switch (intf->type) { case INTF_WB: return true; case INTF_DSI: - return ctl->intf->mode == MDP5_INTF_DSI_MODE_COMMAND; + return intf->mode == MDP5_INTF_DSI_MODE_COMMAND; default: return false; } @@ -222,11 +219,13 @@ static void send_start_signal(struct mdp5_ctl *ctl) spin_unlock_irqrestore(&ctl->hw_lock, flags); }
-static void refill_start_mask(struct mdp5_ctl *ctl) +static void refill_start_mask(struct mdp5_ctl *ctl, + struct mdp5_pipeline *pipeline) { - struct mdp5_interface *intf = ctl->intf; + struct mdp5_interface *intf = pipeline->intf; + struct mdp5_hw_mixer *mixer = pipeline->mixer;
- ctl->start_mask = mdp_ctl_flush_mask_lm(ctl->mixer->lm); + ctl->start_mask = mdp_ctl_flush_mask_lm(mixer->lm);
/* * Writeback encoder needs to program & flush @@ -244,17 +243,21 @@ static void refill_start_mask(struct mdp5_ctl *ctl) * Note: * This encoder state is needed to trigger START signal (data path kickoff). */ -int mdp5_ctl_set_encoder_state(struct mdp5_ctl *ctl, bool enabled) +int mdp5_ctl_set_encoder_state(struct mdp5_ctl *ctl, + struct mdp5_pipeline *pipeline, + bool enabled) { + struct mdp5_interface *intf = pipeline->intf; + if (WARN_ON(!ctl)) return -EINVAL;
ctl->encoder_enabled = enabled; - DBG("intf_%d: %s", ctl->intf->num, enabled ? "on" : "off"); + DBG("intf_%d: %s", intf->num, enabled ? "on" : "off");
- if (start_signal_needed(ctl)) { + if (start_signal_needed(ctl, pipeline)) { send_start_signal(ctl); - refill_start_mask(ctl); + refill_start_mask(ctl, pipeline); }
return 0; @@ -265,12 +268,13 @@ int mdp5_ctl_set_encoder_state(struct mdp5_ctl *ctl, bool enabled) * CTL registers need to be flushed after calling this function * (call mdp5_ctl_commit() with mdp_ctl_flush_mask_ctl() mask) */ -int mdp5_ctl_set_cursor(struct mdp5_ctl *ctl, int cursor_id, bool enable) +int mdp5_ctl_set_cursor(struct mdp5_ctl *ctl, struct mdp5_pipeline *pipeline, + int cursor_id, bool enable) { struct mdp5_ctl_manager *ctl_mgr = ctl->ctlm; unsigned long flags; u32 blend_cfg; - struct mdp5_hw_mixer *mixer = ctl->mixer; + struct mdp5_hw_mixer *mixer = pipeline->mixer;
if (unlikely(WARN_ON(!mixer))) { dev_err(ctl_mgr->dev->dev, "CTL %d cannot find LM", @@ -340,10 +344,10 @@ static u32 mdp_ctl_blend_ext_mask(enum mdp5_pipe pipe, } }
-int mdp5_ctl_blend(struct mdp5_ctl *ctl, enum mdp5_pipe *stage, u32 stage_cnt, - u32 ctl_blend_op_flags) +int mdp5_ctl_blend(struct mdp5_ctl *ctl, struct mdp5_pipeline *pipeline, + enum mdp5_pipe *stage, u32 stage_cnt, u32 ctl_blend_op_flags) { - struct mdp5_hw_mixer *mixer = ctl->mixer; + struct mdp5_hw_mixer *mixer = pipeline->mixer; unsigned long flags; u32 blend_cfg = 0, blend_ext_cfg = 0; int i, start_stage; @@ -430,7 +434,8 @@ u32 mdp_ctl_flush_mask_lm(int lm) } }
-static u32 fix_sw_flush(struct mdp5_ctl *ctl, u32 flush_mask) +static u32 fix_sw_flush(struct mdp5_ctl *ctl, struct mdp5_pipeline *pipeline, + u32 flush_mask) { struct mdp5_ctl_manager *ctl_mgr = ctl->ctlm; u32 sw_mask = 0; @@ -439,7 +444,7 @@ static u32 fix_sw_flush(struct mdp5_ctl *ctl, u32 flush_mask)
/* for some targets, cursor bit is the same as LM bit */ if (BIT_NEEDS_SW_FIX(MDP5_CTL_FLUSH_CURSOR_0)) - sw_mask |= mdp_ctl_flush_mask_lm(ctl->mixer->lm); + sw_mask |= mdp_ctl_flush_mask_lm(pipeline->mixer->lm);
return sw_mask; } @@ -485,7 +490,9 @@ static void fix_for_single_flush(struct mdp5_ctl *ctl, u32 *flush_mask, * * Return H/W flushed bit mask. */ -u32 mdp5_ctl_commit(struct mdp5_ctl *ctl, u32 flush_mask) +u32 mdp5_ctl_commit(struct mdp5_ctl *ctl, + struct mdp5_pipeline *pipeline, + u32 flush_mask) { struct mdp5_ctl_manager *ctl_mgr = ctl->ctlm; unsigned long flags; @@ -502,7 +509,7 @@ u32 mdp5_ctl_commit(struct mdp5_ctl *ctl, u32 flush_mask) ctl->pending_ctl_trigger = 0; }
- flush_mask |= fix_sw_flush(ctl, flush_mask); + flush_mask |= fix_sw_flush(ctl, pipeline, flush_mask);
flush_mask &= ctl_mgr->flush_hw_mask;
@@ -516,9 +523,9 @@ u32 mdp5_ctl_commit(struct mdp5_ctl *ctl, u32 flush_mask) spin_unlock_irqrestore(&ctl->hw_lock, flags); }
- if (start_signal_needed(ctl)) { + if (start_signal_needed(ctl, pipeline)) { send_start_signal(ctl); - refill_start_mask(ctl); + refill_start_mask(ctl, pipeline); }
return curr_ctl_flush_mask; @@ -605,7 +612,6 @@ struct mdp5_ctl *mdp5_ctlm_request(struct mdp5_ctl_manager *ctl_mgr,
found: ctl = &ctl_mgr->ctls[c]; - ctl->mixer = NULL; ctl->status |= CTL_STAT_BUSY; ctl->pending_ctl_trigger = 0; DBG("CTL %d allocated", ctl->id); diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.h b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.h index 882c9d2be365..751dd861cfd8 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.h +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.h @@ -37,11 +37,13 @@ struct mdp5_ctl *mdp5_ctlm_request(struct mdp5_ctl_manager *ctlm, int intf_num); int mdp5_ctl_get_ctl_id(struct mdp5_ctl *ctl);
struct mdp5_interface; -int mdp5_ctl_set_pipeline(struct mdp5_ctl *ctl, struct mdp5_interface *intf, - struct mdp5_hw_mixer *lm); -int mdp5_ctl_set_encoder_state(struct mdp5_ctl *ctl, bool enabled); +struct mdp5_pipeline; +int mdp5_ctl_set_pipeline(struct mdp5_ctl *ctl, struct mdp5_pipeline *p); +int mdp5_ctl_set_encoder_state(struct mdp5_ctl *ctl, struct mdp5_pipeline *p, + bool enabled);
-int mdp5_ctl_set_cursor(struct mdp5_ctl *ctl, int cursor_id, bool enable); +int mdp5_ctl_set_cursor(struct mdp5_ctl *ctl, struct mdp5_pipeline *pipeline, + int cursor_id, bool enable); int mdp5_ctl_pair(struct mdp5_ctl *ctlx, struct mdp5_ctl *ctly, bool enable);
/* @@ -56,7 +58,8 @@ int mdp5_ctl_pair(struct mdp5_ctl *ctlx, struct mdp5_ctl *ctly, bool enable); * (call mdp5_ctl_commit() with mdp_ctl_flush_mask_ctl() mask) */ #define MDP5_CTL_BLEND_OP_FLAG_BORDER_OUT BIT(0) -int mdp5_ctl_blend(struct mdp5_ctl *ctl, enum mdp5_pipe *stage, u32 stage_cnt, +int mdp5_ctl_blend(struct mdp5_ctl *ctl, struct mdp5_pipeline *pipeline, + enum mdp5_pipe *stage, u32 stage_cnt, u32 ctl_blend_op_flags);
/** @@ -71,7 +74,8 @@ u32 mdp_ctl_flush_mask_cursor(int cursor_id); u32 mdp_ctl_flush_mask_encoder(struct mdp5_interface *intf);
/* @flush_mask: see CTL flush masks definitions below */ -u32 mdp5_ctl_commit(struct mdp5_ctl *ctl, u32 flush_mask); +u32 mdp5_ctl_commit(struct mdp5_ctl *ctl, struct mdp5_pipeline *pipeline, + u32 flush_mask); u32 mdp5_ctl_get_commit_status(struct mdp5_ctl *ctl);
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c index 6d535140ef21..c2ab0f033031 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c @@ -206,8 +206,7 @@ static void mdp5_vid_encoder_mode_set(struct drm_encoder *encoder,
spin_unlock_irqrestore(&mdp5_encoder->intf_lock, flags);
- mdp5_crtc_set_pipeline(encoder->crtc, mdp5_encoder->intf, - mdp5_encoder->ctl); + mdp5_crtc_set_pipeline(encoder->crtc); }
static void mdp5_vid_encoder_disable(struct drm_encoder *encoder) @@ -215,6 +214,7 @@ static void mdp5_vid_encoder_disable(struct drm_encoder *encoder) struct mdp5_encoder *mdp5_encoder = to_mdp5_encoder(encoder); struct mdp5_kms *mdp5_kms = get_kms(encoder); struct mdp5_ctl *ctl = mdp5_encoder->ctl; + struct mdp5_pipeline *pipeline = mdp5_crtc_get_pipeline(encoder->crtc); struct mdp5_hw_mixer *mixer = mdp5_crtc_get_mixer(encoder->crtc); struct mdp5_interface *intf = mdp5_encoder->intf; int intfn = mdp5_encoder->intf->num; @@ -223,12 +223,12 @@ static void mdp5_vid_encoder_disable(struct drm_encoder *encoder) if (WARN_ON(!mdp5_encoder->enabled)) return;
- mdp5_ctl_set_encoder_state(ctl, false); + mdp5_ctl_set_encoder_state(ctl, pipeline, false);
spin_lock_irqsave(&mdp5_encoder->intf_lock, flags); mdp5_write(mdp5_kms, REG_MDP5_INTF_TIMING_ENGINE_EN(intfn), 0); spin_unlock_irqrestore(&mdp5_encoder->intf_lock, flags); - mdp5_ctl_commit(ctl, mdp_ctl_flush_mask_encoder(intf)); + mdp5_ctl_commit(ctl, pipeline, mdp_ctl_flush_mask_encoder(intf));
/* * Wait for a vsync so we know the ENABLE=0 latched before @@ -251,6 +251,7 @@ static void mdp5_vid_encoder_enable(struct drm_encoder *encoder) struct mdp5_kms *mdp5_kms = get_kms(encoder); struct mdp5_ctl *ctl = mdp5_encoder->ctl; struct mdp5_interface *intf = mdp5_encoder->intf; + struct mdp5_pipeline *pipeline = mdp5_crtc_get_pipeline(encoder->crtc); int intfn = intf->num; unsigned long flags;
@@ -261,9 +262,9 @@ static void mdp5_vid_encoder_enable(struct drm_encoder *encoder) spin_lock_irqsave(&mdp5_encoder->intf_lock, flags); mdp5_write(mdp5_kms, REG_MDP5_INTF_TIMING_ENGINE_EN(intfn), 1); spin_unlock_irqrestore(&mdp5_encoder->intf_lock, flags); - mdp5_ctl_commit(ctl, mdp_ctl_flush_mask_encoder(intf)); + mdp5_ctl_commit(ctl, pipeline, mdp_ctl_flush_mask_encoder(intf));
- mdp5_ctl_set_encoder_state(ctl, true); + mdp5_ctl_set_encoder_state(ctl, pipeline, true);
mdp5_encoder->enabled = true; } diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h index c31c193e59f2..7c899cc7388a 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h @@ -286,8 +286,8 @@ struct mdp5_ctl *mdp5_crtc_get_ctl(struct drm_crtc *crtc); uint32_t mdp5_crtc_vblank(struct drm_crtc *crtc);
struct mdp5_hw_mixer *mdp5_crtc_get_mixer(struct drm_crtc *crtc); -void mdp5_crtc_set_pipeline(struct drm_crtc *crtc, - struct mdp5_interface *intf, struct mdp5_ctl *ctl); +struct mdp5_pipeline *mdp5_crtc_get_pipeline(struct drm_crtc *crtc); +void mdp5_crtc_set_pipeline(struct drm_crtc *crtc); void mdp5_crtc_wait_for_commit_done(struct drm_crtc *crtc); struct drm_crtc *mdp5_crtc_init(struct drm_device *dev, struct drm_plane *plane, diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c index 964b2e3089c3..3dca3c07770f 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c @@ -935,6 +935,7 @@ static int mdp5_update_cursor_plane_legacy(struct drm_plane *plane,
if (new_plane_state->visible) { struct mdp5_ctl *ctl; + struct mdp5_pipeline *pipeline = mdp5_crtc_get_pipeline(crtc);
ret = mdp5_plane_mode_set(plane, crtc, fb, &new_plane_state->src, @@ -943,7 +944,7 @@ static int mdp5_update_cursor_plane_legacy(struct drm_plane *plane,
ctl = mdp5_crtc_get_ctl(crtc);
- mdp5_ctl_commit(ctl, mdp5_plane_get_flush(plane)); + mdp5_ctl_commit(ctl, pipeline, mdp5_plane_get_flush(plane)); }
*to_mdp5_plane_state(plane_state) =
Some of the newer MDP5 versions support Source Split of SSPPs. It is a feature that allows us to route the output of a hwpipe to 2 Layer Mixers. This is required to achieve the following use cases:
- Dual DSI: For high res DSI panels (such as 2560x1600), a single DSI interface doesn't have the bandwidth to drive the required pixel clock. We use 2 DSI interfaces to drive the left and right halves of the panel (i.e., 1280x1600 each). The MDP5 pipeline here would look like:
          LM0 -- DSPP0 -- INTF1 -- DSI1
         /
 hwpipe--
         \
          LM1 -- DSPP1 -- INTF2 -- DSI2
A single hwpipe is used to scan out the left and right halves to DSI1 and DSI2 respectively. In order to do this, we need to configure the 2 Layer Mixers in Source Split mode.
- HDMI 4K: In order to support resolutions with width higher than the max width supported by a hwpipe, we club 2 hwpipes together:
 hwpipe1 --- LM0 -- DSPP0 --
                            \
                             -- 3D Mux -- INTF0 -- HDMI
                            /
 hwpipe2 --- LM1 -- DSPP1 --
hwpipe1 is staged on the 'left' Layer Mixer, and hwpipe2 is staged on the 'right' Layer Mixer. An additional block called the '3D Mux' is used to merge the output of the 2 DSPPs into a single interface. In this use case, it is possible that a 4K surface is downscaled and placed completely within one of the halves. In order to support such scenarios (and keep the programming simple), Layer Mixers with Source Split can be assigned 2 hw pipes per stage. While scanning out, the HW takes care of fetching the pixels from the correct pipe.
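In code, the staging array used later in the series ends up two-dimensional, roughly as sketched below (PIPE_LEFT/PIPE_RIGHT and MAX_PIPE_STAGE are introduced in a later patch; the specific SSPP names are arbitrary examples):

static void example_stage_source_split(void)
{
	enum mdp5_pipe stage[STAGE_MAX + 1][MAX_PIPE_STAGE] = { { SSPP_NONE } };

	/* one blend stage, two fetch pipes on a source-split LM pair */
	stage[STAGE0][PIPE_LEFT]  = SSPP_VIG0;	/* scans out the left half */
	stage[STAGE0][PIPE_RIGHT] = SSPP_VIG1;	/* scans out the right half */
}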
Add an MDP cap to tell whether the HW supports source split or not. Add an MDP LM cap that tells whether an LM instance can operate in source split mode (and generate the 'left' part of the display output).
Signed-off-by: Archit Taneja architt@codeaurora.org --- drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.c | 21 +++++++++++++++------ drivers/gpu/drm/msm/mdp/mdp_kms.h | 2 ++ 2 files changed, 17 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.c index e7b15846457c..735232765723 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.c @@ -191,6 +191,7 @@ const struct mdp5_cfg_hw apq8084_config = { .mdp = { .count = 1, .caps = MDP_CAP_SMP | + MDP_CAP_SRC_SPLIT | 0, }, .smp = { @@ -237,11 +238,13 @@ const struct mdp5_cfg_hw apq8084_config = { .base = { 0x03900, 0x03d00, 0x04100, 0x04500, 0x04900, 0x04d00 }, .instances = { { .id = 0, .pp = 0, .dspp = 0, - .caps = MDP_LM_CAP_DISPLAY, }, + .caps = MDP_LM_CAP_DISPLAY | + MDP_LM_CAP_PAIR, }, { .id = 1, .pp = 1, .dspp = 1, .caps = MDP_LM_CAP_DISPLAY, }, { .id = 2, .pp = 2, .dspp = 2, - .caps = MDP_LM_CAP_DISPLAY, }, + .caps = MDP_LM_CAP_DISPLAY | + MDP_LM_CAP_PAIR, }, { .id = 3, .pp = -1, .dspp = -1, .caps = MDP_LM_CAP_WB, }, { .id = 4, .pp = -1, .dspp = -1, @@ -350,6 +353,7 @@ const struct mdp5_cfg_hw msm8x94_config = { .mdp = { .count = 1, .caps = MDP_CAP_SMP | + MDP_CAP_SRC_SPLIT | 0, }, .smp = { @@ -396,11 +400,13 @@ const struct mdp5_cfg_hw msm8x94_config = { .base = { 0x44000, 0x45000, 0x46000, 0x47000, 0x48000, 0x49000 }, .instances = { { .id = 0, .pp = 0, .dspp = 0, - .caps = MDP_LM_CAP_DISPLAY, }, + .caps = MDP_LM_CAP_DISPLAY | + MDP_LM_CAP_PAIR, }, { .id = 1, .pp = 1, .dspp = 1, .caps = MDP_LM_CAP_DISPLAY, }, { .id = 2, .pp = 2, .dspp = 2, - .caps = MDP_LM_CAP_DISPLAY, }, + .caps = MDP_LM_CAP_DISPLAY | + MDP_LM_CAP_PAIR, }, { .id = 3, .pp = -1, .dspp = -1, .caps = MDP_LM_CAP_WB, }, { .id = 4, .pp = -1, .dspp = -1, @@ -443,6 +449,7 @@ const struct mdp5_cfg_hw msm8x96_config = { .count = 1, .caps = MDP_CAP_DSC | MDP_CAP_CDM | + MDP_CAP_SRC_SPLIT | 0, }, .ctl = { @@ -494,11 +501,13 @@ const struct mdp5_cfg_hw msm8x96_config = { .base = { 0x44000, 0x45000, 0x46000, 0x47000, 0x48000, 0x49000 }, .instances = { { .id = 0, .pp = 0, .dspp = 0, - .caps = MDP_LM_CAP_DISPLAY }, + .caps = MDP_LM_CAP_DISPLAY | + MDP_LM_CAP_PAIR, }, { .id = 1, .pp = 1, .dspp = 1, .caps = MDP_LM_CAP_DISPLAY, }, { .id = 2, .pp = 2, .dspp = -1, - .caps = MDP_LM_CAP_DISPLAY }, + .caps = MDP_LM_CAP_DISPLAY | + MDP_LM_CAP_PAIR, }, { .id = 3, .pp = -1, .dspp = -1, .caps = MDP_LM_CAP_WB, }, { .id = 4, .pp = -1, .dspp = -1, diff --git a/drivers/gpu/drm/msm/mdp/mdp_kms.h b/drivers/gpu/drm/msm/mdp/mdp_kms.h index bf4db664ee86..1185487e7e5e 100644 --- a/drivers/gpu/drm/msm/mdp/mdp_kms.h +++ b/drivers/gpu/drm/msm/mdp/mdp_kms.h @@ -104,6 +104,7 @@ const struct msm_format *mdp_get_format(struct msm_kms *kms, uint32_t format); #define MDP_CAP_SMP BIT(0) /* Shared Memory Pool */ #define MDP_CAP_DSC BIT(1) /* VESA Display Stream Compression */ #define MDP_CAP_CDM BIT(2) /* Chroma Down Module (HDMI 2.0 YUV) */ +#define MDP_CAP_SRC_SPLIT BIT(3) /* Source Split of SSPPs */
/* MDP pipe capabilities */ #define MDP_PIPE_CAP_HFLIP BIT(0) @@ -117,6 +118,7 @@ const struct msm_format *mdp_get_format(struct msm_kms *kms, uint32_t format); /* MDP layer mixer caps */ #define MDP_LM_CAP_DISPLAY BIT(0) #define MDP_LM_CAP_WB BIT(1) +#define MDP_LM_CAP_PAIR BIT(2)
static inline bool pipe_supports_yuv(uint32_t pipe_caps) {
Add another mdp5_hw_mixer pointer (r_mixer) in mdp5_crtc_state. This mixer will be used to generate the right half of the scanout.
With Source Split, a SSPP can now be connected to 2 Layer Mixers, but has to be at the same blend level (stage #) on both Layer Mixers.
A drm_plane that has a smaller width than the max width supported will consist of a single SSPP/hwpipe, staged on both Layer Mixers at the same blend level. A plane that is wider than the max width will consist of 2 SSPPs, with the 'left' SSPP staged on the left LM, and the 'right' SSPP staged on the right LM at the same blend level.
For now, the drm_plane consists of only one SSPP, so it needs to be staged on both LMs in blend_setup() and mdp5_ctl_blend(). We'll extend this logic later to support 2 hwpipes per plane.
The CRTC cursor ops (which use the LM cursors, not SSPP cursors) simply return an error if they're called while a right mixer is assigned to the CRTC state. With source split enabled, we're expected to use only SSPP cursors.
This commit adds code that configures the right mixer, but the r_mixer itself isn't assigned at the moment.
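To visualize the path described above, a rough sketch (names taken from the structs in the diff below, not an authoritative diagram):

	/*
	 * Source split scanout path for one CRTC:
	 *
	 *   SSPP --+--> pipeline->mixer   (left LM:  left half, stage N)
	 *          |
	 *          +--> pipeline->r_mixer (right LM: right half, same stage N)
	 *
	 * Both LM outputs are merged further downstream and fed to
	 * pipeline->intf.
	 */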
Signed-off-by: Archit Taneja architt@codeaurora.org --- drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c | 62 ++++++++++++++++++++++++++++---- drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c | 44 +++++++++++++++++++++-- drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.h | 7 ++-- drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h | 1 + 4 files changed, 103 insertions(+), 11 deletions(-)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c index f382c3405786..5bffef6ec9dc 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c @@ -105,7 +105,7 @@ static u32 crtc_flush(struct drm_crtc *crtc, u32 flush_mask) static u32 crtc_flush_all(struct drm_crtc *crtc) { struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state); - struct mdp5_hw_mixer *mixer; + struct mdp5_hw_mixer *mixer, *r_mixer; struct drm_plane *plane; uint32_t flush_mask = 0;
@@ -120,6 +120,10 @@ static u32 crtc_flush_all(struct drm_crtc *crtc) mixer = mdp5_cstate->pipeline.mixer; flush_mask |= mdp_ctl_flush_mask_lm(mixer->lm);
+ r_mixer = mdp5_cstate->pipeline.r_mixer; + if (r_mixer) + flush_mask |= mdp_ctl_flush_mask_lm(r_mixer->lm); + return crtc_flush(crtc, flush_mask); }
@@ -151,7 +155,7 @@ static void complete_flip(struct drm_crtc *crtc, struct drm_file *file)
if (ctl && !crtc->state->enable) { /* set STAGE_UNUSED for all layers */ - mdp5_ctl_blend(ctl, pipeline, NULL, 0, 0); + mdp5_ctl_blend(ctl, pipeline, NULL, NULL, 0, 0); /* XXX: What to do here? */ /* mdp5_crtc->ctl = NULL; */ } @@ -193,6 +197,12 @@ static inline u32 mdp5_lm_use_fg_alpha_mask(enum mdp_mixer_stage_id stage) }
/* + * left/right pipe offsets for the stage array used in blend_setup() + */ +#define PIPE_LEFT 0 +#define PIPE_RIGHT 1 + +/* * blend_setup() - blend all the planes of a CRTC * * If no base layer is available, border will be enabled as the base layer. @@ -211,10 +221,13 @@ static void blend_setup(struct drm_crtc *crtc) const struct mdp_format *format; struct mdp5_hw_mixer *mixer = pipeline->mixer; uint32_t lm = mixer->lm; + struct mdp5_hw_mixer *r_mixer = pipeline->r_mixer; + uint32_t r_lm = r_mixer ? r_mixer->lm : 0; struct mdp5_ctl *ctl = mdp5_cstate->ctl; uint32_t blend_op, fg_alpha, bg_alpha, ctl_blend_flags = 0; unsigned long flags; - enum mdp5_pipe stage[STAGE_MAX + 1] = { SSPP_NONE }; + enum mdp5_pipe stage[STAGE_MAX + 1][MAX_PIPE_STAGE] = { SSPP_NONE }; + enum mdp5_pipe r_stage[STAGE_MAX + 1][MAX_PIPE_STAGE] = { SSPP_NONE }; int i, plane_cnt = 0; bool bg_alpha_enabled = false; u32 mixer_op_mode = 0; @@ -233,7 +246,15 @@ static void blend_setup(struct drm_crtc *crtc) drm_atomic_crtc_for_each_plane(plane, crtc) { pstate = to_mdp5_plane_state(plane->state); pstates[pstate->stage] = pstate; - stage[pstate->stage] = mdp5_plane_pipe(plane); + stage[pstate->stage][PIPE_LEFT] = mdp5_plane_pipe(plane); + /* + * if we have a right mixer, stage the same pipe as we + * have on the left mixer + */ + if (r_mixer) + r_stage[pstate->stage][PIPE_LEFT] = + mdp5_plane_pipe(plane); + plane_cnt++; }
@@ -299,12 +320,23 @@ static void blend_setup(struct drm_crtc *crtc) blender(i)), fg_alpha); mdp5_write(mdp5_kms, REG_MDP5_LM_BLEND_BG_ALPHA(lm, blender(i)), bg_alpha); + if (r_mixer) { + mdp5_write(mdp5_kms, REG_MDP5_LM_BLEND_OP_MODE(r_lm, + blender(i)), blend_op); + mdp5_write(mdp5_kms, REG_MDP5_LM_BLEND_FG_ALPHA(r_lm, + blender(i)), fg_alpha); + mdp5_write(mdp5_kms, REG_MDP5_LM_BLEND_BG_ALPHA(r_lm, + blender(i)), bg_alpha); + } }
mdp5_write(mdp5_kms, REG_MDP5_LM_BLEND_COLOR_OUT(lm), mixer_op_mode); + if (r_mixer) + mdp5_write(mdp5_kms, REG_MDP5_LM_BLEND_COLOR_OUT(r_lm), + mixer_op_mode);
- mdp5_ctl_blend(ctl, pipeline, stage, plane_cnt, ctl_blend_flags); - + mdp5_ctl_blend(ctl, pipeline, stage, r_stage, plane_cnt, + ctl_blend_flags); out: spin_unlock_irqrestore(&mdp5_crtc->lm_lock, flags); } @@ -315,6 +347,7 @@ static void mdp5_crtc_mode_set_nofb(struct drm_crtc *crtc) struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state); struct mdp5_kms *mdp5_kms = get_kms(crtc); struct mdp5_hw_mixer *mixer = mdp5_cstate->pipeline.mixer; + struct mdp5_hw_mixer *r_mixer = mdp5_cstate->pipeline.r_mixer; uint32_t lm = mixer->lm; unsigned long flags; struct drm_display_mode *mode; @@ -337,6 +370,10 @@ static void mdp5_crtc_mode_set_nofb(struct drm_crtc *crtc) mdp5_write(mdp5_kms, REG_MDP5_LM_OUT_SIZE(lm), MDP5_LM_OUT_SIZE_WIDTH(mode->hdisplay) | MDP5_LM_OUT_SIZE_HEIGHT(mode->vdisplay)); + if (r_mixer) + mdp5_write(mdp5_kms, REG_MDP5_LM_OUT_SIZE(r_mixer->lm), + MDP5_LM_OUT_SIZE_WIDTH(mode->hdisplay) | + MDP5_LM_OUT_SIZE_HEIGHT(mode->vdisplay)); spin_unlock_irqrestore(&mdp5_crtc->lm_lock, flags); }
@@ -618,6 +655,10 @@ static int mdp5_crtc_cursor_set(struct drm_crtc *crtc, if (!ctl) return -EINVAL;
+ /* don't support LM cursors when we we have source split enabled */ + if (mdp5_cstate->pipeline.r_mixer) + return -EINVAL; + if (!handle) { DBG("Cursor off"); cursor_enable = false; @@ -691,6 +732,10 @@ static int mdp5_crtc_cursor_move(struct drm_crtc *crtc, int x, int y) uint32_t roi_h; unsigned long flags;
+ /* don't support LM cursors when we we have source split enabled */ + if (mdp5_cstate->pipeline.r_mixer) + return -EINVAL; + /* In case the CRTC is disabled, just drop the cursor update */ if (unlikely(!crtc->state->enable)) return 0; @@ -720,12 +765,17 @@ mdp5_crtc_atomic_print_state(struct drm_printer *p, { struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(state); struct mdp5_pipeline *pipeline = &mdp5_cstate->pipeline; + struct mdp5_kms *mdp5_kms = get_kms(state->crtc);
if (WARN_ON(!pipeline)) return;
drm_printf(p, "\thwmixer=%s\n", pipeline->mixer ? pipeline->mixer->name : "(null)"); + + if (mdp5_kms->caps & MDP_CAP_SRC_SPLIT) + drm_printf(p, "\tright hwmixer=%s\n", pipeline->r_mixer ? + pipeline->r_mixer->name : "(null)"); }
static void mdp5_crtc_reset(struct drm_crtc *crtc) diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c index 9a0109410974..1fdbb936877f 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c @@ -172,9 +172,12 @@ int mdp5_ctl_set_pipeline(struct mdp5_ctl *ctl, struct mdp5_pipeline *pipeline) struct mdp5_kms *mdp5_kms = get_kms(ctl_mgr); struct mdp5_interface *intf = pipeline->intf; struct mdp5_hw_mixer *mixer = pipeline->mixer; + struct mdp5_hw_mixer *r_mixer = pipeline->r_mixer;
ctl->start_mask = mdp_ctl_flush_mask_lm(mixer->lm) | mdp_ctl_flush_mask_encoder(intf); + if (r_mixer) + ctl->start_mask |= mdp_ctl_flush_mask_lm(r_mixer->lm);
/* Virtual interfaces need not set a display intf (e.g.: Writeback) */ if (!mdp5_cfg_intf_is_virtual(intf->type)) @@ -224,8 +227,11 @@ static void refill_start_mask(struct mdp5_ctl *ctl, { struct mdp5_interface *intf = pipeline->intf; struct mdp5_hw_mixer *mixer = pipeline->mixer; + struct mdp5_hw_mixer *r_mixer = pipeline->r_mixer;
ctl->start_mask = mdp_ctl_flush_mask_lm(mixer->lm); + if (r_mixer) + ctl->start_mask |= mdp_ctl_flush_mask_lm(r_mixer->lm);
/* * Writeback encoder needs to program & flush @@ -282,6 +288,11 @@ int mdp5_ctl_set_cursor(struct mdp5_ctl *ctl, struct mdp5_pipeline *pipeline, return -EINVAL; }
+ if (pipeline->r_mixer) { + dev_err(ctl_mgr->dev->dev, "unsupported configuration"); + return -EINVAL; + } + spin_lock_irqsave(&ctl->hw_lock, flags);
blend_cfg = ctl_read(ctl, REG_MDP5_CTL_LAYER_REG(ctl->id, mixer->lm)); @@ -344,24 +355,40 @@ static u32 mdp_ctl_blend_ext_mask(enum mdp5_pipe pipe, } }
+#define PIPE_LEFT 0 +#define PIPE_RIGHT 1 int mdp5_ctl_blend(struct mdp5_ctl *ctl, struct mdp5_pipeline *pipeline, - enum mdp5_pipe *stage, u32 stage_cnt, u32 ctl_blend_op_flags) + enum mdp5_pipe stage[][MAX_PIPE_STAGE], + enum mdp5_pipe r_stage[][MAX_PIPE_STAGE], + u32 stage_cnt, u32 ctl_blend_op_flags) { struct mdp5_hw_mixer *mixer = pipeline->mixer; + struct mdp5_hw_mixer *r_mixer = pipeline->r_mixer; unsigned long flags; u32 blend_cfg = 0, blend_ext_cfg = 0; + u32 r_blend_cfg = 0, r_blend_ext_cfg = 0; int i, start_stage;
if (ctl_blend_op_flags & MDP5_CTL_BLEND_OP_FLAG_BORDER_OUT) { start_stage = STAGE0; blend_cfg |= MDP5_CTL_LAYER_REG_BORDER_COLOR; + if (r_mixer) + r_blend_cfg |= MDP5_CTL_LAYER_REG_BORDER_COLOR; } else { start_stage = STAGE_BASE; }
for (i = start_stage; stage_cnt && i <= STAGE_MAX; i++) { - blend_cfg |= mdp_ctl_blend_mask(stage[i], i); - blend_ext_cfg |= mdp_ctl_blend_ext_mask(stage[i], i); + blend_cfg |= + mdp_ctl_blend_mask(stage[i][PIPE_LEFT], i); + blend_ext_cfg |= + mdp_ctl_blend_ext_mask(stage[i][PIPE_LEFT], i); + if (r_mixer) { + r_blend_cfg |= + mdp_ctl_blend_mask(r_stage[i][PIPE_LEFT], i); + r_blend_ext_cfg |= + mdp_ctl_blend_ext_mask(r_stage[i][PIPE_LEFT], i); + } }
spin_lock_irqsave(&ctl->hw_lock, flags); @@ -371,12 +398,23 @@ int mdp5_ctl_blend(struct mdp5_ctl *ctl, struct mdp5_pipeline *pipeline, ctl_write(ctl, REG_MDP5_CTL_LAYER_REG(ctl->id, mixer->lm), blend_cfg); ctl_write(ctl, REG_MDP5_CTL_LAYER_EXT_REG(ctl->id, mixer->lm), blend_ext_cfg); + if (r_mixer) { + ctl_write(ctl, REG_MDP5_CTL_LAYER_REG(ctl->id, r_mixer->lm), + r_blend_cfg); + ctl_write(ctl, REG_MDP5_CTL_LAYER_EXT_REG(ctl->id, r_mixer->lm), + r_blend_ext_cfg); + } spin_unlock_irqrestore(&ctl->hw_lock, flags);
ctl->pending_ctl_trigger = mdp_ctl_flush_mask_lm(mixer->lm); + if (r_mixer) + ctl->pending_ctl_trigger |= mdp_ctl_flush_mask_lm(r_mixer->lm);
DBG("lm%d: blend config = 0x%08x. ext_cfg = 0x%08x", mixer->lm, blend_cfg, blend_ext_cfg); + if (r_mixer) + DBG("lm%d: blend config = 0x%08x. ext_cfg = 0x%08x", + r_mixer->lm, r_blend_cfg, r_blend_ext_cfg);
return 0; } diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.h b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.h index 751dd861cfd8..b63120388dc6 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.h +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.h @@ -46,6 +46,8 @@ int mdp5_ctl_set_cursor(struct mdp5_ctl *ctl, struct mdp5_pipeline *pipeline, int cursor_id, bool enable); int mdp5_ctl_pair(struct mdp5_ctl *ctlx, struct mdp5_ctl *ctly, bool enable);
+#define MAX_PIPE_STAGE 2 + /* * mdp5_ctl_blend() - Blend multiple layers on a Layer Mixer (LM) * @@ -59,8 +61,9 @@ int mdp5_ctl_pair(struct mdp5_ctl *ctlx, struct mdp5_ctl *ctly, bool enable); */ #define MDP5_CTL_BLEND_OP_FLAG_BORDER_OUT BIT(0) int mdp5_ctl_blend(struct mdp5_ctl *ctl, struct mdp5_pipeline *pipeline, - enum mdp5_pipe *stage, u32 stage_cnt, - u32 ctl_blend_op_flags); + enum mdp5_pipe stage[][MAX_PIPE_STAGE], + enum mdp5_pipe r_stage[][MAX_PIPE_STAGE], + u32 stage_cnt, u32 ctl_blend_op_flags);
/** * mdp_ctl_flush_mask...() - Register FLUSH masks diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h index 7c899cc7388a..759cf069f2ab 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h @@ -125,6 +125,7 @@ struct mdp5_plane_state { struct mdp5_pipeline { struct mdp5_interface *intf; struct mdp5_hw_mixer *mixer; + struct mdp5_hw_mixer *r_mixer; /* right mixer */ };
struct mdp5_crtc_state {
Refactor mdp5_plane_mode_set to call mdp5_hwpipe_mode_set. The latter func takes in only the hwpipe and the parameters that need to be programmed into the hwpipe registers. All the code that calculates these parameters is left as is in mdp5_plane_mode_set.
In the future, when we let a drm_plane be comprised of 2 hwpipes, this func will allow us to configure each pipe without adding redundant code.
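As a sketch of the resulting call pattern (arguments copied from the diffs in this and a later patch of the series; the right-hwpipe call doesn't exist yet at this point):

	/* mdp5_plane_mode_set() keeps all the parameter computation and
	 * ends with one register-programming call per hwpipe */
	mdp5_hwpipe_mode_set(mdp5_kms, hwpipe, fb, &step, &pe,
			     config, hdecm, vdecm, hflip, vflip,
			     crtc_x, crtc_y, crtc_w, crtc_h,
			     src_img_w, src_img_h,
			     src_x, src_y, src_w, src_h);

	if (right_hwpipe)	/* added later in this series */
		mdp5_hwpipe_mode_set(mdp5_kms, right_hwpipe, fb, &step, &pe,
				     config, hdecm, vdecm, hflip, vflip,
				     crtc_x_r, crtc_y, crtc_w, crtc_h,
				     src_img_w, src_img_h,
				     src_x_r, src_y, src_w, src_h);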
Signed-off-by: Archit Taneja architt@codeaurora.org --- drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c | 227 +++++++++++++++++------------- 1 file changed, 130 insertions(+), 97 deletions(-)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c index 3dca3c07770f..e52bee877198 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c @@ -41,9 +41,6 @@ static int mdp5_update_cursor_plane_legacy(struct drm_plane *plane, uint32_t src_x, uint32_t src_y, uint32_t src_w, uint32_t src_h);
-static void set_scanout_locked(struct drm_plane *plane, - struct drm_framebuffer *fb); - static struct mdp5_kms *get_kms(struct drm_plane *plane) { struct msm_drm_private *priv = plane->dev->dev_private; @@ -438,13 +435,10 @@ static const struct drm_plane_helper_funcs mdp5_plane_helper_funcs = { .atomic_update = mdp5_plane_atomic_update, };
-static void set_scanout_locked(struct drm_plane *plane, - struct drm_framebuffer *fb) +static void set_scanout_locked(struct mdp5_kms *mdp5_kms, + enum mdp5_pipe pipe, + struct drm_framebuffer *fb) { - struct mdp5_kms *mdp5_kms = get_kms(plane); - struct mdp5_hw_pipe *hwpipe = to_mdp5_plane_state(plane->state)->hwpipe; - enum mdp5_pipe pipe = hwpipe->pipe; - mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC_STRIDE_A(pipe), MDP5_PIPE_SRC_STRIDE_A_P0(fb->pitches[0]) | MDP5_PIPE_SRC_STRIDE_A_P1(fb->pitches[1])); @@ -461,8 +455,6 @@ static void set_scanout_locked(struct drm_plane *plane, msm_framebuffer_iova(fb, mdp5_kms->id, 2)); mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC3_ADDR(pipe), msm_framebuffer_iova(fb, mdp5_kms->id, 3)); - - plane->fb = fb; }
/* Note: mdp5_plane->pipe_lock must be locked */ @@ -715,6 +707,114 @@ static void mdp5_write_pixel_ext(struct mdp5_kms *mdp5_kms, enum mdp5_pipe pipe, } }
+struct pixel_ext { + int left[COMP_MAX]; + int right[COMP_MAX]; + int top[COMP_MAX]; + int bottom[COMP_MAX]; +}; + +struct phase_step { + u32 x[COMP_MAX]; + u32 y[COMP_MAX]; +}; + +static void mdp5_hwpipe_mode_set(struct mdp5_kms *mdp5_kms, + struct mdp5_hw_pipe *hwpipe, + struct drm_framebuffer *fb, + struct phase_step *step, + struct pixel_ext *pe, + u32 scale_config, u32 hdecm, u32 vdecm, + bool hflip, bool vflip, + int crtc_x, int crtc_y, + unsigned int crtc_w, unsigned int crtc_h, + u32 src_img_w, u32 src_img_h, + u32 src_x, u32 src_y, + u32 src_w, u32 src_h) +{ + enum mdp5_pipe pipe = hwpipe->pipe; + bool has_pe = hwpipe->caps & MDP_PIPE_CAP_SW_PIX_EXT; + const struct mdp_format *format = + to_mdp_format(msm_framebuffer_format(fb)); + + mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC_IMG_SIZE(pipe), + MDP5_PIPE_SRC_IMG_SIZE_WIDTH(src_img_w) | + MDP5_PIPE_SRC_IMG_SIZE_HEIGHT(src_img_h)); + + mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC_SIZE(pipe), + MDP5_PIPE_SRC_SIZE_WIDTH(src_w) | + MDP5_PIPE_SRC_SIZE_HEIGHT(src_h)); + + mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC_XY(pipe), + MDP5_PIPE_SRC_XY_X(src_x) | + MDP5_PIPE_SRC_XY_Y(src_y)); + + mdp5_write(mdp5_kms, REG_MDP5_PIPE_OUT_SIZE(pipe), + MDP5_PIPE_OUT_SIZE_WIDTH(crtc_w) | + MDP5_PIPE_OUT_SIZE_HEIGHT(crtc_h)); + + mdp5_write(mdp5_kms, REG_MDP5_PIPE_OUT_XY(pipe), + MDP5_PIPE_OUT_XY_X(crtc_x) | + MDP5_PIPE_OUT_XY_Y(crtc_y)); + + mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC_FORMAT(pipe), + MDP5_PIPE_SRC_FORMAT_A_BPC(format->bpc_a) | + MDP5_PIPE_SRC_FORMAT_R_BPC(format->bpc_r) | + MDP5_PIPE_SRC_FORMAT_G_BPC(format->bpc_g) | + MDP5_PIPE_SRC_FORMAT_B_BPC(format->bpc_b) | + COND(format->alpha_enable, MDP5_PIPE_SRC_FORMAT_ALPHA_ENABLE) | + MDP5_PIPE_SRC_FORMAT_CPP(format->cpp - 1) | + MDP5_PIPE_SRC_FORMAT_UNPACK_COUNT(format->unpack_count - 1) | + COND(format->unpack_tight, MDP5_PIPE_SRC_FORMAT_UNPACK_TIGHT) | + MDP5_PIPE_SRC_FORMAT_FETCH_TYPE(format->fetch_type) | + MDP5_PIPE_SRC_FORMAT_CHROMA_SAMP(format->chroma_sample)); + + mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC_UNPACK(pipe), + MDP5_PIPE_SRC_UNPACK_ELEM0(format->unpack[0]) | + MDP5_PIPE_SRC_UNPACK_ELEM1(format->unpack[1]) | + MDP5_PIPE_SRC_UNPACK_ELEM2(format->unpack[2]) | + MDP5_PIPE_SRC_UNPACK_ELEM3(format->unpack[3])); + + mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC_OP_MODE(pipe), + (hflip ? MDP5_PIPE_SRC_OP_MODE_FLIP_LR : 0) | + (vflip ? 
MDP5_PIPE_SRC_OP_MODE_FLIP_UD : 0) | + COND(has_pe, MDP5_PIPE_SRC_OP_MODE_SW_PIX_EXT_OVERRIDE) | + MDP5_PIPE_SRC_OP_MODE_BWC(BWC_LOSSLESS)); + + /* not using secure mode: */ + mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC_ADDR_SW_STATUS(pipe), 0); + + if (hwpipe->caps & MDP_PIPE_CAP_SW_PIX_EXT) + mdp5_write_pixel_ext(mdp5_kms, pipe, format, + src_w, pe->left, pe->right, + src_h, pe->top, pe->bottom); + + if (hwpipe->caps & MDP_PIPE_CAP_SCALE) { + mdp5_write(mdp5_kms, REG_MDP5_PIPE_SCALE_PHASE_STEP_X(pipe), + step->x[COMP_0]); + mdp5_write(mdp5_kms, REG_MDP5_PIPE_SCALE_PHASE_STEP_Y(pipe), + step->y[COMP_0]); + mdp5_write(mdp5_kms, REG_MDP5_PIPE_SCALE_CR_PHASE_STEP_X(pipe), + step->x[COMP_1_2]); + mdp5_write(mdp5_kms, REG_MDP5_PIPE_SCALE_CR_PHASE_STEP_Y(pipe), + step->y[COMP_1_2]); + mdp5_write(mdp5_kms, REG_MDP5_PIPE_DECIMATION(pipe), + MDP5_PIPE_DECIMATION_VERT(vdecm) | + MDP5_PIPE_DECIMATION_HORZ(hdecm)); + mdp5_write(mdp5_kms, REG_MDP5_PIPE_SCALE_CONFIG(pipe), + scale_config); + } + + if (hwpipe->caps & MDP_PIPE_CAP_CSC) { + if (MDP_FORMAT_IS_YUV(format)) + csc_enable(mdp5_kms, pipe, + mdp_get_default_csc_cfg(CSC_YUV2RGB)); + else + csc_disable(mdp5_kms, pipe); + } + + set_scanout_locked(mdp5_kms, pipe, fb); +}
static int mdp5_plane_mode_set(struct drm_plane *plane, struct drm_crtc *crtc, struct drm_framebuffer *fb, @@ -727,10 +827,8 @@ static int mdp5_plane_mode_set(struct drm_plane *plane, enum mdp5_pipe pipe = hwpipe->pipe; const struct mdp_format *format; uint32_t nplanes, config = 0; - uint32_t phasex_step[COMP_MAX] = {0,}, phasey_step[COMP_MAX] = {0,}; - bool pe = hwpipe->caps & MDP_PIPE_CAP_SW_PIX_EXT; - int pe_left[COMP_MAX], pe_right[COMP_MAX]; - int pe_top[COMP_MAX], pe_bottom[COMP_MAX]; + struct phase_step step = { 0 }; + struct pixel_ext pe = { 0 }; uint32_t hdecm = 0, vdecm = 0; uint32_t pix_format; unsigned int rotation; @@ -739,6 +837,7 @@ static int mdp5_plane_mode_set(struct drm_plane *plane, unsigned int crtc_w, crtc_h; uint32_t src_x, src_y; uint32_t src_w, src_h; + uint32_t src_img_w, src_img_h; unsigned long flags; int ret;
@@ -767,23 +866,26 @@ static int mdp5_plane_mode_set(struct drm_plane *plane, src_w = src_w >> 16; src_h = src_h >> 16;
+ src_img_w = min(fb->width, src_w); + src_img_h = min(fb->height, src_h); + DBG("%s: FB[%u] %u,%u,%u,%u -> CRTC[%u] %d,%d,%u,%u", plane->name, fb->base.id, src_x, src_y, src_w, src_h, crtc->base.id, crtc_x, crtc_y, crtc_w, crtc_h);
- ret = calc_scalex_steps(plane, pix_format, src_w, crtc_w, phasex_step); + ret = calc_scalex_steps(plane, pix_format, src_w, crtc_w, step.x); if (ret) return ret;
- ret = calc_scaley_steps(plane, pix_format, src_h, crtc_h, phasey_step); + ret = calc_scaley_steps(plane, pix_format, src_h, crtc_h, step.y); if (ret) return ret;
if (hwpipe->caps & MDP_PIPE_CAP_SW_PIX_EXT) { - calc_pixel_ext(format, src_w, crtc_w, phasex_step, - pe_left, pe_right, true); - calc_pixel_ext(format, src_h, crtc_h, phasey_step, - pe_top, pe_bottom, false); + calc_pixel_ext(format, src_w, crtc_w, step.x, + pe.left, pe.right, true); + calc_pixel_ext(format, src_h, crtc_h, step.y, + pe.top, pe.bottom, false); }
/* TODO calc hdecm, vdecm */ @@ -802,85 +904,16 @@ static int mdp5_plane_mode_set(struct drm_plane *plane,
spin_lock_irqsave(&mdp5_plane->pipe_lock, flags);
- mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC_IMG_SIZE(pipe), - MDP5_PIPE_SRC_IMG_SIZE_WIDTH(min(fb->width, src_w)) | - MDP5_PIPE_SRC_IMG_SIZE_HEIGHT(min(fb->height, src_h))); - - mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC_SIZE(pipe), - MDP5_PIPE_SRC_SIZE_WIDTH(src_w) | - MDP5_PIPE_SRC_SIZE_HEIGHT(src_h)); - - mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC_XY(pipe), - MDP5_PIPE_SRC_XY_X(src_x) | - MDP5_PIPE_SRC_XY_Y(src_y)); - - mdp5_write(mdp5_kms, REG_MDP5_PIPE_OUT_SIZE(pipe), - MDP5_PIPE_OUT_SIZE_WIDTH(crtc_w) | - MDP5_PIPE_OUT_SIZE_HEIGHT(crtc_h)); - - mdp5_write(mdp5_kms, REG_MDP5_PIPE_OUT_XY(pipe), - MDP5_PIPE_OUT_XY_X(crtc_x) | - MDP5_PIPE_OUT_XY_Y(crtc_y)); - - mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC_FORMAT(pipe), - MDP5_PIPE_SRC_FORMAT_A_BPC(format->bpc_a) | - MDP5_PIPE_SRC_FORMAT_R_BPC(format->bpc_r) | - MDP5_PIPE_SRC_FORMAT_G_BPC(format->bpc_g) | - MDP5_PIPE_SRC_FORMAT_B_BPC(format->bpc_b) | - COND(format->alpha_enable, MDP5_PIPE_SRC_FORMAT_ALPHA_ENABLE) | - MDP5_PIPE_SRC_FORMAT_CPP(format->cpp - 1) | - MDP5_PIPE_SRC_FORMAT_UNPACK_COUNT(format->unpack_count - 1) | - COND(format->unpack_tight, MDP5_PIPE_SRC_FORMAT_UNPACK_TIGHT) | - MDP5_PIPE_SRC_FORMAT_FETCH_TYPE(format->fetch_type) | - MDP5_PIPE_SRC_FORMAT_CHROMA_SAMP(format->chroma_sample)); - - mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC_UNPACK(pipe), - MDP5_PIPE_SRC_UNPACK_ELEM0(format->unpack[0]) | - MDP5_PIPE_SRC_UNPACK_ELEM1(format->unpack[1]) | - MDP5_PIPE_SRC_UNPACK_ELEM2(format->unpack[2]) | - MDP5_PIPE_SRC_UNPACK_ELEM3(format->unpack[3])); - - mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC_OP_MODE(pipe), - (hflip ? MDP5_PIPE_SRC_OP_MODE_FLIP_LR : 0) | - (vflip ? MDP5_PIPE_SRC_OP_MODE_FLIP_UD : 0) | - COND(pe, MDP5_PIPE_SRC_OP_MODE_SW_PIX_EXT_OVERRIDE) | - MDP5_PIPE_SRC_OP_MODE_BWC(BWC_LOSSLESS)); - - /* not using secure mode: */ - mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC_ADDR_SW_STATUS(pipe), 0); - - if (hwpipe->caps & MDP_PIPE_CAP_SW_PIX_EXT) - mdp5_write_pixel_ext(mdp5_kms, pipe, format, - src_w, pe_left, pe_right, - src_h, pe_top, pe_bottom); - - if (hwpipe->caps & MDP_PIPE_CAP_SCALE) { - mdp5_write(mdp5_kms, REG_MDP5_PIPE_SCALE_PHASE_STEP_X(pipe), - phasex_step[COMP_0]); - mdp5_write(mdp5_kms, REG_MDP5_PIPE_SCALE_PHASE_STEP_Y(pipe), - phasey_step[COMP_0]); - mdp5_write(mdp5_kms, REG_MDP5_PIPE_SCALE_CR_PHASE_STEP_X(pipe), - phasex_step[COMP_1_2]); - mdp5_write(mdp5_kms, REG_MDP5_PIPE_SCALE_CR_PHASE_STEP_Y(pipe), - phasey_step[COMP_1_2]); - mdp5_write(mdp5_kms, REG_MDP5_PIPE_DECIMATION(pipe), - MDP5_PIPE_DECIMATION_VERT(vdecm) | - MDP5_PIPE_DECIMATION_HORZ(hdecm)); - mdp5_write(mdp5_kms, REG_MDP5_PIPE_SCALE_CONFIG(pipe), config); - } - - if (hwpipe->caps & MDP_PIPE_CAP_CSC) { - if (MDP_FORMAT_IS_YUV(format)) - csc_enable(mdp5_kms, pipe, - mdp_get_default_csc_cfg(CSC_YUV2RGB)); - else - csc_disable(mdp5_kms, pipe); - } - - set_scanout_locked(plane, fb); + mdp5_hwpipe_mode_set(mdp5_kms, hwpipe, fb, &step, &pe, + config, hdecm, vdecm, hflip, vflip, + crtc_x, crtc_y, crtc_w, crtc_h, + src_img_w, src_img_h, + src_x, src_y, src_w, src_h);
spin_unlock_irqrestore(&mdp5_plane->pipe_lock, flags);
+ plane->fb = fb; + return ret; }
If the drm_plane has a source width that's greater than the max width supported by a SSPP (2560 pixels on 8x96), then we assign a 'r_hwpipe' to it in mdp5_plane_atomic_check().
TODO: There are a few scenarios where the hwpipe assignments aren't recommended by HW. For example, an assignment which results in a drm_plane being comprised of two different types of hwpipes (say, RGB0 on the left and DMA1 on the right) is not recommended. Also, hwpipes have a priority mapping, where the higher priority pipe needs to be staged on the left LM and the lower priority one on the right LM. For example, the priority order for VIG pipes in decreasing order of priority is VIG0, VIG1, VIG2, and VIG3. So, VIG0 on the left and VIG1 on the right is a correct configuration, but VIG1 on the left and VIG0 on the right isn't. These scenarios are ignored for now for the sake of simplicity.
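A condensed sketch of the bounds check this adds (widths shown in whole pixels for readability; the actual code compares 16.16 fixed-point values):

	if (src_w > max_width) {
		/*
		 * e.g. on 8x96 (max_width = 2560): a 3840px-wide plane gets
		 * a second hwpipe; anything wider than 5120px stays invalid.
		 */
		if ((config->hw->mdp.caps & MDP_CAP_SRC_SPLIT) &&
		    src_w <= 2 * max_width)
			need_right_hwpipe = true;
		else
			out_of_bounds = true;
	}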
Signed-off-by: Archit Taneja architt@codeaurora.org --- drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h | 1 + drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c | 57 ++++++++++++++++++++++++++++++- 2 files changed, 57 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h index 759cf069f2ab..2505632658ad 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h @@ -110,6 +110,7 @@ struct mdp5_plane_state { struct drm_plane_state base;
struct mdp5_hw_pipe *hwpipe; + struct mdp5_hw_pipe *r_hwpipe; /* right hwpipe */
/* aligned with property */ uint8_t premultiplied; diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c index e52bee877198..c50b17e54dcb 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c @@ -176,9 +176,14 @@ mdp5_plane_atomic_print_state(struct drm_printer *p, const struct drm_plane_state *state) { struct mdp5_plane_state *pstate = to_mdp5_plane_state(state); + struct mdp5_kms *mdp5_kms = get_kms(state->plane);
drm_printf(p, "\thwpipe=%s\n", pstate->hwpipe ? pstate->hwpipe->name : "(null)"); + if (mdp5_kms->caps & MDP_CAP_SRC_SPLIT) + drm_printf(p, "\tright-hwpipe=%s\n", + pstate->r_hwpipe ? pstate->r_hwpipe->name : + "(null)"); drm_printf(p, "\tpremultiplied=%u\n", pstate->premultiplied); drm_printf(p, "\tzpos=%u\n", pstate->zpos); drm_printf(p, "\talpha=%u\n", pstate->alpha); @@ -298,7 +303,9 @@ static int mdp5_plane_atomic_check_with_state(struct drm_crtc_state *crtc_state, struct drm_plane_state *old_state = plane->state; struct mdp5_cfg *config = mdp5_cfg_get_config(get_kms(plane)->cfg); bool new_hwpipe = false; + bool need_right_hwpipe = false; uint32_t max_width, max_height; + bool out_of_bounds = false; uint32_t caps = 0; struct drm_rect clip; int min_scale, max_scale; @@ -311,7 +318,23 @@ static int mdp5_plane_atomic_check_with_state(struct drm_crtc_state *crtc_state, max_height = config->hw->lm.max_height << 16;
/* Make sure source dimensions are within bounds. */ - if ((state->src_w > max_width) || (state->src_h > max_height)) { + if (state->src_h > max_height) + out_of_bounds = true; + + if (state->src_w > max_width) { + /* If source split is supported, we can go up to 2x + * the max LM width, but we'd need to stage another + * hwpipe to the right LM. So, the drm_plane would + * consist of 2 hwpipes. + */ + if (config->hw->mdp.caps & MDP_CAP_SRC_SPLIT && + (state->src_w <= 2 * max_width)) + need_right_hwpipe = true; + else + out_of_bounds = true; + } + + if (out_of_bounds) { struct drm_rect src = drm_plane_state_src(state); DBG("Invalid source size "DRM_RECT_FP_FMT, DRM_RECT_FP_ARG(&src)); @@ -362,6 +385,15 @@ static int mdp5_plane_atomic_check_with_state(struct drm_crtc_state *crtc_state, if (!mdp5_state->hwpipe || (caps & ~mdp5_state->hwpipe->caps)) new_hwpipe = true;
+ /* + * (re)allocte hw pipe if we're either requesting for 2 hw pipes + * or we're switching from 2 hw pipes to 1 hw pipe because the + * new src_w can be supported by 1 hw pipe itself. + */ + if ((need_right_hwpipe && !mdp5_state->r_hwpipe) || + (!need_right_hwpipe && mdp5_state->r_hwpipe)) + new_hwpipe = true; + if (mdp5_kms->smp) { const struct mdp_format *format = to_mdp_format(msm_framebuffer_format(state->fb)); @@ -380,13 +412,36 @@ static int mdp5_plane_atomic_check_with_state(struct drm_crtc_state *crtc_state, * it available for other planes? */ struct mdp5_hw_pipe *old_hwpipe = mdp5_state->hwpipe; + struct mdp5_hw_pipe *old_right_hwpipe = + mdp5_state->r_hwpipe; + mdp5_state->hwpipe = mdp5_pipe_assign(state->state, plane, caps, blkcfg); if (IS_ERR(mdp5_state->hwpipe)) { DBG("%s: failed to assign hwpipe!", plane->name); return PTR_ERR(mdp5_state->hwpipe); } + + if (need_right_hwpipe) { + mdp5_state->r_hwpipe = + mdp5_pipe_assign(state->state, plane, + caps, blkcfg); + if (IS_ERR(mdp5_state->r_hwpipe)) { + DBG("%s: failed to assign right hwpipe", + plane->name); + return PTR_ERR(mdp5_state->r_hwpipe); + } + } else { + /* + * set it to NULL so that the driver knows we + * don't have a right hwpipe when committing a + * new state + */ + mdp5_state->r_hwpipe = NULL; + } + mdp5_pipe_release(state->state, old_hwpipe); + mdp5_pipe_release(state->state, old_right_hwpipe); } }
Now that we have a right hwpipe in mdp5_plane_state, configure it in mdp5_plane_mode_set(). The parameters that change are src_w, src_img_w and crtc_w (all halved) and src_x and crtc_x (offset for the right hwpipe), as we simply chop the fb evenly into left and right halves.
Add mdp5_plane_right_pipe(), which will be used by the CRTC code to set up the LM stages.
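A small worked example of the split (assumed numbers: a 3840px-wide plane scanned out 1:1 starting at crtc_x = 0, src_x = 0), following the arithmetic in the diff below:

	/*
	 * After halving:   crtc_w = src_w = src_img_w = 1920
	 * Left hwpipe:     crtc_x   = 0,               src_x   = 0
	 * Right hwpipe:    crtc_x_r = crtc_x + crtc_w = 1920
	 *                  src_x_r  = src_x  + src_w  = 1920
	 */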
Signed-off-by: Archit Taneja architt@codeaurora.org --- drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h | 1 + drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c | 46 ++++++++++++++++++++++++++++++- 2 files changed, 46 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h index 2505632658ad..a8100c8e9c2f 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h @@ -281,6 +281,7 @@ void mdp5_irq_domain_fini(struct mdp5_kms *mdp5_kms);
uint32_t mdp5_plane_get_flush(struct drm_plane *plane); enum mdp5_pipe mdp5_plane_pipe(struct drm_plane *plane); +enum mdp5_pipe mdp5_plane_right_pipe(struct drm_plane *plane); struct drm_plane *mdp5_plane_init(struct drm_device *dev, enum drm_plane_type type);
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c index c50b17e54dcb..5e116532f543 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c @@ -880,6 +880,7 @@ static int mdp5_plane_mode_set(struct drm_plane *plane, struct mdp5_hw_pipe *hwpipe = to_mdp5_plane_state(pstate)->hwpipe; struct mdp5_kms *mdp5_kms = get_kms(plane); enum mdp5_pipe pipe = hwpipe->pipe; + struct mdp5_hw_pipe *right_hwpipe; const struct mdp_format *format; uint32_t nplanes, config = 0; struct phase_step step = { 0 }; @@ -893,6 +894,8 @@ static int mdp5_plane_mode_set(struct drm_plane *plane, uint32_t src_x, src_y; uint32_t src_w, src_h; uint32_t src_img_w, src_img_h; + uint32_t src_x_r; + int crtc_x_r; unsigned long flags; int ret;
@@ -928,6 +931,21 @@ static int mdp5_plane_mode_set(struct drm_plane *plane, fb->base.id, src_x, src_y, src_w, src_h, crtc->base.id, crtc_x, crtc_y, crtc_w, crtc_h);
+ right_hwpipe = to_mdp5_plane_state(pstate)->r_hwpipe; + if (right_hwpipe) { + /* + * if the plane comprises of 2 hw pipes, assume that the width + * is split equally across them. The only parameters that varies + * between the 2 pipes are src_x and crtc_x + */ + crtc_w /= 2; + src_w /= 2; + src_img_w /= 2; + + crtc_x_r = crtc_x + crtc_w; + src_x_r = src_x + src_w; + } + ret = calc_scalex_steps(plane, pix_format, src_w, crtc_w, step.x); if (ret) return ret; @@ -964,6 +982,12 @@ static int mdp5_plane_mode_set(struct drm_plane *plane, crtc_x, crtc_y, crtc_w, crtc_h, src_img_w, src_img_h, src_x, src_y, src_w, src_h); + if (right_hwpipe) + mdp5_hwpipe_mode_set(mdp5_kms, right_hwpipe, fb, &step, &pe, + config, hdecm, vdecm, hflip, vflip, + crtc_x_r, crtc_y, crtc_w, crtc_h, + src_img_w, src_img_h, + src_x_r, src_y, src_w, src_h);
spin_unlock_irqrestore(&mdp5_plane->pipe_lock, flags);
@@ -1049,6 +1073,10 @@ static int mdp5_update_cursor_plane_legacy(struct drm_plane *plane, src_x, src_y, src_w, src_h); }
+/* + * Use this func and the one below only after the atomic state has been + * successfully swapped + */ enum mdp5_pipe mdp5_plane_pipe(struct drm_plane *plane) { struct mdp5_plane_state *pstate = to_mdp5_plane_state(plane->state); @@ -1059,14 +1087,30 @@ enum mdp5_pipe mdp5_plane_pipe(struct drm_plane *plane) return pstate->hwpipe->pipe; }
+enum mdp5_pipe mdp5_plane_right_pipe(struct drm_plane *plane) +{ + struct mdp5_plane_state *pstate = to_mdp5_plane_state(plane->state); + + if (!pstate->r_hwpipe) + return SSPP_NONE; + + return pstate->r_hwpipe->pipe; +} + uint32_t mdp5_plane_get_flush(struct drm_plane *plane) { struct mdp5_plane_state *pstate = to_mdp5_plane_state(plane->state); + u32 mask;
if (WARN_ON(!pstate->hwpipe)) return 0;
- return pstate->hwpipe->flush_mask; + mask = pstate->hwpipe->flush_mask; + + if (pstate->r_hwpipe) + mask |= pstate->r_hwpipe->flush_mask; + + return mask; }
/* initialize plane */
In order to enable Source Split in HW, we need to add/modify a few LM register configurations:
- Configure the LM width to be half the mode width, so that each LM manages one half of the scanout.
- Tell the 'right' LM that it is configured as the 'right' LM in source split mode.
- Since we now have 2 places where REG_MDP5_LM_BLEND_COLOR_OUT is configured, do a read-modify-write of the register instead of directly writing a value to it.
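For example (assuming a 3840x2160 HDMI mode and the register names used in the diff below), the per-LM programming works out to:

	/*
	 * mixer_width = mode->hdisplay / 2 = 1920 when a right mixer exists
	 *
	 * left LM:  LM_OUT_SIZE = 1920x2160,
	 *           BLEND_COLOR_OUT: SPLIT_LEFT_RIGHT bit cleared
	 * right LM: LM_OUT_SIZE = 1920x2160,
	 *           BLEND_COLOR_OUT: SPLIT_LEFT_RIGHT bit set
	 */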
Signed-off-by: Archit Taneja architt@codeaurora.org --- drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c | 39 ++++++++++++++++++++++++++------ 1 file changed, 32 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c index 5bffef6ec9dc..4b3bc5fc1006 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c @@ -231,6 +231,7 @@ static void blend_setup(struct drm_crtc *crtc) int i, plane_cnt = 0; bool bg_alpha_enabled = false; u32 mixer_op_mode = 0; + u32 val; #define blender(stage) ((stage) - STAGE0)
hw_cfg = mdp5_cfg_get_hw_config(mdp5_kms->cfg); @@ -330,10 +331,14 @@ static void blend_setup(struct drm_crtc *crtc) } }
- mdp5_write(mdp5_kms, REG_MDP5_LM_BLEND_COLOR_OUT(lm), mixer_op_mode); - if (r_mixer) + val = mdp5_read(mdp5_kms, REG_MDP5_LM_BLEND_COLOR_OUT(lm)); + mdp5_write(mdp5_kms, REG_MDP5_LM_BLEND_COLOR_OUT(lm), + val | mixer_op_mode); + if (r_mixer) { + val = mdp5_read(mdp5_kms, REG_MDP5_LM_BLEND_COLOR_OUT(r_lm)); mdp5_write(mdp5_kms, REG_MDP5_LM_BLEND_COLOR_OUT(r_lm), - mixer_op_mode); + val | mixer_op_mode); + }
mdp5_ctl_blend(ctl, pipeline, stage, r_stage, plane_cnt, ctl_blend_flags); @@ -349,6 +354,7 @@ static void mdp5_crtc_mode_set_nofb(struct drm_crtc *crtc) struct mdp5_hw_mixer *mixer = mdp5_cstate->pipeline.mixer; struct mdp5_hw_mixer *r_mixer = mdp5_cstate->pipeline.r_mixer; uint32_t lm = mixer->lm; + u32 mixer_width, val; unsigned long flags; struct drm_display_mode *mode;
@@ -366,14 +372,33 @@ static void mdp5_crtc_mode_set_nofb(struct drm_crtc *crtc) mode->vsync_end, mode->vtotal, mode->type, mode->flags);
+ mixer_width = mode->hdisplay; + if (r_mixer) + mixer_width /= 2; + spin_lock_irqsave(&mdp5_crtc->lm_lock, flags); mdp5_write(mdp5_kms, REG_MDP5_LM_OUT_SIZE(lm), - MDP5_LM_OUT_SIZE_WIDTH(mode->hdisplay) | + MDP5_LM_OUT_SIZE_WIDTH(mixer_width) | MDP5_LM_OUT_SIZE_HEIGHT(mode->vdisplay)); - if (r_mixer) - mdp5_write(mdp5_kms, REG_MDP5_LM_OUT_SIZE(r_mixer->lm), - MDP5_LM_OUT_SIZE_WIDTH(mode->hdisplay) | + + /* Assign mixer to LEFT side in source split mode */ + val = mdp5_read(mdp5_kms, REG_MDP5_LM_BLEND_COLOR_OUT(lm)); + val &= ~MDP5_LM_BLEND_COLOR_OUT_SPLIT_LEFT_RIGHT; + mdp5_write(mdp5_kms, REG_MDP5_LM_BLEND_COLOR_OUT(lm), val); + + if (r_mixer) { + u32 r_lm = r_mixer->lm; + + mdp5_write(mdp5_kms, REG_MDP5_LM_OUT_SIZE(r_lm), + MDP5_LM_OUT_SIZE_WIDTH(mixer_width) | MDP5_LM_OUT_SIZE_HEIGHT(mode->vdisplay)); + + /* Assign mixer to RIGHT side in source split mode */ + val = mdp5_read(mdp5_kms, REG_MDP5_LM_BLEND_COLOR_OUT(r_lm)); + val |= MDP5_LM_BLEND_COLOR_OUT_SPLIT_LEFT_RIGHT; + mdp5_write(mdp5_kms, REG_MDP5_LM_BLEND_COLOR_OUT(r_lm), val); + } + spin_unlock_irqrestore(&mdp5_crtc->lm_lock, flags); }
Now that our mdp5_planes can consist of 2 hwpipes, update the blend_setup() code to stage the right hwpipe on both the left and right LMs.
Signed-off-by: Archit Taneja architt@codeaurora.org --- drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c | 12 ++++++++++++ drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c | 12 ++++++++---- 2 files changed, 20 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c index 4b3bc5fc1006..cf6d41c9edc7 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c @@ -245,6 +245,8 @@ static void blend_setup(struct drm_crtc *crtc)
/* Collect all plane information */ drm_atomic_crtc_for_each_plane(plane, crtc) { + enum mdp5_pipe right_pipe; + pstate = to_mdp5_plane_state(plane->state); pstates[pstate->stage] = pstate; stage[pstate->stage][PIPE_LEFT] = mdp5_plane_pipe(plane); @@ -255,6 +257,16 @@ static void blend_setup(struct drm_crtc *crtc) if (r_mixer) r_stage[pstate->stage][PIPE_LEFT] = mdp5_plane_pipe(plane); + /* + * if we have a right pipe (i.e, the plane comprises of 2 + * hwpipes, then stage the right pipe on the right side of both + * the layer mixers + */ + right_pipe = mdp5_plane_right_pipe(plane); + if (right_pipe) { + stage[pstate->stage][PIPE_RIGHT] = right_pipe; + r_stage[pstate->stage][PIPE_RIGHT] = right_pipe; + }
plane_cnt++; } diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c index 1fdbb936877f..15d78b218935 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c @@ -380,14 +380,18 @@ int mdp5_ctl_blend(struct mdp5_ctl *ctl, struct mdp5_pipeline *pipeline,
for (i = start_stage; stage_cnt && i <= STAGE_MAX; i++) { blend_cfg |= - mdp_ctl_blend_mask(stage[i][PIPE_LEFT], i); + mdp_ctl_blend_mask(stage[i][PIPE_LEFT], i) | + mdp_ctl_blend_mask(stage[i][PIPE_RIGHT], i); blend_ext_cfg |= - mdp_ctl_blend_ext_mask(stage[i][PIPE_LEFT], i); + mdp_ctl_blend_ext_mask(stage[i][PIPE_LEFT], i) | + mdp_ctl_blend_ext_mask(stage[i][PIPE_RIGHT], i); if (r_mixer) { r_blend_cfg |= - mdp_ctl_blend_mask(r_stage[i][PIPE_LEFT], i); + mdp_ctl_blend_mask(r_stage[i][PIPE_LEFT], i) | + mdp_ctl_blend_mask(r_stage[i][PIPE_RIGHT], i); r_blend_ext_cfg |= - mdp_ctl_blend_ext_mask(r_stage[i][PIPE_LEFT], i); + mdp_ctl_blend_ext_mask(r_stage[i][PIPE_LEFT], i) | + mdp_ctl_blend_ext_mask(r_stage[i][PIPE_RIGHT], i); } }
If a CRTC comprises 2 LMs, it is mandatory to enable border out and assign it to the base stage.
We already had to enable border out when the base plane wasn't fullscreen. Combine these checks and put them in a separate function, get_start_stage(), which returns the starting stage for assigning planes.
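A condensed view of the resulting rule (mirroring get_start_stage() in the diff below):

	/*
	 * start stage = STAGE0 (border out occupies the base stage) when:
	 *   - the CRTC drives two LMs (source split), or
	 *   - the bottom-most plane is not fullscreen;
	 * otherwise start stage = STAGE_BASE.
	 */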
Signed-off-by: Archit Taneja architt@codeaurora.org --- drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c | 45 +++++++++++++++++++++++++------- 1 file changed, 35 insertions(+), 10 deletions(-)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c index cf6d41c9edc7..d0559962f85b 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c @@ -518,6 +518,29 @@ static bool is_fullscreen(struct drm_crtc_state *cstate, ((pstate->crtc_y + pstate->crtc_h) >= cstate->mode.vdisplay); }
+enum mdp_mixer_stage_id get_start_stage(struct drm_crtc *crtc, + struct drm_crtc_state *new_crtc_state, + struct drm_plane_state *bpstate) +{ + struct mdp5_crtc_state *mdp5_cstate = + to_mdp5_crtc_state(new_crtc_state); + + /* + * if we're in source split mode, it's mandatory to have + * border out on the base stage + */ + if (mdp5_cstate->pipeline.r_mixer) + return STAGE0; + + /* if the bottom-most layer is not fullscreen, we need to use + * it for solid-color: + */ + if (!is_fullscreen(new_crtc_state, bpstate)) + return STAGE0; + + return STAGE_BASE; +} + static int mdp5_crtc_atomic_check(struct drm_crtc *crtc, struct drm_crtc_state *state) { @@ -528,8 +551,9 @@ static int mdp5_crtc_atomic_check(struct drm_crtc *crtc, const struct mdp5_cfg_hw *hw_cfg; const struct drm_plane_state *pstate; bool cursor_plane = false; - int cnt = 0, base = 0, i; + int cnt = 0, i; int ret; + enum mdp_mixer_stage_id start;
DBG("%s: check", crtc->name);
@@ -543,6 +567,10 @@ static int mdp5_crtc_atomic_check(struct drm_crtc *crtc, cursor_plane = true; }
+ /* bail out early if there aren't any planes */ + if (!cnt) + return 0; + ret = mdp5_crtc_setup_pipeline(crtc, state); if (ret) { dev_err(dev->dev, "couldn't assign mixers %d\n", ret); @@ -552,23 +580,20 @@ static int mdp5_crtc_atomic_check(struct drm_crtc *crtc, /* assign a stage based on sorted zpos property */ sort(pstates, cnt, sizeof(pstates[0]), pstate_cmp, NULL);
- /* if the bottom-most layer is not fullscreen, we need to use - * it for solid-color: - */ - if ((cnt > 0) && !is_fullscreen(state, &pstates[0].state->base)) - base++; - /* trigger a warning if cursor isn't the highest zorder */ WARN_ON(cursor_plane && (pstates[cnt - 1].plane->type != DRM_PLANE_TYPE_CURSOR));
+ start = get_start_stage(crtc, state, &pstates[0].state->base); + /* verify that there are not too many planes attached to crtc * and that we don't have conflicting mixer stages: */ hw_cfg = mdp5_cfg_get_hw_config(mdp5_kms->cfg);
- if ((cnt + base) >= hw_cfg->lm.nb_stages) { - dev_err(dev->dev, "too many planes! cnt=%d, base=%d\n", cnt, base); + if ((cnt + start - 1) >= hw_cfg->lm.nb_stages) { + dev_err(dev->dev, "too many planes! cnt=%d, start stage=%d\n", + cnt, start); return -EINVAL; }
@@ -576,7 +601,7 @@ static int mdp5_crtc_atomic_check(struct drm_crtc *crtc, if (cursor_plane && (i == (cnt - 1))) pstates[i].state->stage = hw_cfg->lm.nb_stages; else - pstates[i].state->stage = STAGE_BASE + i + base; + pstates[i].state->stage = start + i; DBG("%s: assign pipe %s on stage=%d", crtc->name, pstates[i].plane->name, pstates[i].state->stage);
Dynamically assign a right mixer to mdp5_crtc_state in the CRTC's atomic_check path. Assigning the right mixer has some constraints, i.e., only a few LMs can be paired together. Update mdp5_mixer_assign() to handle these constraints.
Firstly, we need to identify whether we need a right mixer or not. At the moment, there are 2 scenarios where a right mixer might be needed:
- If any of the planes connected to this CRTC is too wide (i.e., is comprised of 2 hwpipes).
- If the CRTC's mode itself is too wide (i.e., a 4K mode on HDMI).
We implement both these checks in mdp5_crtc_atomic_check(), and pass 'need_right_mixer' to mdp5_crtc_setup_pipeline().
If a CRTC is already assigned a single mixer, and a new atomic commit brings in a drm_plane that needs 2 hwpipes, we can successfully commit this mode without requiring a full modeset, provided that we still use the previously assigned mixer as the left mixer. If such an assignment isn't possible, we'd need to do a full modeset. This scenario has been ignored for now.
The mixer assignment code is a bit messy, considering we have at most 4 LM instances in hardware. This can probably be re-visited later with simplified logic.
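As a concrete illustration of the pairing constraint (the pair table in the diff below only allows LM0+LM1 and LM2+LM5):

	/*
	 * Example outcomes for a CRTC that needs two LMs:
	 *   - LM0 and LM1 free              -> left = LM0, right = LM1
	 *   - LM0 or LM1 busy, LM2/LM5 free -> left = LM2, right = LM5
	 *   - no free pair                  -> mdp5_mixer_assign() returns
	 *                                      an error and the atomic
	 *                                      check fails
	 */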
Signed-off-by: Archit Taneja architt@codeaurora.org --- drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c | 49 +++++++++++++--- drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.c | 97 +++++++++++++++++++++++++++---- drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.h | 5 +- 3 files changed, 129 insertions(+), 22 deletions(-)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c index d0559962f85b..42fef018c04e 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c @@ -455,7 +455,8 @@ static void mdp5_crtc_enable(struct drm_crtc *crtc) }
int mdp5_crtc_setup_pipeline(struct drm_crtc *crtc, - struct drm_crtc_state *new_crtc_state) + struct drm_crtc_state *new_crtc_state, + bool need_right_mixer) { struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(new_crtc_state); @@ -465,15 +466,32 @@ int mdp5_crtc_setup_pipeline(struct drm_crtc *crtc,
new_mixer = !pipeline->mixer;
+ if ((need_right_mixer && !pipeline->r_mixer) || + (!need_right_mixer && pipeline->r_mixer)) + new_mixer = true; + if (new_mixer) { struct mdp5_hw_mixer *old_mixer = pipeline->mixer; + struct mdp5_hw_mixer *old_r_mixer = pipeline->r_mixer; + u32 caps; + int ret; + + caps = MDP_LM_CAP_DISPLAY; + if (need_right_mixer) + caps |= MDP_LM_CAP_PAIR;
- pipeline->mixer = mdp5_mixer_assign(new_crtc_state->state, crtc, - MDP_LM_CAP_DISPLAY); - if (IS_ERR(pipeline->mixer)) - return PTR_ERR(pipeline->mixer); + ret = mdp5_mixer_assign(new_crtc_state->state, crtc, caps, + &pipeline->mixer, need_right_mixer ? + &pipeline->r_mixer : NULL); + if (ret) + return ret;
mdp5_mixer_release(new_crtc_state->state, old_mixer); + if (old_r_mixer) { + mdp5_mixer_release(new_crtc_state->state, old_r_mixer); + if (!need_right_mixer) + pipeline->r_mixer = NULL; + } }
/* @@ -550,7 +568,9 @@ static int mdp5_crtc_atomic_check(struct drm_crtc *crtc, struct plane_state pstates[STAGE_MAX + 1]; const struct mdp5_cfg_hw *hw_cfg; const struct drm_plane_state *pstate; + const struct drm_display_mode *mode = &state->adjusted_mode; bool cursor_plane = false; + bool need_right_mixer = false; int cnt = 0, i; int ret; enum mdp_mixer_stage_id start; @@ -561,6 +581,12 @@ static int mdp5_crtc_atomic_check(struct drm_crtc *crtc, pstates[cnt].plane = plane; pstates[cnt].state = to_mdp5_plane_state(pstate);
+ /* + * if any plane on this crtc uses 2 hwpipes, then we need + * the crtc to have a right hwmixer. + */ + if (pstates[cnt].state->r_hwpipe) + need_right_mixer = true; cnt++;
if (plane->type == DRM_PLANE_TYPE_CURSOR) @@ -571,7 +597,16 @@ static int mdp5_crtc_atomic_check(struct drm_crtc *crtc, if (!cnt) return 0;
- ret = mdp5_crtc_setup_pipeline(crtc, state); + hw_cfg = mdp5_cfg_get_hw_config(mdp5_kms->cfg); + + /* + * we need a right hwmixer if the mode's width is greater than a single + * LM's max width + */ + if (mode->hdisplay > hw_cfg->lm.max_width) + need_right_mixer = true; + + ret = mdp5_crtc_setup_pipeline(crtc, state, need_right_mixer); if (ret) { dev_err(dev->dev, "couldn't assign mixers %d\n", ret); return ret; @@ -589,8 +624,6 @@ static int mdp5_crtc_atomic_check(struct drm_crtc *crtc, /* verify that there are not too many planes attached to crtc * and that we don't have conflicting mixer stages: */ - hw_cfg = mdp5_cfg_get_hw_config(mdp5_kms->cfg); - if ((cnt + start - 1) >= hw_cfg->lm.nb_stages) { dev_err(dev->dev, "too many planes! cnt=%d, start stage=%d\n", cnt, start); diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.c index 9bb1f824b2a9..8a00991f03c7 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.c @@ -16,42 +16,115 @@
#include "mdp5_kms.h"
-struct mdp5_hw_mixer *mdp5_mixer_assign(struct drm_atomic_state *s, - struct drm_crtc *crtc, uint32_t caps) +/* + * As of now, there are only 2 combinations possible for source split: + * + * Left | Right + * -----|------ + * LM0 | LM1 + * LM2 | LM5 + * + */ +static int lm_right_pair[] = { 1, -1, 5, -1, -1, -1 }; + +static int get_right_pair_idx(struct mdp5_kms *mdp5_kms, int lm) +{ + int i; + int pair_lm; + + pair_lm = lm_right_pair[lm]; + if (pair_lm < 0) + return -EINVAL; + + for (i = 0; i < mdp5_kms->num_hwmixers; i++) { + struct mdp5_hw_mixer *mixer = mdp5_kms->hwmixers[i]; + + if (mixer->lm == pair_lm) + return mixer->idx; + } + + return -1; +} + +int mdp5_mixer_assign(struct drm_atomic_state *s, struct drm_crtc *crtc, + uint32_t caps, struct mdp5_hw_mixer **mixer, + struct mdp5_hw_mixer **r_mixer) { struct msm_drm_private *priv = s->dev->dev_private; struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(priv->kms)); struct mdp5_state *state = mdp5_get_state(s); struct mdp5_hw_mixer_state *new_state; - struct mdp5_hw_mixer *mixer = NULL; int i;
if (IS_ERR(state)) - return ERR_CAST(state); + return PTR_ERR(state);
new_state = &state->hwmixer;
for (i = 0; i < mdp5_kms->num_hwmixers; i++) { struct mdp5_hw_mixer *cur = mdp5_kms->hwmixers[i];
- /* skip if already in-use */ - if (new_state->hwmixer_to_crtc[cur->idx]) + /* + * skip if already in-use by a different CRTC. If there is a + * mixer already assigned to this CRTC, it means this call is + * a request to get an additional right mixer. Assume that the + * existing mixer is the 'left' one, and try to see if we can + * get its corresponding 'right' pair. + */ + if (new_state->hwmixer_to_crtc[cur->idx] && + new_state->hwmixer_to_crtc[cur->idx] != crtc) continue;
/* skip if doesn't support some required caps: */ if (caps & ~cur->caps) continue;
- if (!mixer) - mixer = cur; + if (r_mixer) { + int pair_idx; + + pair_idx = get_right_pair_idx(mdp5_kms, cur->lm); + if (pair_idx < 0) + return -EINVAL; + + if (new_state->hwmixer_to_crtc[pair_idx]) + continue; + + *r_mixer = mdp5_kms->hwmixers[pair_idx]; + } + + /* + * prefer a pair-able LM over an unpairable one. We can + * switch the CRTC from Normal mode to Source Split mode + * without requiring a full modeset if we had already + * assigned this CRTC a pair-able LM. + * + * TODO: There will be assignment sequences which would + * result in the CRTC requiring a full modeset, even + * if we have the LM resources to prevent it. For a platform + * with a few displays, we don't run out of pair-able LMs + * so easily. For now, ignore the possibility of requiring + * a full modeset. + */ + if (!(*mixer) || cur->caps & MDP_LM_CAP_PAIR) + *mixer = cur; }
- if (!mixer) - return ERR_PTR(-ENOMEM); + if (!(*mixer)) + return -ENOMEM;
- new_state->hwmixer_to_crtc[mixer->idx] = crtc; + if (r_mixer && !(*r_mixer)) + return -ENOMEM;
- return mixer; + DBG("assigning Layer Mixer %d to crtc %s", (*mixer)->lm, crtc->name); + + new_state->hwmixer_to_crtc[(*mixer)->idx] = crtc; + if (r_mixer) { + DBG("assigning Right Layer Mixer %d to crtc %s", (*r_mixer)->lm, + crtc->name); + new_state->hwmixer_to_crtc[(*r_mixer)->idx] = crtc; + } + + return 0; }
void mdp5_mixer_release(struct drm_atomic_state *s, struct mdp5_hw_mixer *mixer) diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.h b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.h index 18ed933ae574..9be94f567fbd 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.h +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_mixer.h @@ -38,8 +38,9 @@ struct mdp5_hw_mixer_state {
struct mdp5_hw_mixer *mdp5_mixer_init(const struct mdp5_lm_instance *lm); void mdp5_mixer_destroy(struct mdp5_hw_mixer *lm); -struct mdp5_hw_mixer *mdp5_mixer_assign(struct drm_atomic_state *s, - struct drm_crtc *crtc, uint32_t caps); +int mdp5_mixer_assign(struct drm_atomic_state *s, struct drm_crtc *crtc, + uint32_t caps, struct mdp5_hw_mixer **mixer, + struct mdp5_hw_mixer **r_mixer); void mdp5_mixer_release(struct drm_atomic_state *s, struct mdp5_hw_mixer *mixer);
Assigning LMs dynamically to CRTCs results in the REG_MDP5_CTL_LAYER_REG and REG_MDP5_CTL_LAYER_EXT_REG registers retaining stale values for an LM that isn't used by our CTL instance anymore.
Clear the ctl's CTL_LAYER_REG and CTL_LAYER_EXT_REGs for all LM instances. The ones that need to be configured are configured later in this func.
Signed-off-by: Archit Taneja architt@codeaurora.org --- drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c index 15d78b218935..fddde84afdc3 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c @@ -355,6 +355,22 @@ static u32 mdp_ctl_blend_ext_mask(enum mdp5_pipe pipe, } }
+static void mdp5_ctl_reset_blend_regs(struct mdp5_ctl *ctl) +{ + unsigned long flags; + struct mdp5_ctl_manager *ctl_mgr = ctl->ctlm; + int i; + + spin_lock_irqsave(&ctl->hw_lock, flags); + + for (i = 0; i < ctl_mgr->nlm; i++) { + ctl_write(ctl, REG_MDP5_CTL_LAYER_REG(ctl->id, i), 0x0); + ctl_write(ctl, REG_MDP5_CTL_LAYER_EXT_REG(ctl->id, i), 0x0); + } + + spin_unlock_irqrestore(&ctl->hw_lock, flags); +} + #define PIPE_LEFT 0 #define PIPE_RIGHT 1 int mdp5_ctl_blend(struct mdp5_ctl *ctl, struct mdp5_pipeline *pipeline, @@ -369,6 +385,8 @@ int mdp5_ctl_blend(struct mdp5_ctl *ctl, struct mdp5_pipeline *pipeline, u32 r_blend_cfg = 0, r_blend_ext_cfg = 0; int i, start_stage;
+ mdp5_ctl_reset_blend_regs(ctl); + if (ctl_blend_op_flags & MDP5_CTL_BLEND_OP_FLAG_BORDER_OUT) { start_stage = STAGE0; blend_cfg |= MDP5_CTL_LAYER_REG_BORDER_COLOR;
The 3D Mux is a small block placed after the DSPPs in MDP5. It can merge 2 LM/DSPP outputs and feed them to a single interface.
Enable the 3D Mux if our mdp5_pipeline has 2 active LMs. This check will need to be made more specific later when we add Dual DSI support with source split enabled. In that use case, each LM feeds a separate INTF, so the 3D Mux isn't needed.
Signed-off-by: Archit Taneja architt@codeaurora.org --- drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c index fddde84afdc3..439e0a300e25 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c @@ -138,9 +138,10 @@ static void set_display_intf(struct mdp5_kms *mdp5_kms, spin_unlock_irqrestore(&mdp5_kms->resource_lock, flags); }
-static void set_ctl_op(struct mdp5_ctl *ctl, struct mdp5_interface *intf) +static void set_ctl_op(struct mdp5_ctl *ctl, struct mdp5_pipeline *pipeline) { unsigned long flags; + struct mdp5_interface *intf = pipeline->intf; u32 ctl_op = 0;
if (!mdp5_cfg_intf_is_virtual(intf->type)) @@ -161,6 +162,10 @@ static void set_ctl_op(struct mdp5_ctl *ctl, struct mdp5_interface *intf) break; }
+ if (pipeline->r_mixer) + ctl_op |= MDP5_CTL_OP_PACK_3D_ENABLE | + MDP5_CTL_OP_PACK_3D(1); + spin_lock_irqsave(&ctl->hw_lock, flags); ctl_write(ctl, REG_MDP5_CTL_OP(ctl->id), ctl_op); spin_unlock_irqrestore(&ctl->hw_lock, flags); @@ -183,7 +188,7 @@ int mdp5_ctl_set_pipeline(struct mdp5_ctl *ctl, struct mdp5_pipeline *pipeline) if (!mdp5_cfg_intf_is_virtual(intf->type)) set_display_intf(mdp5_kms, intf);
- set_ctl_op(ctl, intf); + set_ctl_op(ctl, pipeline);
return 0; }