-----Original Message-----
From: Sean Paul [mailto:seanpaul@chromium.org]
Sent: Tuesday, November 05, 2013 12:44 AM
To: Inki Dae
Cc: Thierry Reding; Laurent Pinchart; dri-devel; Sylwester Nawrocki; Stéphane Marchesin
Subject: Re: [PATCH v2 12/26] drm/exynos: Split manager/display/subdrv
On Mon, Nov 4, 2013 at 6:30 AM, Inki Dae inki.dae@samsung.com wrote:
2013/11/4 Thierry Reding thierry.reding@gmail.com:
On Tue, Oct 29, 2013 at 08:46:03PM -0700, Stéphane Marchesin wrote:
On Tue, Oct 29, 2013 at 1:50 PM, Tomasz Figa tomasz.figa@gmail.com wrote:
Hi Sean,
On Tuesday 29 of October 2013 16:36:47 Sean Paul wrote:
On Mon, Oct 28, 2013 at 7:13 PM, Tomasz Figa wrote:
> Hi,
>
> On Wednesday 23 of October 2013 12:09:06 Sean Paul wrote:
>> On Wed, Oct 23, 2013 at 11:53 AM, Dave Airlie wrote:
>> >>>>> I think we need to start considering a framework where subdrivers
>> >>>>> just add drm objects themselves, then the toplevel node is
>> >>>>> responsible for knowing that everything for the current
>> >>>>> configuration is loaded.
>> >>>>
>> >>>> It would be nice to specify the various pieces in dt, then have some
>> >>>> type of drm notifier to the toplevel node when everything has been
>> >>>> probed. Doing it in the dt would allow standalone drm_bridge/drm_panel
>> >>>> drivers to be transparent as far as the device's drm driver is
>> >>>> concerned.
>> >>>>
>> >>>> Sean
>> >>>>
>> >>>>> I realise we may need to make changes to the core drm to allow this,
>> >>>>> but we should probably start to create a strategy for fixing the API
>> >>>>> issues that this throws up.
>> >>>>>
>> >>>>> Note I'm not yet advocating for dynamic addition of nodes once the
>> >>>>> device is in use, or removing them.
>> >>>
>> >>> I do wonder if we had some sort of tag in the device tree for any nodes
>> >>> involved in the display, and the core drm layer would read that list,
>> >>> and when every driver registers tick things off, and when the last one
>> >>> joins we get a callback and init the drm layer; we'd of course have the
>> >>> basic drm layer setup prior to that so we can add the objects as the
>> >>> drivers load. It might make development a bit trickier as you'd need to
>> >>> make sure someone claimed ownership of all the bits for init to
>> >>> proceed.
>> >>
>> >> Yeah, that's basically what the strawman looked like in my head.
>> >>
>> >> Instead of a property in each node, I was thinking of having separate
>> >> gfx pipe nodes that would have dt pointers to the various pieces
>> >> involved in that pipe. This would allow us to associate standalone
>> >> entities like bridges and panels with encoders in dt w/o doing it in
>> >> the drm code. I *think* this should be OK with the dt guys since it is
>> >> still describing the hardware, but I think we'd have to make sure it
>> >> wasn't drm-specific.
>> >
>> > I suppose the question is how much dynamic pipeline construction there
>> > is; even on things like radeon and i915 we have dynamic clock generator
>> > to crtc to encoder setups, so I worry about static lists per-pipe. So I
>> > still think just stating all these devices are needed for display, plus
>> > a list of valid interconnections between them, then we can have the
>> > generic code model drm crtc/encoders/connectors on that list and
>> > construct the possible_crtcs/possible_clones etc at that stage.
>>
>> I'm, without excuse, hopeless at devicetree, so there are probably some
>> violations, but something like:
>>
>> display-pipelines {
>>         required-elements = <&bridge-a &panel-a &encoder-x &encoder-y
>>                              &crtc-x &crtc-y>;
>>
>>         pipe1 {
>>                 bridge = <&bridge-a>;
>>                 encoder = <&encoder-x>;
>>                 crtc = <&crtc-y>;
>>         };
>>
>>         pipe2 {
>>                 encoder = <&encoder-x>;
>>                 crtc = <&crtc-x>;
>>         };
>>
>>         pipe3 {
>>                 panel = <&panel-a>;
>>                 encoder = <&encoder-y>;
>>                 crtc = <&crtc-y>;
>>         };
>> };
>>
>> I'm tempted to add connector to the pipe nodes as well, so it's obvious
>> which connector should be used in cases where multiple entities in the
>> pipe implement drm_connector. However, I'm not sure if that would be
>> NACKed by dt people.
>>
>> I'm also not sure if there are too many combinations for i915 and radeon
>> to make this unreasonable. I suppose those devices could just use
>> required-elements and leave the pipe nodes out.
>
> Just to put my two cents in, as one of the people involved in "the device
> tree movement", I'd say that instead of creating artificial entities such
> as display-pipelines and all of the pipeX'es, the device tree should
> represent relations between nodes.
>
> According to the generic DT bindings we already have for video-interfaces
> [1], your example connection layout would look as follows:

Hi Tomasz,

Thanks for sending this along.
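For reference, the video-interfaces binding describes each link with port and endpoint nodes that point at each other through remote-endpoint phandles. A rough sketch of how the pipe1 connection (crtc-y -> encoder-x -> bridge-a) might be written in that style; the node and label names here are purely illustrative, not Tomasz's actual example:

crtc-y {
        port {
                crtc_y_out: endpoint {
                        remote-endpoint = <&encoder_x_in>;
                };
        };
};

encoder-x {
        ports {
                #address-cells = <1>;
                #size-cells = <0>;

                port@0 {
                        reg = <0>;
                        encoder_x_in: endpoint {
                                remote-endpoint = <&crtc_y_out>;
                        };
                };

                port@1 {
                        reg = <1>;
                        encoder_x_out: endpoint {
                                remote-endpoint = <&bridge_a_in>;
                        };
                };
        };
};

bridge-a {
        port {
                bridge_a_in: endpoint {
                        remote-endpoint = <&encoder_x_out>;
                };
        };
};

The pipeline would then be discovered by walking the endpoint graph rather than through a dedicated display-pipelines container node.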
I think the general consensus is that each drm driver should be implemented as a singular driver. That is, N:1 binding to driver mapping, where there are N IP blocks. Optional devices (such as bridges, panels) probably make sense to spin off as standalone drivers.
I believe this is a huge step backwards from current kernel design standards, which prefer modularity.
With multiple IPs being part of the DRM subsystem in a SoC, it would be nice to have the possibility to compile just a subset of the support for them into the kernel and load the rest as modules (e.g. the basic LCD controller on a mobile phone compiled in, and external connectors like HDMI as modules).
Not to mention that, from a development perspective, a huge single driver would be much more difficult to test and debug than several smaller drivers, which could be developed separately.
This is the opposite of our experience, though. A series of small drivers like what's in drm/exynos can become really tedious/difficult to coordinate. If you have separate drivers, everything needs to be synchronized, but also has to account for potentially different loading order.
It seems you're only thinking about the basic case, where you only support a single resolution, no dpms, no suspend to RAM... But when you want full-fledged functionality, the issues I described become much more prevalent.
I fail to see how this is relevant here. It's fairly clear that even if a DRM driver is composed of more than a single platform driver, there's still a single point of coordination (the DRM driver). How does that have any impact on what features the driver can support? All of the features will be exposed via DRM; whether you use multiple drivers or a single monolithic one underneath is completely irrelevant.
+1
I think a single drm driver - where all sub-drivers are controlled by the dpms of the top level - is definitely what we should move to, but it's not yet clear to me that a single drm driver necessarily has to be one huge driver. Even if we use one or more sub-drivers based on the driver model, we can avoid those issues - the loading and power ordering issues.
So I'd like to ask the Google people a question: are there really any cases where the loading and power ordering issues can occur when a drm driver uses one or more sub-drivers based on the driver model?
I think it makes the code simpler, and more explicit. If you don't support dynamic module loading, what's the point in creating modules?
I think we do support dynamic module loading. And please note that Exynos drm has only one integrated module.
To avoid the init and suspend/resume ordering issues, you must register the sub-drivers from the main drm driver, and you must not use pm_ops. So why incur the extra complexity of modules when we only use the probe, and that could easily be converted to
The main reason we must register the sub-drivers is the probing order issue. And I think we can mitigate the extra complexity of modules if we have a helper that connects a crtc and an encoder/connector to each other regardless of the probing order. For this, you can refer to the link below: https://git.kernel.org/cgit/linux/kernel/git/daeinki/drm-exynos.git/commit/?h=exynos-bridge-test
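Roughly, the idea is just: each side registers with a small per-pipeline helper when it probes, and the connect callback fires only once both the crtc and the encoder/connector are present, so the registration order stops mattering. A minimal sketch in plain C (all names are hypothetical; this is not the code from the commit above):

/* Rough sketch of a probe-order-independent attach helper (hypothetical names). */
#include <stdio.h>

struct pipeline {
	const char *name;
	void *crtc;                       /* handle registered by the crtc driver */
	void *encoder_connector;          /* handle registered by the encoder/connector driver */
	void (*bind)(struct pipeline *p); /* called once, when both ends are present */
	int bound;
};

static void try_bind(struct pipeline *p)
{
	/* Bind only when both ends have registered, whatever the order was. */
	if (!p->bound && p->crtc && p->encoder_connector) {
		p->bound = 1;
		p->bind(p);
	}
}

static void register_crtc(struct pipeline *p, void *crtc)
{
	p->crtc = crtc;
	try_bind(p);
}

static void register_encoder_connector(struct pipeline *p, void *enc)
{
	p->encoder_connector = enc;
	try_bind(p);
}

static void demo_bind(struct pipeline *p)
{
	printf("%s: crtc and encoder/connector connected\n", p->name);
}

int main(void)
{
	struct pipeline lcd = { .name = "lcd", .bind = demo_bind };
	int crtc_dev, hdmi_dev;

	/* Here the encoder/connector side happens to probe first; the result
	 * is the same if the crtc side registers first instead. */
	register_encoder_connector(&lcd, &hdmi_dev);
	register_crtc(&lcd, &crtc_dev);
	return 0;
}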
First, I have just quickly implemented this feature to show you and to test. So there will definitely be some problems and ugly code.
The reason I implemented this feature is that composing a display pipeline node that has to cover all crtc and encoder/connector drivers and related IPs, such as MIPI and LVDS, would just move the complexity to the other side instead of to the sub-drivers. Actually, is there something specific we should apply right now? How long should we keep drawing the display pipeline node only in our heads?
So my opinion is: let's use a non-driver-model based crtc driver and driver-model based encoder/connector drivers. With the helper or something similar, if we don't need to care about the probing order, the DT binding of CRTC devices could be done at the top level of drm, and the DT binding of encoder/connector and related devices could be done in separate drivers. I think this way we could simplify what we want even further.
And the reason I prefer encoder/connector drivers based on the driver model is that these drivers could be based on the i2c and spi busses. I don't see how it's reasonable for an i2c- or spi-based driver to be non-driver-model based.
However, in the case of using driver-model based drivers, one thing that's not clear to me is whether we should force the drivers not to use their own pm_ops to avoid the power ordering issue.
Thanks, Inki Dae
#ifdef CONFIG_SOME_GFX_BLOCK
	init_some_gfx_block()
#endif
Sean
With the refactoring patch from Sean, I think the Exynos drm driver no longer has these issues either, even though it uses one or more sub-drivers based on the driver model. That is why it's not clear to me yet.
Thanks, Inki Dae
Thierry