So we had a session at kernel summit to discuss the driver model and DT interactions for a display pipeline. We had good attendance from a few sides, and I hope to summarise the recommendations below.
a) Device Tree bindings
We should create a top-level virtual device binding that a top-level driver can bind to, like ALSA ASoC does.
We should separate the CDF device tree model from CDF as a starting point and refine it outside of CDF, producing a set of bindings that cover the current drivers we have: exynos, imx, tegra, msm, armada, etc. This set of bindings should not be tied to CDF being merged or anything else.
Display pipelines should be modelled in the device tree, but the level of detail required for links between objects may be left up to the SoC developer, especially for tightly coupled SoCs.
Externally linked devices like bridges and panels should be explicitly linked.
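As a purely illustrative sketch of what such a top-level virtual device could look like on the driver side, here is minimal C; the "foo,display-subsystem" compatible and the "ports" phandle list are invented for the example and are not a proposed binding:

/*
 * Hypothetical DT fragment for a top-level virtual display device,
 * analogous to an ASoC sound card node:
 *
 *     display-subsystem {
 *             compatible = "foo,display-subsystem";
 *             ports = <&lcd0>, <&hdmi0>;   // pipeline blocks, by phandle
 *     };
 */
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>

static int foo_display_probe(struct platform_device *pdev)
{
        struct device_node *np = pdev->dev.of_node;
        struct device_node *port;
        int i, count;

        count = of_count_phandle_with_args(np, "ports", NULL);
        if (count < 0)
                return count;

        for (i = 0; i < count; i++) {
                port = of_parse_phandle(np, "ports", i);
                if (!port)
                        return -EINVAL;
                /* initialise the block described by 'port' here */
                of_node_put(port);
        }
        return 0;
}

static const struct of_device_id foo_display_of_match[] = {
        { .compatible = "foo,display-subsystem" },
        { /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, foo_display_of_match);

static struct platform_driver foo_display_driver = {
        .probe = foo_display_probe,
        .driver = {
                .name = "foo-display",
                .of_match_table = foo_display_of_match,
        },
};
module_platform_driver(foo_display_driver);
MODULE_LICENSE("GPL");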
b) Driver Model
The big thing here is that the device tree description we use should not dictate the driver model we use. This is the biggest thing I learned, so what does it mean?
We aren't required to write a device driver per device tree object.
We shouldn't be writing device drivers per device tree object.
For tightly-coupled SoCs where the blocks come from one vendor and are reused a lot, a top-level driver should use the DT as a configuration information source for the list of blocks it needs to initialise on the card, not as a list of separate drivers. There may be some external drivers required, and the code should deal with this, like ALSA ASoC does. A sketch of this appears below.
To share code between layers, we should refactor it into a helper library rather than a separate driver; the KMS/V4L/fbdev layers can then use the library.
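A rough sketch of what "DT as configuration data" could look like, with every name invented for the example:

#include <linux/kernel.h>
#include <linux/of.h>
#include <linux/platform_device.h>

/* Hypothetical per-block init hooks, all living inside the one top-level driver. */
static int foo_crtc_init(struct device *dev, struct device_node *np) { return 0; }
static int foo_hdmi_init(struct device *dev, struct device_node *np) { return 0; }

struct foo_block {
        const char *compatible;
        int (*init)(struct device *dev, struct device_node *np);
};

static const struct foo_block foo_blocks[] = {
        { "foo,crtc", foo_crtc_init },
        { "foo,hdmi", foo_hdmi_init },
};

/*
 * Walk the child nodes of the top-level device and initialise each matching
 * block directly: the DT is configuration data here, not a trigger for
 * binding separate drivers.
 */
static int foo_init_blocks(struct platform_device *pdev)
{
        struct device_node *child;
        int i, ret;

        for_each_available_child_of_node(pdev->dev.of_node, child) {
                for (i = 0; i < ARRAY_SIZE(foo_blocks); i++) {
                        if (!of_device_is_compatible(child, foo_blocks[i].compatible))
                                continue;
                        ret = foo_blocks[i].init(&pdev->dev, child);
                        if (ret) {
                                of_node_put(child);
                                return ret;
                        }
                }
        }
        return 0;
}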
This should allow us to move forward more clearly, especially with new drivers following these recommendations, and I think it also points the way to porting current drivers to a sane model, especially exynos and imx.
Now I say 'we' here, but I'm only going to be donating my big stick and review abilities to making this happen; still, I'm quite willing to enforce some of these rules going forward, as I think it will make life easier.
After looking at some of the ordering issues we've had with x86 GPUs (which are really just a tightly coupled SoC), I don't want separate drivers all having their own init and suspend/resume paths, as I know we'll end up having to add special vtable entry points etc. to solve random ordering issues that crop up.
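For illustration, a minimal sketch of how a single top-level driver keeps the suspend/resume ordering explicit and local; the foo_* blocks and the state struct are hypothetical:

#include <linux/device.h>
#include <linux/pm.h>

struct foo_display { /* hypothetical driver state */ };

static void foo_crtc_enable(struct foo_display *d) { }
static void foo_crtc_disable(struct foo_display *d) { }
static void foo_encoder_enable(struct foo_display *d) { }
static void foo_encoder_disable(struct foo_display *d) { }
static void foo_connector_enable(struct foo_display *d) { }
static void foo_connector_disable(struct foo_display *d) { }

static int foo_display_suspend(struct device *dev)
{
        struct foo_display *disp = dev_get_drvdata(dev);

        foo_connector_disable(disp);    /* downstream blocks first */
        foo_encoder_disable(disp);
        foo_crtc_disable(disp);         /* scanout engine last */
        return 0;
}

static int foo_display_resume(struct device *dev)
{
        struct foo_display *disp = dev_get_drvdata(dev);

        foo_crtc_enable(disp);          /* reverse order on resume */
        foo_encoder_enable(disp);
        foo_connector_enable(disp);
        return 0;
}

static SIMPLE_DEV_PM_OPS(foo_display_pm_ops,
                         foo_display_suspend, foo_display_resume);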
Dave.
On Tue, Oct 29, 2013 at 01:52:57PM +1000, Dave Airlie wrote:
So we had a session at kernel summit to discuss the driver model and DT interactions for a display pipeline. We had good attendance from a few sides, and I hope to summarise the recommendations below.
a) Device Tree bindings
We should create a top-level virtual device binding that a top-level driver can bind to, like ALSA ASoC does.
We should separate the CDF device tree model from CDF as a starting point and refine it outside of CDF, producing a set of bindings that cover the current drivers we have: exynos, imx, tegra, msm, armada, etc. This set of bindings should not be tied to CDF being merged or anything else.
Display pipelines should be modelled in the device tree, but the level of detail required for links between objects may be left up to the SoC developer, especially for tightly coupled SoCs.
Externally linked devices like bridges and panels should be explicitly linked.
According to the above, the device tree bindings for simple panels that I proposed earlier should be fine. However, there was so much controversy involved that I've decided not to make them part of my pull request this cycle. Also, they haven't been reviewed by the DT bindings maintainers yet, so according to our new rules they cannot be merged.
I think Laurent was more or less fine with them too, although he had some objections to how DSI panels were represented and wanted those to be sub-nodes of the DSI controller. I'll see if I can come up with something to address that.
We should probably aim for a common binding for things like DSI.
b) Driver Model
The big thing here is that the device tree description we use should not dictate the driver model we use. This is the biggest thing I learned, so what does it mean?
We aren't required to write a device driver per device tree object.
We shouldn't be writing device drivers per device tree object.
I may remember this wrongly, but that's the opposite of the recommendation I got back when I started to work on Tegra DRM.
For tightly-coupled SoCs where the blocks come from one vendor and are reused a lot, a top-level driver should use the DT as a configuration information source for the list of blocks it needs to initialise on the card, not as a list of separate drivers. There may be some external drivers required, and the code should deal with this, like ALSA ASoC does.
To share code between layers, we should refactor it into a helper library rather than a separate driver; the KMS/V4L/fbdev layers can then use the library.
This should allow us to move forward more clearly, especially with new drivers following these recommendations, and I think it also points the way to porting current drivers to a sane model, especially exynos and imx.
Now I say 'we' here, but I'm only going to be donating my big stick and review abilities to making this happen; still, I'm quite willing to enforce some of these rules going forward, as I think it will make life easier.
After looking at some of the ordering issues we've had with x86 GPUs (which are really just a tightly coupled SoC), I don't want separate drivers all having their own init and suspend/resume paths, as I know we'll end up having to add special vtable entry points etc. to solve random ordering issues that crop up.
Where does that leave the Tegra driver? I've spent a significant amount of time getting it to some sane state where multiple subdrivers are handled fairly nicely (in my opinion). Rewriting all of it isn't something that I look forward to at all.
Thierry
On Tue, Oct 29, 2013 at 01:52:57PM +1000, Dave Airlie wrote:
So we had a session at kernel summit to discuss the driver model and DT interactions for a display pipeline. We had good attendance from a few sides, and I hope to summarise the recommendations below.
a) Device Tree bindings
We should create a top-level virtual device binding that a top-level driver can bind to, like ALSA ASoC does.
We should separate the CDF device tree model from CDF as a starting point and refine it outside of CDF, producing a set of bindings that cover the current drivers we have: exynos, imx, tegra, msm, armada, etc. This set of bindings should not be tied to CDF being merged or anything else.
Display pipelines should be modelled in the device tree, but the level of detail required for links between objects may be left up to the SoC developer, especially for tightly coupled SoCs.
Externally linked devices like bridges and panels should be explicitly linked.
b) Driver Model
The big thing here is that the device tree description we use should not dictate the driver model we use. This is the biggest thing I learned, so what does it mean?
We aren't required to write a device driver per device tree object.
We shouldn't be writing device drivers per device tree object.
For tightly-coupled SoCs where the blocks come from one vendor and are reused a lot, a top-level driver should use the DT as a configuration information source for the list of blocks it needs to initialise on the card, not as a list of separate drivers. There may be some external drivers required, and the code should deal with this, like ALSA ASoC does.
To share code between layers, we should refactor it into a helper library rather than a separate driver; the KMS/V4L/fbdev layers can then use the library.
This should allow us to move forward more clearly, especially with new drivers following these recommendations, and I think it also points the way to porting current drivers to a sane model, especially exynos and imx.
Now I say 'we' here, but I'm only going to be donating my big stick and review abilities to making this happen; still, I'm quite willing to enforce some of these rules going forward, as I think it will make life easier.
After looking at some of the ordering issues we've had with x86 GPUs (which are really just a tightly coupled SoC), I don't want separate drivers all having their own init and suspend/resume paths, as I know we'll end up having to add special vtable entry points etc. to solve random ordering issues that crop up.
The DRM device has to be initialized/suspended/resumed as a whole, no doubt about that. If that's not the case, you indeed open the door for all kinds of ordering issues.
Still, the different components can be multiple devices; just initialize the DRM device once all components are probed, and remove it again once a component is removed. Handle suspend in the DRM device, not in the individual component drivers: the suspend in the component drivers would only be called after the DRM device is completely quiesced. Similarly, the resume in the component drivers would not reenable the components; this would instead be done in the DRM device once all components are there again.
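A minimal sketch of this synchronization scheme, assuming invented names throughout (this is not an existing kernel API):

#include <linux/mutex.h>
#include <linux/types.h>

static void foo_drm_init(void) { /* bring the DRM device up as a whole */ }
static void foo_drm_fini(void) { /* tear the DRM device down again */ }

static DEFINE_MUTEX(foo_master_lock);
static unsigned int foo_expected;       /* components named in the DT */
static unsigned int foo_present;        /* components probed so far */
static bool foo_drm_up;

/* Called from each component's probe(). */
void foo_component_added(void)
{
        mutex_lock(&foo_master_lock);
        if (++foo_present == foo_expected && !foo_drm_up) {
                foo_drm_init();
                foo_drm_up = true;
        }
        mutex_unlock(&foo_master_lock);
}

/* Called from each component's remove(). */
void foo_component_removed(void)
{
        mutex_lock(&foo_master_lock);
        foo_present--;
        if (foo_drm_up) {
                foo_drm_fini();
                foo_drm_up = false;
        }
        mutex_unlock(&foo_master_lock);
}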
This way all components could be proper (driver model) devices with proper drivers, without DRM even noticing that multiple components are involved.
Side note: we have no choice anyway. All SoCs can (sometimes must) be extended with external I2C devices. On every SoC the I2C bus master is a separate device, so in many cases we already have a multi-component device (in the driver model sense).
Sascha
On Wednesday 30 of October 2013 13:02:29 Sascha Hauer wrote:
On Tue, Oct 29, 2013 at 01:52:57PM +1000, Dave Airlie wrote:
So we had a session at kernel summit to discuss the driver model and DT interactions for a display pipeline. We had good attendance from a few sides, and I hope to summarise the recommendations below.
a) Device Tree bindings
We should create a top-level virtual device binding that a top-level driver can bind to, like ALSA ASoC does.
We should separate the CDF device tree model from CDF as a starting point and refine it outside of CDF, producing a set of bindings that cover the current drivers we have: exynos, imx, tegra, msm, armada, etc. This set of bindings should not be tied to CDF being merged or anything else.
Display pipelines should be modelled in the device tree, but the level of detail required for links between objects may be left up to the SoC developer, especially for tightly coupled SoCs.
Externally linked devices like bridges and panels should be explicitly linked.
b) Driver Model
The big thing here is that the device tree description we use should not dictate the driver model we use. This is the biggest thing I learned, so what does it mean?
We aren't required to write a device driver per device tree object.
We shouldn't be writing device drivers per device tree object.
For tightly-coupled SoCs where the blocks come from one vendor and are reused a lot, a top-level driver should use the DT as a configuration information source for the list of blocks it needs to initialise on the card, not as a list of separate drivers. There may be some external drivers required, and the code should deal with this, like ALSA ASoC does.
To share code between layers, we should refactor it into a helper library rather than a separate driver; the KMS/V4L/fbdev layers can then use the library.
This should allow us to move forward more clearly, especially with new drivers following these recommendations, and I think it also points the way to porting current drivers to a sane model, especially exynos and imx.
Now I say 'we' here, but I'm only going to be donating my big stick and review abilities to making this happen; still, I'm quite willing to enforce some of these rules going forward, as I think it will make life easier.
After looking at some of the ordering issues we've had with x86 GPUs (which are really just a tightly coupled SoC), I don't want separate drivers all having their own init and suspend/resume paths, as I know we'll end up having to add special vtable entry points etc. to solve random ordering issues that crop up.
The DRM device has to be initialized/suspended/resumed as a whole, no doubt about that. If that's not the case, you indeed open the door for all kinds of ordering issues.
Still, the different components can be multiple devices; just initialize the DRM device once all components are probed, and remove it again once a component is removed. Handle suspend in the DRM device, not in the individual component drivers: the suspend in the component drivers would only be called after the DRM device is completely quiesced. Similarly, the resume in the component drivers would not reenable the components; this would instead be done in the DRM device once all components are there again.
This way all components could be proper (driver model) devices with proper drivers, without DRM even noticing that multiple components are involved.
Side note: we have no choice anyway. All SoCs can (sometimes must) be extended with external I2C devices. On every SoC the I2C bus master is a separate device, so in many cases we already have a multi-component device (in the driver model sense).
+1
Best regards, Tomasz
After looking at some of the ordering issues we've had with x86 GPUs (which are really just a tightly coupled SoC), I don't want separate drivers all having their own init and suspend/resume paths, as I know we'll end up having to add special vtable entry points etc. to solve random ordering issues that crop up.
The DRM device has to be initialized/suspended/resumed as a whole, no doubt about that. If that's not the case, you indeed open the door for all kinds of ordering issues.
Still, the different components can be multiple devices; just initialize the DRM device once all components are probed, and remove it again once a component is removed. Handle suspend in the DRM device, not in the individual component drivers: the suspend in the component drivers would only be called after the DRM device is completely quiesced. Similarly, the resume in the component drivers would not reenable the components; this would instead be done in the DRM device once all components are there again.
But why? Why should we have separate drivers for each component of a tightly coupled SoC?
It makes no sense; having a separate driver for every block in the chip isn't an advantage, it complicates things for no benefit at all. If we don't have hotplug hardware, removing one device shouldn't be possible, and this idea that removing a sub-driver should tear down the DRM device is crazy as well.
This way all components could be proper (driver model) devices with proper drivers, without DRM even noticing that multiple components are involved.
Side note: we have no choice anyway. All SoCs can (sometimes must) be extended with external I2C devices. On every SoC the I2C bus master is a separate device, so in many cases we already have a multi-component device (in the driver model sense).
Having off-chip i2c devices be part of the driver model is fine; stuff works like that everywhere. Having each SoC block be part of the device model isn't fine unless you can really demonstrate re-use and show why having separate driver templating for each block is helpful.
I'm not willing to have overly generic sub-drivers that provide no advantage and only add lots of disadvantages, like init and suspend/resume ordering problems. I know there are going to be SoC ordering issues at init time that end up circular between two separate drivers, each deferring because it wants the other one up first. Don't dig us into that hole: i2c has a well-defined init ordering; I don't think internal SoC devices are so well defined.
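To illustrate the circular-deferral trap being described, a hypothetical sketch (all names invented):

#include <linux/errno.h>
#include <linux/platform_device.h>
#include <linux/types.h>

static bool foo_encoder_ready(void) { return false; }   /* stub */
static bool foo_crtc_ready(void) { return false; }      /* stub */

/*
 * Two sub-drivers that each defer until the other has probed will
 * bounce on -EPROBE_DEFER forever.
 */
static int foo_crtc_probe(struct platform_device *pdev)
{
        if (!foo_encoder_ready())       /* the crtc wants the encoder up first... */
                return -EPROBE_DEFER;
        return 0;
}

static int foo_encoder_probe(struct platform_device *pdev)
{
        if (!foo_crtc_ready())          /* ...while the encoder wants the crtc */
                return -EPROBE_DEFER;
        return 0;
}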
Dave.
On Fri, Nov 01, 2013 at 10:10:41AM +1000, Dave Airlie wrote:
Still, the different components can be multiple devices; just initialize the DRM device once all components are probed, and remove it again once a component is removed.
But why? Why should we have separate drivers for each component of a tightly coupled SoC?
It makes no sense; having a separate driver for every block in the chip isn't an advantage, it complicates things for no benefit at all. If we don't have hotplug hardware, removing one device shouldn't be possible, and this idea that removing a sub-driver should tear down the DRM device is crazy as well.
One case where this may be required is integration with SoC power domains where the DRM components are split between multiple domains (and it may be more idiomatic even if they aren't). If the SoC is using power domains, then it will expect to see at least one device within each domain that gets used to reference-count the activity for the domain.
This could just be a composite device per domain though.
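A sketch of why this matters: runtime PM reference counting is per-device, so a device per domain lets each domain be powered exactly while its blocks are in use. Both device pointers here are hypothetical stand-ins for per-domain (possibly composite) devices:

#include <linux/pm_runtime.h>

static int foo_pipeline_enable(struct device *lcd_dev, struct device *hdmi_dev)
{
        int ret;

        ret = pm_runtime_get_sync(lcd_dev);     /* powers up the LCD domain */
        if (ret < 0) {
                pm_runtime_put_noidle(lcd_dev); /* get_sync bumps the count even on failure */
                return ret;
        }

        ret = pm_runtime_get_sync(hdmi_dev);    /* powers up the HDMI domain */
        if (ret < 0) {
                pm_runtime_put_noidle(hdmi_dev);
                pm_runtime_put(lcd_dev);
                return ret;
        }
        return 0;
}

static void foo_pipeline_disable(struct device *lcd_dev, struct device *hdmi_dev)
{
        /* Dropping the last reference lets the domain power down again. */
        pm_runtime_put(hdmi_dev);
        pm_runtime_put(lcd_dev);
}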
On Fri, Nov 01, 2013 at 10:10:41AM +1000, Dave Airlie wrote:
After looking at some of the ordering issues we've had with x86 GPUs (which are really just a tightly coupled SoC), I don't want separate drivers all having their own init and suspend/resume paths, as I know we'll end up having to add special vtable entry points etc. to solve random ordering issues that crop up.
The DRM device has to be initialized/suspended/resumed as a whole, no doubt about that. If that's not the case, you indeed open the door for all kinds of ordering issues.
Still, the different components can be multiple devices; just initialize the DRM device once all components are probed, and remove it again once a component is removed. Handle suspend in the DRM device, not in the individual component drivers: the suspend in the component drivers would only be called after the DRM device is completely quiesced. Similarly, the resume in the component drivers would not reenable the components; this would instead be done in the DRM device once all components are there again.
But why? Why should we have separate drivers for each component of a tightly coupled SoC?
It makes no sense; having a separate driver for every block in the chip isn't an advantage, it complicates things for no benefit at all. If we don't have hotplug hardware, removing one device shouldn't be possible, and this idea that removing a sub-driver should tear down the DRM device is crazy as well.
In my opinion, splitting things into separate drivers makes them less complicated. For instance, it makes it very easy to manage the various resources used by each driver (registers, interrupts, ...).
The only added complexity lies in the fact that we need some code to synchronize the DRM device setup and teardown (and suspend and resume for that matter). It's been discussed elsewhere that most SoCs are very similar in their requirements, so I think we should be able to come up with a piece of code that can be shared between drivers. Perhaps it would even be possible to share that code between subsystems, since ALSA and V4L2 may have similar requirements.
That's effectively not very different from what you're proposing. As far as I can tell the only difference would be that this works in sort of a "bottom-up" fashion, whereas your proposal would be "top-down".
Thierry
2013/11/1 Dave Airlie <airlied@gmail.com>:
After looking at some of the ordering issues we've had with x86 GPUs (which are really just a tightly coupled SoC), I don't want separate drivers all having their own init and suspend/resume paths, as I know we'll end up having to add special vtable entry points etc. to solve random ordering issues that crop up.
The DRM device has to be initialized/suspended/resumed as a whole, no doubt about that. If that's not the case, you indeed open the door for all kinds of ordering issues.
Still, the different components can be multiple devices; just initialize the DRM device once all components are probed, and remove it again once a component is removed. Handle suspend in the DRM device, not in the individual component drivers: the suspend in the component drivers would only be called after the DRM device is completely quiesced. Similarly, the resume in the component drivers would not reenable the components; this would instead be done in the DRM device once all components are there again.
But why? Why should we have separate drivers for each component of a tightly coupled SoC?
It makes no sense; having a separate driver for every block in the chip isn't an advantage, it complicates things for no benefit at all. If we don't have hotplug hardware, removing one device shouldn't be possible, and this idea that removing a sub-driver should tear down the DRM device is crazy as well.
This way all components could be proper (driver model) devices with proper drivers, without DRM even noticing that multiple components are involved.
Side note: we have no choice anyway. All SoCs can (sometimes must) be extended with external I2C devices. On every SoC the I2C bus master is a separate device, so in many cases we already have a multi-component device (in the driver model sense).
Having off-chip i2c devices be part of the driver model is fine; stuff works like that everywhere. Having each SoC block be part of the device model isn't fine unless you can really demonstrate re-use and show why having separate driver templating for each block is helpful.
I'm not willing to have overly generic sub-drivers that provide no advantage and only add lots of disadvantages, like init and suspend/resume ordering problems. I know there are going to be SoC ordering issues at init time that end up circular between two separate drivers, each deferring because it wants the other one up first. Don't dig us into that hole: i2c has a well-defined init ordering; I don't think internal SoC devices are so well defined.
It seems that the main reasons we should go to a single DRM driver are the probe ordering issue of the sub-drivers and their power ordering issue.
First, I'd like to ask questions of myself and other people: do we really need to define a display pipeline node? Isn't there any good way to use only the existing device nodes?
Please suppose the following:
1. A CRTC and an encoder/connector can be created when the KMS driver and the display driver are probed, regardless of the ordering.
2. A CRTC and a connector are connected when the last one is created. This means that a framebuffer will be created and the framebuffer's image will be transferred to the display via the KMS driver.
And let's see how the hardware pipelines can be linked to each other:

1. Top level
   CRTC -------- Encoder ---------- Connector

2. CRTC
   Display controller or HDMI
   Display controller or HDMI ---------- Image Enhancement chips or other

3. Encoder/Connector
   LCD Panel
   Display bus (mipi, dp) ------- LCD panel or TV
   Display bus (mipi, dp) ------- bridge device (lvds) ------- LCD panel or TV
As you can see above, if a CRTC and a connector can be connected to each other regardless of the probe order (this is actually possible, and we are already using this approach in an internal project), then I think it's enough to consider the display pipeline node for the CRTC and for the encoder/connector individually. DT binding of the CRTC, including image enhancement chips, can be done at the top level of the DRM driver, and DT binding of the encoder/connector, including bridge devices and panels, can be done at probe time of the separate encoder/connector driver. Of course, to avoid the power ordering issue, the individual encoder/connector drivers shouldn't have their own suspend/resume interfaces; their PM should be handled by the dpms callback at the top level of the DRM driver. A sketch of this order-independent connection follows.
This way, I think we could simplify how the display pipeline nodes are composed in the device tree, and we could also have separate device drivers in the driver model.
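As a sketch of the order-independent connection described above (all names invented):

#include <linux/mutex.h>
#include <linux/types.h>

static void foo_attach_crtc_to_connector(void) { /* create fb, start output */ }

/*
 * Each side announces itself when probed; the pipeline is completed by
 * whichever side arrives last, so probe order doesn't matter.
 */
static DEFINE_MUTEX(foo_pipe_lock);
static bool foo_have_crtc, foo_have_connector;

static void foo_try_complete_pipeline(void)
{
        if (foo_have_crtc && foo_have_connector)
                foo_attach_crtc_to_connector();
}

void foo_crtc_registered(void)
{
        mutex_lock(&foo_pipe_lock);
        foo_have_crtc = true;
        foo_try_complete_pipeline();
        mutex_unlock(&foo_pipe_lock);
}

void foo_connector_registered(void)
{
        mutex_lock(&foo_pipe_lock);
        foo_have_connector = true;
        foo_try_complete_pipeline();
        mutex_unlock(&foo_pipe_lock);
}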
Thanks, Inki Dae
Hi,
A bit old thread, but I noticed this only now.
On 01/11/13 02:10, Dave Airlie wrote:
But why? Why should we have separate drivers for each component of a tightly coupled SoC?
It makes no sense; having a separate driver for every block in the chip isn't an advantage, it complicates things for no benefit at all. If we don't have hotplug hardware, removing one device shouldn't be possible, and this idea that removing a sub-driver should tear down the DRM device is crazy as well.
It depends. The SoC's components may be independent, as Mark noted, and having a separate device/driver may even be more or less required by the arch code. I think this is the case on OMAP.
In any case, I don't see any reason to require DRM developers to do it in one way or another. One big driver may work best on one SoC, multiple small drivers may work best on the other.
The thing is, we need to support multiple devices/drivers anyway, in cases where we have, say, an external i2c-controlled encoder, or panels that need explicit configuration.
So we can't escape the init-time problems just by requiring a single big DRM driver. And if we have a solution for the panels and external encoders, I don't see why it would be any different for SoC-internal components.
The video pipeline is often composed of multiple video components, and whether they reside on the SoC, or on the board, it doesn't really make any difference.
Tomi