Hi,
I'm looking into implementing devicetree support in armada_drm and would like to better understand the best practice here.
Adding DT support for a DRM driver seems to be complicated by the fact that DRM is not "hotpluggable" - it is not possible for the drm_device to be initialised without an output, with the output connector/encoder appearing at some later moment. Instead, the connectors/encoders must be fully loaded before the drm_driver load function returns. This means that we cannot drive the individual display components through individual, separate modules - we need a way to control their load order.
Looking at existing DRM drivers:
tilcdc_drm takes the approach that each component in the display subsystem (panel, framebuffer, encoder, display controller) is a separate DT node with no DT-level linkage. It implements just a single module, and that module carefully initialises things in this order: 1. Register platform drivers for the output components. 2. Register the main drm_driver.
When the output component's platform drivers get loaded, probes for such drivers immediately run as they match things in the device tree. At this point, there is no drm_device available for the outputs to bind to, so instead, references to these platform devices just get saved in some global structures.
Later, when the drm_driver gets registered and loaded, the global structures are consulted to find all of the output devices at the right moment.
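The pattern can be sketched in a few lines of C (an illustrative model only - the names and structures here are made up, not the actual tilcdc code):

```c
#include <assert.h>
#include <stdio.h>

/* Illustrative model of the tilcdc-style ordering: output drivers
 * probe first and stash their devices in global storage, because no
 * drm_device exists yet for them to bind to; the drm_driver load
 * callback, which runs last, consults that storage. */

struct output_dev { const char *name; };

/* Global storage filled in by the output drivers' probe functions. */
static struct output_dev *registered_outputs[4];
static int num_outputs;

/* Stands in for an output platform driver's probe callback. */
static void output_probe(struct output_dev *dev)
{
    registered_outputs[num_outputs++] = dev;
}

/* Stands in for the drm_driver load callback: binds every output
 * recorded so far and returns how many it found. */
static int drm_load(void)
{
    for (int i = 0; i < num_outputs; i++)
        printf("binding output: %s\n", registered_outputs[i]->name);
    return num_outputs;
}
```

The fragility is visible even in the toy version: correctness depends entirely on every `output_probe()` having run before `drm_load()` is called.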
exynos seems to take the same approach. Components are separate in the device tree, and each component is implemented as a platform driver or i2c driver. However, all the drivers are built together in the same module, and the module_init sequence is careful to initialise all of the output component drivers before loading the DRM driver. The output component drivers store their findings in global structures.
Russell King suggested an alternative design for armada_drm. If all display components are represented within the same "display" super-node, we can examine the DT during drm_device initialisation and initialise the appropriate output components. In this case, the output components would not be registered as platform drivers.
The end result would be something similar to exynos/tilcdc (i.e. drm_driver figuring out its output in the appropriate moment), however the hackyness of using global storage to store output devices before drm_driver is ready is avoided. And there is the obvious difference in devicetree layout, which would look something like:
display {
	compatible = "marvell,armada-510-display";
	clocks = <&ext_clk0>, <&lcd_pll_clk>;

	lcd0 {
		compatible = "marvell,armada-510-lcd";
		reg = <0xf1820000 0x1000>;
		interrupts = <47>;
	};

	lcd1 {
		compatible = "marvell,armada-510-lcd";
		reg = <0xf1810000 0x1000>;
		interrupts = <46>;
	};

	dcon {
		compatible = "marvell,armada-510-dcon";
		reg = <...>;
	};
};
Any thoughts/comments?
Thanks Daniel
On Tue, 2 Jul 2013 11:43:59 -0600 Daniel Drake dsd@laptop.org wrote:
Hi,
Hi Daniel,
I'm looking into implementing devicetree support in armada_drm and would like to better understand the best practice here.
Adding DT support for a DRM driver seems to be complicated by the fact that DRM is not "hotpluggable" - it is not possible for the drm_device to be initialised without an output, with the output connector/encoder appearing at some later moment. Instead, the connectors/encoders must be fully loaded before the drm_driver load function returns. This means that we cannot drive the individual display components through individual, separate modules - we need a way to control their load order.
Looking at existing DRM drivers:
[snip]
It seems that you did not look at the NVIDIA Tegra driver (I got its general concept for my own driver, but I used a simple atomic counter):
- at probe time, the main driver (drivers/gpu/host1x/drm/drm.c) scans the DT and finds its external modules. These are put in a "clients" list.
- when loaded, the other modules register themselves with the main driver, which checks whether each module is in the "clients" list; if so, the module is moved from the "clients" list to an "active" list.
- when the "clients" list is empty, all modules have started, and so the main driver starts the drm stuff.
The active list is kept for module unloading.
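The two-list bookkeeping described above can be modelled like this (a simplified illustration; the real host1x driver uses kernel list_head lists and proper locking, and these names are invented):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy model of the Tegra host1x client/active scheme. */

struct client {
    const char *name;
    struct client *next;
};

static struct client *clients; /* expected sub-devices found in DT */
static struct client *active;  /* sub-drivers that have registered */
static int drm_started;

/* Probe time: the main driver records every sub-device it expects. */
static void add_expected(struct client *c)
{
    c->next = clients;
    clients = c;
}

/* A sub-driver registers itself: move it from "clients" to "active".
 * Once "clients" drains, everything is present and DRM can start. */
static void client_register(const char *name)
{
    for (struct client **p = &clients; *p; p = &(*p)->next) {
        if (strcmp((*p)->name, name) == 0) {
            struct client *c = *p;
            *p = c->next;      /* unlink from the clients list */
            c->next = active;  /* push onto the active list */
            active = c;
            break;
        }
    }
    if (!clients)
        drm_started = 1;
}
```

The "active" list then gives the main driver something to walk at module-unload time, as described above.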
Russell King suggested an alternative design for armada_drm. If all display components are represented within the same "display" super-node, we can examine the DT during drm_device initialisation and initialise the appropriate output components. In this case, the output components would not be registered as platform drivers.
The end result would be something similar to exynos/tilcdc (i.e. drm_driver figuring out its output in the appropriate moment), however the hackyness of using global storage to store output devices before drm_driver is ready is avoided. And there is the obvious difference in devicetree layout, which would look something like:
display {
	compatible = "marvell,armada-510-display";
	clocks = <&ext_clk0>, <&lcd_pll_clk>;

	lcd0 {
		compatible = "marvell,armada-510-lcd";
		reg = <0xf1820000 0x1000>;
		interrupts = <47>;
	};

	lcd1 {
		compatible = "marvell,armada-510-lcd";
		reg = <0xf1810000 0x1000>;
		interrupts = <46>;
	};

	dcon {
		compatible = "marvell,armada-510-dcon";
		reg = <...>;
	};
};
Putting "phandle"s in the 'display' seems more flexible (I did not do so because I knew the hardware - 2 LCDs and the dcon/ire).
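A phandle-based layout might look something like this (a purely hypothetical sketch - the property names here are invented, not an existing binding):

```dts
display: display-subsystem {
	compatible = "marvell,armada-510-display";
	/* hypothetical: reference the components by phandle
	 * instead of nesting them under a super-node */
	crtcs = <&lcd0>, <&lcd1>;
	dcon = <&dcon>;
};
```

The lcd0/lcd1/dcon nodes would then sit at their natural positions in the tree, each with its own reg and interrupts properties.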
But, anyway, this does not settle exactly when the modules get loaded at startup time.
On Tue, Jul 02, 2013 at 08:42:55PM +0200, Jean-Francois Moine wrote:
It seems that you did not look at the NVIDIA Tegra driver (I got its general concept for my own driver, but I used a simple atomic counter):
at probe time, the main driver (drivers/gpu/host1x/drm/drm.c) scans the DT and finds its external modules. These are put in a "clients" list.
when loaded, the other modules register themselves with the main driver, which checks whether each module is in the "clients" list; if so, the module is moved from the "clients" list to an "active" list.
when the "clients" list is empty, all modules have started, and so the main driver starts the drm stuff.
The active list is kept for module unloading.
Please tell me how this works with the two LCD controllers if you wish to drive them as entirely separate devices. Given that the above requires the use of global data in the driver, how do you distinguish between the two?
Putting "phandle"s in the 'display' seems more flexible (I did not do so because I knew the hardware - 2 LCDs and the dcon/ire).
Except you haven't looked at the bigger picture - the Armada 510 is unusual in that it has two LCD controllers and the DCON. All of the other SoCs using this IP block that I've been able to research have only one LCD controller and no DCON. I don't think they even have an IRE (image rotation engine) either.
Neither have you considered the case where you may wish to keep the two LCD controllers entirely separate (eg, you want X to drive one but something else on the other.) X drives the DRM device as a whole, including all CRTCs which make up that device - with them combined into one DRM device, you can't ask X to leave one controller alone because you're doing something else with it. (This is just the simple extension of the common case of a single LCD controller, so it's really nothing special.)
So, the unusual case _is_ the Armada 510 with its two LCD controllers and DCON which we _could_ work out some way of wrapping up into one DRM device, or we could just ignore the special case, ignore the DCON and just keep the two LCD CRTCs as two separate and independent DRM devices.
I'm actually starting to come towards the conclusion that we should go for the easiest solution, which is the one I just mentioned, and forget trying to combine these devices into one super DRM driver.
On 07/02/2013 09:19 PM, Russell King wrote:
On Tue, Jul 02, 2013 at 08:42:55PM +0200, Jean-Francois Moine wrote:
It seems that you did not look at the NVIDIA Tegra driver (I got its general concept for my own driver, but I used a simple atomic counter):
at probe time, the main driver (drivers/gpu/host1x/drm/drm.c) scans the DT and finds its external modules. These are put in a "clients" list.
when loaded, the other modules register themselves with the main driver, which checks whether each module is in the "clients" list; if so, the module is moved from the "clients" list to an "active" list.
when the "clients" list is empty, all modules have started, and so the main driver starts the drm stuff.
The active list is kept for module unloading.
Please tell me how this works with the two LCD controllers if you wish to drive them as entirely separate devices. Given that the above requires the use of global data in the driver, how do you distinguish between the two?
Putting "phandle"s in the 'display' seems more flexible (I did not do so because I knew the hardware - 2 LCDs and the dcon/ire).
Except you haven't looked at the bigger picture - the Armada 510 is unusual in that it has two LCD controllers and the DCON. All of the other SoCs using this IP block that I've been able to research have only one LCD controller and no DCON. I don't think they even have an IRE (image rotation engine) either.
Neither have you considered the case where you may wish to keep the two LCD controllers entirely separate (eg, you want X to drive one but something else on the other.) X drives the DRM device as a whole, including all CRTCs which make up that device - with them combined into one DRM device, you can't ask X to leave one controller alone because you're doing something else with it. (This is just the simple extension of the common case of a single LCD controller, so it's really nothing special.)
So, the unusual case _is_ the Armada 510 with its two LCD controllers and DCON which we _could_ work out some way of wrapping up into one DRM device, or we could just ignore the special case, ignore the DCON and just keep the two LCD CRTCs as two separate and independent DRM devices.
I'm actually starting to come towards the conclusion that we should go for the easiest solution, which is the one I just mentioned, and forget trying to combine these devices into one super DRM driver.
I am against a super node which contains lcd and dcon/ire nodes. You can enable those devices on a per board basis. We add them to dove.dtsi but disable them by default (status = "disabled").
The DRM driver itself should get a video-card node outside of soc/internal-regs where you can put e.g. video memory hole (or video mem size if it will be taken from RAM later)
About the unusual case, I guess we should try to get both lcd controllers into one DRM driver. Then support mirror or screen extension X already provides. For those applications where you want X on one lcd and some other totally different video stream - wait for someone to come up with a request or proposal.
Sebastian
On Tue, Jul 02, 2013 at 09:57:32PM +0200, Sebastian Hesselbarth wrote:
I am against a super node which contains lcd and dcon/ire nodes. You can enable those devices on a per board basis. We add them to dove.dtsi but disable them by default (status = "disabled").
The DRM driver itself should get a video-card node outside of soc/internal-regs where you can put e.g. video memory hole (or video mem size if it will be taken from RAM later)
About the unusual case, I guess we should try to get both lcd controllers into one DRM driver. Then support mirror or screen extension X already provides. For those applications where you want X on one lcd and some other totally different video stream - wait for someone to come up with a request or proposal.
Well, all I can say then is that the onus is on those who want to treat the components as separate devices to come up with some foolproof way to solve this problem which doesn't involve making assumptions about the way that devices are probed and doesn't end up creating artificial restrictions on how the devices can be used - and doesn't end up burdening the common case with lots of useless complexity that they don't need.
It's _that_ case which needs to come up with a proposal about how to handle it because you _can't_ handle it at the moment in any sane manner which meets the criteria I've set out above, and at the moment the best proposal by far to resolve that is the "super node" approach.
There is _no_ way in the device model to combine several individual devices together into one logical device safely when the subsystem requires that there be a definite point where everything is known. That applies even more so with -EPROBE_DEFER. With the presence of such a thing, there is now no logical point where any code can say definitively that the system has technically finished booting and all resources are known.
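For readers unfamiliar with it, -EPROBE_DEFER lets a probe function bail out and be retried after any later successful probe - which is exactly why no moment can be declared "final" (a toy model; the real retry machinery lives in the driver core):

```c
#include <assert.h>

#define EPROBE_DEFER 517 /* same value the kernel uses */

/* Toy model of deferred probing: a probe that can't find a resource
 * it depends on returns -EPROBE_DEFER and is retried later - possibly
 * indefinitely, so "everything has probed" is never a known state. */

static int panel_ready;

static int encoder_probe(void)
{
    if (!panel_ready)
        return -EPROBE_DEFER; /* resource missing; try again later */
    return 0;                 /* bound successfully */
}
```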
That's the problem - if you don't do the super-node approach, you end up with lots of individual devices which you have to figure out some way of combining, and coping with missing ones which might not be available in the order you want them to be, etc.
That's the advantage of the "super node" approach - it's a container to tell you what's required in order to complete the creation of the logical device, and you can parse the sub-nodes to locate the information you need.
An alternative as I see it is that DRM - and not only DRM but also the DRM API and Xorg - would need to evolve hotplug support for the various parts of the display subsystem. Do we have enough people with sufficient knowledge and willingness to be able to make all that happen? I don't think we do, and I don't see that there's any funding out there to make such a project happen, which would make it a volunteer/spare time effort.
On Tue, Jul 02, 2013 at 09:25:48PM +0100, Russell King wrote:
On Tue, Jul 02, 2013 at 09:57:32PM +0200, Sebastian Hesselbarth wrote:
I am against a super node which contains lcd and dcon/ire nodes. You can enable those devices on a per board basis. We add them to dove.dtsi but disable them by default (status = "disabled").
The DRM driver itself should get a video-card node outside of soc/internal-regs where you can put e.g. video memory hole (or video mem size if it will be taken from RAM later)
About the unusual case, I guess we should try to get both lcd controllers into one DRM driver. Then support mirror or screen extension X already provides. For those applications where you want X on one lcd and some other totally different video stream - wait for someone to come up with a request or proposal.
Well, all I can say then is that the onus is on those who want to treat the components as separate devices to come up with some foolproof way to solve this problem which doesn't involve making assumptions about the way that devices are probed and doesn't end up creating artificial restrictions on how the devices can be used - and doesn't end up burdening the common case with lots of useless complexity that they don't need.
It's _that_ case which needs to come up with a proposal about how to handle it because you _can't_ handle it at the moment in any sane manner which meets the criteria I've set out above, and at the moment the best proposal by far to resolve that is the "super node" approach.
There is _no_ way in the device model to combine several individual devices together into one logical device safely when the subsystem requires that there be a definite point where everything is known. That applies even more so with -EPROBE_DEFER. With the presence of such a thing, there is now no logical point where any code can say definitively that the system has technically finished booting and all resources are known.
That's the problem - if you don't do the super-node approach, you end up with lots of individual devices which you have to figure out some way of combining, and coping with missing ones which might not be available in the order you want them to be, etc.
That's the advantage of the "super node" approach - it's a container to tell you what's required in order to complete the creation of the logical device, and you can parse the sub-nodes to locate the information you need.
I think such an approach would lead to drm drivers which all parse their "super nodes" themselves, and driver authors would become very creative about how such a node should look.
Also this gets messy with i2c devices which are normally registered under their i2c bus masters. With the super node approach these would have to live under the super node, maybe with a phandle to the i2c bus master. This again probably leads to very SoC specific solutions. It also doesn't solve the problem that the i2c bus master needs to be registered by the time the DRM driver probes.
On i.MX the IPU unit not only handles the display path but also the capture path. v4l2 begins to evolve an OF model in which each (sub)device has its natural position in the devicetree; the devices are then connected with phandles. I'm not sure how good this will work together with a super node approach.
An alternative as I see it is that DRM - and not only DRM but also the DRM API and Xorg - would need to evolve hotplug support for the various parts of the display subsystem. Do we have enough people with sufficient knowledge and willingness to be able to make all that happen? I don't think we do, and I don't see that there's any funding out there to make such a project happen, which would make it a volunteer/spare time effort.
+1 for this solution, even if this means more work to get it off the ground.
Do we really need full hotplug support in the DRM API and Xorg? I mean it would really be nice if Xorg detected a newly registered device, but as a start it should be sufficient when Xorg detects what's there when it starts, no?
Sascha
On Wed, Jul 3, 2013 at 7:50 AM, Sascha Hauer s.hauer@pengutronix.de wrote:
On Tue, Jul 02, 2013 at 09:25:48PM +0100, Russell King wrote:
On Tue, Jul 02, 2013 at 09:57:32PM +0200, Sebastian Hesselbarth wrote:
I am against a super node which contains lcd and dcon/ire nodes. You can enable those devices on a per board basis. We add them to dove.dtsi but disable them by default (status = "disabled").
The DRM driver itself should get a video-card node outside of soc/internal-regs where you can put e.g. video memory hole (or video mem size if it will be taken from RAM later)
About the unusual case, I guess we should try to get both lcd controllers into one DRM driver. Then support mirror or screen extension X already provides. For those applications where you want X on one lcd and some other totally different video stream - wait for someone to come up with a request or proposal.
Well, all I can say then is that the onus is on those who want to treat the components as separate devices to come up with some foolproof way to solve this problem which doesn't involve making assumptions about the way that devices are probed and doesn't end up creating artificial restrictions on how the devices can be used - and doesn't end up burdening the common case with lots of useless complexity that they don't need.
It's _that_ case which needs to come up with a proposal about how to handle it because you _can't_ handle it at the moment in any sane manner which meets the criteria I've set out above, and at the moment the best proposal by far to resolve that is the "super node" approach.
There is _no_ way in the device model to combine several individual devices together into one logical device safely when the subsystem requires that there be a definite point where everything is known. That applies even more so with -EPROBE_DEFER. With the presence of such a thing, there is now no logical point where any code can say definitively that the system has technically finished booting and all resources are known.
That's the problem - if you don't do the super-node approach, you end up with lots of individual devices which you have to figure out some way of combining, and coping with missing ones which might not be available in the order you want them to be, etc.
That's the advantage of the "super node" approach - it's a container to tell you what's required in order to complete the creation of the logical device, and you can parse the sub-nodes to locate the information you need.
I think such an approach would lead to drm drivers which all parse their "super nodes" themselves, and driver authors would become very creative about how such a node should look.
Also this gets messy with i2c devices which are normally registered under their i2c bus masters. With the super node approach these would have to live under the super node, maybe with a phandle to the i2c bus master. This again probably leads to very SoC specific solutions. It also doesn't solve the problem that the i2c bus master needs to be registered by the time the DRM driver probes.
On i.MX the IPU unit not only handles the display path but also the capture path. v4l2 begins to evolve an OF model in which each (sub)device has its natural position in the devicetree; the devices are then connected with phandles. I'm not sure how good this will work together with a super node approach.
An alternative as I see it is that DRM - and not only DRM but also the DRM API and Xorg - would need to evolve hotplug support for the various parts of the display subsystem. Do we have enough people with sufficient knowledge and willingness to be able to make all that happen? I don't think we do, and I don't see that there's any funding out there to make such a project happen, which would make it a volunteer/spare time effort.
+1 for this solution, even if this means more work to get it off the ground.
Do we really need full hotplug support in the DRM API and Xorg? I mean it would really be nice if Xorg detected a newly registered device, but as a start it should be sufficient when Xorg detects what's there when it starts, no?
Since fbdev and fbcon sit on top of drm to provide the console currently, I'd also expect some fun with them. How do I get a console if I have no outputs at boot, but I have crtcs? Do I just wait around until an output appears?
There are a number of issues with hotplugging encoders and connectors at runtime, when really the SoC/board designer knows what it provides and should be able to tell the driver in some fashion.
The main problems when I played with hot-adding eDP on Intel last time were that we have grouping of crtc/encoder/connectors for future multi-seat use and these groups need to be updated, and I think the other issue was updating the possible_crtcs/possible_clones stuff. In theory sending X a uevent will make it reload the list, and it mostly deals with device hotplug since 1.14, when I added the USB hotplug support.
I'm not saying this is a bad idea, but it really seems pointless where the hardware is pretty much hardcoded; surely DT can represent that and let the driver control the bring-up ordering.
Have you also considered how suspend/resume works in such a scheme, where every driver is independent? The ChromeOS guys have bitched before about the exynos driver, which has lots of sub-drivers; how do you control the s/r ordering in a crazy system like that? I'd prefer a central driver, otherwise there are too many moving parts.
Dave.
On Wed, Jul 03, 2013 at 08:02:05AM +1000, Dave Airlie wrote:
Have you also considered how suspend/resume works in such a scheme, where every driver is independent? The ChromeOS guys have bitched before about the exynos driver, which has lots of sub-drivers; how do you control the s/r ordering in a crazy system like that? I'd prefer a central driver, otherwise there are too many moving parts.
From earlier in the evolution of Armada DRM, that has also been my preferred idea - though probably not quite how people think.
My idea was to have a separate "driver" assemble all the constituent parts, and then register the "armada-drm" platform device, providing via platform resources and/or platform data all the necessary information (maybe not even bothering to decode the OF nodes, but just providing a collection of nodes for each constituent part.)
Such a thing could be turned into a generic solution for all the multi-part drivers. If we use Sebastian's idea of using phandles (it seems there's a precedent for it in the direction v4l2 is going to solve a similar problem), then we likely have a standard way of describing componentized display setups in DT.
On Tue, Jul 02, 2013 at 11:14:45PM +0100, Russell King wrote:
On Wed, Jul 03, 2013 at 08:02:05AM +1000, Dave Airlie wrote:
Have you also considered how suspend/resume works in such a scheme, where every driver is independent? The ChromeOS guys have bitched before about the exynos driver, which has lots of sub-drivers; how do you control the s/r ordering in a crazy system like that? I'd prefer a central driver, otherwise there are too many moving parts.
From earlier in the evolution of Armada DRM, that has also been my preferred idea - though probably not quite how people think.
My idea was to have a separate "driver" assemble all the constituent parts, and then register the "armada-drm" platform device, providing via platform resources and/or platform data all the necessary information (maybe not even bothering to decode the OF nodes, but just providing a collection of nodes for each constituent part.)
This sounds similar to what ASoC does. There, a sound device is a device node which only has phandles to the various components, which are still registered by the regular device model.
What I'm currently missing in DRM is a place where I can register the various components (analog to snd_soc_register_codec / snd_soc_register_component) until some upper layer DRM driver collects the pieces and registers a DRM device (as said, no need for real hotplug).
If we had this component, there would be no need for i2c encoder helpers which insist on registering their own i2c devices instead of using the devices which are found in the devicetree.
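Such a registry might look roughly like the following (purely hypothetical - neither these function names nor this API exist in DRM; it is merely analogous to the snd_soc_register_codec calls mentioned above):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of a component registry: encoder/connector
 * drivers register themselves, and the master DRM driver later
 * collects the pieces by name when it assembles the drm_device. */

#define MAX_COMPONENTS 8

struct drm_component {
    const char *name;
    void *ops; /* would be the encoder/connector ops in real code */
};

static struct drm_component registry[MAX_COMPONENTS];
static int nregistered;

/* Called from each component driver's probe. */
static int drm_register_component(const char *name, void *ops)
{
    if (nregistered == MAX_COMPONENTS)
        return -1;
    registry[nregistered].name = name;
    registry[nregistered].ops = ops;
    nregistered++;
    return 0;
}

/* Called by the master driver; a NULL result is where real code
 * could return -EPROBE_DEFER and wait for the component. */
static struct drm_component *drm_find_component(const char *name)
{
    for (int i = 0; i < nregistered; i++)
        if (strcmp(registry[i].name, name) == 0)
            return &registry[i];
    return NULL;
}
```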
Such a thing could be turned into a generic solution for all the multi-part drivers. If we use Sebastian's idea of using phandles (it seems there's a precedent for it in the direction v4l2 is going to solve a similar problem), then we likely have a standard way of describing componentized display setups in DT.
What the v4l2 guys are currently doing is definitely worth looking at before we come up with a different approach for DRM. v4l2 has the same problems, it would be a shame if we come up with a totally different solution.
Sascha
On Tue, Jul 2, 2013 at 3:02 PM, Dave Airlie airlied@gmail.com wrote:
On Wed, Jul 3, 2013 at 7:50 AM, Sascha Hauer s.hauer@pengutronix.de wrote:
On Tue, Jul 02, 2013 at 09:25:48PM +0100, Russell King wrote:
On Tue, Jul 02, 2013 at 09:57:32PM +0200, Sebastian Hesselbarth wrote:
I am against a super node which contains lcd and dcon/ire nodes. You can enable those devices on a per board basis. We add them to dove.dtsi but disable them by default (status = "disabled").
The DRM driver itself should get a video-card node outside of soc/internal-regs where you can put e.g. video memory hole (or video mem size if it will be taken from RAM later)
About the unusual case, I guess we should try to get both lcd controllers into one DRM driver. Then support mirror or screen extension X already provides. For those applications where you want X on one lcd and some other totally different video stream - wait for someone to come up with a request or proposal.
Well, all I can say then is that the onus is on those who want to treat the components as separate devices to come up with some foolproof way to solve this problem which doesn't involve making assumptions about the way that devices are probed and doesn't end up creating artificial restrictions on how the devices can be used - and doesn't end up burdening the common case with lots of useless complexity that they don't need.
It's _that_ case which needs to come up with a proposal about how to handle it because you _can't_ handle it at the moment in any sane manner which meets the criteria I've set out above, and at the moment the best proposal by far to resolve that is the "super node" approach.
There is _no_ way in the device model to combine several individual devices together into one logical device safely when the subsystem requires that there be a definite point where everything is known. That applies even more so with -EPROBE_DEFER. With the presence of such a thing, there is now no logical point where any code can say definitively that the system has technically finished booting and all resources are known.
That's the problem - if you don't do the super-node approach, you end up with lots of individual devices which you have to figure out some way of combining, and coping with missing ones which might not be available in the order you want them to be, etc.
That's the advantage of the "super node" approach - it's a container to tell you what's required in order to complete the creation of the logical device, and you can parse the sub-nodes to locate the information you need.
I think such an approach would lead to drm drivers which all parse their "super nodes" themselves, and driver authors would become very creative about how such a node should look.
Also this gets messy with i2c devices which are normally registered under their i2c bus masters. With the super node approach these would have to live under the super node, maybe with a phandle to the i2c bus master. This again probably leads to very SoC specific solutions. It also doesn't solve the problem that the i2c bus master needs to be registered by the time the DRM driver probes.
On i.MX the IPU unit not only handles the display path but also the capture path. v4l2 begins to evolve an OF model in which each (sub)device has its natural position in the devicetree; the devices are then connected with phandles. I'm not sure how good this will work together with a super node approach.
An alternative as I see it is that DRM - and not only DRM but also the DRM API and Xorg - would need to evolve hotplug support for the various parts of the display subsystem. Do we have enough people with sufficient knowledge and willingness to be able to make all that happen? I don't think we do, and I don't see that there's any funding out there to make such a project happen, which would make it a volunteer/spare time effort.
+1 for this solution, even if this means more work to get it off the ground.
Do we really need full hotplug support in the DRM API and Xorg? I mean it would really be nice if Xorg detected a newly registered device, but as a start it should be sufficient when Xorg detects what's there when it starts, no?
Since fbdev and fbcon sit on top of drm to provide the console currently, I'd also expect some fun with them. How do I get a console if I have no outputs at boot, but I have crtcs? Do I just wait around until an output appears?
There are a number of issues with hotplugging encoders and connectors at runtime, when really the SoC/board designer knows what it provides and should be able to tell the driver in some fashion.
The main problems when I played with hot-adding eDP on Intel last time were that we have grouping of crtc/encoder/connectors for future multi-seat use and these groups need to be updated, and I think the other issue was updating the possible_crtcs/possible_clones stuff. In theory sending X a uevent will make it reload the list, and it mostly deals with device hotplug since 1.14, when I added the USB hotplug support.
I'm not saying this is a bad idea, but it really seems pointless where the hardware is pretty much hardcoded; surely DT can represent that and let the driver control the bring-up ordering.
Have you also considered how suspend/resume works in such a scheme, where every driver is independent? The ChromeOS guys have bitched before about the exynos driver, which has lots of sub-drivers; how do you control the s/r ordering in a crazy system like that? I'd prefer a central driver, otherwise there are too many moving parts.
In my experience with exynos, having separate drivers creates a lot of pain at the interfaces and transitions:
- on boot you need to make sure that those multiple drivers initialize in the right order. If one comes up too late, the next one doesn't get the EDID through some passthrough or loses a hotplug interrupt.
- on dpms or on modeset, the order in which things change is also important. For example if you have a DisplayPort bridge you sometimes need to train the link with a signal from the previous component, if the signal isn't there yet training fails.
- on suspend/resume, turning things on/off in the right order is also important. Again that can bite you when one component implicitly relies on the next guy in the chain to hold its signal or its clock until it's off. As you add/remove drivers in other places, the driver suspend/resume queues will order operations differently and will expose or hide race conditions. The bug reports look like "Graphics crashes when I enable the wifi". Another example is that the screen was showing noise for a second when resuming; this happens because the bridge is up first and doesn't have data to show. Or you turn on the first chip, but it needs a passthrough for the HPD line from the next guy which isn't up yet. So you decide that actually nothing is plugged in and you give up.
- the pm_runtime stuff is entangled with the code. grep tells me there are 67 lines containing "pm_runtime" in exynos drm. A lot of it is non-obvious.
- each driver needs to be self-standing and needs to keep some of its own state. Things like "am I suspended or not" don't need to be re-implemented in each driver. However if you can suspend/resume in arbitrary order and want to synchronize with your buddies, then you need to know your state. exynos drivers do their own state tracking (grep -- "->suspended")
So overall, yes you can make it "work" with multiple small, independent drivers where each driver has its own device tree node. However you will need global variables to synchronize these drivers. You will need cross-driver function calls (exynos_drm_device_register) to make it work. You will need to add loops to wait for the previous component to successfully initialize (or shutdown), and only then kick DisplayPort link training (or turn the transmitter off). That makes the code convoluted, and it's really hard to make it work well and to maintain it. In my opinion it is much more work to debug this than to just order things right from the start. It also doesn't scale as you add more drivers.
So we went in the super-node direction. What we do in Chrome OS (and we're still working on this; we still have separate DT nodes which we plan to merge which is the last step) is look at the device tree during DRM initialization to know which chips are present. With that we know which subdrivers to instantiate into DRM abstractions. We then use the normal DRM code for everything*. Since most issues I outlined above revolve around ordering, they disappear once you turn your separate drivers into proper DRM components. You also don't need pm_runtime in there at all if you use DRM properly, because instead suspend/resume will call DRM which will call into the dpms callbacks as needed. For exynos we could also remove most of the per-driver state tracking (DRM does it for you) and also remove code used to wrap a non-DRM driver into a DRM driver (see exynos_drm_hdmi.c for an example of such a wrapper).
Stéphane
* For our specific case, we needed an additional abstraction, the drm_bridge, to handle a chip after the drm_connector (it's not specific to ARM, other platforms have also needed this in the past, see for example the drivers in drivers/gpu/drm/i2c/*). We intend to upstream this bit once we're happy with the interface.
On Tue, Jul 2, 2013 at 9:46 PM, Stéphane Marchesin stephane.marchesin@gmail.com wrote:
On Tue, Jul 2, 2013 at 3:02 PM, Dave Airlie airlied@gmail.com wrote:
On Wed, Jul 3, 2013 at 7:50 AM, Sascha Hauer s.hauer@pengutronix.de wrote:
On Tue, Jul 02, 2013 at 09:25:48PM +0100, Russell King wrote:
On Tue, Jul 02, 2013 at 09:57:32PM +0200, Sebastian Hesselbarth wrote:
I am against a super node which contains lcd and dcon/ire nodes. You can enable those devices on a per board basis. We add them to dove.dtsi but disable them by default (status = "disabled").
The DRM driver itself should get a video-card node outside of soc/internal-regs where you can put e.g. video memory hole (or video mem size if it will be taken from RAM later)
About the unusual case, I guess we should try to get both lcd controllers into one DRM driver, then support the mirroring or screen extension X already provides. For those applications where you want X on one lcd and some other totally different video stream on the other - wait for someone to come up with a request or proposal.
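Sebastian's layout could look something like the following hypothetical dove.dtsi fragment. The compatibles, addresses, and the property name for the memory size are all invented for illustration; only the structure (per-board enablement via status, video-card node outside soc/internal-regs) reflects what is proposed above:

```dts
/* board-independent node carrying the memory configuration */
video-card {
        compatible = "marvell,dove-video-card";
        marvell,video-mem-size = <0x1000000>;   /* e.g. 16 MiB from RAM */
};

soc {
        internal-regs {
                lcd0: lcd-controller@820000 {
                        compatible = "marvell,dove-lcd";
                        reg = <0x820000 0x1000>;
                        status = "disabled";    /* enabled per board */
                };
        };
};
```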
Well, all I can say then is that the onus is on those who want to treat the components as separate devices to come up with some foolproof way to solve this problem which doesn't involve making assumptions about the way that devices are probed and doesn't end up creating artificial restrictions on how the devices can be used - and doesn't end up burdening the common case with lots of useless complexity that they don't need.
It's _that_ case which needs to come up with a proposal about how to handle it because you _can't_ handle it at the moment in any sane manner which meets the criteria I've set out above, and at the moment the best proposal by far to resolve that is the "super node" approach.
There is _no_ way in the device model to combine several individual devices together into one logical device safely when the subsystem requires that there be a definite point where everything is known. That applies even more so with -EPROBE_DEFER. With the presence of such a thing, there is now no logical point where any code can say definitively that the system has technically finished booting and all resources are known.
That's the problem - if you don't do the super-node approach, you end up with lots of individual devices which you have to figure out some way of combining, and coping with missing ones which might not be available in the order you want them to be, etc.
That's the advantage of the "super node" approach - it's a container to tell you what's required in order to complete the creation of the logical device, and you can parse the sub-nodes to locate the information you need.
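As a sketch, such a super node might look like the fragment below. Everything here (node names, compatibles, addresses) is invented for illustration; the point is that one container enumerates all components the logical device requires:

```dts
display-subsystem {
        compatible = "marvell,armada-display";  /* hypothetical */

        /* the sub-nodes tell the driver exactly which components
         * must exist before the logical DRM device is complete */
        lcd-controller@820000 {
                compatible = "marvell,dove-lcd";
                reg = <0x820000 0x1000>;
        };
        dcon@830000 {
                compatible = "marvell,dove-dcon";
                reg = <0x830000 0x100>;
        };
};
```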
I think such an approach would lead to drm drivers which all parse their "super nodes" themselves, and driver authors would become very creative about how such a node should look.
Also this gets messy with i2c devices which are normally registered under their i2c bus masters. With the super node approach these would have to live under the super node, maybe with a phandle to the i2c bus master. This again probably leads to very SoC specific solutions. It also doesn't solve the problem that the i2c bus master needs to be registered by the time the DRM driver probes.
On i.MX the IPU unit not only handles the display path but also the capture path. v4l2 begins to evolve an OF model in which each (sub)device has its natural position in the devicetree; the devices are then connected with phandles. I'm not sure how good this will work together with a super node approach.
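The phandle-connected model mentioned above, in which each (sub)device keeps its natural position in the tree and links point between them, looks roughly like this sketch (node and label names invented; the remote-endpoint pattern follows the video-interfaces binding the v4l2 people were evolving):

```dts
display-controller {
        port {
                disp_out: endpoint {
                        remote-endpoint = <&hdmi_in>;
                };
        };
};

hdmi-encoder {
        port {
                hdmi_in: endpoint {
                        remote-endpoint = <&disp_out>;
                };
        };
};
```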
agreed with Stéphane and Dave.. there are enough real problems to solve without inventing new ones
I need something like drm_bridge for the driver I'm working on, I think it would be cleaner than what I am doing at the moment. So I'll probably take your patch and add a bit on top (which can later be squashed down if desired) for what I'm working on
BR, -R
dri-devel mailing list dri-devel@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/dri-devel
-----Original Message-----
From: Stephane Marchesin (via dri-devel)
Sent: Wednesday, July 03, 2013 10:46 AM
To: Dave Airlie
Cc: Jean-Francois Moine; Daniel Drake; devicetree-discuss@lists.ozlabs.org; dri-devel@lists.freedesktop.org; Russell King; Sebastian Hesselbarth
Subject: Re: Best practice device tree design for display subsystems/DRM
Interesting, and that is really what we want. Actually, we had thought it over, but we couldn't afford to do it. Where can I refer to the relevant code? I'd like to look into it. And please post it as an RFC so that we can discuss it.
Thanks, Inki Dae
On Tuesday, 02.07.2013, at 18:46 -0700, Stéphane Marchesin wrote:
From our perspective we really want to share drivers for specific
sub-components of a DRM device between different SoCs. We don't want to hardwire the code for a specific encoder into one DRM driver and copy-paste things for the next SoC which comes around.
I think we should really differentiate between the Linux device core mechanics and having small self contained drivers for sub-components. I think we all agree on the point that we need some kind of control node that represents the DRM device to userspace in the end. This control node can do all the hard stuff like getting suspend/resume and other order critical things right. We clearly don't want to rely on some random Linux device core governed probe or suspend/resume order.
What we want is small self-contained drivers for the sub-components that can be probed from the DT. Such a driver should not try to activate its driven hardware at probe time, but rather just signal its existence to the control node. The control node can then (maybe at its own open() time) look at all the registered sub-components and build an output path from them. Only at this point do you try to activate things, so you can exactly control the order from your control node.
This means introducing some generic interfaces for encoders, crtcs and connectors, but hey this is just in-kernel API, so we don't have to get this completely right on the first try.
Regards, Lucas
On Wed, Jul 03, 2013 at 08:02:05AM +1000, Dave Airlie wrote:
On Wed, Jul 3, 2013 at 7:50 AM, Sascha Hauer s.hauer@pengutronix.de wrote:
On Tue, Jul 02, 2013 at 09:25:48PM +0100, Russell King wrote:
On Tue, Jul 02, 2013 at 09:57:32PM +0200, Sebastian Hesselbarth wrote:
I am against a super node which contains lcd and dcon/ire nodes. You can enable those devices on a per board basis. We add them to dove.dtsi but disable them by default (status = "disabled").
The DRM driver itself should get a video-card node outside of soc/internal-regs where you can put e.g. video memory hole (or video mem size if it will be taken from RAM later)
About the unusual case, I guess we should try to get both lcd controllers into one DRM driver. Then support mirror or screen extension X already provides. For those applications where you want X on one lcd and some other totally different video stream - wait for someone to come up with a request or proposal.
Well, all I can say then is that the onus is on those who want to treat the components as separate devices to come up with some foolproof way to solve this problem which doesn't involve making assumptions about the way that devices are probed and doesn't end up creating artificial restrictions on how the devices can be used - and doesn't end up burdening the common case with lots of useless complexity that they don't need.
It's _that_ case which needs to come up with a proposal about how to handle it because you _can't_ handle it at the moment in any sane manner which meets the criteria I've set out above, and at the moment the best proposal by far to resolve that is the "super node" approach.
There is _no_ way in the device model to combine several individual devices together into one logical device safely when the subsystem requires that there be a definite point where everything is known. That applies even more so with -EPROBE_DEFER. With the presence of such a thing, there is now no logical point where any code can say definitively that the system has technically finished booting and all resources are known.
That's the problem - if you don't do the super-node approach, you end up with lots of individual devices which you have to figure out some way of combining, and coping with missing ones which might not be available in the order you want them to be, etc.
That's the advantage of the "super node" approach - it's a container to tell you what's required in order to complete the creation of the logical device, and you can parse the sub-nodes to locate the information you need.
I think such an approach would lead to drm drivers which all parse their "super nodes" themselves, and driver authors would become very creative about how such a node should look.
Also this gets messy with i2c devices which are normally registered under their i2c bus masters. With the super node approach these would have to live under the super node, maybe with a phandle to the i2c bus master. This again probably leads to very SoC specific solutions. It also doesn't solve the problem that the i2c bus master needs to be registered by the time the DRM driver probes.
On i.MX the IPU unit not only handles the display path but also the capture path. v4l2 begins to evolve an OF model in which each (sub)device has its natural position in the devicetree; the devices are then connected with phandles. I'm not sure how good this will work together with a super node approach.
An alternative as I see it is that DRM - and not only DRM but also the DRM API and Xorg - would need to evolve hotplug support for the various parts of the display subsystem. Do we have enough people with sufficient knowledge and willingness to be able to make all that happen? I don't think we do, and I don't see that there's any funding out there to make such a project happen, which would make it a volunteer/spare time effort.
+1 for this solution, even if this means more work to get from the ground.
Do we really need full hotplug support in the DRM API and Xorg? I mean it would really be nice if Xorg detected a newly registered device, but as a start it should be sufficient when Xorg detects what's there when it starts, no?
Since fbdev and fbcon sit on top of drm to provide the console currently, I'd also expect some fun with them. How do I get a console if I have no outputs at boot, but I have crtcs? Do I just wait around until an output appears?
I thought the console/fb stuff should go away.
There are a number of issues with hotplugging encoders and connectors at runtime, when really the SoC/board designer knows what it provides and should be able to tell the driver in some fashion.
The main problems when I played with hot-adding eDP on Intel last time are: we have grouping of crtc/encoder/connectors for multi-seat future use, and these groups need to be updated; I think the other issue was updating the possible_crtcs/possible_clones stuff. In theory sending X a uevent will make it reload the list, and it mostly deals with device hotplug since 1.14 when I added the USB hotplug support.
I'm not saying this is a bad idea, but it really seems pointless where the hardware is pretty much hardcoded; surely DT can represent that and let the driver control the bring-up ordering.
SoC hardware normally does not change during runtime, that's right. That's why I don't want to have full hotplug support up to xorg, but only a way of adding/removing crtcs, encoders and connectors on an already registered DRM device. We already do this in the i.MX DRM driver (see drivers/staging/imx-drm/imx-drm-core.c). I'm sure this is not without problems, but I think it would be doable.
Have you also considered how suspend/resume works in such a place, where every driver is independent? The ChromeOS guys have bitched before about the exynos driver, which has lots of sub-drivers; how do you control the s/r ordering in a crazy system like that? I'd prefer a central driver, otherwise there are too many moving parts.
Composing a DRM device out of subdevices doesn't necessarily mean the components should be suspended/resumed in arbitrary order. The DRM device should always be suspended first (thus deactivating sub devices as necessary and as done already) and resumed last.
Note that a super node approach does not solve this magically. We would still have to make sure that the i2c bus masters on our SoC are suspended after the DRM device.
Sascha
On 07/03/13 08:55, Sascha Hauer wrote:
On Wed, Jul 03, 2013 at 08:02:05AM +1000, Dave Airlie wrote:
Have you also considered how suspend/resume works in such a place, where every driver is independent? The ChromeOS guys have bitched before about the exynos driver, which has lots of sub-drivers; how do you control the s/r ordering in a crazy system like that? I'd prefer a central driver, otherwise there are too many moving parts.
Composing a DRM device out of subdevices doesn't necessarily mean the components should be suspended/resumed in arbitrary order. The DRM device should always be suspended first (thus deactivating sub devices as necessary and as done already) and resumed last.
Note that a super node approach does not solve this magically. We would still have to make sure that the i2c bus masters on our SoC are suspended after the DRM device.
+1 for a video card supernode that at best should be some very generic node with standard properties provided by DRM backend. IIRC there was a proposal for of_video_card a while ago.
At least for Marvell SoCs, moving device nodes out of the bus structure will not work. The parent bus is _required_ for address mapping as the base address is configurable. Using phandles can solve this without moving nodes.
Also, having separate device nodes does not require a separate driver for each node. All nodes get platform_devices registered, but you can choose not to have a matching driver for them. Then the video card super node can pick up those nodes by using the phandles passed and register a single DRM driver claiming the devices.
Moreover, if we talk about SoC graphics, we have to take audio into account. If you move all nodes to your video card super node, you will add another bunch of issues for ASoC linking to e.g. the I2C HDMI transmitter SPDIF codec.
IMHO phandles and super node subnodes are equivalent from a driver point-of-view but phandles are more likely to cause less pain for other subsystems.
The super node approach will also allow the same SoC/board components to be used as a single video card or as multiple video cards. There is virtually no way to automatically determine which devices belong to "your" video card(s) in a SoC, so we need something to describe those cards.
One thing I am concerned about is what Sascha pointed out above. If you hook up an external I2C encoder to your card, you cannot make sure the I2C bus is suspended before the DRM device. To be honest, proposing a solution for that is still way beyond my expertise wrt Linux internals, so I am not even trying. Maybe I am even missing a very important point of the super node/phandle proposal; if so, please clarify.
Sebastian
On Tue, Jul 2, 2013 at 9:25 PM, Russell King rmk@arm.linux.org.uk wrote:
On Tue, Jul 02, 2013 at 09:57:32PM +0200, Sebastian Hesselbarth wrote:
I am against a super node which contains lcd and dcon/ire nodes. You can enable those devices on a per board basis. We add them to dove.dtsi but disable them by default (status = "disabled").
The DRM driver itself should get a video-card node outside of soc/internal-regs where you can put e.g. video memory hole (or video mem size if it will be taken from RAM later)
About the unusual case, I guess we should try to get both lcd controllers into one DRM driver. Then support mirror or screen extension X already provides. For those applications where you want X on one lcd and some other totally different video stream - wait for someone to come up with a request or proposal.
Well, all I can say then is that the onus is on those who want to treat the components as separate devices to come up with some foolproof way to solve this problem which doesn't involve making assumptions about the way that devices are probed and doesn't end up creating artificial restrictions on how the devices can be used - and doesn't end up burdening the common case with lots of useless complexity that they don't need.
It's _that_ case which needs to come up with a proposal about how to handle it because you _can't_ handle it at the moment in any sane manner which meets the criteria I've set out above, and at the moment the best proposal by far to resolve that is the "super node" approach.
There is _no_ way in the device model to combine several individual devices together into one logical device safely when the subsystem requires that there be a definite point where everything is known. That applies even more so with -EPROBE_DEFER. With the presence of such a thing, there is now no logical point where any code can say definitively that the system has technically finished booting and all resources are known.
That's the problem - if you don't do the super-node approach, you end up with lots of individual devices which you have to figure out some way of combining, and coping with missing ones which might not be available in the order you want them to be, etc.
That's the advantage of the "super node" approach - it's a container to tell you what's required in order to complete the creation of the logical device, and you can parse the sub-nodes to locate the information you need.
Alternatively, you can have the same effect with a property or set of properties in the controller node that contains phandles to the required devices. That would provide the driver with the same information about which devices must be present.
g.
On Fri, Jul 05, 2013 at 09:37:34AM +0100, Grant Likely wrote:
Alternatively, you can have the same effect with a property or set of properties in the controller node that contains phandles to the required devices. That would provide the driver with the same information about which devices must be present.
How do you go from phandle to something-that-the-driver-for-that-device-has-setup?
From what I can see, you can go from phandle to OF node, but no further.
I'm guessing we'd need some kind of "registry" for sub-drivers with these structures to register their device's OF node plus "shared" data so that other drivers can find it. "shared" data might be a standardized operations struct or something similar to 'struct device_driver' but for componentised devices.
On Fri, Jul 5, 2013 at 9:50 AM, Russell King rmk@arm.linux.org.uk wrote:
On Fri, Jul 05, 2013 at 09:37:34AM +0100, Grant Likely wrote:
Alternatively, you can have the same effect with a property or set of properties in the controller node that contains phandles to the required devices. That would provide the driver with the same information about which devices must be present.
How do you go from phandle to something-that-the-driver-for-that-device-has-setup?
From what I can see, you can go from phandle to OF node, but no further.
Correct, and that has historically been by design because it is possible for multiple struct devices to reference a single device_node. Any subsystem that needs to get a particular device has a lookup mechanism that searches the list of known devices and returns a match.
example: of_mdio_find_bus()
I'm guessing we'd need some kind of "registry" for sub-drivers with these structures to register their device's OF node plus "shared" data so that other drivers can find it. "shared" data might be a standardized operations struct or something similar to 'struct device_driver' but for componentised devices.
If it is per-subsystem, the effort shouldn't be too high because it will be able to collect devices of the same type. It gets more complicated to design a generic componentised device abstraction (which I'm not opposed to, it's just going to be tricky and subtle).
One big concern I have is differentiating between mandatory and optional dependencies. Mandatory is always easy to handle, but what about cases where the supernode (using phandles to other nodes) references other devices, but it is perfectly valid for the driver to complete probing before it becomes available? I may just be borrowing trouble here though. Supporting only mandatory dependencies in the first cut would still be a big step forward.
One simple approach would be to add a "depends on" list to struct device and make the driver core check that all the dependent devices have drivers before probing. The risk is that this would become a complicated set of reference counting and housekeeping.
g.
On Tue, Jul 2, 2013 at 1:57 PM, Sebastian Hesselbarth sebastian.hesselbarth@gmail.com wrote:
I am against a super node which contains lcd and dcon/ire nodes. You can enable those devices on a per board basis. We add them to dove.dtsi but disable them by default (status = "disabled").
The DRM driver itself should get a video-card node outside of soc/internal-regs where you can put e.g. video memory hole (or video mem size if it will be taken from RAM later)
For completeness of the discussion, and my understanding too, can you explain your objections to the display super-node in a bit more detail?
Thanks Daniel
On 07/02/2013 11:04 PM, Daniel Drake wrote:
On Tue, Jul 2, 2013 at 1:57 PM, Sebastian Hesselbarth sebastian.hesselbarth@gmail.com wrote:
I am against a super node which contains lcd and dcon/ire nodes. You can enable those devices on a per board basis. We add them to dove.dtsi but disable them by default (status = "disabled").
The DRM driver itself should get a video-card node outside of soc/internal-regs where you can put e.g. video memory hole (or video mem size if it will be taken from RAM later)
For completeness of the discussion, and my understanding too, can you explain your objections to the display super-node in a bit more detail?
lcd-controller nodes and the dcon node will need to be children of the internal-regs node. The internal-regs node is required for address translation, as the mbus base address can be configured. This does not permit a super-node containing them - you cannot have the nodes above outside of internal-regs.
As Russell stated, he wants a proposal for the "unusual case" i.e. you have two lcd controllers. You use one for Xorg and the other for e.g. running a linux terminal console.
This would require some reference from the super node to the lcd controller to sort out which DRM device (represented by the super node) should be using which lcd controller device.
Using status = "disabled" alone will only allow to enable or disable lcd controller nodes but not assign any of it to your two super-nodes.
So my current proposal after thinking about Russell's statements would be phandles, as Jean-Francois already mentioned. I am not sure what OF maintainers will think about it, but that is another thing.
Basically, you will have: (Note: names and property-names are just to show how it could work, and example is joined from possible future dove.dtsi and dove-board.dts)
video {
	/* Single video card w/ multiple lcd controllers */
	card0 {
		compatible = "marvell,armada-510-display";
		reg = <0 0x3f000000 0x1000000>; /* video-mem hole */
		/* later: linux,video-memory-size = <0x1000000>; */
		marvell,video-devices = <&lcd0 &lcd1 &dcon>;
	};

	/* OR: Multiple video cards w/ single lcd controllers */
	card0 {
		compatible = "marvell,armada-510-display";
		...
		marvell,video-devices = <&lcd0>;
	};

	card1 {
		compatible = "marvell,armada-510-display";
		...
		marvell,video-devices = <&lcd1>;
	};
};

mbus {
	compatible = "marvell,dove-mbus";
	ranges = <...>;

	sb-regs {
		ranges = <0 0xf1000000 0 0x100000>;
		...
	};

	nb-regs {
		ranges = <0 0xf1800000 0 0x100000>;

		lcd0: lcd-controller@20000 {
			compatible = "marvell,armada-510-lcd";
			reg = <0x20000 0x1000>;
			interrupts = <47>;
			...
			/* use EXTCLK0 with lcd0 */
			clocks = <&ext_clk0>;
			clock-names = "extclk0";
			marvell,external-encoder = <&tda998x>;
		};

		lcd1: lcd-controller@10000 {
			compatible = "marvell,armada-510-lcd";
			reg = <0x10000 0x1000>;
			interrupts = <46>;
			...
			/* use LCDPLL with lcd1 */
			clocks = <&lcd_pll_clk>;
			clock-names = "lcdpll";
		};
	};
};

&i2c0 {
	tda998x: hdmi-transmitter@60 {
		compatible = "nxp,tda19988";
		reg = <0x60>;
		...
	};
};
Each lcd controller node represents a platform_device, and the display nodes above should look up the phandles and determine the type (crtc or dcon) by the compatible string of the nodes the phandles point to.
Sebastian
On Wednesday, July 03, 2013 6:41 AM, Sebastian Hesselbarth wrote:
On 07/02/2013 11:04 PM, Daniel Drake wrote:
On Tue, Jul 2, 2013 at 1:57 PM, Sebastian Hesselbarth sebastian.hesselbarth@gmail.com wrote:
I am against a super node which contains lcd and dcon/ire nodes. You can enable those devices on a per board basis. We add them to dove.dtsi but disable them by default (status = "disabled").
The DRM driver itself should get a video-card node outside of soc/internal-regs where you can put e.g. video memory hole (or video mem size if it will be taken from RAM later)
For completeness of the discussion, and my understanding too, can you explain your objections to the display super-node in a bit more detail?
lcd-controller nodes and the dcon node will need to be children of the internal-regs node. The internal-regs node is required for address translation, as the mbus base address can be configured. This does not permit a super-node containing them - you cannot have the nodes above outside of internal-regs.
As Russell stated, he wants a proposal for the "unusual case" i.e. you have two lcd controllers. You use one for Xorg and the other for e.g. running a linux terminal console.
This would require some reference from the super node to the lcd controller to sort out which DRM device (represented by the super node) should be using which lcd controller device.
Using status = "disabled" alone will only allow to enable or disable lcd controller nodes but not assign any of it to your two super-nodes.
So my current proposal after thinking about Russell's statements would be phandles, as Jean-Francois already mentioned. I am not sure what OF maintainers will think about it, but that is another thing.
Basically, you will have: (Note: names and property-names are just to show how it could work, and example is joined from possible future dove.dtsi and dove-board.dts)
video {
	/* Single video card w/ multiple lcd controllers */
	card0 {
		compatible = "marvell,armada-510-display";
		reg = <0 0x3f000000 0x1000000>; /* video-mem hole */
		/* later: linux,video-memory-size = <0x1000000>; */
		marvell,video-devices = <&lcd0 &lcd1 &dcon>;
	};

	/* OR: Multiple video cards w/ single lcd controllers */
	card0 {
		compatible = "marvell,armada-510-display";
		...
		marvell,video-devices = <&lcd0>;
	};

	card1 {
		compatible = "marvell,armada-510-display";
		...
		marvell,video-devices = <&lcd1>;
	};
};
Sorry, but I'd like to say that this cannot be used commonly. Shouldn't you really consider the Linux framebuffer or other subsystems? The above dtsi file is specific to the DRM subsystem. A dtsi file should have no dependency on any particular subsystem; the board dtsi file should be usable by device drivers based on other subsystems too: i.e., Linux framebuffer, DRM, and so on. So I *strongly* object to it. All we have to do is to keep the dtsi file as is, and to find some other, better way that can be used commonly in DRM.
Thanks, Inki Dae
mbus {
	compatible = "marvell,dove-mbus";
	ranges = <...>;

	sb-regs {
		ranges = <0 0xf1000000 0 0x100000>;
		...
	};

	nb-regs {
		ranges = <0 0xf1800000 0 0x100000>;

		lcd0: lcd-controller@20000 {
			compatible = "marvell,armada-510-lcd";
			reg = <0x20000 0x1000>;
			interrupts = <47>;
			...
			/* use EXTCLK0 with lcd0 */
			clocks = <&ext_clk0>;
			clock-names = "extclk0";
			marvell,external-encoder = <&tda998x>;
		};

		lcd1: lcd-controller@10000 {
			compatible = "marvell,armada-510-lcd";
			reg = <0x10000 0x1000>;
			interrupts = <46>;
			...
			/* use LCDPLL with lcd1 */
			clocks = <&lcd_pll_clk>;
			clock-names = "lcdpll";
		};
	};
};

&i2c0 {
	tda998x: hdmi-transmitter@60 {
		compatible = "nxp,tda19988";
		reg = <0x60>;
		...
	};
};
Each lcd controller node represents a platform_device, and the display nodes above should look up the phandles and determine the type (crtc or dcon) by the compatible string of the nodes the phandles point to.
Sebastian
On Wed, Jul 03, 2013 at 05:57:18PM +0900, Inki Dae wrote:
video {
	/* Single video card w/ multiple lcd controllers */
	card0 {
		compatible = "marvell,armada-510-display";
		reg = <0 0x3f000000 0x1000000>; /* video-mem hole */
		/* later: linux,video-memory-size = <0x1000000>; */
		marvell,video-devices = <&lcd0 &lcd1 &dcon>;
	};

	/* OR: Multiple video cards w/ single lcd controllers */
	card0 {
		compatible = "marvell,armada-510-display";
		...
		marvell,video-devices = <&lcd0>;
	};

	card1 {
		compatible = "marvell,armada-510-display";
		...
		marvell,video-devices = <&lcd1>;
	};
};
Sorry, but I'd like to say that this cannot be used commonly. Shouldn't you really consider the Linux framebuffer or other subsystems? The above dtsi file is specific to the DRM subsystem. A dtsi file should have no dependency on any particular subsystem; the board dtsi file should be usable by device drivers based on other subsystems too: i.e., Linux framebuffer, DRM, and so on. So I *strongly* object to it. All we have to do is to keep the dtsi file as is, and to find some other, better way that can be used commonly in DRM.
+1 for not encoding the projected usecase of the graphics subsystem into the devicetree. Whether the two LCD controllers shall be used together or separately should not affect the devicetree. devicetree is about hardware description, not configuration.
Sascha
On 07/03/13 11:02, Sascha Hauer wrote:
On Wed, Jul 03, 2013 at 05:57:18PM +0900, Inki Dae wrote:
video {
	/* Single video card w/ multiple lcd controllers */
	card0 {
		compatible = "marvell,armada-510-display";
		reg = <0 0x3f000000 0x1000000>; /* video-mem hole */
		/* later: linux,video-memory-size = <0x1000000>; */
		marvell,video-devices = <&lcd0 &lcd1 &dcon>;
	};

	/* OR: Multiple video cards w/ single lcd controllers */
	card0 {
		compatible = "marvell,armada-510-display";
		...
		marvell,video-devices = <&lcd0>;
	};

	card1 {
		compatible = "marvell,armada-510-display";
		...
		marvell,video-devices = <&lcd1>;
	};
};
Sorry, but I'd like to say that this cannot be used commonly. Shouldn't you really consider the Linux framebuffer or other subsystems? The above dtsi file is specific to the DRM subsystem. A dtsi file should have no dependency on any particular subsystem; the board dtsi file should be usable by device drivers based on other subsystems too: i.e., Linux framebuffer, DRM, and so on. So I *strongly* object to it. All we have to do is to keep the dtsi file as is, and to find some other, better way that can be used commonly in DRM.
Sascha, Inki,
can you clarify how the above will _not_ allow you to write a fb driver exploiting the cardX nodes?
While lcd controller and dcon are physically available, the video card is just a virtual combination of those.
+1 for not encoding the projected usecase of the graphics subsystem into the devicetree. Whether the two LCD controllers shall be used together or separately should not affect the devicetree. devicetree is about hardware description, not configuration.
Have you had a look at gpio-leds? It _is_ actually a configuration of GPIO to be used as LED triggers. IMHO DT is just fine for describing even "virtual" hardware you make up out of existing devices. Without it there is no way for the subsystems to determine the configuration.
Regarding gpio-leds, how should the driver know the single gpio line out of tens of available lines, if you do not use a virtual gpio led node to describe it?
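For reference, a typical gpio-leds node looks like the sketch below (the GPIO line, flags, and label are made up for the example). The virtual node is exactly what tells the subsystem which of the many available GPIO lines drives an LED:

```dts
leds {
	compatible = "gpio-leds";

	power {
		label = "dove:green:power";
		gpios = <&gpio0 5 0>;	/* line and flags are illustrative */
		linux,default-trigger = "default-on";
	};
};
```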
Sebastian
On Wednesday, July 03, 2013 6:09 PM, Sebastian Hesselbarth wrote:
On 07/03/13 11:02, Sascha Hauer wrote:
On Wed, Jul 03, 2013 at 05:57:18PM +0900, Inki Dae wrote:
video {
	/* Single video card w/ multiple lcd controllers */
	card0 {
		compatible = "marvell,armada-510-display";
		reg = <0 0x3f000000 0x1000000>; /* video-mem hole */
		/* later: linux,video-memory-size = <0x1000000>; */
		marvell,video-devices = <&lcd0 &lcd1 &dcon>;
	};

	/* OR: Multiple video cards w/ single lcd controllers */
	card0 {
		compatible = "marvell,armada-510-display";
		...
		marvell,video-devices = <&lcd0>;
	};

	card1 {
		compatible = "marvell,armada-510-display";
		...
		marvell,video-devices = <&lcd1>;
	};
};
Sorry, but I'd like to say that this cannot be used commonly. Shouldn't you really consider the Linux framebuffer or other subsystems? The above dtsi file is specific to the DRM subsystem. A dtsi file should have no dependency on any particular subsystem; the board dtsi file should be usable by device drivers based on other subsystems too: i.e., Linux framebuffer, DRM, and so on. So I *strongly* object to it. All we have to do is to keep the dtsi file as is, and to find some other, better way that can be used commonly in DRM.
Sascha, Inki,
can you clarify how the above will _not_ allow you to write a fb driver exploiting the cardX nodes?
That's not about whether we can write a device driver or not. The dtsi is a common spot shared with other subsystems. Do you think the cardX node is meaningful to other subsystems?
Thanks, Inki Dae
While lcd controller and dcon are physically available, the video card is just a virtual combination of those.
+1 for not encoding the projected usecase of the graphics subsystem into the devicetree. Whether the two LCD controllers shall be used together or separately should not affect the devicetree. devicetree is about hardware description, not configuration.
Have you had a look at gpio-leds? It _is_ actually a configuration of GPIO to be used as LED triggers. IMHO DT is just fine for describing even "virtual" hardware you make up out of existing devices. Without it there is no way for the subsystems to determine the configuration.
Regarding gpio-leds, how should the driver know the single gpio line out of tens of available lines, if you do not use a virtual gpio led node to describe it?
Sebastian
On Wed, Jul 03, 2013 at 06:48:41PM +0900, Inki Dae wrote:
That's not about whether we can write a device driver or not. The dtsi is a common spot shared with other subsystems. Do you think the cardX node is meaningful to other subsystems?
Yes, because fbdev could also use it to solve the same problem which we're having with DRM.
On 07/03/13 11:53, Russell King wrote:
On Wed, Jul 03, 2013 at 06:48:41PM +0900, Inki Dae wrote:
That's not about whether we can write a device driver or not. The dtsi is a common spot shared with other subsystems. Do you think the cardX node is meaningful to other subsystems?
Yes, because fbdev could also use it to solve the same problem which we're having with DRM.
Inki,
I do not understand why you keep referring to the SoC dtsi. In my example, I said that it is made up and joined from both SoC dtsi and board dts.
So, of course, lcd controller nodes and dcon are part of dove.dtsi because they are physically available on every Dove SoC.
Also, the connection from lcd0 to the external HDMI encoder node is in the board dts because you can happily build a Dove SoC board with a different HDMI encoder or with no encoder at all.
The video-card super node is not in any way specific to DRM and describes a virtual graphics card comprising both SoC and board components (on a per-board basis). You can have both DRM or fbdev use that virtual video card node to register your subsystem drivers required to provide video output.
I agree with what Sascha said: the decision to put one or two virtual graphics cards in the device tree depending on the use case is sketchy. You can have one card or two cards on the same board, so at this point the device tree is not describing HW but a use case.
But honestly, I see no way around it, and it is the only way to even allow the decision between one and two cards at all. There is no way to auto-probe the user's intention...
Sebastian
On Wed, Jul 03, 2013 at 12:52:37PM +0200, Sebastian Hesselbarth wrote:
But honestly, I see no way around it, and it is the only way to even allow the decision between one and two cards at all. There is no way to auto-probe the user's intention...
It's not _just_ about the user's intention - forget that, because really it's to do with solving a much bigger question, and that question is:
How do we know when all the components are present?
In other words, how do we know that we have LCD0 and LCD1 connected to the DCON, which is connected to an LCD and/or a HDMI transceiver? How do we know that the on-board VGA DACs are wired up and to be used? How do we know which I2C bus the VGA port is connected to, and whether to expect an I2C bus?
Let's look at the Cubox setup (sorry, but you _will_ have to use a fixed-width font for this):
CPU bus
 |
 +-I2C -------------TDA998X --(HDMI)--> Display
 |                     |
 |                     | (RGB888+clock+sync)
 +-LCD0 ---------.     /
 |               +--------------DCON ---(VGA)---> not wired
 +-LCD1 (unused)-'
DCON can allegedly route the data from LCD0 or LCD1 to the parallel interface which the TDA998x sits on, and/or the VGA interface. In the case of other setups, the TDA998x may be a LCD panel.
The OLPC setup (which seems to be the more common case in terms of the on-SoC device structure):
CPU bus
 |
 +-LCD ---(RGB666+clock+sync)----> LCD panel
and I believe an HDMI transceiver somewhere.
In the above diagrams, "LCD" and "LCD0"/"LCD1" are essentially all the same basic IP block, so they should use the same driver code. Moreover, each named element is a separate platform device.
In the first, to drive that correctly, you need the following before "loading" the display system:
1. LCD0, and optionally LCD1 and DCON, to be found and known to the display driver.
2. I2C driver to be probed and available for use.
3. TDA998x to be found and known to the display driver.
Only once you have all those components can you allow display to "load".
Now consider the case where the TDA998x is not present but the parallel interface is connected directly to a LCD panel. This then becomes:
1. LCD0, and optionally LCD1 and DCON, to be found and known to the display driver.
2. LCD panel details known to the display driver.
If the VGA port is being used, then both of these cases need to be supplemented with:
N. I2C bus for VGA DDC to be probed and available for use.
N+1. DCON must be known to the display driver.
N+2. LCD1 required if different display modes on the devices are required.
In the OLPC case, it's just:
1. LCD to be found and known to display driver.
2. LCD panel details known to display driver.
What you should be getting from the above is that the platform devices which are required for any kind of display subsystem driver to initialize is not really a function of the "software" use case, but how (a) the board hardware has been designed and put together, and (b) the internal structure of the SoC.
Moreover, the problem which we're facing is this: how does a display driver know which platform devices to expect from a DT description to make the decision that all parts required for the physical wiring of the board are now present.
Consider this too: what if you have a LCD panel on your RGB888 interface which is also connected to a HDMI transceiver which can do scaling and re-syncing (basically format conversion - the TDA998x appears to have this capability), and you drive it with a mode suitable for HDMI but not the LCD panel because the driver doesn't know that there's a LCD panel also connected? This is why I feel that the hotplug idea is actually rather unsafe, and if we go down that path we're building in that risk.
(And I think the OLPC guys may have exactly that kind of setup...)
On 07/03/13 13:32, Russell King wrote:
On Wed, Jul 03, 2013 at 12:52:37PM +0200, Sebastian Hesselbarth wrote:
But honestly, I see no way around it, and it is the only way to even allow the decision between one and two cards at all. There is no way to auto-probe the user's intention...
Russell,
in general, it is hard for me to tell when you are really asking questions, using rhetorical questions, or being sarcastic. I am not a native speaker, so do not take any of the below personally if I am missing the point of it.
It's not _just_ about the user's intention - forget that, because really it's to do with solving a much bigger question, and that question is:
How do we know when all the components are present?
By exploiting phandles passed to the supernode. That supernode is very board specific so it is defined in a board dts and can take into account every available/unavailable physical device.
In other words, how do we know that we have LCD0 and LCD1 connected to the DCON, which is connected to an LCD and/or a HDMI transceiver? How
About LCD0/LCD1 connected to DCON, you have to deal with that in the subsystem driver, i.e. DRM driver knows about some possible DCON but registers it only if there is a phandle to a node compatible with "marvell,armada-510-dcon" passed.
About LCD (panel)/HDMI transceiver, use a (hopefully standard) property to hook the device (LCD0 on Dove) with the node of the HDMI transmitter.
do we know that the on-board VGA DACs are wired up and to be used?
Boards not using VGA DAC (LCD1 on Dove) just disable the device_node and do not pass the node's phandle in the supernode.
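A sketch of how that could look in a board dts, following the node names from the earlier Cubox example (and assuming lcd1 keeps its status = "disabled" default from dove.dtsi):

```dts
/* VGA DAC not wired on this board: lcd1 stays disabled in
 * dove.dtsi and is simply left out of the super-node's list */
video {
	card0 {
		compatible = "marvell,armada-510-video", "linux,video-card";
		linux,video-devices = <&lcd0>;	/* no &lcd1 phandle */
	};
};
```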
How do we know which I2C bus the VGA port is connected to, and whether to expect an I2C bus?
Again, passing a phandle to the i2c-controller node in lcd1 node. Please note that the pure existence of this phandle property does not in any way imply you have to use it at all in the driver. But if you have a driver for Dove's LCD1 and that needs to know how it can access monitor's EDID, connect it to the i2c-controller node.
I understand that this very i2c-controller driver may not be loaded by the time the DRM driver accesses it. But that is not related to DT but to the driver core. Currently, I guess -EPROBE_DEFER or bailing out is the only option. But from a driver POV you can still say: "somebody told me to use i2c0 for EDID, so don't blame me it is not there".
Let's look at the Cubox setup (sorry, but you _will_ have to use a fixed-width font for this):
CPU bus
 |
 +-I2C -------------TDA998X --(HDMI)--> Display
 |                     |
 |                     | (RGB888+clock+sync)
 +-LCD0 ---------.     /
 |               +--------------DCON ---(VGA)---> not wired
 +-LCD1 (unused)-'
DCON can allegedly route the data from LCD0 or LCD1 to the parallel interface which the TDA998x sits on, and/or the VGA interface. In the case of other setups, the TDA998x may be a LCD panel.
dove.dtsi:

...
soc {
    internal-regs {
        ...
        i2c0: i2c-controller@abcd0 {
            compatible = "marvell,mv64xxx-i2c";
            ...
            status = "disabled";
        };

        lcd0: lcd-controller@820000 {
            compatible = "marvell,armada-510-lcd";
            reg = <0x820000 0x1000>;
            status = "disabled";
        };

        lcd1: lcd-controller@810000 {
            compatible = "marvell,armada-510-lcd";
            reg = <0x810000 0x1000>;
            status = "disabled";
        };

        dcon: display-controller@830000 {
            compatible = "marvell,armada-510-dcon";
            reg = <0x830000 0x100>;
            status = "disabled";
        };
    };
};
dove-cubox.dts:

/include/ "dove.dtsi"

video {
    card0 {
        compatible = "marvell,armada-510-video", "linux,video-card";
        linux,video-memory-size = <0x100000>;
        linux,video-devices = <&lcd0 &dcon>;
    };
};

&dcon {
    status = "okay";
};

&lcd0 {
    status = "okay";
    clocks = <&si5351 0>;
    clock-names = "extclk0";
    /* pin config 0 = DUMB_RGB888 */
    marvell,pin-configuration = <0>;
    ...
    linux,video-external-encoder = <&tda998x>;
};

&i2c0 {
    status = "okay";

    tda998x: hdmi-transmitter@60 {
        compatible = "nxp,tda19988";
        reg = <0x60>;
        /* pin config 18 = RGB888 */
        nxp,pin-configuration = <18>;
        /* HPD gpio pin */
        interrupt-gpios = <&gpio0 12>;
    };

    si5351: programmable-pll {
        /* Note: this binding already exists */
        compatible = "silabs,5351a-msop10";
        ...
        #clock-cells = <1>;

        /* referenced as <&si5351 0> */
        clk0: {
            silabs,drive-strength = <8>;
            silabs,clk-source = <0>;
            ...
        };
    };
};
The OLPC setup (which seems to be the more common case in terms of the on-SoC device structure):
CPU bus
 |
 +-LCD ---(RGB666+clock+sync)----> LCD panel
and I believe an HDMI transceiver somewhere.
(for the sake of simplicity, I am assuming OLPC is Armada 510 aka Dove, which it isn't)
dove-olpc.dts:

/include/ "dove.dtsi"

video {
    card0 {
        compatible = "marvell,armada-510-video", "linux,video-card";
        linux,video-memory-size = <0x100000>;
        linux,video-devices = <&lcd0>;
    };
};

&lcd0 {
    status = "okay";
    /* core clock 5 = LCD PLL */
    clocks = <&core_clk 5>;
    clock-names = "lcdclk";
    /* pin config 1 = DUMB_RGB666 */
    marvell,pin-configuration = <1>;

    videomodes {
        mode_800x600 {
            ...
        };
    };
};
In the above diagrams, "LCD" and "LCD0"/"LCD1" are essentially all the same basic IP block, so they should use the same driver code. Moreover, each named element is a separate platform device.
In the first, to drive that correctly, you need the following before "loading" the display system:
- LCD0, and optionally LCD1 and DCON to be found and known to display driver.
Looping over phandles passed by a video-devices property of the supernode. Cubox video card node finds lcd0 and dcon, OLPC finds lcd0 only.
- I2C driver to be probed and available for use.
A phandle property in the lcd controller node passing the i2c controller node. Cubox finds the tda998x, OLPC does not have the property.
- TDA998x to be found and known to display driver.
IMHO the exact model, e.g. TDA19988, Analog Devices 1234, shouldn't even be "known" to the display driver. You should have specific properties for rgb/yuv pin configuration in Dove lcd-controller node (marvell,pin-configuration) and tda998x node (nxp,pin-configuration).
Only once you have all those components can you allow display to "load".
Or fail, if DT is not sufficient for a DRM/fbdev driver.
Now consider the case where the TDA998x is not present but the parallel interface is connected directly to a LCD panel. This then becomes:
- LCD0, and optionally LCD1 and DCON to be found and known to display driver.
- LCD panel details known to display driver.
Different marvell,pin-configuration property value. There is no way to probe the correct configuration given by board wiring, so rely on DT author to (a) set lcd-controller property correctly and (b) set tda998x property correctly.
If the VGA port is being used, then both of these cases need to be supplemented with:
N. I2C bus for VGA DDC to be probed and available for use.
marvell,ddc-i2c-controller = <&i2c0>;
N+1. DCON must be known to the display driver.
That there may be a DCON will be known by the display driver; whether it should be used is conveyed by the video-devices property above.
N+2. LCD1 required if different display modes on the devices are required.
Either obey EDID or if it is a dumb lcd display, pass available modes using of_videomode or similar.
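For the dumb-panel case, the existing display-timings binding that the of_videomode helpers parse could be used; a hedged sketch with placeholder timing values (not real panel data):

```dts
&lcd0 {
	display-timings {
		native-mode = <&timing0>;
		timing0: timing@0 {
			clock-frequency = <33000000>;
			hactive = <800>;
			vactive = <480>;
			hfront-porch = <40>;
			hback-porch = <88>;
			hsync-len = <48>;
			vfront-porch = <13>;
			vback-porch = <32>;
			vsync-len = <3>;
		};
	};
};
```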
In the OLPC case, it's just:
- LCD to be found and known to display driver.
- LCD panel details known to display driver.
What you should be getting from the above is that the platform devices which are required for any kind of display subsystem driver to initialize is not really a function of the "software" use case, but how (a) the board hardware has been designed and put together, and (b) the internal structure of the SoC.
Exactly, the driver has to be prepared for any possible (or yet known) configuration of separate/required devices. Super node defines the board specific setup.
Moreover, the problem which we're facing is this: how does a display driver know which platform devices to expect from a DT description to make the decision that all parts required for the physical wiring of the board are now present.
That is maybe the trickiest point. But as long as all involved devices share the same API (e.g. DRM) there should be an "easy" way. If you cross subsystems borders, e.g. DRM <-> ASoC, I have absolutely no clue, yet.
Consider this too: what if you have a LCD panel on your RGB888 interface which is also connected to a HDMI transceiver which can do scaling and re-syncing (basically format conversion - the TDA998x appears to have this capability), and you drive it with a mode suitable for HDMI but not the LCD panel because the driver doesn't know that there's a LCD panel also connected? This is why I feel that the hotplug idea is actually rather unsafe, and if we go down that path we're building in that risk.
We already have this situation on CuBox. SPDIF is connected to a SPDIF jack and tda998x SPDIF audio input. ASoC doesn't know about one stream of audio connected to multiple "codecs" with possibly different requirements.
(And I think the OLPC guys may have exactly that kind of setup...)
I hope the explanations above at least make clear how the logical and physical setup can be accomplished by using DT and phandles. As for hotplug/suspend/driver load order, that is way beyond what I know about the driver core and the different subsystems.
Also, please note that all property names, prefixes, and compatible strings are just for reference and in no way represent actual requirements for subsystems/SoCs/boards. Vendor specific prefixes ("marvell", "nxp",...) represent properties that can only be interpreted correctly by that very driver, "linux" prefix represents properties that can possibly be shared among different drivers/subsystems.
Sebastian
-----Original Message-----
From: Sebastian Hesselbarth [mailto:sebastian.hesselbarth@gmail.com]
Sent: Wednesday, July 03, 2013 7:53 PM
To: Russell King
Cc: Inki Dae; 'Sascha Hauer'; 'Daniel Drake'; 'Jean-Francois Moine'; devicetree-discuss@lists.ozlabs.org; dri-devel@lists.freedesktop.org
Subject: Re: Best practice device tree design for display subsystems/DRM
On 07/03/13 11:53, Russell King wrote:
On Wed, Jul 03, 2013 at 06:48:41PM +0900, Inki Dae wrote:
That's not about whether we can write a device driver or not. The dtsi is a common spot shared with other subsystems. Do you think the cardX node is meaningful to other subsystems?
Yes, because fbdev could also use it to solve the same problem which we're having with DRM.
Inki,
I do not understand why you keep referring to the SoC dtsi. In my example, I said that it is made up and joined from both the SoC dtsi and the board dts.
So, of course, lcd controller nodes and dcon are part of dove.dtsi because they are physically available on every Dove SoC.
Also, the connection from lcd0 to the external HDMI encoder node is in the board dts because you can happily build a Dove SoC board with a different HDMI encoder or with no encoder at all.
The video-card super node is not in any way specific to DRM and
In case of fbdev, the framebuffer driver would use the lcd0 or lcd1 driver, or the lcd0 and lcd1 drivers, which are placed in drivers/video/backlight/.
And let's assume the following:
On board A
  Display controller ------------- lcd 0
On board B
  Display controller ------------- lcd 1
On board C
  Display controller ------------- lcd 0 and lcd 1
Without the super node, the user could configure the Linux kernel through menuconfig like below:
board A: enabling lcd 0, and disabling lcd 1,
board B: disabling lcd 0, and enabling lcd 1,
board C: enabling lcd 0 and lcd 1.
All we have to do is configure menuconfig to enable only the drivers for a certain board. Why does fbdev need the super node? Please give me comments if I am missing a point.
Thanks, Inki Dae
describes a virtual graphics card comprising both SoC and board components (on a per-board basis). You can have both DRM or fbdev use that virtual video card node to register your subsystem drivers required to provide video output.
I agree to what Sascha said, the decision to put one or two virtual graphics card in the device tree depending on the use case is sketchy. You can have one card/two card on the same board, so at this point device tree is not describing HW but use case.
But honestly, I see no way around it and it is the only way to allow to even have the decision for one or two cards at all. There is no way for auto-probing the users intention...
Sebastian
On 07/03/13 13:43, Inki Dae wrote:
I do not understand why you keep referring to the SoC dtsi. In my example, I said that it is made up and joined from both the SoC dtsi and the board dts.
So, of course, lcd controller nodes and dcon are part of dove.dtsi because they are physically available on every Dove SoC.
Also, the connection from lcd0 to the external HDMI encoder node is in the board dts because you can happily build a Dove SoC board with a different HDMI encoder or with no encoder at all.
The video-card super node is not in any way specific to DRM and
In case of fbdev, the framebuffer driver would use the lcd0 or lcd1 driver, or the lcd0 and lcd1 drivers, which are placed in drivers/video/backlight/.
And let's assume the following:
On board A
  Display controller ------------- lcd 0
On board B
  Display controller ------------- lcd 1
On board C
  Display controller ------------- lcd 0 and lcd 1
Without the super node, the user could configure the Linux kernel through menuconfig like below:
board A: enabling lcd 0, and disabling lcd 1,
board B: disabling lcd 0, and enabling lcd 1,
board C: enabling lcd 0 and lcd 1.
All we have to do is configure menuconfig to enable only the drivers for a certain board. Why does fbdev need the super node? Please give me comments if I am missing a point.
I assume when you say "configure menuconfig" you mean "create a CONFIG_DISPLAY_CONTROLLER_AS_USED_ON_BOARD_A, CONFIG_DISPLAY_CONTROLLER_AS_USED_ON_BOARD_B, ..." ?
This finally will require you to have:
(a) an #ifdef mess for every single board _and_ driver above
(b) a new CONFIG_.._BOARD_D plus new #ifdefs in the fbdev driver for every new board
(c) a new set of the above CONFIG_/#ifdef for the DRM driver
This can also be done with device_tree and supernode approach, so for your example above:
BoardA.dts:
  video {
      card0 {
          video-devices = <&lcd0>;
      };
  };

BoardB.dts:
  video {
      card0 {
          video-devices = <&lcd1>;
      };
  };

BoardC.dts:
  video {
      card0 {
          video-devices = <&lcd0 &lcd1>;
      };
  };
and in the driver you are prepared to loop over the video-devices property and parse the compatible strings of the nodes passed.
Sebastian
-----Original Message-----
From: Sebastian Hesselbarth [mailto:sebastian.hesselbarth@gmail.com]
Sent: Wednesday, July 03, 2013 8:52 PM
To: Inki Dae
Cc: 'Russell King'; devicetree-discuss@lists.ozlabs.org; 'Jean-Francois Moine'; 'Sascha Hauer'; 'Daniel Drake'; dri-devel@lists.freedesktop.org
Subject: Re: Best practice device tree design for display subsystems/DRM
On 07/03/13 13:43, Inki Dae wrote:
I do not understand why you keep referring to the SoC dtsi. In my example, I said that it is made up and joined from both the SoC dtsi and the board dts.
So, of course, lcd controller nodes and dcon are part of dove.dtsi because they are physically available on every Dove SoC.
Also, the connection from lcd0 to the external HDMI encoder node is in the board dts because you can happily build a Dove SoC board with a different HDMI encoder or with no encoder at all.
The video-card super node is not in any way specific to DRM and
In case of fbdev, the framebuffer driver would use the lcd0 or lcd1 driver, or the lcd0 and lcd1 drivers, which are placed in drivers/video/backlight/.
And let's assume the following:
On board A
  Display controller ------------- lcd 0
On board B
  Display controller ------------- lcd 1
On board C
  Display controller ------------- lcd 0 and lcd 1
Without the super node, the user could configure the Linux kernel through menuconfig like below:
board A: enabling lcd 0, and disabling lcd 1,
board B: disabling lcd 0, and enabling lcd 1,
board C: enabling lcd 0 and lcd 1.
All we have to do is configure menuconfig to enable only the drivers for a certain board. Why does fbdev need the super node? Please give me comments if I am missing a point.
I assume when you say "configure menuconfig" you mean "create a CONFIG_DISPLAY_CONTROLLER_AS_USED_ON_BOARD_A, CONFIG_DISPLAY_CONTROLLER_AS_USED_ON_BOARD_B, ..." ?
This finally will require you to have:
(a) an #ifdef mess for every single board _and_ driver above
(b) a new CONFIG_.._BOARD_D plus new #ifdefs in the fbdev driver for every new board
(c) a new set of the above CONFIG_/#ifdef for the DRM driver
This can also be done with device_tree and supernode approach, so for your example above:
BoardA.dts:
  video {
      card0 {
          video-devices = <&lcd0>;
      };
  };

BoardB.dts:
  video {
      card0 {
          video-devices = <&lcd1>;
      };
  };

BoardC.dts:
  video {
      card0 {
          video-devices = <&lcd0 &lcd1>;
      };
  };
and in the driver you are prepared to loop over the video-devices property and parse the compatible strings of the nodes passed.
As I mentioned before, fbdev doesn't need the super node, card0. Please see below:
BoardA.dts:
  video {
      dcon: display-controller@830000 {
          video-devices = <&lcd0>;
      };
  };

BoardB.dts:
  video {
      dcon: display-controller@830000 {
          video-devices = <&lcd1>;
      };
  };

BoardC.dts:
  video {
      dcon: display-controller@830000 {
          video-devices = <&lcd0 &lcd1>;
      };
  };
With the above dts files, does fbdev have any problem? I just changed the super node to a real hardware node. That is why the super node is specific to DRM.
Thanks, Inki Dae
Sebastian
On 07/04/13 09:05, Inki Dae wrote:
-----Original Message-----
From: Sebastian Hesselbarth [mailto:sebastian.hesselbarth@gmail.com]
Sent: Wednesday, July 03, 2013 8:52 PM
To: Inki Dae
Cc: 'Russell King'; devicetree-discuss@lists.ozlabs.org; 'Jean-Francois Moine'; 'Sascha Hauer'; 'Daniel Drake'; dri-devel@lists.freedesktop.org
Subject: Re: Best practice device tree design for display subsystems/DRM
On 07/03/13 13:43, Inki Dae wrote:
I do not understand why you keep referring to the SoC dtsi. In my example, I said that it is made up and joined from both the SoC dtsi and the board dts.
So, of course, lcd controller nodes and dcon are part of dove.dtsi because they are physically available on every Dove SoC.
Also, the connection from lcd0 to the external HDMI encoder node is in the board dts because you can happily build a Dove SoC board with a different HDMI encoder or with no encoder at all.
The video-card super node is not in any way specific to DRM and
In case of fbdev, the framebuffer driver would use the lcd0 or lcd1 driver, or the lcd0 and lcd1 drivers, which are placed in drivers/video/backlight/.
And let's assume the following:
On board A
  Display controller ------------- lcd 0
On board B
  Display controller ------------- lcd 1
On board C
  Display controller ------------- lcd 0 and lcd 1
Without the super node, the user could configure the Linux kernel through menuconfig like below:
board A: enabling lcd 0, and disabling lcd 1,
board B: disabling lcd 0, and enabling lcd 1,
board C: enabling lcd 0 and lcd 1.
All we have to do is configure menuconfig to enable only the drivers for a certain board. Why does fbdev need the super node? Please give me comments if I am missing a point.
I assume when you say "configure menuconfig" you mean "create a CONFIG_DISPLAY_CONTROLLER_AS_USED_ON_BOARD_A, CONFIG_DISPLAY_CONTROLLER_AS_USED_ON_BOARD_B, ..." ?
This finally will require you to have:
(a) an #ifdef mess for every single board _and_ driver above
(b) a new CONFIG_.._BOARD_D plus new #ifdefs in the fbdev driver for every new board
(c) a new set of the above CONFIG_/#ifdef for the DRM driver
This can also be done with device_tree and supernode approach, so for your example above:
BoardA.dts:
  video {
      card0 {
          video-devices = <&lcd0>;
      };
  };

BoardB.dts:
  video {
      card0 {
          video-devices = <&lcd1>;
      };
  };

BoardC.dts:
  video {
      card0 {
          video-devices = <&lcd0 &lcd1>;
      };
  };
and in the driver you are prepared to loop over the video-devices property and parse the compatible strings of the nodes passed.
As I mentioned before, fbdev doesn't need the super node, card0. Please see below:
BoardA.dts:
  video {
      dcon: display-controller@830000 {
          video-devices = <&lcd0>;
      };
  };

BoardB.dts:
  video {
      dcon: display-controller@830000 {
          video-devices = <&lcd1>;
      };
  };

BoardC.dts:
  video {
      dcon: display-controller@830000 {
          video-devices = <&lcd0 &lcd1>;
      };
  };
With the above dts files, does fbdev have any problem? I just changed the super node to a real hardware node. That is why the super node is specific to DRM.
Inki,
I guess there is a misunderstanding of what lcd-controller and display-controller are for on Dove. The lcd-controller reads the framebuffer from memory, optionally does some conversion/scaling, and drives the SoC's pins with pixel data and sync. The display-controller (dcon) on Dove is for mirroring the lcd0 framebuffer to the lcd1 framebuffer, and some other things.
And, as stated several times, you cannot move internal-registers out of the corresponding node on Dove. You _need_ that parent node for address mapping.
IMHO also fbdev needs the super-node because lcd0/1, dcon, hdmi-transmitter, programmable-pll is what make up what you would call a graphics card on x86. There is no such "graphics card" on most SoCs but it is built up by using separate devices and SoC internal devices.
Moreover, it is highly board dependent because you will easily find another board manufacturer choosing a different hdmi-transmitter or programmable-pll, using two lcd-controllers or just one. And there is no way of probing the board's configuration.
So even fbdev needs the super-node; there is no difference in which video subsystem you use - just because DT describes HW (even virtual HW), not a subsystem.
Sebastian
-----Original Message-----
From: Sebastian Hesselbarth [mailto:sebastian.hesselbarth@gmail.com]
Sent: Thursday, July 04, 2013 4:25 PM
To: Inki Dae
Cc: 'Jean-Francois Moine'; 'Daniel Drake'; devicetree-discuss@lists.ozlabs.org; dri-devel@lists.freedesktop.org; 'Sascha Hauer'; 'Russell King'
Subject: Re: Best practice device tree design for display subsystems/DRM
On 07/04/13 09:05, Inki Dae wrote:
-----Original Message-----
From: Sebastian Hesselbarth [mailto:sebastian.hesselbarth@gmail.com]
Sent: Wednesday, July 03, 2013 8:52 PM
To: Inki Dae
Cc: 'Russell King'; devicetree-discuss@lists.ozlabs.org; 'Jean-Francois Moine'; 'Sascha Hauer'; 'Daniel Drake'; dri-devel@lists.freedesktop.org
Subject: Re: Best practice device tree design for display subsystems/DRM
On 07/03/13 13:43, Inki Dae wrote:
I do not understand why you keep referring to the SoC dtsi. In my example, I said that it is made up and joined from both the SoC dtsi and the board dts.
So, of course, lcd controller nodes and dcon are part of dove.dtsi because they are physically available on every Dove SoC.
Also, the connection from lcd0 to the external HDMI encoder node is in the board dts because you can happily build a Dove SoC board with a different HDMI encoder or with no encoder at all.
The video-card super node is not in any way specific to DRM and
In case of fbdev, the framebuffer driver would use the lcd0 or lcd1 driver, or the lcd0 and lcd1 drivers, which are placed in drivers/video/backlight/.
And let's assume the following:
On board A
  Display controller ------------- lcd 0
On board B
  Display controller ------------- lcd 1
On board C
  Display controller ------------- lcd 0 and lcd 1
Without the super node, the user could configure the Linux kernel through menuconfig like below:
board A: enabling lcd 0, and disabling lcd 1,
board B: disabling lcd 0, and enabling lcd 1,
board C: enabling lcd 0 and lcd 1.
All we have to do is configure menuconfig to enable only the drivers for a certain board. Why does fbdev need the super node? Please give me comments if I am missing a point.
I assume when you say "configure menuconfig" you mean "create a CONFIG_DISPLAY_CONTROLLER_AS_USED_ON_BOARD_A, CONFIG_DISPLAY_CONTROLLER_AS_USED_ON_BOARD_B, ..." ?
This finally will require you to have:
(a) an #ifdef mess for every single board _and_ driver above
(b) a new CONFIG_.._BOARD_D plus new #ifdefs in the fbdev driver for every new board
(c) a new set of the above CONFIG_/#ifdef for the DRM driver
This can also be done with device_tree and supernode approach, so for your example above:
BoardA.dts:
  video {
      card0 {
          video-devices = <&lcd0>;
      };
  };

BoardB.dts:
  video {
      card0 {
          video-devices = <&lcd1>;
      };
  };

BoardC.dts:
  video {
      card0 {
          video-devices = <&lcd0 &lcd1>;
      };
  };
and in the driver you are prepared to loop over the video-devices property and parse the compatible strings of the nodes passed.
As I mentioned before, fbdev doesn't need the super node, card0. Please see below:
BoardA.dts:
  video {
      dcon: display-controller@830000 {
          video-devices = <&lcd0>;
      };
  };

BoardB.dts:
  video {
      dcon: display-controller@830000 {
          video-devices = <&lcd1>;
      };
  };

BoardC.dts:
  video {
      dcon: display-controller@830000 {
          video-devices = <&lcd0 &lcd1>;
      };
  };
With the above dts files, does fbdev have any problem? I just changed the super node to a real hardware node. That is why the super node is specific to DRM.
Inki,
I guess there is a misunderstanding of what lcd-controller and display-controller are for on Dove. The lcd-controller reads the framebuffer from memory, optionally does some conversion/scaling, and drives the SoC's pins with pixel data and sync. The display-controller (dcon) on Dove is for mirroring the lcd0 framebuffer to the lcd1 framebuffer, and some other things.
Right, that was definitely my misunderstanding. I haven't ever seen such hardware, so I thought lcd controller meant just an lcd panel. I really should have read the previous email threads. Thanks for your comments. I will take time to look into such hardware and to think about whether we really need the super node for it.
Thanks, Inki Dae
And, as stated several times, you cannot move internal-registers out of the corresponding node on Dove. You _need_ that parent node for address mapping.
IMHO also fbdev needs the super-node because lcd0/1, dcon, hdmi-transmitter, programmable-pll is what make up what you would call a graphics card on x86. There is no such "graphics card" on most SoCs but it is built up by using separate devices and SoC internal devices.
Moreover, it is highly board dependent because you will easily find another board manufacturer choosing a different hdmi-transmitter or programmable-pll, using two lcd-controllers or just one. And there is no way of probing the board's configuration.
So even fbdev needs the super-node; there is no difference in which video subsystem you use - just because DT describes HW (even virtual HW), not a subsystem.
Sebastian
On Wed, Jul 03, 2013 at 08:43:20PM +0900, Inki Dae wrote:
In case of fbdev, the framebuffer driver would use the lcd0 or lcd1 driver, or the lcd0 and lcd1 drivers, which are placed in drivers/video/backlight/.
No, that's totally wrong. Framebuffer drivers are not backlights. Framebuffer drivers go in drivers/video not drivers/video/backlight.
And let's assume the following:
On board A
  Display controller ------------- lcd 0
On board B
  Display controller ------------- lcd 1
On board C
  Display controller ------------- lcd 0 and lcd 1
Without the super node, the user could configure the Linux kernel through menuconfig like below:
board A: enabling lcd 0, and disabling lcd 1,
board B: disabling lcd 0, and enabling lcd 1,
board C: enabling lcd 0 and lcd 1.
I don't think so. By using menuconfig, you completely miss the point of using DT - which is to allow us to have a single kernel image which can support multiple boards with different configurations, even different SoCs.
All we have to do is configure menuconfig to enable only the drivers for a certain board. Why does fbdev need the super node? Please give me comments if I am missing a point.
fbdev needs the supernode _OR_ some way to specify that information which you're putting into menuconfig, because what's correct for the way one board is physically wired is not correct for how another board is physically wired.
With that information in menuconfig, you get a kernel image which can support board A, or board B, or board C but not a single kernel image which can support board A and board B and board C by loading that very same kernel image onto all three boards with just a different DT image.
This is the *whole* point of ARM moving over to DT.
If we wanted to use menuconfig to sort these kinds of board specific details, we wouldn't be investing so much time and effort into moving over to DT for ARM. In fact, we used to use menuconfig to sort out some of these kinds of details, and we've firmly decided that this is the wrong approach.
Today, there is a very strong push towards having a single kernel image which runs on every (modern) ARM board with DT describing not only the board level hardware but also the internal SoC as well.
-----Original Message-----
From: Russell King [mailto:rmk@arm.linux.org.uk]
Sent: Wednesday, July 03, 2013 9:05 PM
To: Inki Dae
Cc: 'Sebastian Hesselbarth'; 'Sascha Hauer'; 'Daniel Drake'; 'Jean-Francois Moine'; devicetree-discuss@lists.ozlabs.org; dri-devel@lists.freedesktop.org
Subject: Re: Best practice device tree design for display subsystems/DRM
On Wed, Jul 03, 2013 at 08:43:20PM +0900, Inki Dae wrote:
In the case of fbdev, the framebuffer driver would use the lcd0 or lcd1 driver, or both the lcd0 and lcd1 drivers, which are placed in drivers/video/backlight/.
No, that's totally wrong. Framebuffer drivers are not backlights. Framebuffer drivers go in drivers/video not drivers/video/backlight.
That is really not what I meant. The framebuffer driver controls the DCON, and the LCD panel driver controls lcd0 or lcd1. Maybe my sentence was poorly worded - sorry about that.
And let's assume the following:
On board A: Display controller ------------- lcd 0
On board B: Display controller ------------- lcd 1
On board C: Display controller ------------- lcd 0 and lcd 1
Without the super node, the user could configure the Linux kernel through menuconfig like below; board A: enable lcd 0 and disable lcd 1; board B: disable lcd 0 and enable lcd 1; board C: enable both lcd 0 and lcd 1.
I don't think so. By using menuconfig, you completely miss the point of using DT - which is to allow us to have a single kernel image which can support multiple boards with different configurations, even different SoCs.
All we have to do is configure menuconfig to enable only the drivers for a certain board. Why does fbdev need the super node? Please comment if I am missing a point.
fbdev needs the supernode _OR_ some way to specify that information which you're putting into menuconfig, because what's correct for the way one board is physically wired is not correct for how another board is physically wired.
With that information in menuconfig, you get a kernel image which can support board A, or board B, or board C but not a single kernel image which can support board A and board B and board C by loading that very same kernel image onto all three boards with just a different DT image.
This is the *whole* point of ARM moving over to DT.
If we wanted to use menuconfig to sort these kinds of board specific details, we wouldn't be investing so much time and effort into moving over to DT for ARM. In fact, we used to use menuconfig to sort out some of these kinds of details, and we've firmly decided that this is the wrong approach.
Today, there is a very strong push towards having a single kernel image which runs on every (modern) ARM board with DT describing not only the board level hardware but also the internal SoC as well.
Dear Russell, I understand what you are trying to do, and that's true. Please see below in addition:
dove.dtsi:

	...
	soc {
		internal-regs {
			...
			lcd0: lcd-controller@820000 {
				compatible = "marvell,armada-510-lcd";
				reg = <0x820000 0x1000>;
				status = "disabled";
			};

			lcd1: lcd-controller@810000 {
				compatible = "marvell,armada-510-lcd";
				reg = <0x810000 0x1000>;
				status = "disabled";
			};

			dcon: display-controller@830000 {
				compatible = "marvell,armada-510-dcon";
				reg = <0x830000 0x100>;
				status = "disabled";
			};
		};
	};
Board A.dts:

/include/ "dove.dtsi"

	dcon: display-controller@830000 {
		compatible = "marvell,armada-510-video", "linux,video-card";
		linux,video-memory-size = <0x100000>;
		linux,video-devices = <&lcd0 &dcon>;
	};
	...
Board B.dts:

/include/ "dove.dtsi"

	dcon: display-controller@830000 {
		compatible = "marvell,armada-510-video", "linux,video-card";
		linux,video-memory-size = <0x100000>;
		linux,video-devices = <&lcd1 &dcon>;
	};
	...
Board C.dts:

/include/ "dove.dtsi"

	dcon: display-controller@830000 {
		compatible = "marvell,armada-510-video", "linux,video-card";
		linux,video-memory-size = <0x100000>;
		linux,video-devices = <&lcd0 &lcd1 &dcon>;
	};
	...
Like the above, board-specific dts files can carry their own board-specific information. So I think we can do, and are already doing, what you are trying to do without the super node. The super node doesn't really describe real hardware.
Thanks, Inki Dae
The OLPC setup (which seems to be the more common case in terms of the on-SoC device structure):
CPU bus
 |
 +-LCD ---(RGB666+clock+sync)----> LCD panel
and I believe an HDMI transceiver somewhere.
(for the sake of simplicity, I am assuming OLPC is Armada 510 aka Dove, which it isn't)
dove-olpc.dts:

/include/ "dove.dtsi"

	video {
		card0 {
			compatible = "marvell,armada-510-video", "linux,video-card";
			linux,video-memory-size = <0x100000>;
			linux,video-devices = <&lcd0>;
		};
	};

	&lcd0 {
		status = "okay";
		/* core clock 5 = LCD PLL */
		clocks = <&core_clk 5>;
		clock-names = "lcdclk";
		/* pin config 1 = DUMB_RGB666 */
		marvell,pin-configuration = <1>;

		videomodes {
			mode_800x600 {
				...
			};
		};
	};
-- Russell King
On Wed, Jul 03, 2013 at 11:02:42AM +0200, Sascha Hauer wrote:
On Wed, Jul 03, 2013 at 05:57:18PM +0900, Inki Dae wrote:
	video {
		/* Single video card w/ multiple lcd controllers */
		card0 {
			compatible = "marvell,armada-510-display";
			reg = <0 0x3f000000 0x1000000>; /* video-mem hole */
			/* later: linux,video-memory-size = <0x1000000>; */
			marvell,video-devices = <&lcd0 &lcd1 &dcon>;
		};

		/* OR: multiple video cards w/ single lcd controllers */
		card0 {
			compatible = "marvell,armada-510-display";
			...
			marvell,video-devices = <&lcd0>;
		};

		card1 {
			compatible = "marvell,armada-510-display";
			...
			marvell,video-devices = <&lcd1>;
		};
	};
Sorry, but I'd like to say that this cannot be used commonly. Shouldn't you really consider the Linux framebuffer and other subsystems? The above dtsi file is specific to the DRM subsystem. I think a dtsi file should have no dependency on a certain subsystem, so the board dtsi file should work for device drivers based on other subsystems too: i.e., Linux framebuffer, DRM, and so on. So I *strongly* object to it. All we have to do is keep the dtsi file as is, and find another, better way that can be used commonly in DRM.
+1 for not encoding the projected usecase of the graphics subsystem into the devicetree. Whether the two LCD controllers shall be used together or separately should not affect the devicetree. devicetree is about hardware description, not configuration.
And if we listen to that argument, then this problem is basically impossible to solve sanely.
Are we really saying that we have no acceptable way to represent componentized devices in DT? If that's true, then DT fails to represent quite a lot of ARM hardware, and frankly we shouldn't be using it. I can't believe that's true though.
The problem is that even with an ASoC like approach, that doesn't work here because there's no way to know how many "components" to expect. That's what the "supernode" is doing - telling us what components group together to form a device.
Moreover, if you pay attention to my proposal, what you will realise is that it isn't DRM specific - it's totally subsystem agnostic. All it's doing is collecting a set of other devices together and only then publishing a device representing the full set of sub-devices.
Now think about that: what is DRM specific about that solution? What is DRM specific about "collecting a set of devices together and publishing a new device"?
How is this not "describing the hardware"? If I attach a HDMI transceiver to the DCON which is then connected to LCD0, is it not "describing the hardware" to put into DT that LCD0, DCON, and the HDMI transceiver are all connected together and therefore are required? One of the points of DT after all is that it can and should be used to represent the relationship between devices.
No - using the tree approach doesn't work, because LCD0, LCD1 and DCON are all on the same physical bus, but are themselves connected together. If you like, there are multiple hierarchies here - there's the bus hierarchy, and then there's the device hierarchy. Both of these hierarchies need to be represented in DT, otherwise you're not describing the hardware properly.
On 07/03/13 11:52, Russell King wrote:
On Wed, Jul 03, 2013 at 11:02:42AM +0200, Sascha Hauer wrote:
On Wed, Jul 03, 2013 at 05:57:18PM +0900, Inki Dae wrote:
	video {
		/* Single video card w/ multiple lcd controllers */
		card0 {
			compatible = "marvell,armada-510-display";
			reg = <0 0x3f000000 0x1000000>; /* video-mem hole */
			/* later: linux,video-memory-size = <0x1000000>; */
			marvell,video-devices = <&lcd0 &lcd1 &dcon>;
		};

		/* OR: multiple video cards w/ single lcd controllers */
		card0 {
			compatible = "marvell,armada-510-display";
			...
			marvell,video-devices = <&lcd0>;
		};

		card1 {
			compatible = "marvell,armada-510-display";
			...
			marvell,video-devices = <&lcd1>;
		};
	};
Sorry, but I'd like to say that this cannot be used commonly. Shouldn't you really consider the Linux framebuffer and other subsystems? The above dtsi file is specific to the DRM subsystem. I think a dtsi file should have no dependency on a certain subsystem, so the board dtsi file should work for device drivers based on other subsystems too: i.e., Linux framebuffer, DRM, and so on. So I *strongly* object to it. All we have to do is keep the dtsi file as is, and find another, better way that can be used commonly in DRM.
+1 for not encoding the projected usecase of the graphics subsystem into the devicetree. Whether the two LCD controllers shall be used together or separately should not affect the devicetree. devicetree is about hardware description, not configuration.
And if we listen to that argument, then this problem is basically impossible to solve sanely.
Are we really saying that we have no acceptable way to represent componentized devices in DT? If that's true, then DT fails to represent quite a lot of ARM hardware, and frankly we shouldn't be using it. I can't believe that's true though.
I think DT is able to describe componentized devices, as long as you ignore DRM/fbdev/ASoC's demands and try to have a look at the HW without any specific backend in mind.
We both had a similar discussion about ASoC's separation of bus-side and codec-side subdevices. In HW, there is no such separation but one single audio controller (at least on Dove). Moreover, a full featured (again, virtual) sound card comprises a lot more than just what is in the SoC. There are external codecs, jacks, and so on.
The problem is that even with an ASoC like approach, that doesn't work here because there's no way to know how many "components" to expect. That's what the "supernode" is doing - telling us what components group together to form a device.
True. The supernode forms a virtual device on top of the individual components of both SoC and board. For the driver subsystem, all that is required should be probed by starting from the supernode. If what is found is not sufficient for the driver subsystem to register a working subsystem device, bail out. If there is more than you expect, ignore it and cross your fingers. IMHO DT is not the solution for describing the world, but it is sufficient for any subsystem driver to find what it needs to know.
Moreover, if you pay attention to my proposal, what you will realise is that it isn't DRM specific - it's totally subsystem agnostic. All it's doing is collecting a set of other devices together and only then publishing a device representing the full set of sub-devices.
Now think about that: what is DRM specific about that solution? What is DRM specific about "collecting a set of devices together and publishing a new device"?
How is this not "describing the hardware"? If I attach a HDMI transceiver to the DCON which is then connected to LCD0, is it not "describing the hardware" to put into DT that LCD0, DCON, and the HDMI transceiver are all connected together and therefore are required? One of the points of DT after all is that it can and should be used to represent the relationship between devices.
No - using the tree approach doesn't work, because LCD0, LCD1 and DCON are all on the same physical bus, but are themselves connected together. If you like, there are multiple hierarchies here - there's the bus hierarchy, and then there's the device hierarchy. Both of these hierarchies need to be represented in DT, otherwise you're not describing the hardware properly.
IMHO DT is more than describing physical connections between devices but also describing logical connections between devices. While, for example, a SATA bus master could happily do DMA writes to the GPIO registers just because it is physically connected to it, it makes no sense. OTOH an LED connected to a gpio pin is not directly connected to the GPIO register but at least needs to know who to ask for toggling the line.
I know there may be better examples than those above, but describing a virtual video card with a supernode connecting separate devices is sane and sound to me. It is done the same way in a lot of other driver subsystems; DRM or fbdev is no different.
Also, maybe my point of view is influenced by how it is done on Marvell SoCs/boards, but I somehow consider it a worst case (maybe even the common case for SoCs in general).
Sebastian
On Wed, Jul 03, 2013 at 10:52:49AM +0100, Russell King wrote:
On Wed, Jul 03, 2013 at 11:02:42AM +0200, Sascha Hauer wrote:
+1 for not encoding the projected usecase of the graphics subsystem into the devicetree. Whether the two LCD controllers shall be used together or separately should not affect the devicetree. devicetree is about hardware description, not configuration.
And if we listen to that argument, then this problem is basically impossible to solve sanely.
Are we really saying that we have no acceptable way to represent componentized devices in DT? If that's true, then DT fails to represent quite a lot of ARM hardware, and frankly we shouldn't be using it. I can't believe that's true though.
The problem is that even with an ASoC like approach, that doesn't work here because there's no way to know how many "components" to expect. That's what the "supernode" is doing - telling us what components group together to form a device.
Moreover, if you pay attention to my proposal, what you will realise is that it isn't DRM specific - it's totally subsystem agnostic. All it's doing is collecting a set of other devices together and only then publishing a device representing the full set of sub-devices.
Now think about that: what is DRM specific about that solution? What is DRM specific about "collecting a set of devices together and publishing a new device"?
How is this not "describing the hardware"? If I attach a HDMI transceiver to the DCON which is then connected to LCD0, is it not "describing the hardware" to put into DT that LCD0, DCON, and the HDMI transceiver are all connected together and therefore are required? One of the points of DT after all is that it can and should be used to represent the relationship between devices.
No - using the tree approach doesn't work, because LCD0, LCD1 and DCON are all on the same physical bus, but are themselves connected together. If you like, there are multiple hierarchies here - there's the bus hierarchy, and then there's the device hierarchy. Both of these hierarchies need to be represented in DT, otherwise you're not describing the hardware properly.
And I think with these multiple hierarchies there is some confusion in this thread.
The devicetree is structured by the bus hierarchy and we shouldn't change that. The bus hierarchy doesn't necessarily match the device hierarchy though.
The supernode has to describe the device hierarchy instead. If it does so by referencing the physical devices by using phandles I'm perfectly fine with this approach. If this even leads to subsystem agnostic code which can be used to compose v4l2, ASoC or DRM devices I'd really love it.
The only thing we shouldn't do is to describe a whole virtual device directly under a single node in the devicetree as this breaks when bus hierarchy and device hierarchy differ.
Sascha
On Wed, Jul 03, 2013 at 10:52:49AM +0100, Russell King wrote:
Sorry, but I'd like to say that this cannot be used commonly. Shouldn't you really consider the Linux framebuffer and other subsystems? The above dtsi file is specific to the DRM subsystem. I think a dtsi file should have no dependency on a certain subsystem, so the board dtsi file should work for device drivers based on other subsystems too: i.e., Linux framebuffer, DRM, and so on. So I *strongly* object to it. All we have to do is keep the dtsi file as is, and find another, better way that can be used commonly in DRM.
+1 for not encoding the projected usecase of the graphics subsystem into the devicetree. Whether the two LCD controllers shall be used together or separately should not affect the devicetree. devicetree is about hardware description, not configuration.
And if we listen to that argument, then this problem is basically impossible to solve sanely.
Are we really saying that we have no acceptable way to represent componentized devices in DT? If that's true, then DT fails to represent quite a lot of ARM hardware, and frankly we shouldn't be using it. I can't believe that's true though.
The problem is that even with an ASoC like approach, that doesn't work here because there's no way to know how many "components" to expect. That's what the "supernode" is doing - telling us what components group together to form a device.
A componentized device never completes and it doesn't have to. A componentized device can start once there is a path from an input (crtc, i2s unit) to an output (connector, speaker).
Consider what happens with a supernode approach. Your board provides a devicetree which has a supernode with hdmi and lvds referenced. Now you build a kernel with the hdmi driver disabled. You would still expect the lvds port to be working without having the kernel wait for the supernode being complete.
Without supernode you can just start once you have everything between the crtc and lvds nodes. If later a hdmi device joins in then you can either notify the users (provided the DRM/KMS API supports it) or just ignore it until the DRM device gets reopened.
Sascha
On Thu, Jul 04, 2013 at 10:33:07AM +0200, Sascha Hauer wrote:
On Wed, Jul 03, 2013 at 10:52:49AM +0100, Russell King wrote:
Sorry, but I'd like to say that this cannot be used commonly. Shouldn't you really consider the Linux framebuffer and other subsystems? The above dtsi file is specific to the DRM subsystem. I think a dtsi file should have no dependency on a certain subsystem, so the board dtsi file should work for device drivers based on other subsystems too: i.e., Linux framebuffer, DRM, and so on. So I *strongly* object to it. All we have to do is keep the dtsi file as is, and find another, better way that can be used commonly in DRM.
+1 for not encoding the projected usecase of the graphics subsystem into the devicetree. Whether the two LCD controllers shall be used together or separately should not affect the devicetree. devicetree is about hardware description, not configuration.
And if we listen to that argument, then this problem is basically impossible to solve sanely.
Are we really saying that we have no acceptable way to represent componentized devices in DT? If that's true, then DT fails to represent quite a lot of ARM hardware, and frankly we shouldn't be using it. I can't believe that's true though.
The problem is that even with an ASoC like approach, that doesn't work here because there's no way to know how many "components" to expect. That's what the "supernode" is doing - telling us what components group together to form a device.
A componentized device never completes and it doesn't have to. A componentized device can start once there is a path from an input (crtc, i2s unit) to an output (connector, speaker).
Wrong. Please read the example with the diagrams I gave. Consider what happens if you have two display devices connected to a single output, one which fixes the allowable mode and one which _can_ reformat the selected mode.
If you go down that path, you risk driving the LCD panel with inappropriate timings which may damage it.
On Thu, Jul 04, 2013 at 09:40:52AM +0100, Russell King wrote:
On Thu, Jul 04, 2013 at 10:33:07AM +0200, Sascha Hauer wrote:
On Wed, Jul 03, 2013 at 10:52:49AM +0100, Russell King wrote:
Sorry, but I'd like to say that this cannot be used commonly. Shouldn't you really consider the Linux framebuffer and other subsystems? The above dtsi file is specific to the DRM subsystem. I think a dtsi file should have no dependency on a certain subsystem, so the board dtsi file should work for device drivers based on other subsystems too: i.e., Linux framebuffer, DRM, and so on. So I *strongly* object to it. All we have to do is keep the dtsi file as is, and find another, better way that can be used commonly in DRM.
+1 for not encoding the projected usecase of the graphics subsystem into the devicetree. Whether the two LCD controllers shall be used together or separately should not affect the devicetree. devicetree is about hardware description, not configuration.
And if we listen to that argument, then this problem is basically impossible to solve sanely.
Are we really saying that we have no acceptable way to represent componentized devices in DT? If that's true, then DT fails to represent quite a lot of ARM hardware, and frankly we shouldn't be using it. I can't believe that's true though.
The problem is that even with an ASoC like approach, that doesn't work here because there's no way to know how many "components" to expect. That's what the "supernode" is doing - telling us what components group together to form a device.
A componentized device never completes and it doesn't have to. A componentized device can start once there is a path from an input (crtc, i2s unit) to an output (connector, speaker).
Wrong. Please read the example with the diagrams I gave. Consider what happens if you have two display devices connected to a single output, one which fixes the allowable mode and one which _can_ reformat the selected mode.
What you describe here is a forced clone mode. This could be described in the devicetree so that a driver wouldn't start before all connected displays (links) are present, but this should be limited to the affected path, not to the whole componentized device.
Sascha
On Thu, Jul 04, 2013 at 10:58:17AM +0200, Sascha Hauer wrote:
On Thu, Jul 04, 2013 at 09:40:52AM +0100, Russell King wrote:
Wrong. Please read the example with the diagrams I gave. Consider what happens if you have two display devices connected to a single output, one which fixes the allowable mode and one which _can_ reformat the selected mode.
What you describe here is a forced clone mode. This could be described in the devicetree so that a driver wouldn't start before all connected displays (links) are present, but this should be limited to the affected path, not to the whole componentized device.
Okay, to throw a recent argument back at you: so what in this scenario if you have a driver for the fixed-mode device but not the other device?
It's exactly the same problem which you were describing to Sebastian just a moment ago with drivers missing from the supernode approach - you can't start if one of those "forced clone" drivers is missing.
On Thu, Jul 04, 2013 at 10:11:31AM +0100, Russell King wrote:
On Thu, Jul 04, 2013 at 10:58:17AM +0200, Sascha Hauer wrote:
On Thu, Jul 04, 2013 at 09:40:52AM +0100, Russell King wrote:
Wrong. Please read the example with the diagrams I gave. Consider what happens if you have two display devices connected to a single output, one which fixes the allowable mode and one which _can_ reformat the selected mode.
What you describe here is a forced clone mode. This could be described in the devicetree so that a driver wouldn't start before all connected displays (links) are present, but this should be limited to the affected path, not to the whole componentized device.
Okay, to throw a recent argument back at you: so what in this scenario if you have a driver for the fixed-mode device but not the other device?
It's exactly the same problem which you were describing to Sebastian just a moment ago with drivers missing from the supernode approach - you can't start if one of those "forced clone" drivers is missing.
Indeed, then you will see nothing on your display, but I would rather make this setup a special case than the fairly usual case in which we do not have all drivers for all devices referenced in the supernode compiled in.
Sascha
On 07/04/13 11:30, Sascha Hauer wrote:
On Thu, Jul 04, 2013 at 10:11:31AM +0100, Russell King wrote:
On Thu, Jul 04, 2013 at 10:58:17AM +0200, Sascha Hauer wrote:
On Thu, Jul 04, 2013 at 09:40:52AM +0100, Russell King wrote:
Wrong. Please read the example with the diagrams I gave. Consider what happens if you have two display devices connected to a single output, one which fixes the allowable mode and one which _can_ reformat the selected mode.
What you describe here is a forced clone mode. This could be described in the devicetree so that a driver wouldn't start before all connected displays (links) are present, but this should be limited to the affected path, not to the whole componentized device.
Okay, to throw a recent argument back at you: so what in this scenario if you have a driver for the fixed-mode device but not the other device?
It's exactly the same problem which you were describing to Sebastian just a moment ago with drivers missing from the supernode approach - you can't start if one of those "forced clone" drivers is missing.
Indeed, then you will see nothing on your display, but I would rather make this setup a special case than the fairly usual case in which we do not have all drivers for all devices referenced in the supernode compiled in.
The super-node links SoC-internal devices that do not necessarily map 1:1 onto the subsystem driver. You have one single DRM driver exploiting several device nodes for a single video card.
But you need one device node to hook the driver to.
Sebastian
On Thu, Jul 04, 2013 at 11:44:41AM +0200, Sebastian Hesselbarth wrote:
On 07/04/13 11:30, Sascha Hauer wrote:
On Thu, Jul 04, 2013 at 10:11:31AM +0100, Russell King wrote:
On Thu, Jul 04, 2013 at 10:58:17AM +0200, Sascha Hauer wrote:
On Thu, Jul 04, 2013 at 09:40:52AM +0100, Russell King wrote:
Wrong. Please read the example with the diagrams I gave. Consider what happens if you have two display devices connected to a single output, one which fixes the allowable mode and one which _can_ reformat the selected mode.
What you describe here is a forced clone mode. This could be described in the devicetree so that a driver wouldn't start before all connected displays (links) are present, but this should be limited to the affected path, not to the whole componentized device.
Okay, to throw a recent argument back at you: so what in this scenario if you have a driver for the fixed-mode device but not the other device?
It's exactly the same problem which you were describing to Sebastian just a moment ago with drivers missing from the supernode approach - you can't start if one of those "forced clone" drivers is missing.
Indeed, then you will see nothing on your display, but I would rather make this setup a special case than the fairly usual case in which we do not have all drivers for all devices referenced in the supernode compiled in.
The super-node links SoC internal devices that do not necessarily match with the subsystem driver. You have one single DRM driver exploiting several device nodes for a single video card.
But you need one device node to hook the driver to.
Currently on i.MX we use a platform_device for this purpose.
Sascha
On 07/04/13 12:09, Sascha Hauer wrote:
On Thu, Jul 04, 2013 at 11:44:41AM +0200, Sebastian Hesselbarth wrote:
On 07/04/13 11:30, Sascha Hauer wrote:
On Thu, Jul 04, 2013 at 10:11:31AM +0100, Russell King wrote:
On Thu, Jul 04, 2013 at 10:58:17AM +0200, Sascha Hauer wrote:
On Thu, Jul 04, 2013 at 09:40:52AM +0100, Russell King wrote:
Wrong. Please read the example with the diagrams I gave. Consider what happens if you have two display devices connected to a single output, one which fixes the allowable mode and one which _can_ reformat the selected mode.
What you describe here is a forced clone mode. This could be described in the devicetree so that a driver wouldn't start before all connected displays (links) are present, but this should be limited to the affected path, not to the whole componentized device.
Okay, to throw a recent argument back at you: so what in this scenario if you have a driver for the fixed-mode device but not the other device?
It's exactly the same problem which you were describing to Sebastian just a moment ago with drivers missing from the supernode approach - you can't start if one of those "forced clone" drivers is missing.
Indeed, then you will see nothing on your display, but I would rather make this setup a special case than the fairly usual case in which we do not have all drivers for all devices referenced in the supernode compiled in.
The super-node links SoC internal devices that do not necessarily match with the subsystem driver. You have one single DRM driver exploiting several device nodes for a single video card.
But you need one device node to hook the driver to.
Currently on i.MX we use a platform_device for this purpose.
Sascha,
I have the impression that we are not that far away in our proposals.
The platform_device you are using on i.MX is what we have been referring to as the "super-node" during this discussion. I see device nodes as some kind of platform_device - not all really end up as one, as it depends on the bus probing the nodes - but they are logical nodes that sometimes match the physical nodes (devices) 1:1.
The remaining issue I see, at least for Dove and a DRM driver that will be compatible with Armada 510 and e.g. PXA2128 or Armada 610, is:
We cannot match the DRM driver to any of the device nodes in question: (a) using the lcd-controller node will always end up with two DRM drivers on Dove, which has two lcd controllers; (b) using the display-controller node will not work on other SoCs, because it is unique to Armada 510.
With (a) you could tell lcd1 to go into "slave mode" as v4l2 does, but that will also lead to very SoC-specific drivers. Moreover, you will also have to tell lcd0 to be in either stand-alone or master mode. You need to know whether to wait for the DRM driver loaded on lcd1 (slave) or to fail, after reading the "slave-mode" property.
The super-node solves it easily and has a strong relation to a virtual video card. The actual point-to-point links match v4l2 approach.
Even the v4l2 approach could be used to describe all the possible combinations we discussed. But I do not see the beauty of it, as it will make translating device nodes into subsystem requirements, and even into physical SoC IP, a lot more complicated.
With v4l2 you would have to link (=> denotes a visible video stream, -> a logical link):
the single card, single lcd-controller case:

  (LCD0)->(HDMI)=>

the multiple card, single lcd-controller case:

  (LCD0)->(DCON)->(HDMI)=>
  (LCD1)---+        +=>

and the single card, multiple lcd-controller case:

  (LCD0)->(LCD1)->(DCON)->(HDMI)=>
                    +=>
All this may allow you to determine the required setup in the driver, but it in no way relates to how the data flows or how the devices are physically connected.
Sebastian
On Thu, Jul 04, 2013 at 12:58:29PM +0200, Sebastian Hesselbarth wrote:
On 07/04/13 12:09, Sascha Hauer wrote:
On Thu, Jul 04, 2013 at 11:44:41AM +0200, Sebastian Hesselbarth wrote:
On 07/04/13 11:30, Sascha Hauer wrote:
On Thu, Jul 04, 2013 at 10:11:31AM +0100, Russell King wrote:
On Thu, Jul 04, 2013 at 10:58:17AM +0200, Sascha Hauer wrote:
On Thu, Jul 04, 2013 at 09:40:52AM +0100, Russell King wrote:
Wrong. Please read the example with the diagrams I gave. Consider what happens if you have two display devices connected to a single output, one which fixes the allowable mode and one which _can_ reformat the selected mode.
What you describe here is a forced clone mode. This could be described in the devicetree so that a driver wouldn't start before all connected displays (links) are present, but this should be limited to the affected path, not to the whole componentized device.
Okay, to throw a recent argument back at you: so what in this scenario if you have a driver for the fixed-mode device but not the other device?
It's exactly the same problem which you were describing to Sebastian just a moment ago with drivers missing from the supernode approach - you can't start if one of those "forced clone" drivers is missing.
Indeed, then you will see nothing on your display, but I would rather make this setup a special case than treat it as the usual case that not all drivers for the devices referenced in the supernode are compiled in.
The super-node links SoC internal devices that do not necessarily match with the subsystem driver. You have one single DRM driver exploiting several device nodes for a single video card.
But you need one device node to hook the driver to.
Currently on i.MX we use a platform_device for this purpose.
Sascha,
I have the impression that we are not that far away in our proposals.
The platform_device you are using on i.MX is what we have been referring to as the "super-node" during the discussion. I see device nodes as some kind of platform_device - not all really end up in one, as that depends on the bus probing the nodes - but they are logical nodes that sometimes match the physical nodes (devices) 1:1.
The remaining issue I see at least for Dove and the DRM driver that will be compatible with Armada 510 and e.g. PXA2128 or Armada 610 is:
We cannot match the DRM driver with any of the device nodes in question: (a) using lcd-controller will always end up with two DRM drivers on Dove, which has two lcd controllers; (b) using display-controller will not work on other SoCs because it is unique to Armada 510.
With (a) you could tell lcd1 to go to "slave-mode" as v4l2 does, but that will also lead to very SoC-specific drivers. Moreover, you will also have to tell lcd0 to be either stand-alone or in master-mode. You need to know whether to wait for the DRM driver loaded on lcd1 (the slave) to fail after reading the "slave-mode" property.
The super-node solves it easily and has a strong relation to a virtual video card. The actual point-to-point links match v4l2 approach.
Even the v4l2 approach could be used to describe all possible combinations we discussed. But I do not see the beauty of it, as it will make translating device nodes to subsystem requirements, and even to physical SoC IP, a lot more complicated.
With v4l2 you will have to link (=> denoting visible video stream, -> logical link)

the single card, single lcd-controller case:
  (LCD0)->(HDMI)=>

the multiple card, single lcd-controller case:
  (LCD0)->(DCON)->(HDMI)=>
  (LCD1)---+      +=>

and the single card, multiple lcd-controller case:
  (LCD0)->(LCD1)->(DCON)->(HDMI)=>
No, the link can span a graph. There is no need to build only a chain. You can't link LCD0 to LCD1; that makes no sense.
Again, the difference between supernodes and graphs is that the supernode approach does not contain information about which components are needed to do something useful with the device. You simply have to wait until *all* components are present, which may never happen if you don't have drivers for all components of the device. With the graph instead you can start doing something once you find a link between a source and a sink, no matter if other links are still missing.
Another important point: if you have a board with multiple i2c encoder chips, how do you decide which one is connected to which LCDx when all the information you have is "I need these x components", with no information about how these components are connected to each other? Fortunately I don't have hardware here that does something like this, but it would also be possible to chain multiple encoder chips. This could be described in a graph, but when all we have is a list of components without connection information, we would need board-specific code to handle the layout.
Sascha
On Fri, Jul 5, 2013 at 11:07 AM, Sascha Hauer s.hauer@pengutronix.de wrote:
Again, the difference between supernodes and graphs is that the supernode approach does not contain information about which components are needed to do something useful with the device. You simply have to wait until *all* components are present, which may never happen if you don't have drivers for all components of the device. With the graph instead you can start doing something once you find a link between a source and a sink, no matter if other links are still missing.
I really think you're overstating your argument here. The whole point of a super node is that a *driver* can bind against it and a driver can be made intelligent enough to know which links are mandatory and which are optional (with the assumption that the data about which is which is encoded in the supernode). Graph vs. supernode vs some mixture of the two can all do exactly the same thing.
What really matters is which approach best describes the hardware, and then the drivers can be designed based on that.
Another important point: if you have a board with multiple i2c encoder chips, how do you decide which one is connected to which LCDx when all the information you have is "I need these x components", with no information about how these components are connected to each other? Fortunately I don't have hardware here that does something like this, but it would also be possible to chain multiple encoder chips. This could be described in a graph, but when all we have is a list of components without connection information, we would need board-specific code to handle the layout.
It is still absolutely true that i2c devices must have a node below the i2c bus, same for spi, same for MMIO. That doesn't change. It really isn't an option to have the subservient devices directly under the supernode. On that point the supernode pretty much must have phandles to those device nodes.
*However* there is absolutely nothing that says the subservient devices have to be bound to device drivers! It is perfectly fine for the supernode to go looking for a registered struct i2c_device (or whatever) and drive the device directly*. There are certainly cases where it wouldn't make sense to split a driver for what is effectively one device into two. But I digress.
*You'll note that I said look for i2c_device here, and not device_node. The reason is that waiting for the i2c_device to appear gives a guarantee that the i2c bus is initialized. Looking for the device_node does not.
g.
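Grant's scheme - the i2c device keeps its node under the i2c bus, and the card super-node merely references it by phandle - might look roughly like this. This is a sketch only: the node names, unit addresses, and the marvell,video-devices property are illustrative, borrowed from the super-node example elsewhere in this thread.

	i2c0: i2c@11000 {
		hdmi: hdmi-transmitter@70 {
			compatible = "nxp,tda998x";
			reg = <0x70>;
		};
	};

	video {
		card0 {
			compatible = "marvell,armada-510-display";
			/* phandles to device nodes that live under their
			 * own buses; the card driver can wait for the
			 * matching i2c device to appear and drive it
			 * directly instead of binding a separate driver */
			marvell,video-devices = <&lcd0 &hdmi>;
		};
	};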
On 07/04/13 10:33, Sascha Hauer wrote:
On Wed, Jul 03, 2013 at 10:52:49AM +0100, Russell King wrote:
Sorry, but I'd like to say that this cannot be used commonly. Shouldn't you really consider the Linux framebuffer and other subsystems? The above dtsi file is specific to the DRM subsystem, but a dtsi file should have no dependency on any particular subsystem, so the board dtsi file should be usable by device drivers based on other subsystems, i.e. Linux framebuffer, DRM, and so on. So I *strongly* object to it. All we have to do is keep the dtsi file as is, and find some other, better way that can be used commonly in DRM.
+1 for not encoding the projected usecase of the graphics subsystem into the devicetree. Whether the two LCD controllers shall be used together or separately should not affect the devicetree. devicetree is about hardware description, not configuration.
And if we listen to that argument, then this problem is basically impossible to solve sanely.
Are we really saying that we have no acceptable way to represent componentized devices in DT? If that's true, then DT fails to represent quite a lot of ARM hardware, and frankly we shouldn't be using it. I can't believe that's true though.
The problem is that even with an ASoC like approach, that doesn't work here because there's no way to know how many "components" to expect. That's what the "supernode" is doing - telling us what components group together to form a device.
A componentized device never completes and it doesn't have to. A componentized device can start once there is a path from an input (crtc, i2s unit) to an output (connector, speaker).
Consider what happens with a supernode approach. Your board provides a devicetree which has a supernode with hdmi and lvds referenced. Now you build a kernel with the hdmi driver disabled. You would still expect the lvds port to be working without having the kernel wait for the supernode being complete.
Without supernode you can just start once you have everything between the crtc and lvds nodes. If later a hdmi device joins in then you can either notify the users (provided the DRM/KMS API supports it) or just ignore it until the DRM device gets reopened.
Sascha,
that is what it is all about. You assume you know a priori which devices will be required for the componentized device to successfully output a video stream.
We have shown setups where you don't know what is required. Cubox _needs_ lcd0 and hdmi-transmitter, olpc just needs lcd0 and has built-in hdmi in the SoC (IIRC). The driver needs to know what to wait for, and that is given by the DT super-node.
I consider kernels with drivers missing, compared to what is given in the DT, a broken setup. You cannot complain about missing SATA if you leave out the SATA driver, or - if you implemented the driver in two parts - leave out one of the two SATA driver parts.
Sebastian
On Thu, Jul 04, 2013 at 10:45:40AM +0200, Sebastian Hesselbarth wrote:
On 07/04/13 10:33, Sascha Hauer wrote:
A componentized device never completes and it doesn't have to. A componentized device can start once there is a path from an input (crtc, i2s unit) to an output (connector, speaker).
Consider what happens with a supernode approach. Your board provides a devicetree which has a supernode with hdmi and lvds referenced. Now you build a kernel with the hdmi driver disabled. You would still expect the lvds port to be working without having the kernel wait for the supernode being complete.
Without supernode you can just start once you have everything between the crtc and lvds nodes. If later a hdmi device joins in then you can either notify the users (provided the DRM/KMS API supports it) or just ignore it until the DRM device gets reopened.
Sascha,
that is what it is all about. You assume you know a priori which devices will be required for the componentized device to successfully output a video stream.
We have shown setups where you don't know what is required. Cubox _needs_ lcd0 and hdmi-transmitter,
Then your Cubox devicetree has a link (that's how they call it in v4l2; a link isn't necessarily a direct connection but can have multiple devices in it) between lcd0 and hdmi.
olpc just needs lcd0 and has built-in hdmi in the SoC (IIRC).
And olpc has a link with lcd0 as the source and the builtin hdmi.
The driver needs to know what to wait for, and that is given by the DT super-node.
You need a source (described in the devicetree), a sink (also described in the devicetree) and a link between them, nothing more.
I consider kernels with drivers missing, compared to what is given in the DT, a broken setup.
What if your devicetree describes components not yet supported in mainline? Would you consider mainline kernels broken then?
Sascha
On 07/04/13 10:53, Sascha Hauer wrote:
On Thu, Jul 04, 2013 at 10:45:40AM +0200, Sebastian Hesselbarth wrote:
On 07/04/13 10:33, Sascha Hauer wrote:
A componentized device never completes and it doesn't have to. A componentized device can start once there is a path from an input (crtc, i2s unit) to an output (connector, speaker).
Consider what happens with a supernode approach. Your board provides a devicetree which has a supernode with hdmi and lvds referenced. Now you build a kernel with the hdmi driver disabled. You would still expect the lvds port to be working without having the kernel wait for the supernode being complete.
Without supernode you can just start once you have everything between the crtc and lvds nodes. If later a hdmi device joins in then you can either notify the users (provided the DRM/KMS API supports it) or just ignore it until the DRM device gets reopened.
Sascha,
that is what it is all about. You assume you know a priori which devices will be required for the componentized device to successfully output a video stream.
We have shown setups where you don't know what is required. Cubox _needs_ lcd0 and hdmi-transmitter,
Then your Cubox devicetree has a link (that's how they call it in v4l2; a link isn't necessarily a direct connection but can have multiple devices in it) between lcd0 and hdmi.
I haven't looked up the v4l2 "link" yet. But (a) if it is a separate node, how is that different from the "super-node" we are talking about, or (b) if it is a property, where do you put it?
olpc just needs lcd0 and has built-in hdmi in the SoC (IIRC).
And olpc has a link with lcd0 as the source and the builtin hdmi.
Sure, my DT proposal (which Inki shamelessly copied) has a "link" property for lcd0 on Cubox, while OLPC's lcd0 hasn't.
The driver needs to know what to wait for, and that is given by the DT super-node.
You need a source (described in the devicetree), a sink (also described in the devicetree) and a link between them, nothing more.
What if you need source-to-multi-sink or source-to-transceiver-to-sink? The DT proposal I have given is source-to-sink on a per-device-node basis. If you don't like OLPC's lcd0 having no "link" while Cubox's lcd0 has one, put a fake node for the possibly connected HDMI monitor in there. ASoC does that for the SPDIF jack by also having a "codecs" driver although there is nothing to control.
I consider kernels with drivers missing, compared to what is given in the DT, a broken setup.
What if your devicetree describes components not yet supported in mainline? Would you consider mainline kernels broken then?
No. But I do not 1:1 match device tree nodes with subsystem drivers. The devices are there; there is just no driver.
Currently, for the CuBox DT, video _is_ broken because there is no way to represent how lcd0, the hdmi-transmitter, and the pll are connected to form a single video card. All the separate drivers exist, and you may be lucky enough to get video out of the CuBox thanks to some reset values that happen to work by coincidence. But as soon as you try the very same kernel on a different board, your experience will change quickly.
Sebastian
On Thu, Jul 04, 2013 at 11:10:35AM +0200, Sebastian Hesselbarth wrote:
On 07/04/13 10:53, Sascha Hauer wrote:
On Thu, Jul 04, 2013 at 10:45:40AM +0200, Sebastian Hesselbarth wrote:
On 07/04/13 10:33, Sascha Hauer wrote:
A componentized device never completes and it doesn't have to. A componentized device can start once there is a path from an input (crtc, i2s unit) to an output (connector, speaker).
Consider what happens with a supernode approach. Your board provides a devicetree which has a supernode with hdmi and lvds referenced. Now you build a kernel with the hdmi driver disabled. You would still expect the lvds port to be working without having the kernel wait for the supernode being complete.
Without supernode you can just start once you have everything between the crtc and lvds nodes. If later a hdmi device joins in then you can either notify the users (provided the DRM/KMS API supports it) or just ignore it until the DRM device gets reopened.
Sascha,
that is what it is all about. You assume you know a priori which devices will be required for the componentized device to successfully output a video stream.
We have shown setups where you don't know what is required. Cubox _needs_ lcd0 and hdmi-transmitter,
Then your Cubox devicetree has a link (that's how they call it in v4l2; a link isn't necessarily a direct connection but can have multiple devices in it) between lcd0 and hdmi.
I haven't looked up the v4l2 "link" yet. But (a) if it is a separate node, how is that different from the "super-node" we are talking about, or (b) if it is a property, where do you put it?
Sorry, I should have explained this. The basic idea the v4l2 guys are following is that they describe their hardware pipelines in the devicetree.
Each device can have ports which are connected via links. In the devicetree a link basically becomes a phandle (a remote device will have a phandle pointing back to the original device). For an overview have a look at
Documentation/devicetree/bindings/media/video-interfaces.txt
With this you can describe the whole graph of devices you have in the devicetree. The examples in this file have a path from a camera sensor via a MIPI converter to a capture interface.
The difference to a supernode is that this approach describes the data flow in the devicetree so that we can iterate over it to find links between source and sink rather than relying on a list of subdevices to be completed.
Sascha
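The port/endpoint linkage Sascha describes might look roughly as follows for the lcd0-to-hdmi-transmitter case. This is a sketch in the style of Documentation/devicetree/bindings/media/video-interfaces.txt; the node names, compatibles, and addresses are illustrative, not taken from an actual Dove binding.

	lcd0: lcd-controller@820000 {
		compatible = "marvell,dove-lcd";
		reg = <0x820000 0x1000>;
		port {
			lcd0_out: endpoint {
				remote-endpoint = <&hdmi_in>;
			};
		};
	};

	hdmi: hdmi-transmitter@70 {
		compatible = "nxp,tda998x";
		reg = <0x70>;
		port {
			hdmi_in: endpoint {
				remote-endpoint = <&lcd0_out>;
			};
		};
	};

A driver can then walk the graph by following remote-endpoint phandles from a source (crtc) until it reaches a sink (connector), which is the "link between source and sink" this approach starts from.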
On 07/04/13 11:23, Sascha Hauer wrote:
On Thu, Jul 04, 2013 at 11:10:35AM +0200, Sebastian Hesselbarth wrote:
On 07/04/13 10:53, Sascha Hauer wrote:
On Thu, Jul 04, 2013 at 10:45:40AM +0200, Sebastian Hesselbarth wrote:
On 07/04/13 10:33, Sascha Hauer wrote:
A componentized device never completes and it doesn't have to. A componentized device can start once there is a path from an input (crtc, i2s unit) to an output (connector, speaker).
Consider what happens with a supernode approach. Your board provides a devicetree which has a supernode with hdmi and lvds referenced. Now you build a kernel with the hdmi driver disabled. You would still expect the lvds port to be working without having the kernel wait for the supernode being complete.
Without supernode you can just start once you have everything between the crtc and lvds nodes. If later a hdmi device joins in then you can either notify the users (provided the DRM/KMS API supports it) or just ignore it until the DRM device gets reopened.
Sascha,
that is what it is all about. You assume you know a priori which devices will be required for the componentized device to successfully output a video stream.
We have shown setups where you don't know what is required. Cubox _needs_ lcd0 and hdmi-transmitter,
Then your Cubox devicetree has a link (that's how they call it in v4l2; a link isn't necessarily a direct connection but can have multiple devices in it) between lcd0 and hdmi.
I haven't looked up the v4l2 "link" yet. But (a) if it is a separate node, how is that different from the "super-node" we are talking about, or (b) if it is a property, where do you put it?
Sorry, I should have explained this. The basic idea the v4l2 guys are following is that they describe their hardware pipelines in the devicetree.
Each device can have ports which are connected via links. In the devicetree a link basically becomes a phandle (a remote device will have a phandle pointing back to the original device). For an overview have a look at
Documentation/devicetree/bindings/media/video-interfaces.txt
With this you can describe the whole graph of devices you have in the devicetree. The examples in this file have a path from a camera sensor via a MIPI converter to a capture interface.
The difference to a supernode is that this approach describes the data flow in the devicetree so that we can iterate over it to find links between source and sink rather than relying on a list of subdevices to be completed.
Agree. But that is not that different from linux,video-external-encoder property I made up, except that the name is different.
And I still see no way, with that source/sink linking _alone_, to tell whether lcd0 and lcd1 act as a _single_ video card or are used in a _two_ video card setup.
There is no single device node on Dove that would sufficiently act as the top node for a working video card on all boards. And there is no framebuffer node to link each of the lcd0/1 nodes to.
That is what the super-node is for: to form a virtual device, called a video card, that acts as a container for all those SoC devices that are not sufficient for a working video setup on their own.
If lcd0 needs that hdmi-transmitter you link it to the lcd0 node - not the super-node. If lcd0 needs some pll clock you link it to the lcd0 node - again not the super-node.
The super-node(s) just connect all SoC devices that shall be part of your board-specific video card(s) - for Dove that is any combination of lcd0, lcd1, dcon, and video memory allocation.
Sebastian
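Sebastian's split - point-to-point links kept on the component nodes, the super-node only grouping the SoC devices into one card - might be sketched like this. The node names and compatibles are illustrative; linux,video-external-encoder is the property Sebastian says he made up elsewhere in this thread.

	lcd0: lcd-controller@820000 {
		compatible = "marvell,dove-lcd";
		reg = <0x820000 0x1000>;
		clocks = <&ext_pll>;                    /* lcd0's pll link */
		linux,video-external-encoder = <&hdmi>; /* lcd0's hdmi link */
	};

	video {
		card0 {
			compatible = "marvell,armada-510-display";
			/* only groups SoC components into one card;
			 * the point-to-point links stay on the nodes */
			marvell,video-devices = <&lcd0 &lcd1 &dcon>;
		};
	};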
On Thu, Jul 04, 2013 at 11:40:17AM +0200, Sebastian Hesselbarth wrote:
On 07/04/13 11:23, Sascha Hauer wrote:
With this you can describe the whole graph of devices you have in the devicetree. The examples in this file have a path from a camera sensor via a MIPI converter to a capture interface.
The difference to a supernode is that this approach describes the data flow in the devicetree so that we can iterate over it to find links between source and sink rather than relying on a list of subdevices to be completed.
Agree. But that is not that different from linux,video-external-encoder property I made up, except that the name is different.
And I still see no way, with that source/sink linking _alone_, to tell whether lcd0 and lcd1 act as a _single_ video card or are used in a _two_ video card setup.
There is no single device node on Dove that would sufficiently act as the top node for a working video card on all boards. And there is no framebuffer node to link each of the lcd0/1 nodes to.
That is what the super-node is for: to form a virtual device, called a video card, that acts as a container for all those SoC devices that are not sufficient for a working video setup on their own.
If lcd0 needs that hdmi-transmitter you link it to the lcd0 node - not the super-node. If lcd0 needs some pll clock you link it to the lcd0 node - again not the super-node.
The super-node(s) just connect all SoC devices that shall be part of your board-specific video card(s) - for Dove that is any combination of lcd0, lcd1, dcon, and video memory allocation.
So with the supernode approach you would have one or two supernodes, and with a v4l2 approach you would have either one graph containing lcd0 and lcd1 or two graphs (without a connection in between).
Sascha
On Thu, Jul 04, 2013 at 10:33:07AM +0200, Sascha Hauer wrote:
A componentized device never completes and it doesn't have to. A componentized device can start once there is a path from an input (crtc, i2s unit) to an output (connector, speaker).
Sorry for the incomplete reply.
If you read all the messages in this thread, then you will realise that DRM does not support an incremental startup approach. It needs to know everything at the point of "load".
Without supernode you can just start once you have everything between the crtc and lvds nodes. If later a hdmi device joins in then you can either notify the users (provided the DRM/KMS API supports it) or just ignore it until the DRM device gets reopened.
It's not a case that you can ignore it until the "DRM device gets reopened" because the DRM device never shuts down. You'd have to ignore it until you tear down what you have already registered into DRM, causing all the display hardware to be shutdown, and then re-"load" DRM.
To make this work, you would have to modify not only DRM to allow that, but also the framebuffer layer too. Are you volunteering? :)
I don't think that Sebastian nor myself have either the motivation nor the time available to go down that route of majorly rewriting kernel subsystems.
Not only that but I believe it to be an unsafe approach as I've already outlined.
On Thu, Jul 04, 2013 at 10:08:29AM +0100, Russell King wrote:
On Thu, Jul 04, 2013 at 10:33:07AM +0200, Sascha Hauer wrote:
A componentized device never completes and it doesn't have to. A componentized device can start once there is a path from an input (crtc, i2s unit) to an output (connector, speaker).
Sorry for the incomplete reply.
If you read all the messages in this thread, then you will realise that DRM does not support an incremental startup approach. It needs to know everything at the point of "load".
I know that DRM does not support this incremental startup approach.
Without supernode you can just start once you have everything between the crtc and lvds nodes. If later a hdmi device joins in then you can either notify the users (provided the DRM/KMS API supports it) or just ignore it until the DRM device gets reopened.
It's not a case that you can ignore it until the "DRM device gets reopened" because the DRM device never shuts down. You'd have to ignore it until you tear down what you have already registered into DRM, causing all the display hardware to be shutdown, and then re-"load" DRM.
To make this work, you would have to modify not only DRM to allow that, but also the framebuffer layer too. Are you volunteering? :)
I tend to ignore the framebuffer layer since I still hope that it will be removed from DRM soon.
We @pengutronix can put quite some time into this, but for sure we can't do it alone.
A general problem with DRM on embedded at the moment is that there's very little cooperation between the different driver authors, mainly because DRM makes it hard to share code between drivers. There is currently no place in DRM to register components in a drm_device-agnostic way, so everyone tries to work around this in their SoC drivers. The supernode approach seems to be another workaround in this fashion.
Coming from the embedded area, componentized devices are nothing new to us, and I believe that working on this could also help the desktop people, who suddenly get componentized devices as well (Optimus).
Sascha
On Wed, Jul 3, 2013 at 5:02 AM, Sascha Hauer s.hauer@pengutronix.de wrote:
On Wed, Jul 03, 2013 at 05:57:18PM +0900, Inki Dae wrote:
video {
	/* Single video card w/ multiple lcd controllers */
	card0 {
		compatible = "marvell,armada-510-display";
		reg = <0 0x3f000000 0x1000000>; /* video-mem hole */
		/* later: linux,video-memory-size = <0x1000000>; */
		marvell,video-devices = <&lcd0 &lcd1 &dcon>;
	};

	/* OR: Multiple video cards w/ single lcd controllers */
	card0 {
		compatible = "marvell,armada-510-display";
		...
		marvell,video-devices = <&lcd0>;
	};
	card1 {
		compatible = "marvell,armada-510-display";
		...
		marvell,video-devices = <&lcd1>;
	};
};
Sorry, but I'd like to say that this cannot be used commonly. Shouldn't you really consider the Linux framebuffer and other subsystems? The above dtsi file is specific to the DRM subsystem, but a dtsi file should have no dependency on any particular subsystem, so the board dtsi file should be usable by device drivers based on other subsystems, i.e. Linux framebuffer, DRM, and so on. So I *strongly* object to it. All we have to do is keep the dtsi file as is, and find some other, better way that can be used commonly in DRM.
+1 for not encoding the projected usecase of the graphics subsystem into the devicetree. Whether the two LCD controllers shall be used together or separately should not affect the devicetree. devicetree is about hardware description, not configuration.
I do like the supernode idea, it would have avoided the ugly global-list-of-sub-devices in tilcdc (and probably some other drivers).
As for projection of use-case, and whether it is something drm-specific or not... if there is a way to make it generic enough that it could work for fbdev, well, I suppose that is nice. Not a hard requirement in my mind, or at least it is a secondary priority compared to having a better way of composing drm devices out of sub-{devices,nodes,whatever}.
BR, -R
Sascha
...
Sorry, but I'd like to say that this cannot be used commonly. Shouldn't you really consider the Linux framebuffer and other subsystems? The above dtsi file is specific to the DRM subsystem, but a dtsi file should have no dependency on any particular subsystem, so the board dtsi file should be usable by device drivers based on other subsystems, i.e. Linux framebuffer, DRM, and so on. So I *strongly* object to it. All we have to do is keep the dtsi file as is, and find some other, better way that can be used commonly in DRM.
+1 for not encoding the projected usecase of the graphics subsystem into the devicetree. Whether the two LCD controllers shall be used together or separately should not affect the devicetree. devicetree is about hardware description, not configuration.
I do like the supernode idea, it would have avoided the ugly global-list-of-sub-devices in tilcdc (and probably some other drivers).
As for projection of use-case, and whether it is something drm-specific or not... if there is a way to make it generic enough that it could work for fbdev, well, I suppose that is nice. Not a hard requirement in my mind, or at least it is a secondary priority compared to having a better way of composing drm devices out of sub-{devices,nodes,whatever}.
I'm not following who has the requirement for exposing different output device paths as possibly one or two devices, so you could have a drm device for one or the other, or X could use a subset of devices.
I'm not really sure how best to deal with that. We have had plans for drm to actually expose sub-groups of crtcs/encoders/connectors to userspace via different device nodes for a while; it's just never received the final push on how to configure it.
Dave.
On Thu, Jul 4, 2013 at 5:28 PM, Dave Airlie airlied@gmail.com wrote:
...
Sorry, but I'd like to say that this cannot be used commonly. Shouldn't you really consider the Linux framebuffer and other subsystems? The above dtsi file is specific to the DRM subsystem, but a dtsi file should have no dependency on any particular subsystem, so the board dtsi file should be usable by device drivers based on other subsystems, i.e. Linux framebuffer, DRM, and so on. So I *strongly* object to it. All we have to do is keep the dtsi file as is, and find some other, better way that can be used commonly in DRM.
+1 for not encoding the projected usecase of the graphics subsystem into the devicetree. Whether the two LCD controllers shall be used together or separately should not affect the devicetree. devicetree is about hardware description, not configuration.
I do like the supernode idea, it would have avoided the ugly global-list-of-sub-devices in tilcdc (and probably some other drivers).
As for projection of use-case, and whether it is something drm-specific or not... if there is a way to make it generic enough that it could work for fbdev, well, I suppose that is nice. Not a hard requirement in my mind, or at least it is a secondary priority compared to having a better way of composing drm devices out of sub-{devices,nodes,whatever}.
I'm not following who has the requirement for exposing different output device paths as possibly one or two devices, so you could have a drm device for one or the other, or X could use a subset of devices.
I'm not really sure how best to deal with that. We have had plans for drm to actually expose sub-groups of crtc/encoders/connectors to userspace via different device nodes for a while; it's just never received the final push on how to configure it.
David Herrmann is working on that as part of the GSoC render node project: http://dvdhrm.wordpress.com/2013/05/29/drm-render-and-modeset-nodes/
Alex
On Wed, Jul 3, 2013 at 10:02 AM, Sascha Hauer s.hauer@pengutronix.de wrote:
On Wed, Jul 03, 2013 at 05:57:18PM +0900, Inki Dae wrote:
video {
	/* Single video card w/ multiple lcd controllers */
	card0 {
		compatible = "marvell,armada-510-display";
		reg = <0 0x3f000000 0x1000000>; /* video-mem hole */
		/* later: linux,video-memory-size = <0x1000000>; */
		marvell,video-devices = <&lcd0 &lcd1 &dcon>;
	};

	/* OR: Multiple video cards w/ single lcd controllers */
	card0 {
		compatible = "marvell,armada-510-display";
		...
		marvell,video-devices = <&lcd0>;
	};

	card1 {
		compatible = "marvell,armada-510-display";
		...
		marvell,video-devices = <&lcd1>;
	};
};
Sorry, but I'd like to say that this cannot be used commonly. Shouldn't you really consider the Linux framebuffer and other subsystems? The above dtsi file is specific to the DRM subsystem. A dtsi file should have no dependency on any particular subsystem; a board dtsi file should be usable by device drivers from any subsystem: Linux framebuffer, DRM, and so on. So I *strongly* object to it. All we have to do is keep the dtsi file as-is, and find some other, better way that can be used commonly in DRM.
+1 for not encoding the projected usecase of the graphics subsystem into the devicetree. Whether the two LCD controllers shall be used together or separately should not affect the devicetree. devicetree is about hardware description, not configuration.
It is however relevant to encode information about how devices are related to each other. That could be an orthogonal binding though to describe how displays are oriented relative to each other.
g.
On 07/05/13 10:43, Grant Likely wrote:
On Wed, Jul 3, 2013 at 10:02 AM, Sascha Hauer s.hauer@pengutronix.de wrote:
On Wed, Jul 03, 2013 at 05:57:18PM +0900, Inki Dae wrote:
video {
	/* Single video card w/ multiple lcd controllers */
	card0 {
		compatible = "marvell,armada-510-display";
		reg = <0 0x3f000000 0x1000000>; /* video-mem hole */
		/* later: linux,video-memory-size = <0x1000000>; */
		marvell,video-devices = <&lcd0 &lcd1 &dcon>;
	};

	/* OR: Multiple video cards w/ single lcd controllers */
	card0 {
		compatible = "marvell,armada-510-display";
		...
		marvell,video-devices = <&lcd0>;
	};

	card1 {
		compatible = "marvell,armada-510-display";
		...
		marvell,video-devices = <&lcd1>;
	};
};
Sorry, but I'd like to say that this cannot be used commonly. Shouldn't you really consider the Linux framebuffer and other subsystems? The above dtsi file is specific to the DRM subsystem. A dtsi file should have no dependency on any particular subsystem; a board dtsi file should be usable by device drivers from any subsystem: Linux framebuffer, DRM, and so on. So I *strongly* object to it. All we have to do is keep the dtsi file as-is, and find some other, better way that can be used commonly in DRM.
+1 for not encoding the projected usecase of the graphics subsystem into the devicetree. Whether the two LCD controllers shall be used together or separately should not affect the devicetree. devicetree is about hardware description, not configuration.
It is however relevant to encode information about how devices are related to each other. That could be an orthogonal binding though to describe how displays are oriented relative to each other.
Grant,
from what I can see, with respect to what we are discussing for Marvell SoCs, the super-node approach and v4l2-style links are more or less equivalent.
The only issue left for how to describe that in DT in a sane way is:
Do we use a super-node, or node properties for the virtual graphics card, to tell whether there is one card with two lcd-controllers (lcd0/lcd1 above) or two cards with one lcd-controller each?
You see the super-node solution above, but even with orthogonal properties we can achieve the same. If we hook up the DRM (or fbdev or whatever) driver to the lcd-controller nodes, we will have two driver instances trying to register a DRM card.
For the two card scenario, everything is fine. The driver knows about a possible DCON (output mux/mirror) and looks for a compatible available node. Both driver instances may need to access DCON registers but that is a driver issue and not DT related.
For the one card with two lcd-controllers scenario the only difference between super-node and node-to-node linking alone is that you get two driver instances in the first place. With a property equivalent to v4l2 "slave-mode" that you put on e.g. lcd1, the driver loaded for lcd1 node bails out silently. The driver loaded for lcd0 also looks for lcd-controller nodes with "slave-mode" property and picks up lcd1.
This possibly leads to races, but IMHO as long as the driver looks for its "slave-mode" property early, everything should be fine.
All other links required, e.g. lcd0 -> hdmi-transmitter, belong to the respective nodes in both approaches. DCON existence and requirement is implicitly put in the driver and not required in DT.
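A node-to-node variant of the scheme described above might look like the following DT fragment. The property names here (marvell,external-encoder, marvell,slave-mode) are hypothetical placeholders mirroring the v4l2 "slave-mode" idea, not an accepted binding:

```dts
/* Hypothetical node-to-node linking, no super-node. */
lcd0: lcd-controller@20000 {
	compatible = "marvell,armada-510-lcd";
	/* link to the external encoder on this output path */
	marvell,external-encoder = <&hdmi_tx>;
};

lcd1: lcd-controller@21000 {
	compatible = "marvell,armada-510-lcd";
	/* driver instance for lcd1 bails out silently;
	 * lcd0's driver scans for slave-mode nodes and picks it up */
	marvell,slave-mode;
};
```

DCON does not appear in any link; as stated above, its existence and requirement stay implicit in the driver.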
Of course, this is independent of how to handle and register sub-drivers for DRM. But this is subsystem dependent and also not related to DT.
So for the discussion, I can see that there have been some voting for super-node, some for node-to-node linking. Although I initially proposed super-nodes, I can also happily live with node-to-node linking alone.
Either someone can give an example where one of the approaches will not work (i.MX, exynos?), Grant or one of the DRM maintainers has a preference, or we are stuck at the decision.
Sebastian
On Fri, Jul 5, 2013 at 10:34 AM, Sebastian Hesselbarth sebastian.hesselbarth@gmail.com wrote:
So for the discussion, I can see that there have been some voting for super-node, some for node-to-node linking. Although I initially proposed super-nodes, I can also happily live with node-to-node linking alone.
Either someone can give an example where one of the approaches will not work (i.MX, exynos?), Grant or one of the DRM maintainers has a preference, or we are stuck at the decision.
I tend to prefer a top-level super-node with phandles to all of the components that compose the device when there is no clear single device that controls all the others. There is some precedent for that in other subsystems (LEDs, ASoC, etc.). Sound in particular has a lot of different bits and pieces that are interconnected with audio channels, gpios, and other things that get quite complicated, so it is convenient to have a single node that describes how they all fit together *and* allows a platform to use a completely different device driver if required.
node-to-node linking works well if an absolute 'master' can be identified for the virtual device, e.g. Ethernet MAC devices use a "phy-device" property to link to the phy they require. In that case it is pretty clear that the Ethernet MAC is in charge and it uses the PHY.
In either case it is absolutely required that the 'master' driver knows how to find and wait for all the subservient devices before probing can complete.
I know that isn't a solid answer, but you know the problem space better than I. Take the above into account, make a decision and post a binding proposal for review.
g.
On 07/05/13 11:51, Grant Likely wrote:
On Fri, Jul 5, 2013 at 10:34 AM, Sebastian Hesselbarth sebastian.hesselbarth@gmail.com wrote:
So for the discussion, I can see that there have been some voting for super-node, some for node-to-node linking. Although I initially proposed super-nodes, I can also happily live with node-to-node linking alone.
Either someone can give an example where one of the approaches will not work (i.MX, exynos?), Grant or one of the DRM maintainers has a preference, or we are stuck at the decision.
I tend to prefer a top-level super-node with phandles to all of the components that compose the device when there is no clear single device that controls all the others. There is some precedent for that in other subsystems (LEDs, ASoC, etc.). Sound in particular has a lot of different bits and pieces that are interconnected with audio channels, gpios, and other things that get quite complicated, so it is convenient to have a single node that describes how they all fit together *and* allows a platform to use a completely different device driver if required.
Actually, I consider the super-node not as the single point for _all_ components involved, but more as the top node that gives you a single starting point from where you can explore the links on a node-to-node basis. This by coincidence perfectly fits what a DRM driver will be required to match against.
Sascha Hauer just also replied to a mail earlier mentioning references to external i2c encoders put _into_ the phandles of the super-node. This is not what I consider this super-node for. Maybe the following drawings also help a little bit.
(X) Hardware layout inside the SoC:

    {BUS}<->{RAM}
      |
      +<->{LCD0}-+          +->{LCD0-PINS}
      |          +->{DCON}-+
      +<->{LCD1}-+          +->{LCD1-PINS}
From a logical point of view, and just because we have no single starting point on Marvell SoCs, the use cases can be described as follows. (x) denotes a device tree node, --> a link installed by some phandle property, [x] a device tree node not linked but looked up in DT by the driver, and ==> the first user-visible video stream.
(1) single card, single lcd-controller:

                   [DCON]
    (SUPERNODE)--->(LCD0)-->(HDMI)==>

(2) multiple cards, single lcd-controller:

                   [DCON]
    (SUPERNODE0)-->(LCD0)-->(HDMI)==>
    (SUPERNODE1)-->(LCD1)==>

(3) single card, multiple lcd-controllers:

                  [DCON]
                +->(LCD0)-->(HDMI)==>
    (SUPERNODE)-+
                +->(LCD1)==>
So the super-node is just used as a single starting point for the node-to-node walk. IMHO this is very compatible with what v4l2 guys came up with - except that you _can_ install a virtual starting point where it is missing from a SoC device point-of-view. SoCs with two unrelated lcd-controllers will pick up the lcd-controller node for their DRM drivers.
As mentioned before, to achieve the same you can leave the super-node and use lcd-controller nodes with "slave-mode" type-of property.
Maybe calling it "super-node" after some point of the discussion was misleading. It is *not* an umbrella node with phandles to every device involved, but *the* root node for your logical graph/tree/chain of device nodes required for video.
node-to-node linking works well if an absolute 'master' can be identified for the virtual device, e.g. Ethernet MAC devices use a "phy-device" property to link to the phy they require. In that case it is pretty clear that the Ethernet MAC is in charge and it uses the PHY.
In either case it is absolutely required that the 'master' driver knows how to find and wait for all the subservient devices before probing can complete.
I know that isn't a solid answer, but you know the problem space better than I. Take the above into account, make a decision and post a binding proposal for review.
Well, I have given a proposal of what I already implemented during Russell's Armada DRM driver RFCs. I am fine with *anyone* picking up *any* solution discussed here as long as it involves phandles linking SoC nodes (lcd-controller) with external I2C nodes (hdmi-transceiver).
Whether it is a super-node or master/slave properties, I don't care, as long as it is somehow related to HW and not some SW subsystem requirement. I can think of both solutions solving the Marvell SoC DRM driver "issues", and I guess they work for most other SoCs as well.
The only scenario out of the three above that can possibly start displaying video while waiting for sub-drivers is (3). You can output video through LCD1 while waiting for HDMI.
But that is in no way related to the "best practice device tree design for display subsystems" that this discussion is about; it is an implementation detail of DRM or any other subsystem.
The mere existence of the link in a specific device tree description has to be sufficient for the driver or its subsystem to determine (a) that the linked node *is* mandatory, (b) how to wait for the (possible) driver for the linked node, and (c) that it must fail fatally if that driver does not turn up.
Finally, if there is no proposal done in the meantime, I will pick it up in a month or two.
Sebastian
On Tue, Jul 02, 2013 at 11:43:59AM -0600, Daniel Drake wrote:
exynos seems to take the same approach. Components are separate in the device tree, and each component is implemented as a platform driver or i2c driver. However, all the drivers are built together in the same module, and the module_init sequence is careful to initialise all of the output component drivers before loading the DRM driver. The output component drivers store their findings in global structures.
I will point out that relying on driver probing orders has already been stated by driver model people to be unsafe. This is why I will not adopt such a solution for my driver; it is a bad design.
On Tue, Jul 2, 2013 at 12:43 PM, Russell King rmk@arm.linux.org.uk wrote:
I will point out that relying on driver probing orders has already been stated by driver model people to be unsafe. This is why I will not adopt such a solution for my driver; it is a bad design.
Just to clarify, what you're objecting to is effectively the following? Because it is not guaranteed in the future that the probe order will be the same as the platform_driver_register() calls?
static int __init exynos_drm_init(void)
{
	ret = platform_driver_register(&hdmi_driver);
	if (ret < 0)
		goto out_hdmi;

	ret = platform_driver_register(&mixer_driver);
	if (ret < 0)
		goto out_mixer;

	ret = platform_driver_register(&exynos_drm_common_hdmi_driver);
	if (ret < 0)
		goto out_common_hdmi;

	ret = platform_driver_register(&exynos_drm_platform_driver);
	if (ret < 0)
		goto out_drm;
(exynos_drm_platform_driver is the driver that creates the drm_device)
Thanks Daniel
On Tue, Jul 02, 2013 at 12:54:41PM -0600, Daniel Drake wrote:
On Tue, Jul 2, 2013 at 12:43 PM, Russell King rmk@arm.linux.org.uk wrote:
I will point out that relying on driver probing orders has already been stated by driver model people to be unsafe. This is why I will not adopt such a solution for my driver; it is a bad design.
Just to clarify, what you're objecting to is effectively the following? Because it is not guaranteed in the future that the probe order will be the same as the platform_driver_register() calls?
Correct. Consider what happens if the devices are registered after the driver(s) have been registered, which may not be in the correct order.
On Wed, Jul 3, 2013 at 4:08 AM, Russell King wrote (Re: Best practice device tree design for display subsystems/DRM):
On Tue, Jul 02, 2013 at 12:54:41PM -0600, Daniel Drake wrote:
On Tue, Jul 2, 2013 at 12:43 PM, Russell King rmk@arm.linux.org.uk wrote:
I will point out that relying on driver probing orders has already been stated by driver model people to be unsafe. This is why I will not adopt such a solution for my driver; it is a bad design.
Just to clarify, what you're objecting to is effectively the following? Because it is not guaranteed in the future that the probe order will be the same as the platform_driver_register() calls?
Correct. Consider what happens if the devices are registered after the driver(s) have been registered, which may not be in the correct order.
That's true, but how could drivers be registered prior to devices? The device registration code is built into the kernel image, so the drivers cannot be registered before the devices as long as we don't change the devices to be registered later. Is there any case where a driver should be registered first?
Thanks, Inki Dae