Add documentation for DT properties supported by ps8622/ps8625 eDP-LVDS converter.
Signed-off-by: Ajay Kumar <ajaykumar.rs@samsung.com>
---
 .../devicetree/bindings/video/bridge/ps8622.txt | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/video/bridge/ps8622.txt
diff --git a/Documentation/devicetree/bindings/video/bridge/ps8622.txt b/Documentation/devicetree/bindings/video/bridge/ps8622.txt
new file mode 100644
index 0000000..0ec8172
--- /dev/null
+++ b/Documentation/devicetree/bindings/video/bridge/ps8622.txt
@@ -0,0 +1,20 @@
+ps8622-bridge bindings
+
+Required properties:
+	- compatible: "parade,ps8622" or "parade,ps8625"
+	- reg: first i2c address of the bridge
+	- sleep-gpios: OF device-tree gpio specification for PD_ pin.
+	- reset-gpios: OF device-tree gpio specification for RST_ pin.
+
+Optional properties:
+	- lane-count: number of DP lanes to use
+	- use-external-pwm: backlight will be controlled by an external PWM
+
+Example:
+	lvds-bridge@48 {
+		compatible = "parade,ps8622";
+		reg = <0x48>;
+		sleep-gpios = <&gpc3 6 1 0 0>;
+		reset-gpios = <&gpc3 1 1 0 0>;
+		lane-count = <1>;
+	};
The ps8622 eDP-LVDS converter bridge chip is from Parade Technologies.
Signed-off-by: Ajay Kumar <ajaykumar.rs@samsung.com>
---
 .../devicetree/bindings/vendor-prefixes.txt | 1 +
 1 file changed, 1 insertion(+)
diff --git a/Documentation/devicetree/bindings/vendor-prefixes.txt b/Documentation/devicetree/bindings/vendor-prefixes.txt
index ac7269f..5f28264 100644
--- a/Documentation/devicetree/bindings/vendor-prefixes.txt
+++ b/Documentation/devicetree/bindings/vendor-prefixes.txt
@@ -100,6 +100,7 @@ nxp	NXP Semiconductors
 onnn	ON Semiconductor Corp.
 opencores	OpenCores.org
 panasonic	Panasonic Corporation
+parade	Parade Technologies Inc.
 phytec	PHYTEC Messtechnik GmbH
 picochip	Picochip Ltd
 plathome	Plat'Home Co., Ltd.
Move drm/bridge documentation to video/bridge. Also, add proper documentation for gpios used by ptn3460.
Signed-off-by: Ajay Kumar <ajaykumar.rs@samsung.com>
---
 .../devicetree/bindings/drm/bridge/ptn3460.txt   | 27 --------------------
 .../devicetree/bindings/video/bridge/ptn3460.txt | 27 ++++++++++++++++++++
 2 files changed, 27 insertions(+), 27 deletions(-)
 delete mode 100644 Documentation/devicetree/bindings/drm/bridge/ptn3460.txt
 create mode 100644 Documentation/devicetree/bindings/video/bridge/ptn3460.txt
diff --git a/Documentation/devicetree/bindings/drm/bridge/ptn3460.txt b/Documentation/devicetree/bindings/drm/bridge/ptn3460.txt
deleted file mode 100644
index 52b93b2..0000000
--- a/Documentation/devicetree/bindings/drm/bridge/ptn3460.txt
+++ /dev/null
@@ -1,27 +0,0 @@
-ptn3460 bridge bindings
-
-Required properties:
-	- compatible: "nxp,ptn3460"
-	- reg: i2c address of the bridge
-	- powerdown-gpio: OF device-tree gpio specification
-	- reset-gpio: OF device-tree gpio specification
-	- edid-emulation: The EDID emulation entry to use
-		+-------+------------+------------------+
-		| Value | Resolution | Description      |
-		|   0   |  1024x768  | NXP Generic      |
-		|   1   |  1920x1080 | NXP Generic      |
-		|   2   |  1920x1080 | NXP Generic      |
-		|   3   |  1600x900  | Samsung LTM200KT |
-		|   4   |  1920x1080 | Samsung LTM230HT |
-		|   5   |  1366x768  | NXP Generic      |
-		|   6   |  1600x900  | ChiMei M215HGE   |
-		+-------+------------+------------------+
-
-Example:
-	lvds-bridge@20 {
-		compatible = "nxp,ptn3460";
-		reg = <0x20>;
-		powerdown-gpio = <&gpy2 5 1 0 0>;
-		reset-gpio = <&gpx1 5 1 0 0>;
-		edid-emulation = <5>;
-	};
diff --git a/Documentation/devicetree/bindings/video/bridge/ptn3460.txt b/Documentation/devicetree/bindings/video/bridge/ptn3460.txt
new file mode 100644
index 0000000..d9281dd
--- /dev/null
+++ b/Documentation/devicetree/bindings/video/bridge/ptn3460.txt
@@ -0,0 +1,27 @@
+ptn3460 bridge bindings
+
+Required properties:
+	- compatible: "nxp,ptn3460"
+	- reg: i2c address of the bridge
+	- powerdown-gpios: OF device-tree gpio specification for PD_N pin.
+	- reset-gpios: OF device-tree gpio specification for RST_N pin.
+	- edid-emulation: The EDID emulation entry to use
+		+-------+------------+------------------+
+		| Value | Resolution | Description      |
+		|   0   |  1024x768  | NXP Generic      |
+		|   1   |  1920x1080 | NXP Generic      |
+		|   2   |  1920x1080 | NXP Generic      |
+		|   3   |  1600x900  | Samsung LTM200KT |
+		|   4   |  1920x1080 | Samsung LTM230HT |
+		|   5   |  1366x768  | NXP Generic      |
+		|   6   |  1600x900  | ChiMei M215HGE   |
+		+-------+------------+------------------+
+
+Example:
+	lvds-bridge@20 {
+		compatible = "nxp,ptn3460";
+		reg = <0x20>;
+		powerdown-gpios = <&gpy2 5 1 0 0>;
+		reset-gpios = <&gpx1 5 1 0 0>;
+		edid-emulation = <5>;
+	};
On 27/08/14 17:39, Ajay Kumar wrote:
Add documentation for DT properties supported by ps8622/ps8625 eDP-LVDS converter.
Signed-off-by: Ajay Kumar ajaykumar.rs@samsung.com
.../devicetree/bindings/video/bridge/ps8622.txt | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
create mode 100644 Documentation/devicetree/bindings/video/bridge/ps8622.txt
diff --git a/Documentation/devicetree/bindings/video/bridge/ps8622.txt b/Documentation/devicetree/bindings/video/bridge/ps8622.txt
new file mode 100644
index 0000000..0ec8172
--- /dev/null
+++ b/Documentation/devicetree/bindings/video/bridge/ps8622.txt
@@ -0,0 +1,20 @@
+ps8622-bridge bindings
+Required properties:
- compatible: "parade,ps8622" or "parade,ps8625"
- reg: first i2c address of the bridge
- sleep-gpios: OF device-tree gpio specification for PD_ pin.
- reset-gpios: OF device-tree gpio specification for RST_ pin.
+Optional properties:
- lane-count: number of DP lanes to use
- use-external-pwm: backlight will be controlled by an external PWM
What does this mean? That the backlight support from ps8625 is not used? If so, maybe "disable-pwm" or something?
+Example:
lvds-bridge@48 {
compatible = "parade,ps8622";
reg = <0x48>;
sleep-gpios = <&gpc3 6 1 0 0>;
reset-gpios = <&gpc3 1 1 0 0>;
lane-count = <1>;
};
I wish all new display component bindings would use the video ports/endpoints to describe the connections. It will be very difficult to improve the display driver model later if we're missing such critical pieces from the DT bindings.
Tomi
Hi Tomi,
Thanks for your comments.
On Wed, Sep 17, 2014 at 5:22 PM, Tomi Valkeinen tomi.valkeinen@ti.com wrote:
On 27/08/14 17:39, Ajay Kumar wrote:
Add documentation for DT properties supported by ps8622/ps8625 eDP-LVDS converter.
Signed-off-by: Ajay Kumar ajaykumar.rs@samsung.com
.../devicetree/bindings/video/bridge/ps8622.txt | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
create mode 100644 Documentation/devicetree/bindings/video/bridge/ps8622.txt
diff --git a/Documentation/devicetree/bindings/video/bridge/ps8622.txt b/Documentation/devicetree/bindings/video/bridge/ps8622.txt
new file mode 100644
index 0000000..0ec8172
--- /dev/null
+++ b/Documentation/devicetree/bindings/video/bridge/ps8622.txt
@@ -0,0 +1,20 @@
+ps8622-bridge bindings
+Required properties:
- compatible: "parade,ps8622" or "parade,ps8625"
- reg: first i2c address of the bridge
- sleep-gpios: OF device-tree gpio specification for PD_ pin.
- reset-gpios: OF device-tree gpio specification for RST_ pin.
+Optional properties:
- lane-count: number of DP lanes to use
- use-external-pwm: backlight will be controlled by an external PWM
What does this mean? That the backlight support from ps8625 is not used? If so, maybe "disable-pwm" or something?
"use-external-pwm" or "disable-bridge-pwm" would be better.
+Example:
lvds-bridge@48 {
compatible = "parade,ps8622";
reg = <0x48>;
sleep-gpios = <&gpc3 6 1 0 0>;
reset-gpios = <&gpc3 1 1 0 0>;
lane-count = <1>;
};
I wish all new display component bindings would use the video ports/endpoints to describe the connections. It will be very difficult to improve the display driver model later if we're missing such critical pieces from the DT bindings.
Why do we need a complex graph when it can be handled using a simple phandle?
Ajay
On 17/09/14 17:29, Ajay kumar wrote:
Hi Tomi,
Thanks for your comments.
On Wed, Sep 17, 2014 at 5:22 PM, Tomi Valkeinen tomi.valkeinen@ti.com wrote:
On 27/08/14 17:39, Ajay Kumar wrote:
Add documentation for DT properties supported by ps8622/ps8625 eDP-LVDS converter.
Signed-off-by: Ajay Kumar ajaykumar.rs@samsung.com
.../devicetree/bindings/video/bridge/ps8622.txt | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
create mode 100644 Documentation/devicetree/bindings/video/bridge/ps8622.txt
diff --git a/Documentation/devicetree/bindings/video/bridge/ps8622.txt b/Documentation/devicetree/bindings/video/bridge/ps8622.txt
new file mode 100644
index 0000000..0ec8172
--- /dev/null
+++ b/Documentation/devicetree/bindings/video/bridge/ps8622.txt
@@ -0,0 +1,20 @@
+ps8622-bridge bindings
+Required properties:
- compatible: "parade,ps8622" or "parade,ps8625"
- reg: first i2c address of the bridge
- sleep-gpios: OF device-tree gpio specification for PD_ pin.
- reset-gpios: OF device-tree gpio specification for RST_ pin.
+Optional properties:
- lane-count: number of DP lanes to use
- use-external-pwm: backlight will be controlled by an external PWM
What does this mean? That the backlight support from ps8625 is not used? If so, maybe "disable-pwm" or something?
"use-external-pwm" or "disable-bridge-pwm" would be better.
Well, the properties are about the bridge. "use-external-pwm" means that the bridge uses an external PWM, which, if I understood correctly, is not what the property is about.
"disable-bridge-pwm" is ok, but the "bridge" there is extra. The properties are about the bridge, so it's implicit.
+Example:
lvds-bridge@48 {
compatible = "parade,ps8622";
reg = <0x48>;
sleep-gpios = <&gpc3 6 1 0 0>;
reset-gpios = <&gpc3 1 1 0 0>;
lane-count = <1>;
};
I wish all new display component bindings would use the video ports/endpoints to describe the connections. It will be very difficult to improve the display driver model later if we're missing such critical pieces from the DT bindings.
Why do we need a complex graph when it can be handled using a simple phandle?
Maybe in your case you can handle it with simple phandle. Can you guarantee that it's enough for everyone, on all platforms?
The point of the ports/endpoint graph is to also support more complicated scenarios. If you now create ps8622 bindings that do not support those graphs, then no one else using ps8622 can use ports/endpoint graphs either.
Btw, is there an example how the bridge with these bindings is used in a board's .dts file? I couldn't find any example with a quick search. So it's unclear to me what the "simple phandle" actually is.
Tomi
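For reference, the ports/endpoints style discussed above is described in Documentation/devicetree/bindings/graph.txt. A rough sketch of what it could look like for this bridge follows; the dp_out and panel_in endpoint labels are hypothetical and would live in the DP controller and panel nodes:

	lvds-bridge@48 {
		compatible = "parade,ps8622";
		reg = <0x48>;
		sleep-gpios = <&gpc3 6 1 0 0>;
		reset-gpios = <&gpc3 1 1 0 0>;
		lane-count = <1>;

		ports {
			#address-cells = <1>;
			#size-cells = <0>;

			/* eDP input from the SoC DP controller */
			port@0 {
				reg = <0>;
				bridge_in: endpoint {
					remote-endpoint = <&dp_out>;
				};
			};

			/* LVDS output towards the panel */
			port@1 {
				reg = <1>;
				bridge_out: endpoint {
					remote-endpoint = <&panel_in>;
				};
			};
		};
	};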
Hi Tomi,
On Wed, Sep 17, 2014 at 9:52 PM, Tomi Valkeinen tomi.valkeinen@ti.com wrote:
On 17/09/14 17:29, Ajay kumar wrote:
Hi Tomi,
Thanks for your comments.
On Wed, Sep 17, 2014 at 5:22 PM, Tomi Valkeinen tomi.valkeinen@ti.com wrote:
On 27/08/14 17:39, Ajay Kumar wrote:
Add documentation for DT properties supported by ps8622/ps8625 eDP-LVDS converter.
Signed-off-by: Ajay Kumar ajaykumar.rs@samsung.com
.../devicetree/bindings/video/bridge/ps8622.txt | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
create mode 100644 Documentation/devicetree/bindings/video/bridge/ps8622.txt
diff --git a/Documentation/devicetree/bindings/video/bridge/ps8622.txt b/Documentation/devicetree/bindings/video/bridge/ps8622.txt
new file mode 100644
index 0000000..0ec8172
--- /dev/null
+++ b/Documentation/devicetree/bindings/video/bridge/ps8622.txt
@@ -0,0 +1,20 @@
+ps8622-bridge bindings
+Required properties:
- compatible: "parade,ps8622" or "parade,ps8625"
- reg: first i2c address of the bridge
- sleep-gpios: OF device-tree gpio specification for PD_ pin.
- reset-gpios: OF device-tree gpio specification for RST_ pin.
+Optional properties:
- lane-count: number of DP lanes to use
- use-external-pwm: backlight will be controlled by an external PWM
What does this mean? That the backlight support from ps8625 is not used? If so, maybe "disable-pwm" or something?
"use-external-pwm" or "disable-bridge-pwm" would be better.
Well, the properties are about the bridge. "use-external-pwm" means that the bridge uses an external PWM, which, if I understood correctly, is not what the property is about.
"disable-bridge-pwm" is ok, but the "bridge" there is extra. The properties are about the bridge, so it's implicit.
Ok. I will use "disable-pwm".
+Example:
lvds-bridge@48 {
compatible = "parade,ps8622";
reg = <0x48>;
sleep-gpios = <&gpc3 6 1 0 0>;
reset-gpios = <&gpc3 1 1 0 0>;
lane-count = <1>;
};
I wish all new display component bindings would use the video ports/endpoints to describe the connections. It will be very difficult to improve the display driver model later if we're missing such critical pieces from the DT bindings.
Why do we need a complex graph when it can be handled using a simple phandle?
Maybe in your case you can handle it with simple phandle. Can you guarantee that it's enough for everyone, on all platforms?
Yes, as of now the exynos5420-peach-pit and exynos5250-spring boards use this. In both cases, the phandle to the bridge node is passed to the exynos_dp node.
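To make the "simple phandle" concrete: the hookup is roughly the following in a board file (node names and the exact property name are an approximation for illustration, not verbatim from the Samsung tree):

	dp-controller {
		/* ... other DP controller properties ... */
		bridge = <&lvds_bridge>;	/* phandle to the bridge node */
	};

	lvds_bridge: lvds-bridge@48 {
		compatible = "parade,ps8622";
		reg = <0x48>;
		/* ... gpios, lane-count, ... */
	};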
The point of the ports/endpoint graph is to also support more complicated scenarios. If you now create ps8622 bindings that do not support those graphs, then no one else using ps8622 can use ports/endpoint graphs either.
Btw, is there an example how the bridge with these bindings is used in a board's .dts file? I couldn't find any example with a quick search. So it's unclear to me what the "simple phandle" actually is.
Please refer to the following link: https://git.kernel.org/cgit/linux/kernel/git/kgene/linux-samsung.git/tree/ar... Let me know if you still think we would need to describe it as a complex graph!
Ajay
On 18/09/14 08:50, Ajay kumar wrote:
Why do we need a complex graph when it can be handled using a simple phandle?
Maybe in your case you can handle it with simple phandle. Can you guarantee that it's enough for everyone, on all platforms?
Yes, as of now the exynos5420-peach-pit and exynos5250-spring boards use this. In both cases, the phandle to the bridge node is passed to the exynos_dp node.
The point of the ports/endpoint graph is to also support more complicated scenarios. If you now create ps8622 bindings that do not support those graphs, then no one else using ps8622 can use ports/endpoint graphs either.
Btw, is there an example how the bridge with these bindings is used in a board's .dts file? I couldn't find any example with a quick search. So it's unclear to me what the "simple phandle" actually is.
Please refer to the following link: https://git.kernel.org/cgit/linux/kernel/git/kgene/linux-samsung.git/tree/ar... Let me know if you still think we would need to describe it as a complex graph!
Yes, I think so. I'm not the DRM maintainer, though.
I think we have two options:
1) Describe the video component connections with the ports/endpoints properly for all new display device bindings, and know that it's (hopefully) future proof and covers even the more complex boards that use the devices.
or
2) Use some simple methods to describe the links, like single phandle as you do, knowing that we can't support more complex boards in the future.
I see some exynos boards already using the ports/endpoints, like arch/arm/boot/dts/exynos4412-trats2.dts. Why not use it for all new display devices?
Tomi
On Fri, Sep 19, 2014 at 6:24 PM, Tomi Valkeinen tomi.valkeinen@ti.com wrote:
On 18/09/14 08:50, Ajay kumar wrote:
Why do we need a complex graph when it can be handled using a simple phandle?
Maybe in your case you can handle it with simple phandle. Can you guarantee that it's enough for everyone, on all platforms?
Yes, as of now the exynos5420-peach-pit and exynos5250-spring boards use this. In both cases, the phandle to the bridge node is passed to the exynos_dp node.
The point of the ports/endpoint graph is to also support more complicated scenarios. If you now create ps8622 bindings that do not support those graphs, then no one else using ps8622 can use ports/endpoint graphs either.
Btw, is there an example how the bridge with these bindings is used in a board's .dts file? I couldn't find any example with a quick search. So it's unclear to me what the "simple phandle" actually is.
Please refer to the following link: https://git.kernel.org/cgit/linux/kernel/git/kgene/linux-samsung.git/tree/ar... Let me know if you still think we would need to describe it as a complex graph!
Yes, I think so. I'm not the DRM maintainer, though.
I think we have two options:
- Describe the video component connections with the ports/endpoints properly for all new display device bindings, and know that it's (hopefully) future proof and covers even the more complex boards that use the devices.
or
- Use some simple methods to describe the links, like single phandle as you do, knowing that we can't support more complex boards in the future.
I am not really able to understand what's stopping us from using this bridge on a board with "complex" display connections. To use the ps8622 driver, one needs to "attach" it to the DRM framework. For this, the DRM driver would need the DT node for the ps8622 bridge, for which I use a phandle.
If some XYZ platform wishes to pick the DT node via a different method, they are always welcome to do it. Just because I am not specifying a video port/endpoint in the DT binding example, does that mean the platform cannot make use of ports in the future? If that is the case, I can add something like this: http://lxr.free-electrons.com/source/Documentation/devicetree/bindings/panel...
Regards, Ajay kumar
I see some exynos boards already using the ports/endpoints, like arch/arm/boot/dts/exynos4412-trats2.dts. Why not use it for all new display devices?
Tomi
On 19/09/14 16:59, Ajay kumar wrote:
I am not really able to understand, what's stopping us from using this bridge on a board with "complex" display connections. To use ps8622 driver, one needs to "attach" it to the DRM framework. For this, the DRM driver
Remember that when we talk about DT bindings, there's no such thing as DRM. We talk about hardware. The same bindings need to work on any operating system.
would need the DT node for ps8622 bridge. For which I use a phandle.
A complex one could be for example a case where you have two different panels connected to ps8622, and you can switch between the two panels with, say, a gpio. How do you present that with a simple phandle?
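With the graph bindings, that two-panel case could hypothetically be described as a bridge output port with two endpoints, one per panel (all labels here invented for the sketch):

	lvds-bridge@48 {
		/* ... */
		port@1 {
			#address-cells = <1>;
			#size-cells = <0>;
			reg = <1>;

			lvds_out0: endpoint@0 {
				reg = <0>;
				remote-endpoint = <&panel0_in>;
			};
			lvds_out1: endpoint@1 {
				reg = <1>;
				remote-endpoint = <&panel1_in>;
			};
		};
	};

Which panel is active at a given time would then be board or runtime policy (e.g. the gpio), while the connections themselves remain fully described.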
In the exynos5420-peach-pit.dts, which you linked earlier, I see a "panel" property in the ps8625 node. That's missing from the bindings in this patch. Why is that? How is the panel linked in this version?
If some XYZ platform wishes to pick the DT node via a different method, they are always welcome to do it. Just because I am not specifying a video port/endpoint in the DT binding example, would it mean that platform cannot make use of ports in future? If that is the case, I can add something
All the platforms share the same bindings for ps8622. If you now specify that ps8622 bindings use a simple phandle, then anyone who uses ps8622 should support that.
Of course the bindings can be extended in the future. In that case the drivers need to support both the old and the new bindings, which is always a hassle.
Generally speaking, I sense that we have different views of how display devices and drivers are structured. You say "If some XYZ platform wishes to pick the DT node via a different method, they are always welcome to do it.". This sounds to me that you see the connections between display devices as something handled by a platform specific driver.
I, on the other hand, see connections between display devices as common properties.
Say, we could have a display board, with a panel and an encoder and maybe some other components, which takes parallel RGB as input. The same display board could as well be connected to an OMAP board or to an Exynos board.
I think the exact same display-board.dtsi file, which describes the devices and connections in the display board, should be usable on both OMAP and Exynos platforms. This means we need to have a common way to describe video devices, just as we have for other things.
Tomi
On Fri, Sep 19, 2014 at 7:58 PM, Tomi Valkeinen tomi.valkeinen@ti.com wrote:
On 19/09/14 16:59, Ajay kumar wrote:
I am not really able to understand, what's stopping us from using this bridge on a board with "complex" display connections. To use ps8622 driver, one needs to "attach" it to the DRM framework. For this, the DRM driver
Remember that when we talk about DT bindings, there's no such thing as DRM. We talk about hardware. The same bindings need to work on any operating system.
Agreed.
would need the DT node for ps8622 bridge. For which I use a phandle.
A complex one could be for example a case where you have two different panels connected to ps8622, and you can switch between the two panels with, say, a gpio. How do you present that with a simple phandle?
In the exynos5420-peach-pit.dts, which you linked earlier, I see a "panel" property in the ps8625 node. That's missing from the bindings in this patch. Why is that? How is the panel linked in this version?
Oops, my bad! The panel should also be present in the DT binding (for which I am using a phandle to the panel node).
If some XYZ platform wishes to pick the DT node via a different method, they are always welcome to do it. Just because I am not specifying a video port/endpoint in the DT binding example, would it mean that platform cannot make use of ports in future? If that is the case, I can add something
All the platforms share the same bindings for ps8622. If you now specify that ps8622 bindings use a simple phandle, then anyone who uses ps8622 should support that.
Of course the bindings can be extended in the future. In that case the drivers need to support both the old and the new bindings, which is always a hassle.
Generally speaking, I sense that we have different views of how display devices and drivers are structured. You say "If some XYZ platform wishes to pick the DT node via a different method, they are always welcome to do it.". This sounds to me that you see the connections between display devices as something handled by a platform specific driver. I, on the other hand, see connections between display devices as common properties.
Well, I am okay with using video ports to describe the relationship between the encoder, bridge and the panel. But it's just that I need to make use of 2 functions when a phandle does it using just one function ;)

-	panel_node = of_parse_phandle(dev->of_node, "panel", 0);
+	endpoint = of_graph_get_next_endpoint(dev->of_node, NULL);
+	if (!endpoint)
+		return -EPROBE_DEFER;
+
+	panel_node = of_graph_get_remote_port_parent(endpoint);
+	if (!panel_node)
+		return -EPROBE_DEFER;
If nobody else has objections over using of_graph functions instead of phandles, I can respin this patchset by making use of video ports.
Ajay
Say, we could have a display board, with a panel and an encoder and maybe some other components, which takes parallel RGB as input. The same display board could as well be connected to an OMAP board or to an Exynos board.
I think the exact same display-board.dtsi file, which describes the devices and connections in the display board, should be usable on both OMAP and Exynos platforms. This means we need to have a common way to describe video devices, just as we have for other things.
Tomi
[adding Kukjin as cc]
Hello Ajay,
On Sat, Sep 20, 2014 at 1:22 PM, Ajay kumar ajaynumb@gmail.com wrote:
Generally speaking, I sense that we have different views of how display devices and drivers are structured. You say "If some XYZ platform wishes to pick the DT node via a different method, they are always welcome to do it.". This sounds to me that you see the connections between display devices as something handled by a platform specific driver. I, on the other hand, see connections between display devices as common properties.
Well, I am okay with using video ports to describe the relationship between the encoder, bridge and the panel. But it's just that I need to make use of 2 functions when a phandle does it using just one function ;)
panel_node = of_parse_phandle(dev->of_node, "panel", 0)
endpoint = of_graph_get_next_endpoint(dev->of_node, NULL);
if (!endpoint)
return -EPROBE_DEFER;
panel_node = of_graph_get_remote_port_parent(endpoint);
if (!panel_node)
return -EPROBE_DEFER;
If nobody else has objections over using of_graph functions instead of phandles, I can respin this patchset by making use of video ports.
I certainly have no objections about re-using the video ports/endpoints DT bindings for the bridges, but just wanted to point out that Kukjin has already merged in his tree the DTS changes for display on Snow and Peach Pit using the current binding, and also sent the pull request [0] to the ARM SoC maintainers, since he probably was expecting this series to make it for 3.18. So that should be handled somehow.
Tomi,
I see that Documentation/devicetree/bindings/video/ti,omap-dss.txt mentions that the Video Ports binding documentation is in Documentation/devicetree/bindings/video/video-ports.txt, but I don't see that this file exists in the kernel [1]. I see, though, that it was included in your series adding DSS DT support [2].
Do you know what happened with this file?
Best regards, Javier
[0]: https://www.mail-archive.com/linux-samsung-soc@vger.kernel.org/msg36681.html [1]: https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git/tree/Docume... [2]: http://lists.infradead.org/pipermail/linux-arm-kernel/2014-January/227088.ht...
On Sat, Sep 20, 2014 at 8:57 PM, Javier Martinez Canillas javier@dowhile0.org wrote:
[adding Kukjin as cc]
Hello Ajay,
On Sat, Sep 20, 2014 at 1:22 PM, Ajay kumar ajaynumb@gmail.com wrote:
Generally speaking, I sense that we have different views of how display devices and drivers are structured. You say "If some XYZ platform wishes to pick the DT node via a different method, they are always welcome to do it.". This sounds to me that you see the connections between display devices as something handled by a platform specific driver. I, on the other hand, see connections between display devices as common properties.
Well, I am okay with using video ports to describe the relationship between the encoder, bridge and the panel. But it's just that I need to make use of 2 functions when a phandle does it using just one function ;)
panel_node = of_parse_phandle(dev->of_node, "panel", 0)
endpoint = of_graph_get_next_endpoint(dev->of_node, NULL);
if (!endpoint)
return -EPROBE_DEFER;
panel_node = of_graph_get_remote_port_parent(endpoint);
if (!panel_node)
return -EPROBE_DEFER;
If nobody else has objections over using of_graph functions instead of phandles, I can respin this patchset by making use of video ports.
I certainly have no objections about re-using the video ports/endpoints DT bindings for the bridges, but just wanted to point out that Kukjin has already merged in his tree the DTS changes for display on Snow and Peach Pit using the current binding, and also sent the pull request [0] to the ARM SoC maintainers, since he probably was expecting this series to make it for 3.18. So that should be handled somehow.
Kukjin, can you do something about this? Or shall I make the video-port changes on top of whatever gets merged?
Ajay
Tomi,
I see that Documentation/devicetree/bindings/video/ti,omap-dss.txt mentions that the Video Ports binding documentation is in Documentation/devicetree/bindings/video/video-ports.txt, but I don't see that this file exists in the kernel [1]. I see, though, that it was included in your series adding DSS DT support [2].
Do you know what happened with this file?
Best regards, Javier
[2]: http://lists.infradead.org/pipermail/linux-arm-kernel/2014-January/227088.ht...
On 20/09/14 18:27, Javier Martinez Canillas wrote:
I see that Documentation/devicetree/bindings/video/ti,omap-dss.txt mentions that the Video Ports binding documentation is in Documentation/devicetree/bindings/video/video-ports.txt, but I don't see that this file exists in the kernel [1]. I see, though, that it was included in your series adding DSS DT support [2].
Do you know what happened with this file?
Ah, I need to update the link. This describes the graph:
Documentation/devicetree/bindings/graph.txt
Tomi
On 20/09/14 14:22, Ajay kumar wrote:
Well, I am okay with using video ports to describe the relationship between the encoder, bridge and the panel. But it's just that I need to make use of 2 functions when a phandle does it using just one function ;)
panel_node = of_parse_phandle(dev->of_node, "panel", 0)
endpoint = of_graph_get_next_endpoint(dev->of_node, NULL);
if (!endpoint)
return -EPROBE_DEFER;
panel_node = of_graph_get_remote_port_parent(endpoint);
if (!panel_node)
return -EPROBE_DEFER;
If nobody else has objections over using of_graph functions instead of phandles, I can respin this patchset by making use of video ports.
The discussion did digress somewhat.
As a clarification, I'm in no way nack'ing this series because it doesn't use the graphs for video connections. I don't see the simple phandle bindings used here as broken as such.
Tomi
On Tue, Oct 7, 2014 at 4:00 PM, Tomi Valkeinen tomi.valkeinen@ti.com wrote:
On 20/09/14 14:22, Ajay kumar wrote:
Well, I am okay with using video ports to describe the relationship between the encoder, bridge and the panel. But it's just that I need to make use of 2 functions when a phandle does it using just one function ;)
panel_node = of_parse_phandle(dev->of_node, "panel", 0)
endpoint = of_graph_get_next_endpoint(dev->of_node, NULL);
if (!endpoint)
return -EPROBE_DEFER;
panel_node = of_graph_get_remote_port_parent(endpoint);
if (!panel_node)
return -EPROBE_DEFER;
If nobody else has objections over using of_graph functions instead of phandles, I can respin this patchset by making use of video ports.
The discussion did digress somewhat.
As a clarification, I'm in no way nack'ing this series because it doesn't use the graphs for video connections. I don't see the simple phandle bindings used here as broken as such.
Well, I am okay with any approach you guys decide on. I desperately want to get this in since it has been floating around for quite some time. The more we drag this, the more rework for me, since the number of platforms using bridge support is increasing daily!
Ajay
Hi Ajay,
On Tuesday 07 October 2014 16:06:55 Ajay kumar wrote:
On Tue, Oct 7, 2014 at 4:00 PM, Tomi Valkeinen wrote:
On 20/09/14 14:22, Ajay kumar wrote:
Well, I am okay with using video ports to describe the relationship between the encoder, bridge and the panel. But it's just that I need to make use of 2 functions when a phandle does it using just one function ;)
panel_node = of_parse_phandle(dev->of_node, "panel", 0)
endpoint = of_graph_get_next_endpoint(dev->of_node, NULL);
if (!endpoint)
return -EPROBE_DEFER;
panel_node = of_graph_get_remote_port_parent(endpoint);
if (!panel_node)
return -EPROBE_DEFER;
If nobody else has objections over using of_graph functions instead of phandles, I can respin this patchset by making use of video ports.
The discussion did digress somewhat.
As a clarification, I'm in no way nack'ing this series because it doesn't use the graphs for video connections. I don't see the simple phandle bindings used here as broken as such.
Well, I am okay with any approach you guys decide on. I desperately want to get this in since it has been floating around for quite some time. The more we drag this, the more rework for me, since the number of platforms using bridge support is increasing daily!
I won't nack this patch either. I'm however concerned that we'll run straight into the wall if we don't come up with an agreement on a standard way to describe connections in DT for display devices, which is why I would prefer the ps8622 bindings to use OF graph to describe connections.
On Tue, Oct 07, 2014 at 05:49:24PM +0300, Laurent Pinchart wrote:
I think there's not really an easy way out here. It's pretty bold trying to come up with a common way to describe bridges when we have only a single one (and a single use-case at that). The worst that can happen is that we need to change the binding at some point, in which case we may have to special-case early drivers, but I really don't think that's as much of an issue as everybody seems to think.
This series has been floating around for months because we're being overly prudent about accepting a binding that /may/ turn out not to be generic enough.
Thierry
On Wed, Oct 8, 2014 at 12:39 PM, Thierry Reding thierry.reding@gmail.com wrote:
Right. It would be great if you guys could come to an agreement ASAP!
Ajay
ping!
On Fri, Oct 10, 2014 at 6:33 PM, Ajay kumar ajaynumb@gmail.com wrote:
Hi Ajay,
On Friday 10 October 2014 18:33:05 Ajay kumar wrote:
I don't think we'll agree any time soon, so I believe it's up to you to decide which option is best based on all arguments that have been presented.
If you ask me, of course, OF graph is best :-)
Hello Ajay,
On Thu, Oct 16, 2014 at 11:04 AM, Laurent Pinchart laurent.pinchart@ideasonboard.com wrote:
I don't think we'll agree any time soon, so I believe it's up to you to decide which option is best based on all arguments that have been presented.
Did you decide on the correct approach? I don't have a strong opinion, I'm just asking because it would be a pity if your series missed another kernel release...
Best regards, Javier
On Tue, Oct 28, 2014 at 2:42 PM, Javier Martinez Canillas javier@dowhile0.org wrote:
Hello Ajay,
Did you decide what's the correct approach? I don't have a strong opinion, I'm just asking because it would be a pity if your series miss another kernel release...
I will be using graphs instead of phandles, and will send a series ASAP.
Ajay
On Fri, Sep 19, 2014 at 05:28:37PM +0300, Tomi Valkeinen wrote:
On 19/09/14 16:59, Ajay kumar wrote:
I am not really able to understand what's stopping us from using this bridge on a board with "complex" display connections. To use the ps8622 driver, one needs to "attach" it to the DRM framework. For this, the DRM driver
Remember that when we talk about DT bindings, there's no such thing as DRM. We talk about hardware. The same bindings need to work on any operating system.
would need the DT node for ps8622 bridge. For which I use a phandle.
A complex one could be for example a case where you have two different panels connected to ps8622, and you can switch between the two panels with, say, a gpio. How do you present that with a simple phandle?
How do you represent that with a graph? Whether you describe it using a graph or a simple phandle you'll need additional nodes somewhere in between. Perhaps something like this:
mux: ... {
	compatible = "gpio-mux-bridge";
	gpio = <&gpio 42 GPIO_ACTIVE_HIGH>;

	panel@0 {
		panel = <&panel0>;
	};

	panel@1 {
		panel = <&panel1>;
	};
};

ps8622: ... {
	bridge = <&mux>;
};

panel0: ... { ... };

panel1: ... { ... };
If some XYZ platform wishes to pick the DT node via a different method, they are always welcome to do it. Just because I am not specifying a video port/endpoint in the DT binding example, would that mean the platform cannot make use of ports in the future? If that is the case, I can add something
All the platforms share the same bindings for ps8622. If you now specify that ps8622 bindings use a simple phandle, then anyone who uses ps8622 should support that.
Of course the bindings can be extended in the future. In that case the drivers need to support both the old and the new bindings, which is always a hassle.
Generally speaking, I sense that we have different views of how display devices and drivers are structured. You say "If some XYZ platform wishes to pick the DT node via a different method, they are always welcome to do it.". This sounds to me that you see the connections between display devices as something handled by a platform specific driver.
I, on the other hand, see connections between display devices as common properties.
Say, we could have a display board, with a panel and an encoder and maybe some other components, which takes parallel RGB as input. The same display board could as well be connected to an OMAP board or to an Exynos board.
I think the exact same display-board.dtsi file, which describes the devices and connections in the display board, should be usable on both OMAP and Exynos platforms. This means we need to have a common way to describe video devices, just as we have for other things.
A common way to describe devices in DT isn't going to give you that. The device tree is completely missing any information about how to access an extension board like that. The operating system defines the API by which the board can be accessed. I imagine that this would work by making the first component of the board a bridge of some sort and then every driver that supports that type of bridge (ideally just a generic drm_bridge) would also work with that display board.
Whether this is described using a single phandle to the bridge or a more complicated graph is irrelevant. What matters is that you get a phandle to the bridge. The job of the operating system is to give drivers a way to resolve that phandle to some object and provide an API to access that object.
Thierry
On 22/09/14 11:26, Thierry Reding wrote:
On Fri, Sep 19, 2014 at 05:28:37PM +0300, Tomi Valkeinen wrote:
Yes, it's true we need a mux there. But we still have the complication that for panel0 we may need different ps8622 settings than for panel1.
If that's the case, then I think we'd need to have two output endpoints in ps8622, both going to the mux, and each having the settings for the respective panel.
A common way to describe devices in DT isn't going to give you that. The device tree is completely missing any information about how to access an extension board like that. The operating system defines the API by which the board can be accessed. I imagine that this would work by making the first component of the board a bridge of some sort and then every driver that supports that type of bridge (ideally just a generic drm_bridge) would also work with that display board.
I'm not sure I follow.
Obviously there needs to be a board-specific .dts file that brings the board and the display board together. So, say, the display-board.dtsi has an i2c touchscreen node, but the board.dts will tell which i2c bus it's connected to.
Well, now as I wrote that, I wonder if that's possible... A node needs to have a parent, and for i2c that must be the i2c master. Is that something the DT overlays/fragments or such are supposed to handle?
But let's only think about the video path for now. We could have an encoder and a panel on the board. We could describe the video path between the encoder and the panel in the display-board.dts as that is fixed. Then all that's needed in the board.dts is to connect the board's video output to the encoders input with the video graph. Why would that not work?
Sure, there's more that's needed. Common encoder and panel drivers for one. But it all starts with a common way to describe the video devices and the connections in the DT. If we don't have that, we don't have anything.
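[As a hypothetical illustration of this split (all node and label names here are invented, not from any existing binding), the display board's fixed encoder-to-panel link could live in a reusable .dtsi, with only the SoC-to-board link added by the board .dts:]

```dts
/* display-board.dtsi: devices and fixed connections on the display board */
encoder: encoder {
	ports {
		#address-cells = <1>;
		#size-cells = <0>;

		port@0 {
			reg = <0>;
			encoder_in: endpoint {
				/* filled in by the board .dts */
			};
		};

		port@1 {
			reg = <1>;
			encoder_out: endpoint {
				remote-endpoint = <&panel_in>;
			};
		};
	};
};

panel: panel {
	port {
		panel_in: endpoint {
			remote-endpoint = <&encoder_out>;
		};
	};
};

/* board.dts: only the SoC-to-display-board link is added */
&encoder_in {
	remote-endpoint = <&soc_rgb_out>;
};
```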
Whether this is described using a single phandle to the bridge or a more complicated graph is irrelevant. What matters is that you get a phandle to the bridge. The job of the operating system is to give drivers a way to resolve that phandle to some object and provide an API to access that object.
I agree it's not relevant whether we use a simple phandle or complex graph. What matter is that we have a standard way to express the video paths, which everybody uses.
Tomi
On Mon, Sep 22, 2014 at 05:42:41PM +0300, Tomi Valkeinen wrote:
Yes, it's true we need a mux there. But we still have the complication that for panel0 we may need different ps8622 settings than for panel1.
Yes, and that's why the bridge should be querying the panel for the information it needs to determine the settings.
If that's the case, then I think we'd need to have two output endpoints in ps8622, both going to the mux, and each having the settings for the respective panel.
But we'd be lying in DT. It no longer describes the hardware properly. The device only has a single input and a single output with no means to mux anything. Hence the device tree shouldn't be faking multiple inputs or outputs.
But let's only think about the video path for now. We could have an encoder and a panel on the board. We could describe the video path between the encoder and the panel in the display-board.dts as that is fixed. Then all that's needed in the board.dts is to connect the board's video output to the encoders input with the video graph. Why would that not work?
My point is that the video graph isn't the solution to that problem. Having an OS abstraction for the devices involved is. DT is only the means to connect those devices.
Sure, there's more that's needed. Common encoder and panel drivers for one. But it all starts with a common way to describe the video devices and the connections in the DT. If we don't have that, we don't have anything.
I don't think we need to have a common way to describe video devices. In my opinion DT bindings are much better if they are specifically tailored towards the device that they describe. We'll provide a driver for that device anyway, so we should be creating appropriate abstractions at the OS level to properly handle them.
To stay with the example of the board/display, I'd think that the final component of the board DT would implement a bridge. The first component of the display DT would similarly implement a bridge. Now if we have a way of chaining bridges and controlling a chain of bridges, then there is no need for anything more complex than a plain phandle in a property from the board bridge to the display bridge.
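[In this model each link in the chain could be a single phandle, roughly as sketched below. The `bridge` property name and the node labels are assumptions made for illustration, not an existing binding.]

```dts
soc-encoder {
	bridge = <&board_bridge>;
};

board_bridge: lvds-bridge@48 {
	compatible = "parade,ps8622";
	reg = <0x48>;
	bridge = <&display_bridge>;	/* next link in the chain */
};

display_bridge: bridge@30 {
	/* first component of the display board */
};
```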
I agree it's not relevant whether we use a simple phandle or complex graph. What matter is that we have a standard way to express the video paths, which everybody uses.
Not necessarily. Consistency is always good, but I think simplicity trumps consistency. What matters isn't how the phandle is referenced in the device tree, what matters is that it is referenced and that it makes sense in the context of the specific device. Anything else is the job of the OS.
While there are probably legitimate cases where the video graph is useful and makes sense, in many cases terms like ports and endpoints are simply confusing.
Thierry
On 23/09/14 08:53, Thierry Reding wrote:
Yes, and that's why the bridge should be querying the panel for the information it needs to determine the settings.
That only works if the settings to be queried are standard ones.
But, for example, let's say that the board is designed in a way that for panel0 the bridge needs to output slightly higher voltage levels than for panel1. That's not a property of the panel, so it can't be queried from the panel.
That feature (varying the voltage level) is specific to the bridge, and thus the properties should be in the bridge's DT data. So we need two different configurations in the bridge, one for panel0 and one for panel1. This is what endpoints give us.
But we'd be lying in DT. It no longer describes the hardware properly. The device only has a single input and a single output with no means to mux anything. Hence the device tree shouldn't be faking multiple inputs or outputs.
No, a port is a single physical output. An endpoint is a logical entity on that single physical output.
So a bridge with a single output has one output port, but it may have as many endpoints on that port as needed. Only one endpoint of a port can be active at a time.
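[In DT terms that scheme might look like the sketch below: one output port with two endpoints, each carrying the configuration for one panel. The `output-voltage-level` property and the mux endpoint labels are made up purely for illustration.]

```dts
port@1 {
	#address-cells = <1>;
	#size-cells = <0>;
	reg = <1>;

	/* configuration used when driving panel0 */
	endpoint@0 {
		reg = <0>;
		remote-endpoint = <&mux_in0>;
		output-voltage-level = <2>;	/* hypothetical property */
	};

	/* configuration used when driving panel1 */
	endpoint@1 {
		reg = <1>;
		remote-endpoint = <&mux_in1>;
		output-voltage-level = <1>;	/* hypothetical property */
	};
};
```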
I don't think we need to have a common way to describe video devices. In my opinion DT bindings are much better if they are specifically tailored towards the device that they describe. We'll provide a driver for that device anyway, so we should be creating appropriate abstractions at the OS level to properly handle them.
I disagree. If we don't have a common way to describe the connections between video devices, we cannot:
- have a common helper library to parse the connections
- study the connections before the drivers are loaded
- handle the connections anywhere else than the specific driver for the component
To stay with the example of the board/display, I'd think that the final component of the board DT would implement a bridge. The first component of the display DT would similarly implement a bridge. Now if we have a way of chaining bridges and controlling a chain of bridges, then there is no need for anything more complex than a plain phandle in a property from the board bridge to the display bridge.
Right, that works for many cases. Of course, nothing says there's a bridge on the main board, it could be connected directly to the SoC.
Not necessarily. Consistency is always good, but I think simplicity trumps consistency. What matters isn't how the phandle is referenced in the device tree, what matters is that it is referenced and that it makes sense in the context of the specific device. Anything else is the job of the OS.
While there are probably legitimate cases where the video graph is useful and makes sense, in many cases terms like ports and endpoints are simply confusing.
I again agree that I'd like to have a simpler representation than video graphs for the simpler use cases. But again, I think it's important to have a standard representation for those connections.
Why do you think it wouldn't be good to have a standard for the simpler connections (in addition to the video graphs)? What kind of device specific variations do you see that would be needed?
Even if the points I gave about why a common way to describe connections is important would not be relevant today, it sounds much safer to have a standard representation in the DT for the connections. I'd only go for device specific custom descriptions if there's a very important reason for that. And I can't come up with any reason.
Tomi
On Tue, Sep 23, 2014 at 11:41:52AM +0300, Tomi Valkeinen wrote:
That feature (varying the voltage level) is specific to the bridge, and thus the properties should be in the bridge's DT data. So we need two different configurations in the bridge, one for panel0 and one for panel1. This is what endpoints give us.
You could get the same by moving the mux in front of the bridge instead. Trying to do this within the bridge's node directly has two flaws:
1) It violates the DT principle of describing hardware. The device itself does not know anything about multiple streams and deals only with a single input and output. You'd be describing a board specific aspect in the device binding.
2) Such a solution would have to be implemented for all bridge devices that can potentially be muxed (really all of them). If you describe it accurately in a separate mux node you get genericity for free and it will work for all bridges.
No, a port is a single physical output. An endpoint is a logical entity on that single physical output.
So a bridge with a single output has one output port, but it may have as many endpoints on that port as needed. Only one endpoint of a port can be active at a time.
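To make that concrete, a single-output bridge feeding a mux toward two panels could be described like this (a sketch only; the endpoint and panel labels are hypothetical):

    bridge {
        port {
            /* one physical output, two logical endpoints */
            endpoint@0 {
                remote-endpoint = <&panel0_in>;
            };
            endpoint@1 {
                remote-endpoint = <&panel1_in>;
            };
        };
    };

Only one of the two endpoints would be active at a time, matching the position of the mux.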
As I mentioned above, this seems to me to be the wrong point of abstraction.
I don't think we need to have a common way to describe video devices. In my opinion DT bindings are much better if they are specifically tailored towards the device that they describe. We'll provide a driver for that device anyway, so we should be creating appropriate abstractions at the OS level to properly handle them.
I disagree. If we don't have a common way to describe the connections between video devices, we cannot:
- have a common helper library to parse the connections
We have a common helper already. It's called of_parse_phandle().
- study the connections before the drivers are loaded
Why would you need that?
- handle the connections anywhere else than the specific driver for the
component
Why should we do that?
To stay with the example of the board/display, I'd think that the final component of the board DT would implement a bridge. The first component of the display DT would similarly implement a bridge. Now if we have a way of chaining bridges and controlling a chain of bridges, then there is no need for anything more complex than a plain phandle in a property from the board bridge to the display bridge.
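A sketch of that chaining with plain phandles (the property and node names here are made up, not an existing binding):

    soc_encoder {
        bridge = <&board_bridge>;
    };

    board_bridge: bridge@48 {
        ...
        bridge = <&display_bridge>;
    };

    display_bridge: bridge@10 {
        ...
        panel = <&panel>;
    };

Each component only needs to resolve the single phandle it holds; the chain builds itself link by link.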
Right, that works for many cases. Of course, nothing says there's a bridge on the main board, it could be connected directly to the SoC.
In either case the SoC driver would probably know how to talk to a bridge anyway, whether it be one on the main board or on the display board.
Whether this is described using a single phandle to the bridge or a more complicated graph is irrelevant. What matters is that you get a phandle to the bridge. The job of the operating system is to give drivers a way to resolve that phandle to some object and provide an API to access that object.
I agree it's not relevant whether we use a simple phandle or complex graph. What matters is that we have a standard way to express the video paths, which everybody uses.
Not necessarily. Consistency is always good, but I think simplicity trumps consistency. What matters isn't how the phandle is referenced in the device tree, what matters is that it is referenced and that it makes sense in the context of the specific device. Anything else is the job of the OS.
While there are probably legitimate cases where the video graph is useful and makes sense, in many cases terms like ports and endpoints are simply confusing.
I again agree that I'd like to have a simpler representation than video graphs for the simpler use cases. But again, I think it's important to have a standard representation for those connections.
Why do you think it wouldn't be good to have a standard for the simpler connections (in addition to the video graphs)? What kind of device specific variations do you see that would be needed?
What do you mean by standard? I agree that it would make sense to give properties standard names, but I don't think we need to go any further. I mean, in the case of a simple phandle there's not much more to it anyway. But I fail to understand why standard properties should be a hard requirement.
Having a standard representation only matters if you want to be able to parse the pipeline without knowing about the device specifics. But I'm not sure I understand why you would want to do that. This sounds like you'd want some generic code to be able to construct a pipeline. But at the same time you can't do anything meaningful with that pipeline because the generic code doesn't know how to program the pipeline. The only thing you'll get is a list of devices in the pipeline, but you can do that by looking at the bindings and the DT content, no matter what the properties are named.
Even if the points I gave about why a common way to describe connections is important would not be relevant today, it sounds much safer to have a standard representation in the DT for the connections. I'd only go for device specific custom descriptions if there's a very important reason for that. And I can't come up with any reason.
One of the good practices in DT is to keep property names similar to the signal names of the chip to make it easy to correlate. So if you have a bridge that converts RGB to LVDS and DSI for example, you'd have to describe two outputs. That's kind of difficult to do with standard property names. Well, it depends, you could do this:
bridge {
    panels = <&lvds &dsi>;
};
Alternatively:
bridge {
    lvds-panel = <&lvds>;
    dsi-panel = <&dsi>;
};
Or I guess you could do the same with ports/endpoints given that there are actually two outputs here. With endpoints it's of course difficult to correlate them without adding extra properties.
bridge {
    ports {
        port@0 {
            endpoint@0 {
                remote-endpoint = <&lvds>;
            };
        };

        port@1 {
            endpoint@0 {
                remote-endpoint = <&dsi>;
            };
        };
    };
};
Note that you could also write it without graph but with a more similar structure:
bridge {
    lvds {
        panel = <&lvds>;
    };

    dsi {
        panel = <&dsi>;
    };
};
While it's true that it'd be more difficult to parse that in a generic way I also think it's a whole lot more readable than some abstract graph where a lot of context is simply lost.
Thierry
On 23/09/14 12:28, Thierry Reding wrote:
But, for example, let's say that the board is designed in a way that the bridge needs to output slightly higher voltage levels for panel0 than for panel1. That's not a property of the panel, so it can't be queried from the panel.
That feature (varying the voltage level) is specific to the bridge, and thus the properties should be in the bridge's DT data. So we need two different configurations in the bridge, one for panel0 and one for panel1. This is what endpoints give us.
You could get the same by moving the mux in front of the bridge instead.
I didn't understand. Moving the mux between the SoC and the bridge? How would that solve the problem? Or what do you mean by "in front of the bridge"?
Trying to do this within the bridge's node directly has two flaws:
- It violates the DT principle of describing hardware. The device itself does not know anything about multiple streams and deals only with a single input and output. You'd be describing a board specific aspect in the device binding.
We _are_ describing board specific aspects in the board.dts file. That's what it is about.
If the board's hardware is such that the bridge's voltage level needs to be higher when using panel0, that's very much describing hardware.
- Such a solution would have to be implemented for all bridge devices that can potentially be muxed (really all of them). If you describe it accurately in a separate mux node you get genericity for free and it will work for all bridges.
I do agree that a generic mux is better if there's nothing special to be configured. But how do you use a generic mux node for bridge device specific features?
Well, I think it'd be great to have all bindings use ports/endpoints (or a standard simpler representation), and have good helpers for those. Then, the bridge driver would not need to know or care about endpoints as such, it would just be told that this port & endpoint should be enabled, and the bridge driver would get its configuration for that endpoint, which it would program to the HW. And the same would happen with or without complex video graph.
But that's not strictly required, the bridge driver could as well support only single configuration. When someone uses the bridge in a setup that requires multiple endpoints, he can then extend the driver to support multiple endpoints.
I disagree. If we don't have a common way to describe the connections between video devices, we cannot:
- have a common helper library to parse the connections
We have a common helper already. It's called of_parse_phandle().
Yes, but what is the name of the property, and where does it go? Does the panel have a phandle to the bridge? Does the bridge have a phandle to the panel?
- study the connections before the drivers are loaded
Why would you need that?
I think this was discussed earlier, maybe Laurent remembers the use cases.
I think one possibility here is to have an understanding of the pipelines even if the modules have not been loaded or a single device has failed to probe.
- handle the connections anywhere else than the specific driver for the
component
Why should we do that?
We could, for example, have a single component (common or SoC specific) that manages the video pipelines. The drivers for the encoders/bridges/panels would just probe the devices, without doing anything else. The master component would then parse the links and connect the devices in whatever way it sees best.
While there are probably legitimate cases where the video graph is useful and makes sense, in many cases terms like ports and endpoints are simply confusing.
I again agree that I'd like to have a simpler representation than video graphs for the simpler use cases. But again, I think it's important to have a standard representation for those connections.
Why do you think it wouldn't be good to have a standard for the simpler connections (in addition to the video graphs)? What kind of device specific variations do you see that would be needed?
What do you mean by standard? I agree that it would make sense to give properties standard names, but I don't think we need to go any further.
Standard names and standard direction. I don't think there's anything else to simple phandles.
I mean, in the case of a simple phandle there's not much more to it anyway. But I fail to understand why standard properties should be a hard requirement.
And I fail to see why they should not be a hard requirement =).
Having a standard representation only matters if you want to be able to parse the pipeline without knowing about the device specifics. But I'm not sure I understand why you would want to do that. This sounds like you'd want some generic code to be able to construct a pipeline. But at the same time you can't do anything meaningful with that pipeline because the generic code doesn't know how to program the pipeline. The only thing you'll get is a list of devices in the pipeline, but you can do that by looking at the bindings and the DT content, no matter what the properties are named.
You can, but then you need to know all the possible variations and ways the phandles are used.
Even if the points I gave about why a common way to describe connections is important would not be relevant today, it sounds much safer to have a standard representation in the DT for the connections. I'd only go for device specific custom descriptions if there's a very important reason for that. And I can't come up with any reason.
One of the good practices in DT is to keep property names similar to the signal names of the chip to make it easy to correlate. So if you have a bridge that converts RGB to LVDS and DSI for example, you'd have to describe two outputs. That's kind of difficult to do with standard property names. Well, it depends, you could do this:
bridge {
    panels = <&lvds &dsi>;
};
Alternatively:
bridge {
    lvds-panel = <&lvds>;
    dsi-panel = <&dsi>;
};
Nothing says the bridge is connected to a panel, so "output" or such would probably be better there.
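With that "output" naming, the two variants above might look like this (a sketch only, not an existing binding):

    bridge {
        outputs = <&lvds &dsi>;
    };

or:

    bridge {
        lvds-output = <&lvds>;
        dsi-output = <&dsi>;
    };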
Or I guess you could do the same with ports/endpoints given that there are actually two outputs here. With endpoints it's of course difficult to correlate them without adding extra properties.
bridge {
    ports {
        port@0 {
            endpoint@0 {
                remote-endpoint = <&lvds>;
            };
        };

        port@1 {
            endpoint@0 {
                remote-endpoint = <&dsi>;
            };
        };
    };
};
Note that you could also write it without graph but with a more similar structure:
bridge {
    lvds {
        panel = <&lvds>;
    };

    dsi {
        panel = <&dsi>;
    };
};
While it's true that it'd be more difficult to parse that in a generic way I also think it's a whole lot more readable than some abstract graph where a lot of context is simply lost.
Yes, your examples are easier to read. Then again, it's simple to add /* lvds */ comment on the full graph for clarity.
Also, I don't know if anything stops us from naming "port@0" to "lvds". All the sub-nodes of "ports" are ports, so it's implicit. But then again, I don't see that really buying much, as a simple comment does the trick also.
Tomi
On Tue, Sep 23, 2014 at 02:15:54PM +0300, Tomi Valkeinen wrote:
On 23/09/14 12:28, Thierry Reding wrote:
But, for example, let's say that the board is designed in a way that the bridge needs to output slightly higher voltage levels for panel0 than for panel1. That's not a property of the panel, so it can't be queried from the panel.
That feature (varying the voltage level) is specific to the bridge, and thus the properties should be in the bridge's DT data. So we need two different configurations in the bridge, one for panel0 and one for panel1. This is what endpoints give us.
You could get the same by moving the mux in front of the bridge instead.
I didn't understand. Moving the mux between the SoC and the bridge? How would that solve the problem? Or what do you mean by "in front of the bridge"?
Because then you'd have multiple instances of the bridge, each of which could describe the parameters it wants. But writing it out loud this is equally bad. More below.
Trying to do this within the bridge's node directly has two flaws:
- It violates the DT principle of describing hardware. The device itself does not know anything about multiple streams and deals only with a single input and output. You'd be describing a board specific aspect in the device binding.
We _are_ describing board specific aspects in the board.dts file. That's what it is about.
If the board's hardware is such that the bridge's voltage level needs to be higher when using panel0, that's very much describing hardware.
You misunderstood. It's describing a use-case rather than the device itself. You describe, *within the device node*, that it can be muxed whereas the device itself doesn't know anything about muxing. That's where the violation is. You require a board-integration issue to be described in the binding of a specific device.
- Such a solution would have to be implemented for all bridge devices that can potentially be muxed (really all of them). If you describe it accurately in a separate mux node you get genericity for free and it will work for all bridges.
I do agree that a generic mux is better if there's nothing special to be configured. But how do you use a generic mux node for bridge device specific features?
Good question. I don't have an immediate answer to that and would have to think about it for some more.
I disagree. If we don't have a common way to describe the connections between video devices, we cannot:
- have a common helper library to parse the connections
We have a common helper already. It's called of_parse_phandle().
Yes, but what is the name of the property, and where does it go? Does the panel have a phandle to the bridge? Does the bridge have a phandle to the panel?
My point was that we don't need a common helper library if we have a way of getting at the phandle and resolve the phandle to a bridge or a panel. While I think standard names can be advantageous, using the regular of_parse_phandle() and a lookup function we don't even need them.
- study the connections before the drivers are loaded
Why would you need that?
I think this was discussed earlier, maybe Laurent remembers the use cases.
I think one possibility here is to have an understanding of the pipelines even if the modules have not been loaded or a single device has failed to probe.
I don't see how that would be useful.
- handle the connections anywhere else than the specific driver for the
component
Why should we do that?
We could, for example, have a single component (common or SoC specific) that manages the video pipelines. The drivers for the encoders/bridges/panels would just probe the devices, without doing anything else. The master component would then parse the links and connect the devices in whatever way it sees best.
You don't need any of that. The SoC driver simply obtains a phandle to the first bridge connected to it. That bridge will already have done the same for any chained bridges. There's no need to connect them in any way anymore because they are already properly set up.
Having a standard representation only matters if you want to be able to parse the pipeline without knowing about the device specifics. But I'm not sure I understand why you would want to do that. This sounds like you'd want some generic code to be able to construct a pipeline. But at the same time you can't do anything meaningful with that pipeline because the generic code doesn't know how to program the pipeline. The only thing you'll get is a list of devices in the pipeline, but you can do that by looking at the bindings and the DT content, no matter what the properties are named.
You can, but then you need to know all the possible variations and ways the phandles are used.
Which is described in the DT bindings. Really, the DT should be describing the device in whatever is the most natural way for that device. It is the job of the OS to abstract things away so that components can be seamlessly integrated. The DT doesn't need to (and shouldn't) have that kind of abstraction.
Even if the points I gave about why a common way to describe connections is important would not be relevant today, it sounds much safer to have a standard representation in the DT for the connections. I'd only go for device specific custom descriptions if there's a very important reason for that. And I can't come up with any reason.
One of the good practices in DT is to keep property names similar to the signal names of the chip to make it easy to correlate. So if you have a bridge that converts RGB to LVDS and DSI for example, you'd have to describe two outputs. That's kind of difficult to do with standard property names. Well, it depends, you could do this:
bridge {
    panels = <&lvds &dsi>;
};
Alternatively:
bridge {
    lvds-panel = <&lvds>;
    dsi-panel = <&dsi>;
};
Nothing says the bridge is connected to a panel, so "output" or such would probably be better there.
Hmm? How does the above not say that the bridge is connected to a panel?
Thierry
On 23/09/14 17:29, Thierry Reding wrote:
Trying to do this within the bridge's node directly has two flaws:
- It violates the DT principle of describing hardware. The device itself does not know anything about multiple streams and deals only with a single input and output. You'd be describing a board specific aspect in the device binding.
We _are_ describing board specific aspects in the board.dts file. That's what it is about.
If the board's hardware is such that the bridge's voltage level needs to be higher when using panel0, that's very much describing hardware.
You misunderstood. It's describing a use-case rather than the device itself. You describe, *within the device node*, that it can be muxed whereas the device itself doesn't know anything about muxing. That's where the violation is. You require a board-integration issue to be described in the binding of a specific device.
The .dts files are not only about hard HW features. The .dts files are full of configuration properties and such. So while the aim should be to describe only the HW, I think it's well within the limits to describe things like whether to enable/disable HW features for a particular output.
I guess there could be an external mux node, and that mux node could contain HW specific properties for both the inputs and outputs of the mux. The input and output drivers could fetch their features from that mux's node.
But that sounds much worse than having ports/endpoints and HW specific properties there. (Or actually, does it? It would simplify some things, but how would the bridge driver get its properties?)
And I don't quite see your point. Presume we have a property in the bridge for, say, "higher-voltage-level", which with simple phandles would be present in the main bridge node, and presume that's ok. (Is it, in your opinion? That is describing configuration, not HW.)
If so, I don't think it's different from having two endpoints, each with their own property. In that case we just have the wires from the single output going to two destinations, and so we need two different places for the property.
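Concretely, with two endpoints the hypothetical "higher-voltage-level" property could be attached per destination, something like this (a sketch only, not a real binding):

    bridge {
        port {
            endpoint@0 {
                remote-endpoint = <&panel0_in>;
                higher-voltage-level;    /* hypothetical property */
            };
            endpoint@1 {
                remote-endpoint = <&panel1_in>;
            };
        };
    };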
We have a common helper already. It's called of_parse_phandle().
Yes, but what is the name of the property, and where does it go? Does the panel have a phandle to the bridge? Does the bridge have a phandle to the panel?
My point was that we don't need a common helper library if we have a way of getting at the phandle and resolve the phandle to a bridge or a panel. While I think standard names can be advantageous, using the regular of_parse_phandle() and a lookup function we don't even need them.
Well, we have also cases where the more complex video graphs are needed (or, at least, are not too complex, so they could well be used). If so, it'd be nice to have the driver manage the connections via a single API, which would underneath manage both the video graphs and simple phandles, depending on what is used in the .dts file.
- study the connections before the drivers are loaded
Why would you need that?
I think this was discussed earlier, maybe Laurent remembers the use cases.
I think one possibility here is to have an understanding of the pipelines even if the modules have not been loaded or a single device has failed to probe.
I don't see how that would be useful.
Ok. But it's still something that can be done with standardized bindings, and cannot be done with non-standardized.
- handle the connections anywhere else than the specific driver for the
component
Why should we do that?
We could, for example, have a single component (common or SoC specific) that manages the video pipelines. The drivers for the encoders/bridges/panels would just probe the devices, without doing anything else. The master component would then parse the links and connect the devices in whatever way it sees best.
You don't need any of that. The SoC driver simply obtains a phandle to the first bridge connected to it. That bridge will already have done the same for any chained bridges. There's no need to connect them in any way anymore because they are already properly set up.
Yes, there are different ways to do this in the SW side. The point is, with standard bindings we have many different options how to do it. With non-standard bindings we don't.
If there are no strong reasons to use non-standard bindings, I don't see why we would not standardize them.
Having a standard representation only matters if you want to be able to parse the pipeline without knowing about the device specifics. But I'm not sure I understand why you would want to do that. This sounds like you'd want some generic code to be able to construct a pipeline. But at the same time you can't do anything meaningful with that pipeline because the generic code doesn't know how to program the pipeline. The only thing you'll get is a list of devices in the pipeline, but you can do that by looking at the bindings and the DT content, no matter what the properties are named.
You can, but then you need to know all the possible variations and ways the phandles are used.
Which is described in the DT bindings. Really, the DT should be describing the device in whatever is the most natural way for that device. It is the job of the OS to abstract things away so that components can be seamlessly integrated. The DT doesn't need to (and shouldn't) have that kind of abstraction.
But here we're discussing about how the device connects to other devices. That's not internal to the device in question.
And really, I don't understand your reluctance to standardize the connection bindings. The only argument I've heard so far is that a property named "lvds" is easier for a human reading the .dts file to understand than "output0".
You must agree that standardizing enables the uses I've been describing, even if you may not agree that they should be implemented. So, having them standardized is the safer option. If the only drawback is that you may need to write a /* lvds */ comment in the .dts to be human-friendly, well, I'm for it.
Even if the points I gave about why a common way to describe connections is important would not be relevant today, it sounds much safer to have a standard representation in the DT for the connections. I'd only go for device specific custom descriptions if there's a very important reason for that. And I can't come up with any reason.
One of the good practices in DT is to keep property names similar to the signal names of the chip to make it easy to correlate. So if you have a bridge that converts RGB to LVDS and DSI for example, you'd have to describe two outputs. That's kind of difficult to do with standard property names. Well, it depends, you could do this:
bridge {
    panels = <&lvds &dsi>;
};
Alternatively:
bridge {
    lvds-panel = <&lvds>;
    dsi-panel = <&dsi>;
};
Nothing says the bridge is connected to a panel, so "output" or such would probably be better there.
Hmm? How does the above not say that the bridge is connected to a panel?
Sorry, let me rephrase: nothing says the bridge must be connected to a panel. The bridge could be connected to another encoder on some board. And I don't think the bridge should know or care what kind of device it is connected to.
Tomi
On Thu, Sep 18, 2014 at 11:20:40AM +0530, Ajay kumar wrote:
Hi Tomi,
On Wed, Sep 17, 2014 at 9:52 PM, Tomi Valkeinen tomi.valkeinen@ti.com wrote:
On 17/09/14 17:29, Ajay kumar wrote:
Hi Tomi,
Thanks for your comments.
On Wed, Sep 17, 2014 at 5:22 PM, Tomi Valkeinen tomi.valkeinen@ti.com wrote:
On 27/08/14 17:39, Ajay Kumar wrote:
Add documentation for DT properties supported by ps8622/ps8625 eDP-LVDS converter.
Signed-off-by: Ajay Kumar ajaykumar.rs@samsung.com
.../devicetree/bindings/video/bridge/ps8622.txt | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) create mode 100644 Documentation/devicetree/bindings/video/bridge/ps8622.txt
diff --git a/Documentation/devicetree/bindings/video/bridge/ps8622.txt b/Documentation/devicetree/bindings/video/bridge/ps8622.txt new file mode 100644 index 0000000..0ec8172 --- /dev/null +++ b/Documentation/devicetree/bindings/video/bridge/ps8622.txt @@ -0,0 +1,20 @@ +ps8622-bridge bindings
+Required properties:
- compatible: "parade,ps8622" or "parade,ps8625"
- reg: first i2c address of the bridge
- sleep-gpios: OF device-tree gpio specification for PD_ pin.
- reset-gpios: OF device-tree gpio specification for RST_ pin.
+Optional properties:
- lane-count: number of DP lanes to use
- use-external-pwm: backlight will be controlled by an external PWM
What does this mean? That the backlight support from ps8625 is not used? If so, maybe "disable-pwm" or something?
"use-external-pwm" or "disable-bridge-pwm" would be better.
Well, the properties are about the bridge. "use-external-pwm" means that the bridge uses an external PWM, which, if I understood correctly, is not what the property is about.
"disable-bridge-pwm" is ok, but the "bridge" there is extra. The properties are about the bridge, so it's implicit.
Ok. I will use "disable-pwm".
Why is this even necessary? According to the datasheet this device has circuitry for backlight control. If so, I'd expect it to expose either a backlight device or a PWM device. That way unless somebody is using the backlight/PWM exposed by the bridge the bridge can simply disable PWM.
Thierry
Hi Thierry,
On Mon, Sep 22, 2014 at 1:40 PM, Thierry Reding thierry.reding@gmail.com wrote:
On Thu, Sep 18, 2014 at 11:20:40AM +0530, Ajay kumar wrote:
Hi Tomi,
On Wed, Sep 17, 2014 at 9:52 PM, Tomi Valkeinen tomi.valkeinen@ti.com wrote:
On 17/09/14 17:29, Ajay kumar wrote:
Hi Tomi,
Thanks for your comments.
On Wed, Sep 17, 2014 at 5:22 PM, Tomi Valkeinen tomi.valkeinen@ti.com wrote:
On 27/08/14 17:39, Ajay Kumar wrote:
Add documentation for DT properties supported by ps8622/ps8625 eDP-LVDS converter.
Signed-off-by: Ajay Kumar ajaykumar.rs@samsung.com
.../devicetree/bindings/video/bridge/ps8622.txt | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) create mode 100644 Documentation/devicetree/bindings/video/bridge/ps8622.txt
diff --git a/Documentation/devicetree/bindings/video/bridge/ps8622.txt b/Documentation/devicetree/bindings/video/bridge/ps8622.txt new file mode 100644 index 0000000..0ec8172 --- /dev/null +++ b/Documentation/devicetree/bindings/video/bridge/ps8622.txt @@ -0,0 +1,20 @@ +ps8622-bridge bindings
+Required properties:
- compatible: "parade,ps8622" or "parade,ps8625"
- reg: first i2c address of the bridge
- sleep-gpios: OF device-tree gpio specification for PD_ pin.
- reset-gpios: OF device-tree gpio specification for RST_ pin.
+Optional properties:
- lane-count: number of DP lanes to use
- use-external-pwm: backlight will be controlled by an external PWM
What does this mean? That the backlight support from ps8625 is not used? If so, maybe "disable-pwm" or something?
"use-external-pwm" or "disable-bridge-pwm" would be better.
Well, the properties are about the bridge. "use-external-pwm" means that the bridge uses an external PWM, which, if I understood correctly, is not what the property is about.
"disable-bridge-pwm" is ok, but the "bridge" there is extra. The properties are about the bridge, so it's implicit.
Ok. I will use "disable-pwm".
Why is this even necessary? According to the datasheet this device has circuitry for backlight control. If so, I'd expect it to expose either a backlight device or a PWM device. That way unless somebody is using the backlight/PWM exposed by the bridge the bridge can simply disable PWM.
The driver does expose a backlight device. And the decision (whether to expose a backlight device or not) is made based on the DT flag "use-external-pwm". This was discussed before, and you suggested using the boolean property; refer to this link: http://lists.freedesktop.org/archives/dri-devel/2014-July/065048.html
Ajay
On Mon, Sep 22, 2014 at 02:01:38PM +0530, Ajay kumar wrote:
Hi Thierry,
On Mon, Sep 22, 2014 at 1:40 PM, Thierry Reding thierry.reding@gmail.com wrote:
On Thu, Sep 18, 2014 at 11:20:40AM +0530, Ajay kumar wrote:
Hi Tomi,
On Wed, Sep 17, 2014 at 9:52 PM, Tomi Valkeinen tomi.valkeinen@ti.com wrote:
On 17/09/14 17:29, Ajay kumar wrote:
Hi Tomi,
Thanks for your comments.
On Wed, Sep 17, 2014 at 5:22 PM, Tomi Valkeinen tomi.valkeinen@ti.com wrote:
On 27/08/14 17:39, Ajay Kumar wrote:
Add documentation for DT properties supported by ps8622/ps8625 eDP-LVDS converter.
Signed-off-by: Ajay Kumar ajaykumar.rs@samsung.com
.../devicetree/bindings/video/bridge/ps8622.txt | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) create mode 100644 Documentation/devicetree/bindings/video/bridge/ps8622.txt
diff --git a/Documentation/devicetree/bindings/video/bridge/ps8622.txt b/Documentation/devicetree/bindings/video/bridge/ps8622.txt new file mode 100644 index 0000000..0ec8172 --- /dev/null +++ b/Documentation/devicetree/bindings/video/bridge/ps8622.txt @@ -0,0 +1,20 @@ +ps8622-bridge bindings
+Required properties:
- compatible: "parade,ps8622" or "parade,ps8625"
- reg: first i2c address of the bridge
- sleep-gpios: OF device-tree gpio specification for PD_ pin.
- reset-gpios: OF device-tree gpio specification for RST_ pin.
+Optional properties:
- lane-count: number of DP lanes to use
- use-external-pwm: backlight will be controlled by an external PWM
What does this mean? That the backlight support from ps8625 is not used? If so, maybe "disable-pwm" or something?
"use-external-pwm" or "disable-bridge-pwm" would be better.
Well, the properties are about the bridge. "use-external-pwm" means that the bridge uses an external PWM, which, if I understood correctly, is not what the property is about.
"disable-bridge-pwm" is ok, but the "bridge" there is extra. The properties are about the bridge, so it's implicit.
Ok. I will use "disable-pwm".
Why is this even necessary? According to the datasheet this device has circuitry for backlight control. If so, I'd expect it to expose either a backlight device or a PWM device. That way unless somebody is using the backlight/PWM exposed by the bridge the bridge can simply disable PWM.
The driver does expose a backlight device. And, the decision (whether to expose a backlight device or not) is made based on the DT flag "use-external-pwm". This was discussed before, and you suggested to use the boolean property; refer to this link: http://lists.freedesktop.org/archives/dri-devel/2014-July/065048.html
I think you misunderstood what I said, or maybe I didn't explain clearly what I meant. If the PS8622 provides a backlight there's nothing wrong with always exposing it. The bridge itself isn't going to be using the backlight anyway. Rather the panel itself should be using the backlight device exposed by PS8622 or some separate backlight device.
To illustrate by an example:
ps8622: ... {
        compatible = "parade,ps8622";
        ...
};

panel {
        ...
        backlight = <&ps8622>;
        ...
};
Or:
backlight: ... {
        compatible = "pwm-backlight";
        ...
};

panel {
        ...
        backlight = <&backlight>;
        ...
};
What you did in v6 of this series was look up a backlight device and then not use it. That seemed unnecessary. Looking at v6 again the reason for getting a phandle to the backlight was so that the device itself did not expose its own backlight controlling circuitry if an external one was being used. But since the bridge has no business controlling the backlight, having the backlight phandle in the bridge is not correct.
So I think what you could do in the driver instead is always expose the backlight device. If the panel used a different backlight, the PS8622's internal one simply wouldn't be accessed. It would still be possible to control the backlight in sysfs, but that shouldn't be a problem (only root can access it).
That said, I have no strong objections to the boolean property if you feel like it's really necessary.
Thierry
On Mon, Sep 22, 2014 at 4:11 PM, Thierry Reding thierry.reding@gmail.com wrote:
ps8622: ... {
        compatible = "parade,ps8622";
        ...
};

panel {
        ...
        backlight = <&ps8622>;
        ...
};
No, if ps8622 backlight control is used, we need not specify the backlight phandle for the panel driver. Somehow, ps8622 internal circuitry keeps the bootup glitch free :) Backlight control and panel controls can be separate then.
Or:
backlight: ... {
        compatible = "pwm-backlight";
        ...
};

panel {
        ...
        backlight = <&backlight>;
        ...
};
This is the way it is for peach_pit.
So I think what you could do in the driver instead is always expose the backlight device. If the panel used a different backlight, the PS8622's internal one simply wouldn't be accessed. It would still be possible to control the backlight in sysfs, but that shouldn't be a problem (only root can access it).
That would be like simply exposing a feature which cannot be used by the user, ideally one which "should not be" used by the user.
That said, I have no strong objections to the boolean property if you feel like it's really necessary.
Don't you think having a boolean property for an optional feature of the device is better than all these?
Ajay
On Mon, Sep 22, 2014 at 04:53:22PM +0530, Ajay kumar wrote:
No, if ps8622 backlight control is used, we need not specify the backlight phandle for the panel driver. Somehow, ps8622 internal circuitry keeps the bootup glitch free :) Backlight control and panel controls can be separate then.
But they shouldn't. Your panel driver should always be the one to control backlight. How else is the bridge supposed to know when to turn backlight on or off?
That would be like simply exposing a feature which cannot be used by the user, ideally one which "should not be" used by the user.
And it won't be used unless they access the sysfs files directly. There are a lot of cases where we expose functionality that cannot be meaningfully used by the user. For example, a GPIO may not be routed to anything on a board, yet we don't explicitly hide any specific GPIOs from users.
Don't you think having a boolean property for an optional feature of the device is better than all these?
Like I said, I'm indifferent on the matter. I don't think it's necessary to hide the backlight device, but if you want to, please feel free to do so.
Another alternative would be not to expose it at all and not have the boolean property since you don't seem to have a way to test it in the first place (or at least there's no device support upstream where it's needed).
Thierry
On Mon, Sep 22, 2014 at 5:05 PM, Thierry Reding thierry.reding@gmail.com wrote:
But they shouldn't. Your panel driver should always be the one to control backlight. How else is the bridge supposed to know when to turn backlight on or off?
In the internal PWM case, we keep the backlight on in probe, and from userspace it's up to the user to control it via sysfs. And, ps8622 generates backlight only if video data is coming from the encoder. Backlight is synced with video data using an internal circuit, I think. Since the internal PWM is tied to video data, but not to any of the panel controls, we need not do any backlight control in the panel driver.
Like I said, I'm indifferent on the matter. I don't think it's necessary to hide the backlight device, but if you want to, please feel free to do so. Another alternative would be not to expose it at all and not have the boolean property since you don't seem to have a way to test it in the first place (or at least there's no device support upstream where it's needed).
Well, I am okay doing this. But again, we would be discussing this topic when somebody needs to use the internal PWM of ps8622!
Also, can you check the other patches and get them merged? This patch is anyhow independent and can come later with the changes you mentioned.
Ajay
On Monday 22 September 2014 13:35:15 Thierry Reding wrote:
Like I said, I'm indifferent on the matter. I don't think it's necessary to hide the backlight device, but if you want to, please feel free to do so.
DT typically uses

        status = "disabled"

to disable devices. In this case we don't want to disable the ps8622 completely, but just one of its functions. Maybe a "backlight-status" property could be used for that? If that's considered too verbose, I would be fine with a "disable-<feature>" boolean property too.
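For concreteness, applying that suggestion to the example node from the binding patch might look like the sketch below. The "backlight-status" property name is only a proposal made in this thread, not part of any accepted binding; the rest of the node is taken verbatim from the binding example.

        lvds-bridge@48 {
                compatible = "parade,ps8622";
                reg = <0x48>;
                sleep-gpios = <&gpc3 6 1 0 0>;
                reset-gpios = <&gpc3 1 1 0 0>;
                lane-count = <1>;
                backlight-status = "disabled";
        };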
On Tue, Sep 23, 2014 at 03:00:37AM +0300, Laurent Pinchart wrote:
DT typically uses

        status = "disabled"

to disable devices. In this case we don't want to disable the ps8622 completely, but just one of its functions. Maybe a "backlight-status" property could be used for that? If that's considered too verbose, I would be fine with a "disable-<feature>" boolean property too.
Another alternative would be to make the backlight a subnode:
ps8622: bridge@... {
        compatible = "parade,ps8622";
        ...

        backlight {
                ...
                status = "disabled";
                ...
        };
};
Thierry
On Tue, Sep 23, 2014 at 11:25 AM, Thierry Reding thierry.reding@gmail.com wrote:
On Tue, Sep 23, 2014 at 03:00:37AM +0300, Laurent Pinchart wrote:
On Monday 22 September 2014 13:35:15 Thierry Reding wrote:
On Mon, Sep 22, 2014 at 04:53:22PM +0530, Ajay kumar wrote:
On Mon, Sep 22, 2014 at 4:11 PM, Thierry Reding wrote:
On Mon, Sep 22, 2014 at 02:01:38PM +0530, Ajay kumar wrote:
On Mon, Sep 22, 2014 at 1:40 PM, Thierry Reding wrote:
On Thu, Sep 18, 2014 at 11:20:40AM +0530, Ajay kumar wrote:
On Wed, Sep 17, 2014 at 9:52 PM, Tomi Valkeinen wrote:
On 17/09/14 17:29, Ajay kumar wrote:
On Wed, Sep 17, 2014 at 5:22 PM, Tomi Valkeinen wrote:
On 27/08/14 17:39, Ajay Kumar wrote:

[snip]

+Optional properties:
+ - lane-count: number of DP lanes to use
+ - use-external-pwm: backlight will be controlled by an external PWM

What does this mean? That the backlight support from ps8625 is not used? If so, maybe "disable-pwm" or something?

"use-external-pwm" or "disable-bridge-pwm" would be better.

Well, the properties are about the bridge. "use-external-pwm" means that the bridge uses an external PWM, which, if I understood correctly, is not what the property is about.

"disable-bridge-pwm" is ok, but the "bridge" there is extra. The properties are about the bridge, so it's implicit.

Ok. I will use "disable-pwm".

Why is this even necessary? According to the datasheet this device has circuitry for backlight control. If so, I'd expect it to expose either a backlight device or a PWM device. That way, unless somebody is using the backlight/PWM exposed by the bridge, the bridge can simply disable PWM.
The driver does expose a backlight device. And the decision (whether to expose a backlight device or not) is made based on the DT flag "use-external-pwm". This was discussed before, and you suggested to use the boolean property; refer to this link: http://lists.freedesktop.org/archives/dri-devel/2014-July/065048.html
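A boolean DT flag like this is typically tested with of_property_read_bool(); a minimal sketch of the decision described above (the function and helper names are illustrative, not the actual ps8622 driver code):

```c
#include <linux/of.h>

/*
 * Sketch only: boolean DT properties are present-or-absent,
 * so deciding whether the backlight is driven externally is
 * a single call against the device node.
 */
static bool ps8622_uses_external_pwm(struct device *dev)
{
	return of_property_read_bool(dev->of_node, "use-external-pwm");
}
```

The driver would then skip registering its internal backlight device when this returns true.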
I think you misunderstood what I said, or maybe I didn't explain clearly what I meant. If the PS8622 provides a backlight there's nothing wrong with always exposing it. The bridge itself isn't going to be using the backlight anyway. Rather the panel itself should be using the backlight device exposed by PS8622 or some separate backlight device.
To illustrate by an example:

	ps8622: ... {
		compatible = "parade,ps8622";
		...
	};

	panel {
		...
		backlight = <&ps8622>;
		...
	};
No, if ps8622 backlight control is used, we need not specify the backlight phandle for the panel driver. Somehow, the ps8622 internal circuitry keeps bootup glitch-free :) Backlight control and panel controls can be separate then.
But they shouldn't. Your panel driver should always be the one to control backlight. How else is the bridge supposed to know when to turn backlight on or off?
What you did in v6 of this series was look up a backlight device and then not use it. That seemed unnecessary. Looking at v6 again, the reason for getting a phandle to the backlight was so that the device itself did not expose its own backlight-controlling circuitry if an external one was being used. But since the bridge has no business controlling the backlight, having the backlight phandle in the bridge is not correct.
So I think what you could do in the driver instead is always expose the backlight device. If the panel used a different backlight, the PS8622's internal one simply wouldn't be accessed. It would still be possible to control the backlight in sysfs, but that shouldn't be a problem (only root can access it).
That would be like simply exposing a feature which cannot be used by the user, ideally which "should not be" used by the user.
And it won't be used unless they access the sysfs files directly. There are a lot of cases where we expose functionality that cannot be meaningfully used by the user. For example, a GPIO may not be routed to anything on a board, yet we don't explicitly hide any specific GPIOs from users.
That said, I have no strong objections to the boolean property if you feel like it's really necessary.
Don't you think having a boolean property for an optional feature of the device is better than all this?
Like I said, I'm indifferent on the matter. I don't think it's necessary to hide the backlight device, but if you want to, please feel free to do so.
DT typically uses
status = "disabled"
to disable devices. In this case we don't want to disable the ps8622 completely, but just one of its functions. Maybe a "backlight-status" property could be used for that ? If that's considered too verbose, I would be fine with a "disable-<feature>" boolean property too.
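For concreteness, the two property-based shapes floated here might look like this in the node (the property names are only proposals at this point, not an agreed binding):

```
lvds-bridge@48 {
	compatible = "parade,ps8622";
	reg = <0x48>;

	/* either a per-function status string ... */
	backlight-status = "disabled";

	/* ... or a boolean disable-<feature> property */
	disable-pwm;
};
```

A real node would of course pick only one of the two.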
Another alternative would be to make the backlight a subnode:

	ps8622: bridge@... {
		compatible = "parade,ps8622";
		...
		backlight {
			...
			status = "disabled";
			...
		};
	};
In the above case, how do I query the status of the backlight subnode in the driver?
Ajay
On Tue, Sep 23, 2014 at 11:41:33AM +0530, Ajay kumar wrote:
On Tue, Sep 23, 2014 at 11:25 AM, Thierry Reding thierry.reding@gmail.com wrote:
On Tue, Sep 23, 2014 at 03:00:37AM +0300, Laurent Pinchart wrote:
On Monday 22 September 2014 13:35:15 Thierry Reding wrote:
On Mon, Sep 22, 2014 at 04:53:22PM +0530, Ajay kumar wrote:
On Mon, Sep 22, 2014 at 4:11 PM, Thierry Reding wrote:
On Mon, Sep 22, 2014 at 02:01:38PM +0530, Ajay kumar wrote:

[snip]
In the above case, how do I query the status of the backlight subnode in the driver?
Something like this should work:
	struct device_node *np;

	np = of_get_child_by_name(dev->of_node, "backlight");
	if (np) {
		if (of_device_is_available(np)) {
			/* register backlight device */
		}

		of_node_put(np);
	}
Thierry
Hi Thierry,
On Tuesday 23 September 2014 07:55:34 Thierry Reding wrote:
On Tue, Sep 23, 2014 at 03:00:37AM +0300, Laurent Pinchart wrote:
On Monday 22 September 2014 13:35:15 Thierry Reding wrote:
On Mon, Sep 22, 2014 at 04:53:22PM +0530, Ajay kumar wrote:
On Mon, Sep 22, 2014 at 4:11 PM, Thierry Reding wrote:
On Mon, Sep 22, 2014 at 02:01:38PM +0530, Ajay kumar wrote:
[snip]
Another alternative would be to make the backlight a subnode:
	ps8622: bridge@... {
		compatible = "parade,ps8622";
		...
		backlight {
			...
			status = "disabled";
			...
		};
	};
I've thought about that as well. I feel it's getting a bit complex for little added benefit, but I don't have a very strong opinion.
Devices that expose several functions are not the odd case, and the need to selectively mark some of them as disabled in DT seems pretty common to me. I wonder if we should try to decide on standard bindings for that. To recap the proposals, we could have
- a boolean "disable-<feature>" property
- a text "<feature>-status" property
- a "<feature>" subnode with a "status" text property
Anything else ?
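Roughly, the three recapped shapes would look like this (illustrative sketches only, none of them agreed bindings):

```
/* 1: boolean disable-<feature> property */
bridge@48 {
	disable-backlight;
};

/* 2: text <feature>-status property */
bridge@48 {
	backlight-status = "disabled";
};

/* 3: <feature> subnode with a status property */
bridge@48 {
	backlight {
		status = "disabled";
	};
};
```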
On Wed, Sep 17, 2014 at 07:22:05PM +0300, Tomi Valkeinen wrote:
On 17/09/14 17:29, Ajay kumar wrote:
Hi Tomi,
Thanks for your comments.
On Wed, Sep 17, 2014 at 5:22 PM, Tomi Valkeinen tomi.valkeinen@ti.com wrote:
On 27/08/14 17:39, Ajay Kumar wrote:
Add documentation for DT properties supported by ps8622/ps8625 eDP-LVDS converter.
Signed-off-by: Ajay Kumar ajaykumar.rs@samsung.com
[snip]

ps8622-bridge bindings
+Required properties:
- compatible: "parade,ps8622" or "parade,ps8625"
- reg: first i2c address of the bridge
- sleep-gpios: OF device-tree gpio specification for PD_ pin.
- reset-gpios: OF device-tree gpio specification for RST_ pin.
+Optional properties:
- lane-count: number of DP lanes to use
- use-external-pwm: backlight will be controlled by an external PWM
What does this mean? That the backlight support from ps8625 is not used? If so, maybe "disable-pwm" or something?
"use-external-pwm" or "disable-bridge-pwm" would be better.
Well, the properties are about the bridge. "use-external-pwm" means that the bridge uses an external PWM, which, if I understood correctly, is not what the property is about.
"disable-bridge-pwm" is ok, but the "bridge" there is extra. The properties are about the bridge, so it's implicit.
+Example:
lvds-bridge@48 {
compatible = "parade,ps8622";
reg = <0x48>;
sleep-gpios = <&gpc3 6 1 0 0>;
reset-gpios = <&gpc3 1 1 0 0>;
lane-count = <1>;
};
I wish all new display component bindings would use the video ports/endpoints to describe the connections. It will be very difficult to improve the display driver model later if we're missing such critical pieces from the DT bindings.
Why do we need a complex graph when it can be handled using a simple phandle?
Maybe in your case you can handle it with simple phandle. Can you guarantee that it's enough for everyone, on all platforms?
Nobody can guarantee that. An interesting effect that DT ABI stability has had is that people now try to come up with overly generic concepts to describe devices in an attempt to make them more future-proof. I don't think we're doing ourselves a favour with that approach.
The point of the ports/endpoint graph is to also support more complicated scenarios.
But the scenario will always remain the same for this bridge. There will always be an input signal and a translation to some output signal along with some parameters to control how that translation happens. More complicated scenarios will likely need to be represented differently at a higher level than the bridge.
If you now create ps8622 bindings that do not support those graphs, then no one else using ps8622 can use ports/endpoint graphs either.
That's not true. The binding can easily be extended in a backwards- compatible way later on (should it really prove necessary).
Thierry
On 22/09/14 11:06, Thierry Reding wrote:
Why do we need a complex graph when it can be handled using a simple phandle?
Maybe in your case you can handle it with simple phandle. Can you guarantee that it's enough for everyone, on all platforms?
Nobody can guarantee that. An interesting effect that DT ABI stability has had is that people now try to come up with overly generic concepts to describe devices in an attempt to make them more future-proof. I don't think we're doing ourselves a favour with that approach.
I kind of agree. However, there are boards that need a more complex approach than just a simple phandle. They are nothing new.
So while it may be true that this specific encoder has never been used in such a complex way, and there's a chance that it never will, my comments are not really about this particular encoder.
What I want is a standard way to describe the video components. If we have a standard complex one (video graphs) and a standard simple one (simple phandles), it's ok for me.
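As a sketch of the two standard forms for this bridge (the graph version follows the ports/endpoint convention of the video-interfaces binding; all node and label names are illustrative):

```
/* simple phandle form */
panel {
	...
	bridge = <&lvds_bridge>;
};

/* ports/endpoint graph form */
lvds_bridge: lvds-bridge@48 {
	compatible = "parade,ps8622";
	reg = <0x48>;

	ports {
		#address-cells = <1>;
		#size-cells = <0>;

		port@0 {
			reg = <0>;
			bridge_in: endpoint {
				remote-endpoint = <&dp_out>;
			};
		};

		port@1 {
			reg = <1>;
			bridge_out: endpoint {
				remote-endpoint = <&panel_in>;
			};
		};
	};
};
```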
The point of the ports/endpoint graph is to also support more complicated scenarios.
But the scenario will always remain the same for this bridge. There will always be an input signal and a translation to some output signal along with some parameters to control how that translation happens. More complicated scenarios will likely need to be represented differently at a higher level than the bridge.
Yes, there's always one active input and one output for this bridge. What the video graphs would bring is the possibility to have multiple inputs and outputs, of which a single one could be active at a time. The different inputs and outputs could even have different settings required (say, the first output requires this bit set, but when using the second output that bit must be cleared).
If you now create ps8622 bindings that do not support those graphs, then no one else using ps8622 can use ports/endpoint graphs either.
That's not true. The binding can easily be extended in a backwards- compatible way later on (should it really prove necessary).
I hope you are right =). As I said in my earlier mail, my experience so far has been different.
Tomi
On Mon, Sep 22, 2014 at 05:23:25PM +0300, Tomi Valkeinen wrote:
On 22/09/14 11:06, Thierry Reding wrote:
Why do we need a complex graph when it can be handled using a simple phandle?
Maybe in your case you can handle it with simple phandle. Can you guarantee that it's enough for everyone, on all platforms?
Nobody can guarantee that. An interesting effect that DT ABI stability has had is that people now try to come up with overly generic concepts to describe devices in an attempt to make them more future-proof. I don't think we're doing ourselves a favour with that approach.
I kind of agree. However, there are boards that need a more complex approach than just a simple phandle. They are nothing new.
So while it may be true that this specific encoder has never been used in such a complex way, and there's a chance that it never will, my comments are not really about this particular encoder.
What I want is a standard way to describe the video components. If we have a standard complex one (video graphs) and a standard simple one (simple phandles), it's ok for me.
I certainly agree that it's useful to have standard ways to describe at least various aspects. For example I think it would be useful to add standard properties for a bridge's connections, such as "bridge" or "panel" to allow bridge chaining and attaching them to panels.
But I think that should be the end of it. Mandating anything other than that will just complicate things and limit what people can do in the binding.
One of the disadvantages of the video graph bindings is that they are overly vague in that they don't carry information about what type a device is. Bindings then have to require additional meta-data, at which point it's become far easier to describe things with a custom property that already provides context.
The point of the ports/endpoint graph is to also support more complicated scenarios.
But the scenario will always remain the same for this bridge. There will always be an input signal and a translation to some output signal along with some parameters to control how that translation happens. More complicated scenarios will likely need to be represented differently at a higher level than the bridge.
Yes, there's always one active input and one output for this bridge. What the video graphs would bring is the possibility to have multiple inputs and outputs, of which a single one could be active at a time. The different inputs and outputs could even have different settings required (say, the first output requires this bit set, but when using the second output that bit must be cleared).
As discussed elsewhere this should be handled at a different level then. DT should describe the hardware and this particular bridge device simply doesn't have a means to connect more than a single input or more than a single output.
Thierry
Hi Thierry, Tomi,
On 09/23/2014 08:04 AM, Thierry Reding wrote:
On Mon, Sep 22, 2014 at 05:23:25PM +0300, Tomi Valkeinen wrote:
On 22/09/14 11:06, Thierry Reding wrote:
Why do we need a complex graph when it can be handled using a simple phandle?
Maybe in your case you can handle it with simple phandle. Can you guarantee that it's enough for everyone, on all platforms?
Nobody can guarantee that. An interesting effect that DT ABI stability has had is that people now try to come up with overly generic concepts to describe devices in an attempt to make them more future-proof. I don't think we're doing ourselves a favour with that approach.
I kind of agree. However, there are boards that need a more complex approach than just a simple phandle. They are nothing new.
So while it may be true that this specific encoder has never been used in such a complex way, and there's a chance that it never will, my comments are not really about this particular encoder.
What I want is a standard way to describe the video components. If we have a standard complex one (video graphs) and a standard simple one (simple phandles), it's ok for me.
I certainly agree that it's useful to have standard ways to describe at least various aspects. For example I think it would be useful to add standard properties for a bridge's connections, such as "bridge" or "panel" to allow bridge chaining and attaching them to panels.
But I think that should be the end of it. Mandating anything other than that will just complicate things and limit what people can do in the binding.
One of the disadvantages of the video graph bindings is that they are overly vague in that they don't carry information about what type a device is. Bindings then have to require additional meta-data, at which point it's become far easier to describe things with a custom property that already provides context.
I think it is the opposite: graph bindings should describe only the link, and certainly not what type of device is behind the endpoint. The fact that we may need that information is a disadvantage of Linux frameworks and should be corrected in Linux, not by adding Linux-specific properties to DT. For example, a display controller should not care where its output goes to: panel, encoder, bridge, whatever. It should only care whether the parameters of the link are agreed with the receiver.
On the other side I agree graph bindings are bloated and it should be possible to simplify them to single phandle if we do not need extra parameters.
Regards Andrzej
On Tue, Sep 23, 2014 at 09:24:12AM +0200, Andrzej Hajda wrote:
[snip]
I think it is the opposite: graph bindings should describe only the link, and certainly not what type of device is behind the endpoint. The fact that we may need that information is a disadvantage of Linux frameworks and should be corrected in Linux, not by adding Linux-specific properties to DT. For example, a display controller should not care where its output goes to: panel, encoder, bridge, whatever. It should only care whether the parameters of the link are agreed with the receiver.
Well, a display controller is never going to attach to a panel directly. But I agree that it would be nice to unify bridges and encoders more. It should be possible to make encoder always a bridge (or perhaps even replace encoders with bridges altogether). Then once you're out of the DRM device everything would be a bridge until you get to a panel.
I think we discussed this a while back in the context of bridge chaining.
Thierry
On 23/09/14 11:35, Thierry Reding wrote:
Well, a display controller is never going to attach to a panel directly.
With parallel RGB, that (almost) happens. There's voltage level shifting probably in the middle, but I don't see anything else there.
But I agree that it would be nice to unify bridges and encoders more. It should be possible to make encoder always a bridge (or perhaps even replace encoders with bridges altogether). Then once you're out of the DRM device everything would be a bridge until you get to a panel.
What exactly is a bridge and what is an encoder? Those are DRM constructs, aren't they?
As I see it, a video pipeline consist of a video source (display controller usually), a chain of encoders (all of which may not be strictly "encoders", they could be level shifters, muxes, ESD protection devices or such), and either a display device like a panel or a connector to outside world.
Am I right that in DRM world the encoder is the first device in the display chain after the display controller, and the next is a bridge? That sounds totally artificial, and I hope we don't reflect that in the DT side in any way.
Tomi
On Tue, Sep 23, 2014 at 12:40:24PM +0300, Tomi Valkeinen wrote:
On 23/09/14 11:35, Thierry Reding wrote:
Well, a display controller is never going to attach to a panel directly.
With parallel RGB, that (almost) happens. There's voltage level shifting probably in the middle, but I don't see anything else there.
The level shifting could be considered an encoder. Anyway, I didn't mean physically attach to a panel but rather in DRM. You'll always need an encoder and connector before you go to the panel.
But I agree that it would be nice to unify bridges and encoders more. It should be possible to make encoder always a bridge (or perhaps even replace encoders with bridges altogether). Then once you're out of the DRM device everything would be a bridge until you get to a panel.
What exactly is a bridge and what is an encoder? Those are DRM constructs, aren't they?
Yes. I think bridges are mostly a superset of encoders.
As I see it, a video pipeline consist of a video source (display controller usually), a chain of encoders (all of which may not be strictly "encoders", they could be level shifters, muxes, ESD protection devices or such), and either a display device like a panel or a connector to outside world.
Well, the panel itself is attached to a connector. The connector is really what userspace uses to control the output and indirectly the panel.
Am I right that in DRM world the encoder is the first device in the display chain after the display controller,
Yes.
and the next is a bridge?
A bridge or a connector. Typically it would be a connector, but that's only if you don't have bridges in between.
That sounds totally artificial, and I hope we don't reflect that in the DT side in any way.
Yes, they are software concepts and I'm not aware of any of it being exposed in DT. A display controller is usually implemented as DRM CRTC object and outputs (DSI, eDP, HDMI) as encoder (often with a connector tied to it).
Thierry
On 23/09/14 13:01, Thierry Reding wrote:
On Tue, Sep 23, 2014 at 12:40:24PM +0300, Tomi Valkeinen wrote:
On 23/09/14 11:35, Thierry Reding wrote:
Well, a display controller is never going to attach to a panel directly.
With parallel RGB, that (almost) happens. There's voltage level shifting probably in the middle, but I don't see anything else there.
The level shifting could be considered an encoder. Anyway, I didn't mean physically attach to a panel but rather in DRM. You'll always need an encoder and connector before you go to the panel.
"For now", you mean.
I'd rather see much more freedom there. If a display controller is connected to a panel directly, with only non-controllable level shifters in between (I don't even think those are strictly needed; why couldn't there be a low-end SoC that works with higher voltages?), I think we should model it on the SW side by just having a device for the display controller and a device for the panel. No artificial encoders/bridges/connectors needed in between.
But I understand that for DRM legacy reasons that may never happen.
But I agree that it would be nice to unify bridges and encoders more. It should be possible to make encoder always a bridge (or perhaps even replace encoders with bridges altogether). Then once you're out of the DRM device everything would be a bridge until you get to a panel.
What exactly is a bridge and what is an encoder? Those are DRM constructs, aren't they?
Yes. I think bridges are mostly a superset of encoders.
Superset in what way? If I have a SiI9022 RGB-to-HDMI encoder embedded into my SoC (i.e. a DRM encoder), or I have a board with a SiI9022 encoder on the board (i.e. a DRM bridge), what is different?
As I see it, a video pipeline consists of a video source (display controller usually), a chain of encoders (all of which may not be strictly "encoders", they could be level shifters, muxes, ESD protection devices or such), and either a display device like a panel or a connector to the outside world.
Well, the panel itself is attached to a connector. The connector is really what userspace uses to control the output and indirectly the panel.
Yep. That's also something that should be fixed. A connector SW entity is fine when connecting an external monitor, but a panel directly connected to the SoC does not have a connector, and it's the panel that should be controlled from the userspace.
But again, the legacy...
Am I right that in DRM world the encoder is the first device in the display chain after the display controller,
Yes.
and the next is a bridge?
A bridge or a connector. Typically it would be a connector, but that's only if you don't have bridges in between.
That sounds totally artificial, and I hope we don't reflect that in the DT side in any way.
Yes, they are software concepts and I'm not aware of any of it being exposed in DT. A display controller is usually implemented as DRM CRTC object and outputs (DSI, eDP, HDMI) as encoder (often with a connector tied to it).
Ok. Thanks for the clarifications.
Tomi
On Tue, Sep 23, 2014 at 03:09:44PM +0300, Tomi Valkeinen wrote:
On 23/09/14 13:01, Thierry Reding wrote:
On Tue, Sep 23, 2014 at 12:40:24PM +0300, Tomi Valkeinen wrote:
[...]
What exactly is a bridge and what is an encoder? Those are DRM constructs, aren't they?
Yes. I think bridges are mostly a superset of encoders.
Superset in what way? If I have a SiI9022 RGB-to-HDMI encoder embedded into my SoC (i.e. a DRM encoder), or I have a board with a SiI9022 encoder on the board (i.e. a DRM bridge), what is different?
Superset in what they represent from a software point of view. As in: an encoder /is a/ bridge. Though they aren't currently implemented that way.
As I see it, a video pipeline consists of a video source (display controller usually), a chain of encoders (all of which may not be strictly "encoders", they could be level shifters, muxes, ESD protection devices or such), and either a display device like a panel or a connector to the outside world.
Well, the panel itself is attached to a connector. The connector is really what userspace uses to control the output and indirectly the panel.
Yep. That's also something that should be fixed. A connector SW entity is fine when connecting an external monitor, but a panel directly connected to the SoC does not have a connector, and it's the panel that should be controlled from the userspace.
I disagree. A panel is usually also attached using some sort of connector. Maybe not an HDMI, VGA or DP connector, but still some sort of connector where often it can even be removed.
Panels are theoretically hot-pluggable, too, much in the same way that monitors are.
But again, the legacy...
You've got to make the abstraction somewhere, and I'm quite surprised actually how well the DRM abstractions are working out. I mean we can support anything from HDMI to DP to eDP, DSI and LVDS with just the connector abstraction and it all just works. That makes it a pretty good abstraction in my book.
Thierry
On 23/09/14 17:38, Thierry Reding wrote:
On Tue, Sep 23, 2014 at 03:09:44PM +0300, Tomi Valkeinen wrote:
On 23/09/14 13:01, Thierry Reding wrote:
On Tue, Sep 23, 2014 at 12:40:24PM +0300, Tomi Valkeinen wrote:
[...]
What exactly is a bridge and what is an encoder? Those are DRM constructs, aren't they?
Yes. I think bridges are mostly a superset of encoders.
Superset in what way? If I have a SiI9022 RGB-to-HDMI encoder embedded into my SoC (i.e. a DRM encoder), or I have a board with a SiI9022 encoder on the board (i.e. a DRM bridge), what is different?
Superset in what they represent from a software point of view. As in: an encoder /is a/ bridge. Though they aren't currently implemented that way.
So they are equal, not superset? Or what does an encoder do that a bridge doesn't?
As I see it, a video pipeline consists of a video source (display controller usually), a chain of encoders (all of which may not be strictly "encoders", they could be level shifters, muxes, ESD protection devices or such), and either a display device like a panel or a connector to the outside world.
Well, the panel itself is attached to a connector. The connector is really what userspace uses to control the output and indirectly the panel.
Yep. That's also something that should be fixed. A connector SW entity is fine when connecting an external monitor, but a panel directly connected to the SoC does not have a connector, and it's the panel that should be controlled from the userspace.
I disagree. A panel is usually also attached using some sort of connector. Maybe not an HDMI, VGA or DP connector, but still some sort of connector where often it can even be removed.
Yes, but a normal panel connector is really just an extension of wires. There is no difference if you wire the panel directly to the video source, or if there's a connector.
I think usually connectors to the external world are not quite as simple, as there may be some protection components or such, but even if there's nothing extra, they are still a clear component visible to outside world. For HDMI you (sometimes) even need properties for the connector, like source for +5V, i2c bus, hotplug pin.
But even if there are no extra properties like that, I still think it's good to have a connector entity for a connector to outside world. Whereas I don't see any need for such when the connector is internal. That said, I don't see any reason to prevent that either.
Panels are theoretically hot-pluggable, too, much in the same way that monitors are.
So are encoders, in the same way. I have a main board with a video connector. That goes to an encoder on a display board, and that again has a connector, to which the panel is connected.
I also have a panel that can be connected directly to the main board's video output.
But again, the legacy...
You've got to make the abstraction somewhere, and I'm quite surprised actually how well the DRM abstractions are working out. I mean we can support anything from HDMI to DP to eDP, DSI and LVDS with just the connector abstraction and it all just works. That makes it a pretty good abstraction in my book.
Yes, I agree it's good in the sense that it works. And much better than fbdev, which has nothing. But it's not good in the sense that it's not what I'd design if I could start from scratch.
Tomi
On 09/23/2014 10:35 AM, Thierry Reding wrote:
On Tue, Sep 23, 2014 at 09:24:12AM +0200, Andrzej Hajda wrote:
Hi Thierry, Tomi,
On 09/23/2014 08:04 AM, Thierry Reding wrote:
On Mon, Sep 22, 2014 at 05:23:25PM +0300, Tomi Valkeinen wrote:
On 22/09/14 11:06, Thierry Reding wrote:
> Why do we need a complex graph when it can be handled using a simple phandle? Maybe in your case you can handle it with simple phandle. Can you guarantee that it's enough for everyone, on all platforms?
Nobody can guarantee that. An interesting effect that DT ABI stability has had is that people now try to come up with overly generic concepts to describe devices in an attempt to make them more future-proof. I don't think we're doing ourselves a favour with that approach.
I kind of agree. However, there are boards that need a more complex approach than just a simple phandle. They are nothing new.
So while it may be true that this specific encoder has never been used in such a complex way, and there's a chance that it never will, my comments are not really about this particular encoder.
What I want is a standard way to describe the video components. If we have a standard complex one (video graphs) and a standard simple one (simple phandles), it's ok for me.
I certainly agree that it's useful to have standard ways to describe at least various aspects. For example I think it would be useful to add standard properties for a bridge's connections, such as "bridge" or "panel" to allow bridge chaining and attaching them to panels.
But I think that should be the end of it. Mandating anything other than that will just complicate things and limit what people can do in the binding.
One of the disadvantages of the video graph bindings is that they are overly vague in that they don't carry information about what type a device is. Bindings then have to require additional meta-data, at which point it's become far easier to describe things with a custom property that already provides context.
I think it is the opposite: graph bindings should describe only the link, and certainly not what type of device is behind the endpoint. The fact that we may need that information is a disadvantage of the Linux frameworks and should be corrected in Linux, not by adding Linux-specific properties to DT. For example, a display controller should not care where its output goes: panel, encoder, bridge, whatever. It should only care that the parameters of the link are agreed with the receiver.
Well, a display controller is never going to attach to a panel directly.
Yes it is. For example the Exynos Display Controller uses parallel RGB as its output format, so in the case of DPI panels it is connected directly to the panel. I guess it is similar with other SoCs.
But I agree that it would be nice to unify bridges and encoders more. It should be possible to make encoder always a bridge (or perhaps even replace encoders with bridges altogether). Then once you're out of the DRM device everything would be a bridge until you get to a panel.
I agree that some sort of unification of bridges, (slave) encoders is a good thing, this way we stay only with bridges and panels as receivers. But we will still have to deal with code like:

    if (on the other end of the link is panel)
            do panel framework specific things
    else
            do bridge framework specific things
The code in both branches usually does similar things but due to framework differences it is difficult to merge.
Ideally it would be best to get rid of such constructs. For example by trying to make frameworks per interface type exposed by device (eg. video_sink) and not by device type (drm_bridge, drm_panel).
I have proposed it is possible by (ab)using drm_panel framework for DSI/LVDS bridge [1].
[1]: http://lists.freedesktop.org/archives/dri-devel/2014-February/053705.html
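Andrzej's "framework per interface" idea could be sketched, very roughly, like this (all names here are hypothetical illustrations, not real kernel APIs): a panel driver and a bridge driver both implement one video_sink interface, so a source driver never branches on what kind of device sits downstream.

```c
#include <stdio.h>

/*
 * Hypothetical sketch of a single video_sink interface replacing the
 * separate drm_bridge/drm_panel frameworks. None of these names exist
 * in the kernel; they only illustrate the idea from the thread.
 */
struct video_sink;

struct video_sink_ops {
	int (*enable)(struct video_sink *sink);
};

struct video_sink {
	const struct video_sink_ops *ops;
	const char *name;
};

/* A panel driver implements the ops... */
static int panel_enable(struct video_sink *sink)
{
	printf("%s: enable (a panel would also power its backlight here)\n",
	       sink->name);
	return 0;
}

/* ...and a bridge driver implements the very same ops. */
static int bridge_enable(struct video_sink *sink)
{
	printf("%s: enable (a bridge would start its downstream link here)\n",
	       sink->name);
	return 0;
}

static const struct video_sink_ops panel_ops  = { .enable = panel_enable };
static const struct video_sink_ops bridge_ops = { .enable = bridge_enable };

/* The source only ever sees a video_sink: no if-panel-else-bridge. */
static int source_start(struct video_sink *sink)
{
	return sink->ops->enable(sink);
}
```

The point of the sketch is that `source_start()` stays identical no matter what is on the other end of the link.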
Regards Andrzej
I think we discussed this a while back in the context of bridge chaining.
Thierry
On Tue, Sep 23, 2014 at 11:43:47AM +0200, Andrzej Hajda wrote:
On 09/23/2014 10:35 AM, Thierry Reding wrote:
[...]
But I agree that it would be nice to unify bridges and encoders more. It should be possible to make encoder always a bridge (or perhaps even replace encoders with bridges altogether). Then once you're out of the DRM device everything would be a bridge until you get to a panel.
I agree that some sort of unification of bridges, (slave) encoders is a good thing, this way we stay only with bridges and panels as receivers. But we will still have to deal with code like:

    if (on the other end of the link is panel)
            do panel framework specific things
    else
            do bridge framework specific things
The code in both branches usually does similar things but due to framework differences it is difficult to merge.
That's because they are inherently different entities. You can perform operations on a panel that don't make sense for a bridge and vice-versa.
Ideally it would be best to get rid of such constructs. For example by trying to make frameworks per interface type exposed by device (eg. video_sink) and not by device type (drm_bridge, drm_panel).
But then you lose all information about the real type of device. Or you have to create a common base class. And then you're still back to dealing with the two specific cases in many places, so the gain seems rather minimal.
Thierry
On 09/23/2014 12:10 PM, Thierry Reding wrote:
On Tue, Sep 23, 2014 at 11:43:47AM +0200, Andrzej Hajda wrote:
On 09/23/2014 10:35 AM, Thierry Reding wrote:
[...]
But I agree that it would be nice to unify bridges and encoders more. It should be possible to make encoder always a bridge (or perhaps even replace encoders with bridges altogether). Then once you're out of the DRM device everything would be a bridge until you get to a panel.
I agree that some sort of unification of bridges, (slave) encoders is a good thing, this way we stay only with bridges and panels as receivers. But we will still have to deal with code like:

    if (on the other end of the link is panel)
            do panel framework specific things
    else
            do bridge framework specific things
The code in both branches usually does similar things but due to framework differences it is difficult to merge.
That's because they are inherently different entities. You can perform operations on a panel that don't make sense for a bridge and vice-versa.
Ideally it would be best to get rid of such constructs. For example by trying to make frameworks per interface type exposed by device (eg. video_sink) and not by device type (drm_bridge, drm_panel).
But then you lose all information about the real type of device.
A driver knows the type of its own device anyway. Why should it know the device type of the devices it is connected to? It is enough that it knows how to talk to the other device. Like in the real world: why should a desktop PC know whether it is connected to a DVI monitor or to a DVI/HDMI converter which is connected to a TV?
Or you have to create a common base class. And then you're still back to dealing with the two specific cases in many places, so the gain seems rather minimal.
For example, an RGB panel and an RGB/??? bridge have the same hardware input interface. Why should they have different representations in the kernel?
Regards Andrzej
Thierry
On Tue, Sep 23, 2014 at 12:34:54PM +0200, Andrzej Hajda wrote:
On 09/23/2014 12:10 PM, Thierry Reding wrote:
On Tue, Sep 23, 2014 at 11:43:47AM +0200, Andrzej Hajda wrote:
On 09/23/2014 10:35 AM, Thierry Reding wrote:
[...]
But I agree that it would be nice to unify bridges and encoders more. It should be possible to make encoder always a bridge (or perhaps even replace encoders with bridges altogether). Then once you're out of the DRM device everything would be a bridge until you get to a panel.
I agree that some sort of unification of bridges, (slave) encoders is a good thing, this way we stay only with bridges and panels as receivers. But we will still have to deal with code like:

    if (on the other end of the link is panel)
            do panel framework specific things
    else
            do bridge framework specific things
The code in both branches usually does similar things but due to framework differences it is difficult to merge.
That's because they are inherently different entities. You can perform operations on a panel that don't make sense for a bridge and vice-versa.
Ideally it would be best to get rid of such constructs. For example by trying to make frameworks per interface type exposed by device (eg. video_sink) and not by device type (drm_bridge, drm_panel).
But then you lose all information about the real type of device.
A driver knows the type of its own device anyway. Why should it know the device type of the devices it is connected to? It is enough that it knows how to talk to the other device. Like in the real world: why should a desktop PC know whether it is connected to a DVI monitor or to a DVI/HDMI converter which is connected to a TV?
TVs are much more standardized. There are things like DDC/CI which can be used to control the device. Or there's additional circuitry or control paths to change things like brightness. The same isn't true of panels.
Or you have to create a common base class. And then you're still back to dealing with the two specific cases in many places, so the gain seems rather minimal.
For example, an RGB panel and an RGB/??? bridge have the same hardware input interface. Why should they have different representations in the kernel?
Because panels have different requirements than bridges. Panels for example require the backlight to be adjustable, bridges don't.
Thierry
On 09/23/2014 04:41 PM, Thierry Reding wrote:
On Tue, Sep 23, 2014 at 12:34:54PM +0200, Andrzej Hajda wrote:
On 09/23/2014 12:10 PM, Thierry Reding wrote:
On Tue, Sep 23, 2014 at 11:43:47AM +0200, Andrzej Hajda wrote:
On 09/23/2014 10:35 AM, Thierry Reding wrote:
[...]
But I agree that it would be nice to unify bridges and encoders more. It should be possible to make encoder always a bridge (or perhaps even replace encoders with bridges altogether). Then once you're out of the DRM device everything would be a bridge until you get to a panel.
I agree that some sort of unification of bridges, (slave) encoders is a good thing, this way we stay only with bridges and panels as receivers. But we will still have to deal with code like:

    if (on the other end of the link is panel)
            do panel framework specific things
    else
            do bridge framework specific things
The code in both branches usually does similar things but due to framework differences it is difficult to merge.
That's because they are inherently different entities. You can perform operations on a panel that don't make sense for a bridge and vice-versa.
Ideally it would be best to get rid of such constructs. For example by trying to make frameworks per interface type exposed by device (eg. video_sink) and not by device type (drm_bridge, drm_panel).
But then you lose all information about the real type of device.
A driver knows the type of its own device anyway. Why should it know the device type of the devices it is connected to? It is enough that it knows how to talk to the other device. Like in the real world: why should a desktop PC know whether it is connected to a DVI monitor or to a DVI/HDMI converter which is connected to a TV?
TVs are much more standardized. There are things like DDC/CI which can be used to control the device. Or there's additional circuitry or control paths to change things like brightness. The same isn't true of panels.
But it is also true at the HW level in the case of display subsystem components - if some device has output format X, it means it can be connected to any device having input format X, whatever it is: panel, bridge, image enhancer...
Or you have to create a common base class. And then you're still back to dealing with the two specific cases in many places, so the gain seems rather minimal.
For example, an RGB panel and an RGB/??? bridge have the same hardware input interface. Why should they have different representations in the kernel?
Because panels have different requirements than bridges. Panels for example require the backlight to be adjustable, bridges don't.
But I was asking about their input interface: an RGB panel and an RGB/??? bridge have the same input interface. In other words, if instead of creating frameworks for each device type we create frameworks for the functions/interfaces these devices provide, we should end up with fewer frameworks and cleaner code.
Regards Andrzej
Thierry
On 23/09/14 17:41, Thierry Reding wrote:
On Tue, Sep 23, 2014 at 12:34:54PM +0200, Andrzej Hajda wrote:
On 09/23/2014 12:10 PM, Thierry Reding wrote:
On Tue, Sep 23, 2014 at 11:43:47AM +0200, Andrzej Hajda wrote:
On 09/23/2014 10:35 AM, Thierry Reding wrote:
[...]
But I agree that it would be nice to unify bridges and encoders more. It should be possible to make encoder always a bridge (or perhaps even replace encoders with bridges altogether). Then once you're out of the DRM device everything would be a bridge until you get to a panel.
I agree that some sort of unification of bridges, (slave) encoders is a good thing, this way we stay only with bridges and panels as receivers. But we will still have to deal with code like:

    if (on the other end of the link is panel)
            do panel framework specific things
    else
            do bridge framework specific things
The code in both branches usually does similar things but due to framework differences it is difficult to merge.
That's because they are inherently different entities. You can perform operations on a panel that don't make sense for a bridge and vice-versa.
Ideally it would be best to get rid of such constructs. For example by trying to make frameworks per interface type exposed by device (eg. video_sink) and not by device type (drm_bridge, drm_panel).
But then you lose all information about the real type of device.
A driver knows the type of its own device anyway. Why should it know the device type of the devices it is connected to? It is enough that it knows how to talk to the other device. Like in the real world: why should a desktop PC know whether it is connected to a DVI monitor or to a DVI/HDMI converter which is connected to a TV?
TVs are much more standardized. There are things like DDC/CI which can be used to control the device. Or there's additional circuitry or control paths to change things like brightness. The same isn't true of panels.
Or you have to create a common base class. And then you're still back to dealing with the two specific cases in many places, so the gain seems rather minimal.
For example, an RGB panel and an RGB/??? bridge have the same hardware input interface. Why should they have different representations in the kernel?
Because panels have different requirements than bridges. Panels for example require the backlight to be adjustable, bridges don't.
I agree that panels and bridges are different, but not from the video source's point of view. A bridge/encoder just streams video data to the next entity in the chain, according to the given configuration. It does not matter if the next one is a panel or another bridge. The source bridge doesn't care if the next entity has a backlight or not.
I don't know if we need a different representation for bridges and panels. Thinking about backlight, the driver can just register the backlight device if it needs one. There's no need to differentiate bridges and panels for that. But maybe there are other reasons that warrant different representations.
However, my current feeling is that there's no need for different representations.
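Tomi's point that one representation might suffice can be sketched as follows (illustrative C with hypothetical names, not actual DRM code): every element in the chain looks the same from upstream, enabling simply propagates down the chain, and whether the last element owns a backlight is its private business.

```c
#include <stddef.h>

/*
 * Hypothetical sketch: a single chain_elem type for every entity in
 * the pipeline. The source enables the head; each element enables the
 * next. Upstream code never distinguishes bridges from panels.
 */
struct chain_elem {
	const char *name;
	struct chain_elem *next;	/* NULL terminates the chain */
	int enabled;
	int has_backlight;		/* private detail of this element */
};

/* Enable every element downstream; returns how many were enabled. */
static int chain_enable(struct chain_elem *e)
{
	int count = 0;

	for (; e; e = e->next) {
		e->enabled = 1;
		/*
		 * A panel-like element would switch its backlight on
		 * here; elements upstream neither know nor care.
		 */
		count++;
	}
	return count;
}
```

A three-element chain (encoder -> bridge -> panel) would be enabled by a single `chain_enable()` call on its head.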
Tomi
Hi Thierry,
On Tuesday 23 September 2014 12:10:33 Thierry Reding wrote:
On Tue, Sep 23, 2014 at 11:43:47AM +0200, Andrzej Hajda wrote:
On 09/23/2014 10:35 AM, Thierry Reding wrote:
[...]
But I agree that it would be nice to unify bridges and encoders more. It should be possible to make encoder always a bridge (or perhaps even replace encoders with bridges altogether). Then once you're out of the DRM device everything would be a bridge until you get to a panel.
I agree that some sort of unification of bridges, (slave) encoders is a good thing, this way we stay only with bridges and panels as receivers. But we will still have to deal with code like:

    if (on the other end of the link is panel)
            do panel framework specific things
    else
            do bridge framework specific things
The code in both branches usually does similar things but due to framework differences it is difficult to merge.
That's because they are inherently different entities. You can perform operations on a panel that don't make sense for a bridge and vice-versa.
But a subset of the operations are common, so it's annoying to have to explicitly deal with both objects in code that only cares about the common subset of operations.
Ideally it would be best to get rid of such constructs. For example by trying to make frameworks per interface type exposed by device (eg. video_sink) and not by device type (drm_bridge, drm_panel).
But then you lose all information about the real type of device. Or you have to create a common base class. And then you're still back to dealing with the two specific cases in many places, so the gain seems rather minimal.
We would need to experiment with the concept, but a single abstract class works surprisingly well in V4L. Devices as diverse as camera sensors, HDMI receivers or video scalers expose a single API, and drivers for the logical capture devices (roughly equivalent to the DRM display controller drivers) turned out not to need much knowledge about what the connected devices really are in order to use them. This was implemented with a superset of operations, with many of them being optional to implement for component drivers. Of course knowing the exact type of a component can still be sometimes useful, but that would be pretty easy to implement through a query operation exposed by the components.
We're digressing here, but my point is that the encoder/bridge merge that we seem to agree about should, in my opinion, be followed by a merge with panels. I am, however, more interested at this point by the encoder/bridge merge, as having two separate frameworks there is the biggest pain point for my use cases.
On 23/09/14 09:04, Thierry Reding wrote:
I certainly agree that it's useful to have standard ways to describe at least various aspects. For example I think it would be useful to add standard properties for a bridge's connections, such as "bridge" or "panel" to allow bridge chaining and attaching them to panels.
I don't see a need for such properties. Do you have examples where they would be needed?
The driver for the respective device does know if it's a bridge or a panel, so that information is there as soon as the driver has loaded.
But I think that should be the end of it. Mandating anything other than that will just complicate things and limit what people can do in the binding.
One of the disadvantages of the video graph bindings is that they are overly vague in that they don't carry information about what type a device is. Bindings then have to require additional meta-data, at which point it's become far easier to describe things with a custom property that already provides context.
I don't see why the graphs and such metadata are connected in any way. They are separate issues. If we need such metadata, it needs to be added in any case. That is not related to the graphs.
Yes, there's always one active input and one output for this bridge. What the video graphs would bring is the possibility to have multiple inputs and outputs, of which a single one could be active at a time. The different inputs and outputs could even have different settings required (say, the first output requires this bit set, but when using the second output that bit must be cleared).
As discussed elsewhere this should be handled at a different level then. DT should describe the hardware and this particular bridge device simply doesn't have a means to connect more than a single input or more than a single output.
Well, I can't say about this particular bridge, but afaik you can connect a parallel RGB signal to multiple panels just like that, without any muxing.
If a mux is needed, I agree that there should be a device for that mux. But we still need a way to have multiple different "configuration sets" for the bridge, as I explained in the earlier mail.
Tomi
On Tue, Sep 23, 2014 at 11:54:27AM +0300, Tomi Valkeinen wrote:
On 23/09/14 09:04, Thierry Reding wrote:
I certainly agree that it's useful to have standard ways to describe at least various aspects. For example I think it would be useful to add standard properties for a bridge's connections, such as "bridge" or "panel" to allow bridge chaining and attaching them to panels.
I don't see a need for such properties. Do you have examples where they would be needed?
The driver for the respective device does know if it's a bridge or a panel, so that information is there as soon as the driver has loaded.
These are used for chaining, not identifying the device. For example:

	bridge0: ... {
		...
		bridge = <&bridge1>;
		...
	};

	bridge1: ... {
		...
		panel = <&panel>;
		...
	};

	panel: ... {
		...
	};
But I think that should be the end of it. Mandating anything other than that will just complicate things and limit what people can do in the binding.
One of the disadvantages of the video graph bindings is that they are overly vague in that they don't carry information about what type a device is. Bindings then have to require additional meta-data, at which point it's become far easier to describe things with a custom property that already provides context.
I don't see why the graphs and such metadata are connected in any way. They are separate issues. If we need such metadata, it needs to be added in any case. That is not related to the graphs.
My point is that if you use plain phandles you usually have the meta-data already. Referring to the above example, bridge0 knows that it should look for a bridge with phandle &bridge1, whereas bridge1 knows that the device it is connected to is a panel.
Yes, there's always one active input and one output for this bridge. What the video graphs would bring is the possibility to have multiple inputs and outputs, of which a single one could be active at a time. The different inputs and outputs could even have different settings required (say, the first output requires this bit set, but when using the second output that bit must be cleared).
As discussed elsewhere this should be handled at a different level then. DT should describe the hardware and this particular bridge device simply doesn't have a means to connect more than a single input or more than a single output.
Well, I can't say about this particular bridge, but afaik you can connect a parallel RGB signal to multiple panels just like that, without any muxing.
Right, but in that case you're not reconfiguring the signal in any way for each of the panels. You send one single signal to all of them. For all intents and purposes there is only one panel. Well, I guess you could have separate backlights for the panels. In that case though it seems better to represent it at least as a virtual mux or bridge, or perhaps a "mux panel".
Thierry
On 23/09/14 12:39, Thierry Reding wrote:
My point is that if you use plain phandles you usually have the meta-data already. Referring to the above example, bridge0 knows that it should look for a bridge with phandle &bridge1, whereas bridge1 knows that the device it is connected to is a panel.
The bridge should not care what kind of device is there on the other end. The bridge just has an output, going to another video component.
Well, I can't say about this particular bridge, but afaik you can connect a parallel RGB signal to multiple panels just like that, without any muxing.
Right, but in that case you're not reconfiguring the signal in any way for each of the panels. You send one single signal to all of them. For
Yes, that's one use case, cloning. But I was not talking about that.
all intents and purposes there is only one panel. Well, I guess you could have separate backlights for the panels. In that case though it seems better to represent it at least as a virtual mux or bridge, or perhaps a "mux panel".
I was talking about the case where you have two totally different devices, let's say panels, connected to the same output. One could take a 16-bit parallel RGB signal, the other 24-bit. Only one of them can be enabled at a time (from HW perspective both can be enabled at the same time, but then the other one shows garbage).
Tomi
On Tue, Sep 23, 2014 at 02:31:35PM +0300, Tomi Valkeinen wrote:
On 23/09/14 12:39, Thierry Reding wrote:
My point is that if you use plain phandles you usually have the meta-data already. Referring to the above example, bridge0 knows that it should look for a bridge with phandle &bridge1, whereas bridge1 knows that the device it is connected to is a panel.
The bridge should not care what kind of device is there on the other end. The bridge just has an output, going to another video component.
You'll need to know at some point that one of the devices is a panel, otherwise you can't treat it like a panel.
Well, I can't say about this particular bridge, but afaik you can connect a parallel RGB signal to multiple panels just like that, without any muxing.
Right, but in that case you're not reconfiguring the signal in any way for each of the panels. You send one single signal to all of them. For
Yes, that's one use case, cloning. But I was not talking about that.
all intents and purposes there is only one panel. Well, I guess you could have separate backlights for the panels. In that case though it seems better to represent it at least as a virtual mux or bridge, or perhaps a "mux panel".
I was talking about the case where you have two totally different devices, let's say panels, connected to the same output. One could take a 16-bit parallel RGB signal, the other 24-bit. Only one of them can be enabled at a time (from HW perspective both can be enabled at the same time, but then the other one shows garbage).
So we're essentially back to a mux, albeit maybe a virtual one.
Thierry
On 23/09/14 17:45, Thierry Reding wrote:
On Tue, Sep 23, 2014 at 02:31:35PM +0300, Tomi Valkeinen wrote:
On 23/09/14 12:39, Thierry Reding wrote:
My point is that if you use plain phandles you usually have the meta-data already. Referring to the above example, bridge0 knows that it should look for a bridge with phandle &bridge1, whereas bridge1 knows that the device it is connected to is a panel.
The bridge should not care what kind of device is there on the other end. The bridge just has an output, going to another video component.
You'll need to know at some point that one of the devices is a panel, otherwise you can't treat it like a panel.
The bridge doesn't need to treat it like a panel. Someone else might, though. But the panel driver knows it is a panel, and can thus register needed services, or mark itself as a panel.
Well, I can't say about this particular bridge, but afaik you can connect a parallel RGB signal to multiple panels just like that, without any muxing.
Right, but in that case you're not reconfiguring the signal in any way for each of the panels. You send one single signal to all of them. For
Yes, that's one use case, cloning. But I was not talking about that.
all intents and purposes there is only one panel. Well, I guess you could have separate backlights for the panels. In that case though it seems better to represent it at least as a virtual mux or bridge, or perhaps a "mux panel".
I was talking about the case where you have two totally different devices, let's say panels, connected to the same output. One could take a 16-bit parallel RGB signal, the other 24-bit. Only one of them can be enabled at a time (from HW perspective both can be enabled at the same time, but then the other one shows garbage).
So we're essentially back to a mux, albeit maybe a virtual one.
Yes. Are you suggesting to model virtual hardware in .dts? I got the impression that you wanted to model only the real hardware in .dts =).
But seriously speaking, I was thinking about this. I'd really like to have a generic video-mux node, that would still somehow allow us to have device specific configurations for the video sources and sinks (which the endpoints provide us), without endpoints.
The reason endpoints don't feel right in this case is that it makes sense that the mux has a single input and two outputs, but with endpoints we need two endpoints from the bridge (which is still ok), but we also need two endpoints on the mux's input side, which doesn't feel right.
I came up with the following. It's quite ugly, though. I hope someone has ideas how it could be done in a neater way:
bridge1234 {
	bridge1234-cfg1: config1 {
		high-voltage;
	};

	bridge1234-cfg2: config2 {
		low-voltage;
	};

	output = <&mux>;
};

mux: video-gpio-mux {
	gpio = <123>;

	outputs = <&panel1 &panel2>;
	input-configs = <&bridge1234-cfg1 &bridge1234-cfg2>;
};
panel1: panel-foo {
};
panel2: panel-foo {
};
So the bridge node has configuration sets. These might be compared to pinmux nodes. And the mux has a list of input-configs. When panel1 is to be enabled, "bridge1234-cfg1" config becomes active, and the bridge is given this configuration.
I have to say the endpoint system feels cleaner than the above, but perhaps it's possible to improve the method above somehow.
Tomi
Hi Tomi,
On Wednesday 24 September 2014 11:42:06 Tomi Valkeinen wrote:
On 23/09/14 17:45, Thierry Reding wrote:
On Tue, Sep 23, 2014 at 02:31:35PM +0300, Tomi Valkeinen wrote:
On 23/09/14 12:39, Thierry Reding wrote:
My point is that if you use plain phandles you usually have the meta-data already. Referring to the above example, bridge0 knows that it should look for a bridge with phandle &bridge1, whereas bridge1 knows that the device it is connected to is a panel.
The bridge should not care what kind of device is there on the other end. The bridge just has an output, going to another video component.
You'll need to know at some point that one of the devices is a panel, otherwise you can't treat it like a panel.
The bridge doesn't need to treat it like a panel. Someone else might, though. But the panel driver knows it is a panel, and can thus register needed services, or mark itself as a panel.
Well, I can't say about this particular bridge, but afaik you can connect a parallel RGB signal to multiple panels just like that, without any muxing.
Right, but in that case you're not reconfiguring the signal in any way for each of the panels. You send one single signal to all of them. For
Yes, that's one use case, cloning. But I was not talking about that.
all intents and purposes there is only one panel. Well, I guess you could have separate backlights for the panels. In that case though it seems better to represent it at least as a virtual mux or bridge, or perhaps a "mux panel".
I was talking about the case where you have two totally different devices, let's say panels, connected to the same output. One could take a 16-bit parallel RGB signal, the other 24-bit. Only one of them can be enabled at a time (from HW perspective both can be enabled at the same time, but then the other one shows garbage).
So we're essentially back to a mux, albeit maybe a virtual one.
Yes. Are you suggesting to model virtual hardware in .dts? I got the impression that you wanted to model only the real hardware in .dts =).
But seriously speaking, I was thinking about this. I'd really like to have a generic video-mux node, that would still somehow allow us to have device specific configurations for the video sources and sinks (which the endpoints provide us), without endpoints.
The reason endpoints don't feel right in this case is that it makes sense that the mux has a single input and two outputs, but with endpoints we need two endpoints from the bridge (which is still ok), but we also need two endpoints on the mux's input side, which doesn't feel right.
I came up with the following. It's quite ugly, though. I hope someone has ideas how it could be done in a neater way:
bridge1234 {
	bridge1234-cfg1: config1 {
		high-voltage;
	};

	bridge1234-cfg2: config2 {
		low-voltage;
	};

	output = <&mux>;
};

mux: video-gpio-mux {
	gpio = <123>;

	outputs = <&panel1 &panel2>;
	input-configs = <&bridge1234-cfg1 &bridge1234-cfg2>;
};
panel1: panel-foo {
};
panel2: panel-foo {
};
So the bridge node has configuration sets. These might be compared to pinmux nodes. And the mux has a list of input-configs. When panel1 is to be enabled, "bridge1234-cfg1" config becomes active, and the bridge is given this configuration.
I have to say the endpoint system feels cleaner than the above, but perhaps it's possible to improve the method above somehow.
Isn't it possible for the bridge to compute its configuration at runtime by querying properties of the downstream video entities ? That would solve the problem neatly.
On 06/10/14 17:40, Laurent Pinchart wrote:
But seriously speaking, I was thinking about this. I'd really like to have a generic video-mux node, that would still somehow allow us to have device specific configurations for the video sources and sinks (which the endpoints provide us), without endpoints.
The reason endpoints don't feel right in this case is that it makes sense that the mux has a single input and two outputs, but with endpoints we need two endpoints from the bridge (which is still ok), but we also need two endpoints on the mux's input side, which doesn't feel right.
I came up with the following. It's quite ugly, though. I hope someone has ideas how it could be done in a neater way:
bridge1234 {
	bridge1234-cfg1: config1 {
		high-voltage;
	};

	bridge1234-cfg2: config2 {
		low-voltage;
	};

	output = <&mux>;
};

mux: video-gpio-mux {
	gpio = <123>;

	outputs = <&panel1 &panel2>;
	input-configs = <&bridge1234-cfg1 &bridge1234-cfg2>;
};
panel1: panel-foo {
};
panel2: panel-foo {
};
So the bridge node has configuration sets. These might be compared to pinmux nodes. And the mux has a list of input-configs. When panel1 is to be enabled, "bridge1234-cfg1" config becomes active, and the bridge is given this configuration.
I have to say the endpoint system feels cleaner than the above, but perhaps it's possible to improve the method above somehow.
Isn't it possible for the bridge to compute its configuration at runtime by querying properties of the downstream video entities ? That would solve the problem neatly.
You mean the bridge driver would somehow take a peek into panel1 and panel2 nodes, looking for bridge specific properties? Sounds somewhat fragile to me... How would the bridge driver know a property is for the bridge?
Tomi
Hi Tomi,
On Tuesday 07 October 2014 10:06:10 Tomi Valkeinen wrote:
On 06/10/14 17:40, Laurent Pinchart wrote:
But seriously speaking, I was thinking about this. I'd really like to have a generic video-mux node, that would still somehow allow us to have device specific configurations for the video sources and sinks (which the endpoints provide us), without endpoints.
The reason endpoints don't feel right in this case is that it makes sense that the mux has a single input and two outputs, but with endpoints we need two endpoints from the bridge (which is still ok), but we also need two endpoints on the mux's input side, which doesn't feel right.
I came up with the following. It's quite ugly, though. I hope someone has ideas how it could be done in a neater way:
bridge1234 {
bridge1234-cfg1: config1 {
high-voltage;
};
bridge1234-cfg2: config2 {
low-voltage;
};
output = <&mux>;
};
mux: video-gpio-mux {
gpio = <123>;
	outputs = <&panel1 &panel2>;
	input-configs = <&bridge1234-cfg1 &bridge1234-cfg2>;
};
panel1: panel-foo {
};
panel2: panel-foo {
};
So the bridge node has configuration sets. These might be compared to pinmux nodes. And the mux has a list of input-configs. When panel1 is to be enabled, "bridge1234-cfg1" config becomes active, and the bridge is given this configuration.
I have to say the endpoint system feels cleaner than the above, but perhaps it's possible to improve the method above somehow.
Isn't it possible for the bridge to compute its configuration at runtime by querying properties of the downstream video entities ? That would solve the problem neatly.
You mean the bridge driver would somehow take a peek into panel1 and panel2 nodes, looking for bridge specific properties? Sounds somewhat fragile to me... How would the bridge driver know a property is for the bridge?
No, I mean the bridge driver should call the API provided by the panel objects to get information about the panels, and compute its configuration parameters from those. I'm not sure if that's possible though, it depends on whether the bridge parameters can be computed from information provided by the panel.
On 07/10/14 10:23, Laurent Pinchart wrote:
You mean the bridge driver would somehow take a peek into panel1 and panel2 nodes, looking for bridge specific properties? Sounds somewhat fragile to me... How would the bridge driver know a property is for the bridge?
No, I mean the bridge driver should call the API provided by the panel objects to get information about the panels, and compute its configuration parameters from those. I'm not sure if that's possible though, it depends on whether the bridge parameters can be computed from information provided by the panel.
Right. My example tried to show a case where they can't be queried. The board or the wiring causes signal degradation, which can be fixed by increasing the bridge output voltage slightly.
So it has nothing really to do with the panel, but the board.
I fully admit that it is a purely theoretical case, and I don't have any real use cases in mind right now.
Tomi
Hi Tomi,
On Tuesday 07 October 2014 11:25:56 Tomi Valkeinen wrote:
On 07/10/14 10:23, Laurent Pinchart wrote:
You mean the bridge driver would somehow take a peek into panel1 and panel2 nodes, looking for bridge specific properties? Sounds somewhat fragile to me... How would the bridge driver know a property is for the bridge?
No, I mean the bridge driver should call the API provided by the panel objects to get information about the panels, and compute its configuration parameters from those. I'm not sure if that's possible though, it depends on whether the bridge parameters can be computed from information provided by the panel.
Right. My example tried to show a case where they can't be queried. The board or the wiring causes signal degradation, which can be fixed by increasing the bridge output voltage slightly.
So it has nothing really to do with the panel, but the board.
I fully admit that it is a purely theoretical case, and I don't have any real use cases in mind right now.
Still, I agree with you that we might need to configure the bridge differently depending on the selected output with parameters that are not specific to the bridge.
There's two use cases I can think of. In the first one the panels are connected directly to the bridge. The bridge should have two links from its output port, one to each panel. If we use the OF graph bindings then the bridge output port node would have two endpoint nodes, and configuration parameters can be stored in the endpoint nodes.
In the second use case another element would be present between the bridge and the two panels. In that case there would be a single link between the bridge and that other element. If the bridge needs to be configured differently depending on the selected panel using parameters that can't be queried from the panels, then we'll have a problem. I'm not sure whether this use case has practical applications as board-related parameters seem to me that they pertain to physical links.
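For the first use case, a rough sketch of how the per-link parameters could sit in endpoint nodes of the OF graph bindings (the `output-voltage` property is purely hypothetical, used only to illustrate where board-specific configuration would live):

```dts
bridge {
	port {
		/* One endpoint per downstream panel; link-specific
		 * parameters go in the endpoint, not the device node. */
		bridge_out0: endpoint@0 {
			reg = <0>;
			remote-endpoint = <&panel1_in>;
			output-voltage = <1200000>;	/* hypothetical */
		};

		bridge_out1: endpoint@1 {
			reg = <1>;
			remote-endpoint = <&panel2_in>;
			output-voltage = <1250000>;	/* hypothetical */
		};
	};
};
```

The bridge driver would then apply the parameters from whichever endpoint corresponds to the currently active panel.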
On Wed, Sep 17, 2014 at 02:52:42PM +0300, Tomi Valkeinen wrote:
On 27/08/14 17:39, Ajay Kumar wrote:
Add documentation for DT properties supported by ps8622/ps8625 eDP-LVDS converter.
Signed-off-by: Ajay Kumar ajaykumar.rs@samsung.com
 .../devicetree/bindings/video/bridge/ps8622.txt | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/video/bridge/ps8622.txt
diff --git a/Documentation/devicetree/bindings/video/bridge/ps8622.txt b/Documentation/devicetree/bindings/video/bridge/ps8622.txt
new file mode 100644
index 0000000..0ec8172
--- /dev/null
+++ b/Documentation/devicetree/bindings/video/bridge/ps8622.txt
@@ -0,0 +1,20 @@
+ps8622-bridge bindings
+Required properties:
- compatible: "parade,ps8622" or "parade,ps8625"
- reg: first i2c address of the bridge
- sleep-gpios: OF device-tree gpio specification for PD_ pin.
- reset-gpios: OF device-tree gpio specification for RST_ pin.
+Optional properties:
- lane-count: number of DP lanes to use
- use-external-pwm: backlight will be controlled by an external PWM
What does this mean? That the backlight support from ps8625 is not used? If so, maybe "disable-pwm" or something?
+Example:
lvds-bridge@48 {
compatible = "parade,ps8622";
reg = <0x48>;
sleep-gpios = <&gpc3 6 1 0 0>;
reset-gpios = <&gpc3 1 1 0 0>;
lane-count = <1>;
};
I wish all new display component bindings would use the video ports/endpoints to describe the connections. It will be very difficult to improve the display driver model later if we're missing such critical pieces from the DT bindings.
I disagree. Why would we want to burden all devices with a bloated binding and drivers with parsing a complex graph when it's not even known that it will be necessary? Evidently this device works fine using the current binding. Just because there are bindings to describe ports in a generic way doesn't mean it has to be applied everywhere. After all the concept of ports/endpoints applies to non-video devices too, yet we don't require the bindings for those devices to add ports or endpoints nodes.
Also it won't be very difficult to extend the binding in a backwards compatible way if that becomes necessary.
One thing that I'd like to see in this binding, though, is how to hook up the bridge to a panel. However I'm still catching up on mail after vacation, so perhaps this has already been discussed further down the thread.
Thierry
On 22/09/14 10:54, Thierry Reding wrote:
I wish all new display component bindings would use the video ports/endpoints to describe the connections. It will be very difficult to improve the display driver model later if we're missing such critical pieces from the DT bindings.
I disagree. Why would we want to burden all devices with a bloated
I agree that the .dts becomes more bloated with video graph.
binding and drivers with parsing a complex graph when it's not even
Hopefully not all drivers will need to do the parsing themselves. If it's common stuff, I don't see why we can't have helpers to do the work.
known that it will be necessary? Evidently this device works fine using the current binding. Just because there are bindings to describe
Well, I can write almost any kind of bindings, and then evidently my device would work. For me, on my board.
ports in a generic way doesn't mean it has to be applied everywhere. After all the concept of ports/endpoints applies to non-video devices too, yet we don't require the bindings for those devices to add ports or endpoints nodes.
I guess non-video devices haven't had need for those. I have had lots of boards with video setups that cannot be represented with simple phandles. I'm not sure if I have just been unlucky or what, but my understanding is that other people have encountered such boards also. Usually the problems encountered there have been circumvented with some hacky video driver for that specific board, or maybe a static configuration handled by the boot loader.
Also it won't be very difficult to extend the binding in a backwards compatible way if that becomes necessary.
I don't know about that. More or less all the cases I've encountered where a binding has not been good from the start have been all but "not very difficult".
Do we have a standard way of representing the video pipeline with simple phandles? Or does everyone just do their own version? If there's no standard way, it sounds it'll be a mess to support in the future.
If there's much objection to the bloatiness of video graphs, maybe we could come up with a simpler alternative (like the phandles here), and "standardize" it. Then, if a common display framework or some such ever materializes, we could say that if your driver uses CDF, you need to support both these video graph bindings and these simpler bindings to be compatible.
However, I do have a slight recollection that alternative simplifications to the video graph were discussed at some point, and, while I may remember wrong, I think it was not accepted. At least I had one method to simplify the ports/endpoints, but that was removed from the patches I had.
Tomi
On Mon, Sep 22, 2014 at 05:04:54PM +0300, Tomi Valkeinen wrote:
On 22/09/14 10:54, Thierry Reding wrote:
I wish all new display component bindings would use the video ports/endpoints to describe the connections. It will be very difficult to improve the display driver model later if we're missing such critical pieces from the DT bindings.
I disagree. Why would we want to burden all devices with a bloated
I agree that the .dts becomes more bloated with video graph.
binding and drivers with parsing a complex graph when it's not even
Hopefully not all drivers will need to do the parsing themselves. If it's common stuff, I don't see why we can't have helpers to do the work.
known that it will be necessary? Evidently this device works fine using the current binding. Just because there are bindings to describe
Well, I can write almost any kind of bindings, and then evidently my device would work. For me, on my board.
Well, that's the whole problem with DT. For many devices we only have a single setup to test against. And even when we have several they often are derived from each other. But the alternative would be to defer (possibly indefinitely) merging support for a device until a second, wildly different setup shows up. That's completely unreasonable and we need to start somewhere.
The same issue exists whether you use the video graph or not. But like I said elsewhere, I think the key to making this work is by keeping the DT simple and making sure we can properly use the devices at the OS level so that we only need to make sure we have the right connections in the DT.
ports in a generic way doesn't mean it has to be applied everywhere. After all the concept of ports/endpoints applies to non-video devices too, yet we don't require the bindings for those devices to add ports or endpoints nodes.
I guess non-video devices haven't had need for those. I have had lots of boards with video setups that cannot be represented with simple phandles. I'm not sure if I have just been unlucky or what, but my understanding is that other people have encountered such boards also. Usually the problems encountered there have been circumvented with some hacky video driver for that specific board, or maybe a static configuration handled by the boot loader.
I have yet to encounter such a setup. Can you point me at a DTS for one such setup? I do remember a couple of hypothetical cases being discussed at one time or another, but I haven't seen any actual DTS content where this was needed.
Also it won't be very difficult to extend the binding in a backwards compatible way if that becomes necessary.
I don't know about that. More or less all the cases I've encountered where a binding has not been good from the start have been all but "not very difficult".
Do we have a standard way of representing the video pipeline with simple phandles? Or does everyone just do their own version? If there's no standard way, it sounds it'll be a mess to support in the future.
It doesn't matter all that much whether the representation is standard. phandles should simply point to the next element in the pipeline and the OS abstractions should be good enough to handle the details about how to chain the elements.
While it's entirely possible to represent that using a generic graph, in most cases (at least the trivial ones) I think that's completely redundant. If you have a pipeline of two (single input/output) elements where video flows from the first to the second, then all you need is a phandle from the first element to the second element. Everything else can easily be derived. The input for the second device will be the first implicitly. No need to explicitly describe that in DT. Doing it explicitly would be redundant.
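A sketch of the single-phandle scheme described above might look like this (the node layout and the `bridge`/`panel` property names are illustrative, not an existing binding):

```dts
encoder {
	/* Each element only points downstream; the bridge's input
	 * is implied by being the target of the encoder's phandle. */
	bridge = <&lvds_bridge>;
};

lvds_bridge: bridge@48 {
	reg = <0x48>;
	panel = <&lcd_panel>;
};

lcd_panel: panel {
	compatible = "simple-panel";
};
```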
Thierry
On 23/09/14 09:21, Thierry Reding wrote:
Well, I can write almost any kind of bindings, and then evidently my device would work. For me, on my board.
Well, that's the whole problem with DT. For many devices we only have a single setup to test against. And even when we have several they often are derived from each other. But the alternative would be to defer (possibly indefinitely) merging support for a device until a second, wildly different setup shows up. That's completely unreasonable and we need to start somewhere.
Yes, but in this case we know of existing boards that have complex setups. It's not theoretical.
I'm not saying we should stop everything until we have a 100% solution for the rare complex cases. But we should keep them in mind and, when possible, solve problems in a way that will work for the complex cases also.
I guess non-video devices haven't had need for those. I have had lots of boards with video setups that cannot be represented with simple phandles. I'm not sure if I have just been unlucky or what, but my understanding is that other people have encountered such boards also. Usually the problems encountered there have been circumvented with some hacky video driver for that specific board, or maybe a static configuration handled by the boot loader.
I have yet to encounter such a setup. Can you point me at a DTS for one such setup? I do remember a couple of hypothetical cases being discussed at one time or another, but I haven't seen any actual DTS content where this was needed.
No, I can't point to them as they are not in the mainline (at least the ones I've been working on), for obvious reasons.
With a quick glance, I have the following devices in my cabinet that have more complex setups: OMAP 4430 SDP, BeagleBoneBlack + LCD, AM43xx EVM. Many Nokia devices used to have such setups, usually so that the LCD and tv-out were connected to the same video source.
Do we have a standard way of representing the video pipeline with simple phandles? Or does everyone just do their own version? If there's no standard way, it sounds it'll be a mess to support in the future.
It doesn't matter all that much whether the representation is standard.
Again, I disagree.
phandles should simply point to the next element in the pipeline and the OS abstractions should be good enough to handle the details about how to chain the elements.
I, on the other hand, would rather see the links the other way around. Panel having a link to the video source, etc.
The video graphs have two-way links, which of course is the safest options, but also more verbose and redundant.
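For comparison, a single bridge-to-panel link expressed with the OF graph bindings, where each side carries a `remote-endpoint` back-reference to the other; this mutual referencing is the redundancy being discussed:

```dts
bridge {
	port {
		bridge_out: endpoint {
			remote-endpoint = <&panel_in>;
		};
	};
};

panel {
	port {
		panel_in: endpoint {
			remote-endpoint = <&bridge_out>;
		};
	};
};
```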
When this was discussed earlier, it was unclear which way the links should be. It's true that links in only one direction are strictly needed, but the question raised was that if the drivers end up always traversing the links in the opposite direction, the performance penalty may be somewhat big (if I recall right).
Tomi
On Tue, Sep 23, 2014 at 12:30:20PM +0300, Tomi Valkeinen wrote:
On 23/09/14 09:21, Thierry Reding wrote:
Well, I can write almost any kind of bindings, and then evidently my device would work. For me, on my board.
Well, that's the whole problem with DT. For many devices we only have a single setup to test against. And even when we have several they often are derived from each other. But the alternative would be to defer (possibly indefinitely) merging support for a device until a second, wildly different setup shows up. That's completely unreasonable and we need to start somewhere.
Yes, but in this case we know of existing boards that have complex setups. It's not theoretical.
Complex setups involving /this particular/ bridge and binding are theoretical at this point, however.
phandles should simply point to the next element in the pipeline and the OS abstractions should be good enough to handle the details about how to chain the elements.
I, on the other hand, would rather see the links the other way around. Panel having a link to the video source, etc.
Why? It seems much more natural to describe this using the natural flow of data. The device furthest away from the CPU should be the target of the phandle. That way the DT can be used to trace the flow of data down the pipeline.
The video graphs have two-way links, which of course is the safest options, but also more verbose and redundant.
Right. If we take that line of thinking to the extreme we end up listing individual registers in DT so that we can safely assume that we can control all possible aspects of the device.
Like I said, this seems to be the latest response to DT ABI stability requirement. Add as much data to a DT node as possible so that it has higher chances of being complete. The result is often overly complex DT content that doesn't add any real value.
When this was discussed earlier, it was unclear which way the links should be. It's true that links in only one direction are strictly needed, but the question raised was that if the drivers end up always traversing the links in the opposite direction, the performance penalty may be somewhat big (if I recall right).
I doubt that graphs will become so complex that walking it would become a performance bottleneck. In fact if we're constantly walking the graph we're already doing something wrong. It should only be necessary when the pipeline configuration changes, which should hopefully not be very often (i.e. not several times per second).
Thierry
On Tuesday 23 September 2014 11:53:15 Thierry Reding wrote:
On Tue, Sep 23, 2014 at 12:30:20PM +0300, Tomi Valkeinen wrote:
On 23/09/14 09:21, Thierry Reding wrote:
Well, I can write almost any kind of bindings, and then evidently my device would work. For me, on my board.
Well, that's the whole problem with DT. For many devices we only have a single setup to test against. And even when we have several they often are derived from each other. But the alternative would be to defer (possibly indefinitely) merging support for a device until a second, wildly different setup shows up. That's completely unreasonable and we need to start somewhere.
Yes, but in this case we know of existing boards that have complex setups. It's not theoretical.
Complex setups involving /this particular/ bridge and binding are theoretical at this point, however.
phandles should simply point to the next element in the pipeline and the OS abstractions should be good enough to handle the details about how to chain the elements.
I, on the other hand, would rather see the links the other way around. Panel having a link to the video source, etc.
Why? It seems much more natural to describe this using the natural flow of data. The device furthest away from the CPU should be the target of the phandle. That way the DT can be used to trace the flow of data down the pipeline.
The video graphs have two-way links, which of course is the safest options, but also more verbose and redundant.
Right. If we take that line of thinking to the extreme we end up listing individual registers in DT so that we can safely assume that we can control all possible aspects of the device.
And the other extreme would be to have a single DT node for the logical video device with all information pertaining to it stored in C code. Extremes are never good, we need to find a reasonable middle-ground here. I believe OF graph fulfills that requirement, it might be slightly more verbose than a single phandle, but it's also much more versatile.
Like I said, this seems to be the latest response to DT ABI stability requirement. Add as much data to a DT node as possible so that it has higher chances of being complete. The result is often overly complex DT content that doesn't add any real value.
When this was discussed earlier, it was unclear which way the links should be. It's true that links in only one direction are strictly needed, but the question raised was that if the drivers end up always traversing the links in the opposite direction, the performance penalty may be somewhat big (if I recall right).
I doubt that graphs will become so complex that walking it would become a performance bottleneck. In fact if we're constantly walking the graph we're already doing something wrong. It should only be necessary when the pipeline configuration changes, which should hopefully not be very often (i.e. not several times per second).
On Tue, Sep 23, 2014 at 02:12:52PM +0300, Laurent Pinchart wrote:
On Tuesday 23 September 2014 11:53:15 Thierry Reding wrote:
On Tue, Sep 23, 2014 at 12:30:20PM +0300, Tomi Valkeinen wrote:
On 23/09/14 09:21, Thierry Reding wrote:
Well, I can write almost any kind of bindings, and then evidently my device would work. For me, on my board.
Well, that's the whole problem with DT. For many devices we only have a single setup to test against. And even when we have several they often are derived from each other. But the alternative would be to defer (possibly indefinitely) merging support for a device until a second, wildly different setup shows up. That's completely unreasonable and we need to start somewhere.
Yes, but in this case we know of existing boards that have complex setups. It's not theoretical.
Complex setups involving /this particular/ bridge and binding are theoretical at this point, however.
phandles should simply point to the next element in the pipeline and the OS abstractions should be good enough to handle the details about how to chain the elements.
I, on the other hand, would rather see the links the other way around. Panel having a link to the video source, etc.
Why? It seems much more natural to describe this using the natural flow of data. The device furthest away from the CPU should be the target of the phandle. That way the DT can be used to trace the flow of data down the pipeline.
The video graphs have two-way links, which of course is the safest option, but also more verbose and redundant.
Right. If we take that line of thinking to the extreme we end up listing individual registers in DT so that we can safely assume that we can control all possible aspects of the device.
And the other extreme would be to have a single DT node for the logical video device with all information pertaining to it stored in C code. Extremes are never good, we need to find a reasonable middle-ground here. I believe OF graph fulfills that requirement, it might be slightly more verbose than a single phandle, but it's also much more versatile.
Oh well, yet another one of these things where we'll never reach an agreement I guess.
Thierry
On 23/09/14 12:53, Thierry Reding wrote:
Yes, but in this case we know of existing boards that have complex setups. It's not theoretical.
Complex setups involving /this particular/ bridge and binding are theoretical at this point, however.
Right, but this discussion, at least from my part, has not so much been about this particular bridge, but bridges in general.
So I don't have any requirements for this bridge driver/bindings to support complex use cases at the time being.
But I do want us to have an option to use the bridge in the future in such complex case. And yes, we can't guarantee that we'd hit the bindings right whatever we do, but we should think about it and at least try.
phandles should simply point to the next element in the pipeline and the OS abstractions should be good enough to handle the details about how to chain the elements.
I, on the other hand, would rather see the links the other way around. Panel having a link to the video source, etc.
Why? It seems much more natural to describe this using the natural flow of data. The device furthest away from the CPU should be the target of the phandle. That way the DT can be used to trace the flow of data down the pipeline.
Because I see video components "using" their video sources. A panel uses an encoder to produce video data for the panel. An encoder uses its video source to produce video data. A bit like a device uses a gpio, or a pwm.
Especially with more complex encoders/panels, having the panel/encoder in control of its video source is often the easiest (only?) way to manage the job. This has worked well on OMAP, whereas the earlier model where the video source controlled the video sink was full of problems. Not that that exactly proves anything, but that's my experience, and I didn't find any easy solutions when the video source was in control.
So while the video data flows from SoC to the panel, the control goes the other way. Whether the DT links should model the video data or control, I don't know. Both feel natural to me.
But if a panel driver controls its video source, it makes sense for the panel driver to get its video source in its probe, and that happens easiest if the panel has a link to the video source.
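The sink-to-source model described here might look roughly as follows. The `video-source` property name and the `vendor,xxx-to-yyy` compatible are hypothetical, used only to illustrate the direction of the links.

```dts
/* Sketch only: upstream phandles, each sink pointing at its source. */
dc: display-controller {
	/* origin of the video data; links point back here */
};

encoder: encoder {
	compatible = "vendor,xxx-to-yyy";	/* made-up compatible */
	video-source = <&dc>;			/* hypothetical property */
};

panel {
	compatible = "simple-panel";
	video-source = <&encoder>;		/* looked up at probe time */
};
```

In this model each driver resolves its own source at probe time, matching the "sink controls its source" control flow described above.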
Tomi
On Tue, Sep 23, 2014 at 03:00:31PM +0300, Tomi Valkeinen wrote:
On 23/09/14 12:53, Thierry Reding wrote:
Yes, but in this case we know of existing boards that have complex setups. It's not theoretical.
Complex setups involving /this particular/ bridge and binding are theoretical at this point, however.
Right, but this discussion, at least from my part, has not so much been about this particular bridge, but bridges in general.
So I don't have any requirements for this bridge driver/bindings to support complex use cases at the time being.
But I do want us to have an option to use the bridge in the future in such complex case. And yes, we can't guarantee that we'd hit the bindings right whatever we do, but we should think about it and at least try.
phandles should simply point to the next element in the pipeline and the OS abstractions should be good enough to handle the details about how to chain the elements.
I, on the other hand, would rather see the links the other way around. Panel having a link to the video source, etc.
Why? It seems much more natural to describe this using the natural flow of data. The device furthest away from the CPU should be the target of the phandle. That way the DT can be used to trace the flow of data down the pipeline.
Because I see video components "using" their video sources. A panel uses an encoder to produce video data for the panel. An encoder uses its video source to produce video data. A bit like a device uses a gpio, or a pwm.
Especially with more complex encoders/panels, having the panel/encoder in control of its video source is often the easiest (only?) way to manage the job. This has worked well on OMAP, whereas the earlier model where the video source controlled the video sink was full of problems. Not that that exactly proves anything, but that's my experience, and I didn't find any easy solutions when the video source was in control.
So while the video data flows from SoC to the panel, the control goes the other way. Whether the DT links should model the video data or control, I don't know. Both feel natural to me.
But if a panel driver controls its video source, it makes sense for the panel driver to get its video source in its probe, and that happens easiest if the panel has a link to the video source.
That's an orthogonal problem. You speak about the link in software here, whereas the phandles only represent the link in the description of hardware. Now DT describes hardware from the perspective of the CPU, so it's sort of like a cave that you're trying to explore. You start at the top with the address bus, from where a couple of tunnels lead to the various peripherals on the address bus. A display controller might have another tunnel to a panel, etc.
If you go the other way around, how do you detect how things connect? Where do you get the information about the panel so you can trace back to the origin?
Thierry
On 23/09/14 17:58, Thierry Reding wrote:
But if a panel driver controls its video source, it makes sense for the panel driver to get its video source in its probe, and that happens easiest if the panel has a link to the video source.
That's an orthogonal problem. You speak about the link in software here, whereas the phandles only represent the link in the description of hardware. Now DT describes hardware from the perspective of the CPU, so it's sort of like a cave that you're trying to explore. You start at the top with the address bus, from where a couple of tunnels lead to the various peripherals on the address bus. A display controller might have another tunnel to a panel, etc.
We don't do that for GPIOs, regulators, etc. either. And for video data there are no address buses. Yes, for DSI and similar we have a control bus, but that is modeled differently than the video data path.
Now, I'm fine with starting from the CPU, going outwards. I agree that it's probably better to model it in the direction of the data stream, even if that would make the SW a bit more complex.
However, I do think it's not as clear as you make it sound, especially if we take cameras/sensors into account as Laurent explained elsewhere in this thread.
If you go the other way around, how do you detect how things connect? Where do you get the information about the panel so you can trace back to the origin?
When the panel driver probes, it registers itself as a panel and gets its video source. Similarly a bridge in between gets its video source, which often would be the SoC, i.e. the origin.
Tomi
On Wed, Sep 24, 2014 at 12:08:37PM +0300, Tomi Valkeinen wrote:
On 23/09/14 17:58, Thierry Reding wrote:
But if a panel driver controls its video source, it makes sense for the panel driver to get its video source in its probe, and that happens easiest if the panel has a link to the video source.
That's an orthogonal problem. You speak about the link in software here, whereas the phandles only represent the link in the description of hardware. Now DT describes hardware from the perspective of the CPU, so it's sort of like a cave that you're trying to explore. You start at the top with the address bus, from where a couple of tunnels lead to the various peripherals on the address bus. A display controller might have another tunnel to a panel, etc.
We don't do that for GPIOs, regulators, etc. either. And for video data there are no address buses. Yes, for DSI and similar we have a control bus, but that is modeled differently than the video data path.
GPIOs and regulators are just auxiliary devices that some other device needs to operate. For instance if you want to know how to operate the GPIO or regulator, the same still applies. You start from the CPU and look up the GPIO controller and then the index of the GPIO within that controller. For regulators you'd typically go and look for an I2C bus that has a PMIC, which will expose the regulator.
Now for a device such as a display panel, what you want to do is control the display panel. If part of how you control it involves toggling a GPIO or a regulator, then you get access to those via phandles. And then controlling the GPIO and regulator becomes the same as above. So it's a matter of compositing devices in that case.
For the video data you use the phandles to model the path that video data takes, with all the devices in that path chained together. So while both approaches use the same mechanism (phandle) to describe the relationships, the nature of the relationships is quite different.
Now, I'm fine with starting from the CPU, going outwards. I agree that it's probably better to model it in the direction of the data stream, even if that would make the SW a bit more complex.
However, I do think it's not as clear as you make it sound, especially if we take cameras/sensors into account as Laurent explained elsewhere in this thread.
How are cameras different? The CPU wants to capture video data from the camera, so it needs to go look for a video capture device, which in turn needs to involve a sensor.
It isn't so much about following the data stream itself, but rather the connections between devices that the CPU needs to involve in order to initiate the data flow.
If you go the other way around, how do you detect how things connect? Where do you get the information about the panel so you can trace back to the origin?
When the panel driver probes, it registers itself as a panel and gets its video source. Similarly a bridge in between gets its video source, which often would be the SoC, i.e. the origin.
That sounds backwards to me. The device tree serves the purpose of supplementing missing information that can't be probed if hardware is too stupid. I guess that's one of the primary reasons for structuring it the way we do, from the CPU point of view, because it allows the CPU to probe via the device tree.
Probing is always done downstream, so you'd start by looking at some type of bus interface and query it for what devices are present on the bus. Take for example PCI: the CPU only needs to know how to access the host bridge and will then probe devices behind each of the ports on that bridge. Some of those devices will be bridges, too, so it will continue to probe down the hierarchy.
Without DT you don't have a means to know that there was a panel before you've actually gone and probed your whole hierarchy and found a GPU with some sort of output that a panel can be connected to. I think it makes a lot of sense to describe things in the same way in DT.
Thierry
On 25/09/14 09:23, Thierry Reding wrote:
How are cameras different? The CPU wants to capture video data from the camera, so it needs to go look for a video capture device, which in turn needs to involve a sensor.
Let's say we have an XXX-to-YYY encoder. We use that encoder to convert the SoC's XXX output to YYY, which is then shown on a panel. So, in this case, the encoder's DT node will have a "panel" or "output" phandle, pointing to the panel.
We then use the exact same encoder in a setup in which we have a camera which outputs XXX, which the encoder then converts to YYY, which is then captured by the SoC. Here the "output" phandle would point to the SoC.
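The two setups could be sketched as below. The `vendor,xxx-to-yyy` compatible and the `output` property name are made up for illustration; the point is only that the same encoder binding ends up with its link pointing at a panel in one case and back at the SoC in the other.

```dts
/* Board A -- display: the XXX-to-YYY encoder drives a panel. */
encoder {
	compatible = "vendor,xxx-to-yyy";	/* made-up compatible */
	output = <&lcd>;			/* phandle to the panel */
};

/* Board B -- capture: the same encoder feeds the SoC's
 * video capture interface instead. */
encoder {
	compatible = "vendor,xxx-to-yyy";
	output = <&capture_interface>;		/* phandle to the SoC */
};
```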
If you go the other way around, how do you detect how things connect? Where do you get the information about the panel so you can trace back to the origin?
When the panel driver probes, it registers itself as a panel and gets its video source. Similarly a bridge in between gets its video source, which often would be the SoC, i.e. the origin.
That sounds backwards to me. The device tree serves the purpose of supplementing missing information that can't be probed if hardware is too stupid. I guess that's one of the primary reasons for structuring it the way we do, from the CPU point of view, because it allows the CPU to probe via the device tree.
Probing is always done downstream, so you'd start by looking at some type of bus interface and query it for what devices are present on the bus. Take for example PCI: the CPU only needs to know how to access the host bridge and will then probe devices behind each of the ports on that bridge. Some of those devices will be bridges, too, so it will continue to probe down the hierarchy.
Without DT you don't have a means to know that there was a panel before you've actually gone and probed your whole hierarchy and found a GPU with some sort of output that a panel can be connected to. I think it makes a lot of sense to describe things in the same way in DT.
Maybe I don't quite follow, but it sounds to me like you are mixing control and data. For control, all you say is true. The CPU probes the devices on control busses, either with the aid of HW or the aid of DT, going downstream.
But the data paths are a different matter. The CPU/SoC may not even be involved in the whole data path. You could have a sensor on the board directly connected to a panel. Both are controlled by the CPU, but the data path goes from the sensor to the panel (or vice versa). There's no way the data paths can be "CPU centric" in that case.
Also, a difference with the data paths compared to control paths is that they are not strictly needed for operation. An encoder can generate an output without enabling its input (test pattern or maybe blank screen, or maybe a screen with company logo). Or an encoder with two inputs might only get the second input when the user requests a very high res mode. So it is possible that the data paths are lazily initialized.
You do know that there is a panel right after the device is probed according to its control bus. It doesn't mean that the data paths are there yet. In some cases the user space needs to reconfigure the data paths before a panel has an input and can be used to show an image from the SoC's display subsystem.
The point here being that the data path bindings don't really relate to the probing part. You can probe no matter which way the data path bindings go, and no matter if there actually exists (yet) a probed device on the other end of a data path phandle.
While I think having video data connections in DT either way, downstream or upstream, would work, it has felt most natural for me to have the phandles from video sinks to video sources.
The reason for that is that I think the video sink has to be in control of its source. It's the sink that tells the source to start or stop or reconfigure. So I have had need to get the video source from a video sink, but I have never had the need to get the video sinks from video sources.
Tomi
Hi Tomi and Thierry,
On Monday 06 October 2014 14:34:00 Tomi Valkeinen wrote:
On 25/09/14 09:23, Thierry Reding wrote:
How are cameras different? The CPU wants to capture video data from the camera, so it needs to go look for a video capture device, which in turn needs to involve a sensor.
Let's say we have an XXX-to-YYY encoder. We use that encoder to convert the SoC's XXX output to YYY, which is then shown on a panel. So, in this case, the encoder's DT node will have a "panel" or "output" phandle, pointing to the panel.
We then use the exact same encoder in a setup in which we have a camera which outputs XXX, which the encoder then converts to YYY, which is then captured by the SoC. Here the "output" phandle would point to the SoC.
phandles are pretty simple and versatile, which is one of the reasons why they are widely used. The drawback is that they are used to model totally different concepts, which then get mixed in our brains.
The DT nodes that make a complete system are related in many different ways. DT has picked one of those relationships, namely the control parent-child relationship, made it special, and arranged the nodes in a tree structure based on those relationships. As Thierry mentioned, this makes sense given that DT addresses the lack of discoverability from a CPU point of view.
As many other relationships between nodes had to be represented in DT phandles got introduced. One of their use cases is to reference resources required by devices, such as GPIOs, clocks and regulators. In those cases the distinction between the resource provider and the resource user is clear. The provider and user roles are clearly identified in the relationship, with the user being the master and the provider being the slave.
After those first two classes of relationships (control parent/child and resource provider/user), a need to specify data connections in DT arose. Different models got adopted depending on the subsystems and/or devices, all based on phandles.
I believe this third use case differs from the first two in that it defines connections, not master/slave relationships. The connection doesn't model which entity controls or uses the other (if any), but how data flows between entities. There is no clear master or slave in that model; different control models can then be implemented in device drivers depending on the use cases, but those are very much implementation details from a DT point of view. The composite device model used for display drivers (and camera drivers for that matter) usually sets all devices on equal footing, and then picks a master (which can be one of the hardware devices, or a virtual logical device) depending on the requirements of the kernel and/or userspace subsystem(s).
I thus don't think there's any point in arguing which entity is the resource and which entity is the user in this discussion, as that should be unrelated to the DT bindings. If we need to select a single phandle direction from a hardware description point of view, the only direction that makes sense is one based on the data flow direction. Making phandles always point outwards or inwards from the CPU point of view doesn't make sense, especially when the CPU doesn't get involved as a data point in a media pipeline (think about a connector -> processing -> connector pipeline for instance, where data will be processed by hardware only without going through system memory at any point).
Now, we also have to keep in mind that the DT description, while it should model the hardware, also needs to be usable from a software point of view. A hardware model that would precisely describe the system in very convoluted ways wouldn't be very useful. We thus need to select a model that will ease software development, while only describing the hardware and without depending on a particular software implementation. That model should be as simple as possible, but doesn't necessarily need to be the simplest model possible if that would result in many implementation issues.
I think the OF graph model is a good candidate here. It is unarguably more complex than a single phandle, but it also makes different software implementations possible while still keeping the DT complexity low.
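For reference, the ps8622 bridge from the binding under review would look something like this when described with OF graph ports and endpoints: every connection is a pair of endpoints referencing each other through `remote-endpoint`, giving the two-way links discussed above. The `dc_out` and `panel_in` endpoint labels are assumed to exist elsewhere in the tree.

```dts
/* OF graph sketch: bidirectional links via remote-endpoint pairs. */
lvds_bridge: lvds-bridge@48 {
	compatible = "parade,ps8622";
	reg = <0x48>;

	ports {
		#address-cells = <1>;
		#size-cells = <0>;

		port@0 {		/* eDP input */
			reg = <0>;
			bridge_in: endpoint {
				remote-endpoint = <&dc_out>;
			};
		};

		port@1 {		/* LVDS output */
			reg = <1>;
			bridge_out: endpoint {
				remote-endpoint = <&panel_in>;
			};
		};
	};
};
```

Since each link can be followed from either end with a single phandle lookup, both a display master walking towards the panel and a capture master walking towards the sensor can locate their components cheaply.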
If you go the other way around, how do you detect how things connect? Where do you get the information about the panel so you can trace back to the origin?
When the panel driver probes, it registers itself as a panel and gets its video source. Similarly a bridge in between gets its video source, which often would be the SoC, i.e. the origin.
That sounds backwards to me. The device tree serves the purpose of supplementing missing information that can't be probed if hardware is too stupid. I guess that's one of the primary reasons for structuring it the way we do, from the CPU point of view, because it allows the CPU to probe via the device tree.
Probing is always done downstream, so you'd start by looking at some type of bus interface and query it for what devices are present on the bus. Take for example PCI: the CPU only needs to know how to access the host bridge and will then probe devices behind each of the ports on that bridge. Some of those devices will be bridges, too, so it will continue to probe down the hierarchy.
Without DT you don't have a means to know that there was a panel before you've actually gone and probed your whole hierarchy and found a GPU with some sort of output that a panel can be connected to. I think it makes a lot of sense to describe things in the same way in DT.
Maybe I don't quite follow, but it sounds to me like you are mixing control and data. For control, all you say is true. The CPU probes the devices on control busses, either with the aid of HW or the aid of DT, going downstream.
But the data paths are a different matter. The CPU/SoC may not even be involved in the whole data path. You could have a sensor on the board directly connected to a panel. Both are controlled by the CPU, but the data path goes from the sensor to the panel (or vice versa). There's no way the data paths can be "CPU centric" in that case.
Also, a difference with the data paths compared to control paths is that they are not strictly needed for operation. An encoder can generate an output without enabling its input (test pattern or maybe blank screen, or maybe a screen with company logo). Or an encoder with two inputs might only get the second input when the user requests a very high res mode. So it is possible that the data paths are lazily initialized.
You do know that there is a panel right after the device is probed according to its control bus. It doesn't mean that the data paths are there yet. In some cases the user space needs to reconfigure the data paths before a panel has an input and can be used to show an image from the SoC's display subsystem.
The point here being that the data path bindings don't really relate to the probing part. You can probe no matter which way the data path bindings go, and no matter if there actually exists (yet) a probed device on the other end of a data path phandle.
While I think having video data connections in DT either way, downstream or upstream, would work, it has felt most natural for me to have the phandles from video sinks to video sources.
The reason for that is that I think the video sink has to be in control of its source. It's the sink that tells the source to start or stop or reconfigure. So I have had need to get the video source from a video sink, but I have never had the need to get the video sinks from video sources.
We could decide to model all data connections as phandles that go in the data flow direction (source to sink), opposite to the data flow direction (sink to source), or in both directions. The problem with the sink-to-source direction is that it raises the complexity of implementations for display drivers, as the master driver that binds all the components together will have a hard time locating the components in DT if all the components point towards it. Modeling the connections in the source-to-sink direction only would create the exact same problem for video capture (camera) devices. That's why I believe that bidirectional connections would be a better choice.
On 09/23/2014 11:30 AM, Tomi Valkeinen wrote:
On 23/09/14 09:21, Thierry Reding wrote:
Well, I can write almost any kind of bindings, and then evidently my device would work. For me, on my board.
Well, that's the whole problem with DT. For many devices we only have a single setup to test against. And even when we have several they often are derived from each other. But the alternative would be to defer (possibly indefinitely) merging support for a device until a second, wildly different setup shows up. That's completely unreasonable and we need to start somewhere.
Yes, but in this case we know of existing boards that have complex setups. It's not theoretical.
I'm not saying we should stop everything until we have a 100% solution for the rare complex cases. But we should keep them in mind and, when possible, solve problems in a way that will work for the complex cases also.
I guess non-video devices haven't had need for those. I have had lots of boards with video setups that cannot be represented with simple phandles. I'm not sure if I have just been unlucky or what, but my understanding is that other people have encountered such boards also. Usually the problems encountered there have been circumvented with some hacky video driver for that specific board, or maybe a static configuration handled by the boot loader.
I have yet to encounter such a setup. Can you point me at a DTS for one such setup? I do remember a couple of hypothetical cases being discussed at one time or another, but I haven't seen any actual DTS content where this was needed.
No, I can't point to them as they are not in the mainline (at least the ones I've been working on), for obvious reasons.
With a quick glance, I have the following devices in my cabinet that have more complex setups: OMAP 4430 SDP, BeagleBoneBlack + LCD, AM43xx EVM. Many Nokia devices used to have such setups, usually so that the LCD and tv-out were connected to the same video source.
Do we have a standard way of representing the video pipeline with simple phandles? Or does everyone just do their own version? If there's no standard way, it sounds it'll be a mess to support in the future.
It doesn't matter all that much whether the representation is standard.
Again, I disagree.
phandles should simply point to the next element in the pipeline and the OS abstractions should be good enough to handle the details about how to chain the elements.
I, on the other hand, would rather see the links the other way around. Panel having a link to the video source, etc.
The video graphs have two-way links, which of course is the safest option, but also more verbose and redundant.
When this was discussed earlier, it was unclear which way the links should point. It's true that links in only one direction are strictly needed, but the question raised was that if the drivers end up always traversing the links in the opposite direction, the performance penalty may be somewhat big (if I recall correctly).
I do not see why performance would drop significantly. If the link is one-way it should probably work as below:
- the destination registers itself in some framework,
- the source looks for the destination in this framework using the phandle,
- the source starts to communicate with the destination - from now on a full two-way link can be established dynamically.
Where do you see a big performance penalty here?
Regards Andrzej
Tomi
On Tuesday 23 September 2014 12:02:45 Andrzej Hajda wrote:
On 09/23/2014 11:30 AM, Tomi Valkeinen wrote:
On 23/09/14 09:21, Thierry Reding wrote:
Well, I can write almost any kind of bindings, and then evidently my device would work. For me, on my board.
Well, that's the whole problem with DT. For many devices we only have a single setup to test against. And even when we have several they often are derived from each other. But the alternative would be to defer (possibly indefinitely) merging support for a device until a second, wildly different setup shows up. That's completely unreasonable and we need to start somewhere.
Yes, but in this case we know of existing boards that have complex setups. It's not theoretical.
I'm not saying we should stop everything until we have a 100% solution for the rare complex cases. But we should keep them in mind and, when possible, solve problems in a way that will work for the complex cases also.
I guess non-video devices haven't had need for those. I have had lots of boards with video setups that cannot be represented with simple phandles. I'm not sure if I have just been unlucky or what, but my understanding is that other people have encountered such boards also. Usually the problems encountered there have been circumvented with some hacky video driver for that specific board, or maybe a static configuration handled by the boot loader.
I have yet to encounter such a setup. Can you point me at a DTS for one such setup? I do remember a couple of hypothetical cases being discussed at one time or another, but I haven't seen any actual DTS content where this was needed.
No, I can't point to them as they are not in the mainline (at least the ones I've been working on), for obvious reasons.
With a quick glance, I have the following devices in my cabinet that have more complex setups: OMAP 4430 SDP, BeagleBoneBlack + LCD, AM43xx EVM. Many Nokia devices used to have such setups, usually so that the LCD and tv-out were connected to the same video source.
Do we have a standard way of representing the video pipeline with simple phandles? Or does everyone just do their own version? If there's no standard way, it sounds it'll be a mess to support in the future.
It doesn't matter all that much whether the representation is standard.
Again, I disagree.
phandles should simply point to the next element in the pipeline and the OS abstractions should be good enough to handle the details about how to chain the elements.
I, on the other hand, would rather see the links the other way around. Panel having a link to the video source, etc.
The video graphs have two-way links, which of course is the safest option, but also more verbose and redundant.
When this was discussed earlier, it was unclear which way the links should point. It's true that links in only one direction are strictly needed, but the question raised was that if the drivers end up always traversing the links in the opposite direction, the performance penalty may be somewhat big (if I recall correctly).
I do not see why performance would drop significantly. If the link is one-way it should probably work as below:
- the destination registers itself in some framework,
- the source looks for the destination in this framework using the phandle,
- the source starts to communicate with the destination - from now on a full two-way link can be established dynamically.
Where do you see a big performance penalty here?
The performance-related problems arise when you need to locate the remote device in the direction opposite to the phandle link direction. Traversing a link forward just involves a phandle lookup, but traversing it backwards isn't possible the same way.
On 09/23/2014 01:10 PM, Laurent Pinchart wrote:
On Tuesday 23 September 2014 12:02:45 Andrzej Hajda wrote:
On 09/23/2014 11:30 AM, Tomi Valkeinen wrote:
On 23/09/14 09:21, Thierry Reding wrote:
Well, I can write almost any kind of bindings, and then evidently my device would work. For me, on my board.
Well, that's the whole problem with DT. For many devices we only have a single setup to test against. And even when we have several they often are derived from each other. But the alternative would be to defer (possibly indefinitely) merging support for a device until a second, wildly different setup shows up. That's completely unreasonable and we need to start somewhere.
Yes, but in this case we know of existing boards that have complex setups. It's not theoretical.
I'm not saying we should stop everything until we have a 100% solution for the rare complex cases. But we should keep them in mind and, when possible, solve problems in a way that will work for the complex cases also.
I guess non-video devices haven't had need for those. I have had lots of boards with video setups that cannot be represented with simple phandles. I'm not sure if I have just been unlucky or what, but my understanding is that other people have encountered such boards also. Usually the problems encountered there have been circumvented with some hacky video driver for that specific board, or maybe a static configuration handled by the boot loader.
I have yet to encounter such a setup. Can you point me at a DTS for one such setup? I do remember a couple of hypothetical cases being discussed at one time or another, but I haven't seen any actual DTS content where this was needed.
[...]
The performance-related problems arise when you need to locate the remote device in the direction opposite to the phandle link direction. Traversing a link forward just involves a phandle lookup, but traversing it backwards isn't possible the same way.
But you do not need to traverse backwards. You just wait until the source starts to communicate with the destination; at that moment the destination can build the back-link dynamically.
Regards Andrzej
On Tuesday 23 September 2014 13:18:30 Andrzej Hajda wrote:
[...]
But you do not need to traverse backwards. You just wait until the source starts to communicate with the destination; at that moment the destination can build the back-link dynamically.
Your driver might not need it today for your use cases, but can you be certain that no driver on any OS would need to?
This becomes an issue even on Linux when considering video-related devices that can be part of either a capture pipeline or a display pipeline. If the link always goes in the data flow direction, then it will be easy to locate the downstream device (bridge or panel) from the display controller driver, but it would be much more difficult to locate the same device from a camera driver as all of a sudden the device would become an upstream device.
On 09/23/2014 01:23 PM, Laurent Pinchart wrote:
[...]
Your driver might not need it today for your use cases, but can you be certain that no driver on any OS would need to?
I have just shown how to create the back-link dynamically if we have only a forward link in DT, i.e. a trivial 'proof' that the direction is not so important. So I do not understand why you pose such a question.
This becomes an issue even on Linux when considering video-related devices that can be part of either a capture pipeline or a display pipeline. If the link always goes in the data flow direction, then it will be easy to locate the downstream device (bridge or panel) from the display controller driver, but it would be much more difficult to locate the same device from a camera driver as all of a sudden the device would become an upstream device.
Why?
If you have graph: sensor --> camera
Then the camera registers itself in some framework as a destination device, and the sensor looks in this framework for the device identified by the remote endpoint. Then the sensor tells the camera it is connected to it, and voila.
The framework here can be v4l2 specific, or it can be of_graph specific - whatever.
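A one-way version of the sensor --> camera link described here could look like this in DT. The property and node names below are hypothetical, for illustration only, not an existing binding:

```dts
sensor {
	/* forward link only, in the data-flow direction */
	video-destination = <&camera_receiver>;
};

camera_receiver: camera {
	/* no back-link in DT; built dynamically by the framework
	 * when the sensor attaches */
};
```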
Regards Andrzej
On Tuesday 23 September 2014 13:47:40 Andrzej Hajda wrote:
[...]
Why?
If you have graph: sensor --> camera
Then the camera registers itself in some framework as a destination device, and the sensor looks in this framework for the device identified by the remote endpoint. Then the sensor tells the camera it is connected to it, and voila.
Except that both kernelspace and userspace deal with cameras the other way around: the master device is the camera receiver, not the camera sensor. DRM is architected the same way, with the component that performs DMA operations being the master device.
On 09/23/2014 01:52 PM, Laurent Pinchart wrote:
[...]
Except that both kernelspace and userspace deal with cameras the other way around: the master device is the camera receiver, not the camera sensor. DRM is architected the same way, with the component that performs DMA operations being the master device.
But the link direction does not determine who should be the master device. It just determines who should perform the initial handshake.
Andrzej
On Tue, Sep 23, 2014 at 02:52:24PM +0300, Laurent Pinchart wrote:
On Tuesday 23 September 2014 13:47:40 Andrzej Hajda wrote:
On 09/23/2014 01:23 PM, Laurent Pinchart wrote:
[...]
Except that both kernelspace and userspace deal with cameras the other way around: the master device is the camera receiver, not the camera sensor. DRM is architected the same way, with the component that performs DMA operations being the master device.
I don't see what's wrong with having the camera reference the sensor by phandle instead. That's much more natural in my opinion.
Thierry
Hi Thierry,
On Tuesday 23 September 2014 16:49:38 Thierry Reding wrote:
On Tue, Sep 23, 2014 at 02:52:24PM +0300, Laurent Pinchart wrote:
On Tuesday 23 September 2014 13:47:40 Andrzej Hajda wrote:
On 09/23/2014 01:23 PM, Laurent Pinchart wrote:
[...]
I don't see what's wrong with having the camera reference the sensor by phandle instead. That's much more natural in my opinion.
The problem, as explained by Tomi in a more recent e-mail (let's thus discuss the issue there), is that making the phandles point outwards from the CPU point of view stops working when the same chip or IP core can be used in both a camera and a display pipeline (and we have real use cases for that), or when the CPU isn't involved at all in the pipeline.