Hi,
Hi, I have already proposed reusing the drm_panel infrastructure to implement bridges in my RFC [1], and I have implemented a DSI/LVDS bridge this way [2]. I guess this discussion is a result of my discussion with Inki in thread [1]. More comments below.
[1]: http://permalink.gmane.org/gmane.linux.kernel.samsung-soc/27044 [2]: http://permalink.gmane.org/gmane.linux.drivers.devicetree/61559
On 03/18/2014 06:37 PM, Daniel Vetter wrote:
On Tue, Mar 18, 2014 at 09:58:25PM +0900, Inki Dae wrote:
2014-03-18 21:47 GMT+09:00 Daniel Vetter daniel@ffwll.ch:
On Tue, Mar 18, 2014 at 1:42 PM, Inki Dae inki.dae@samsung.com wrote:
I think drm_bridge currently can't do what we want for embedded systems, as long as drm_bridge hangs off drm_encoder. See the hardware pipeline below: Display Controller-----Image Enhancement chip-----MIPI DSI-----MIPI-to-LVDS Bridge-----LCD Panel
In the above hardware pipeline, the display controller is controlled by the crtc, and the image enhancement chip receives the output of the display controller. So the existing drm_bridge would only be suitable for bridge devices between MIPI DSI and the LCD panel, but not for the image enhancement chip.
For such hardware, the drm_panel infrastructure seems more reasonable to me, and that is why I am trying to integrate drm_panel and drm_bridge into one framework with the same infrastructure as the existing drm_panel. The important thing is to decouple this integrated framework from drm_encoder so that a crtc device can use it as well.
Hm, what is this image enhancement chip? Is that some IP block on the SoC? Is it optional? Can it be attached to different crtcs?
In the case of Exynos, this chip is in the SoC and can only be attached to one crtc, the display controller. But I'm not sure whether other SoCs have a similar chip.
I think we have similar things on Intel hardware, but without details on what it does and how it works I can't really say how best to expose it to userspace or how best to handle it internally in the driver. -Daniel
Simply put, the image enhancement chip can enhance image data coming from the display controller, i.e. saturation enhancement, color reproduction, dithering, and so on. This chip receives the image data through hardware lines wired internally between the display controller and the chip.
To me this sounds like you simply need to expose all these capabilities to userspace as crtc properties, which addresses one part of the issue.
The other side is how you track this in the driver, and there you can do whatever you want - just add a pointer/structure to the exynos crtc structure for the display enhancement block.
The MIPI DSI block would then be treated as a drm_encoder, and all the later stages as drm_bridges, up to the very last one (the actual LVDS panel), which would be a simple drm_panel.
I don't really see what additional complexity you need here. Especially since this image enhancer is on your SoC (and, I guess, a Samsung IP block not shared with any other SoC manufacturer), you can easily keep the driver code for it in the exynos driver. So there is really no need for a generic interface here. -Daniel
But what you propose is complex. Blocks would be implemented as: 1. drm_encoder, 2. drm_bridge, 3. drm_panel, 4. yet another 'framework' for image enhancers, which sit after the crtc.
But these blocks are just 'video consumers', which can be implemented using the drm_panel framework, and 'video producers' (all except real panels), which can be implemented as drm_panel clients. Of course drm_panel would need to be renamed to something like drm_video_input and its ops extended.
Anyway, instead of four or more different frameworks we would have only one.
In general I think it would be better to model the device interfaces rather than whole devices. Btw, this approach would also allow modeling such monsters as the TC358710XBG hub [3].
[3]: http://www.toshiba-components.com/prpdf/5992e.pdf
Regards Andrzej