On Tuesday 29 of October 2013 12:19:35 Olof Johansson wrote:
On Mon, Oct 28, 2013 at 4:13 PM, Tomasz Figa tomasz.figa@gmail.com
wrote:
Hi,
On Wednesday 23 of October 2013 12:09:06 Sean Paul wrote:
On Wed, Oct 23, 2013 at 11:53 AM, Dave Airlie airlied@gmail.com
wrote:
>> I think we need to start considering a framework where subdrivers
>> just add drm objects themselves, then the toplevel node is
>> responsible for knowing that everything for the current
>> configuration is loaded.
>
> It would be nice to specify the various pieces in dt, then have some
> type of drm notifier to the toplevel node when everything has been
> probed. Doing it in the dt would allow standalone drm_bridge/drm_panel
> drivers to be transparent as far as the device's drm driver is
> concerned.
>
> Sean
>
>> I realise we may need to make changes to the core drm to allow this
>> but we should probably start to create a strategy for fixing the API
>> issues that this throws up.
>>
>> Note I'm not yet advocating for dynamic addition of nodes once the
>> device is in use, or removing them.
I do wonder if we could have some sort of tag in the device tree for any nodes involved in the display. The core drm layer would read that list, tick things off as each driver registers, and when the last one joins we'd get a callback and init the drm layer. We'd of course have the basic drm layer set up prior to that so we can add the objects as the drivers load. It might make development a bit trickier, as you'd need to make sure someone claimed ownership of all the bits for init to proceed.
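A minimal sketch of how that "tick things off, then init" helper could look, assuming a hypothetical required-display-elements property and made-up drm_wait_list_* names; of_count_phandle_with_args() and of_parse_phandle() are the only real kernel APIs used, nothing here is existing DRM core code:

    /*
     * Illustration only: "required-display-elements" and the
     * drm_wait_list_* names are invented for this sketch.
     */
    #include <linux/err.h>
    #include <linux/mutex.h>
    #include <linux/of.h>
    #include <linux/slab.h>

    struct drm_wait_list {
        struct mutex lock;
        int count;                     /* number of required DT nodes */
        int present;                   /* how many have registered so far */
        struct device_node **nodes;    /* nodes still missing */
        void (*complete)(void *data);  /* init the full drm device here */
        void *data;
    };

    static struct drm_wait_list *drm_wait_list_create(struct device_node *np,
                                                      void (*complete)(void *data),
                                                      void *data)
    {
        struct drm_wait_list *wl;
        int i, n;

        n = of_count_phandle_with_args(np, "required-display-elements", NULL);
        if (n <= 0)
            return ERR_PTR(-EINVAL);

        wl = kzalloc(sizeof(*wl) + n * sizeof(*wl->nodes), GFP_KERNEL);
        if (!wl)
            return ERR_PTR(-ENOMEM);

        wl->nodes = (struct device_node **)(wl + 1);
        wl->count = n;
        wl->complete = complete;
        wl->data = data;
        mutex_init(&wl->lock);

        for (i = 0; i < n; i++)
            wl->nodes[i] = of_parse_phandle(np, "required-display-elements", i);

        return wl;
    }

    /* Called by each subdriver once its part of the pipeline has probed. */
    static void drm_wait_list_register(struct drm_wait_list *wl,
                                       struct device_node *np)
    {
        int i;

        mutex_lock(&wl->lock);
        for (i = 0; i < wl->count; i++) {
            if (wl->nodes[i] == np) {
                of_node_put(wl->nodes[i]);
                wl->nodes[i] = NULL;            /* tick it off */
                if (++wl->present == wl->count)
                    wl->complete(wl->data);     /* last one joined, init drm */
                break;
            }
        }
        mutex_unlock(&wl->lock);
    }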
Yeah, that's basically what the strawman looked like in my head.
Instead of a property in each node, I was thinking of having separate gfx pipe nodes that would have dt pointers to the various pieces involved in that pipe. This would allow us to associate standalone entities like bridges and panels with encoders in dt w/o doing it in the drm code. I *think* this should be OK with the dt guys since it is still describing the hardware, but I think we'd have to make sure it wasn't drm-specific.
I suppose the question is how much dynamic pipeline construction there is. Even on things like radeon and i915 we have dynamic clock-generator-to-crtc-to-encoder setups, so I worry about static lists per pipe. I still think we should just state that all these devices are needed for display, along with a list of valid interconnections between them; the generic code can then model drm crtcs/encoders/connectors on that list and construct the possible_crtcs/possible_clones etc. at that stage.
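As a sketch of that last step: generic code could turn such a list of valid interconnections into possible_crtcs masks roughly as below. struct pipe_link and build_possible_crtcs() are invented for illustration, and each CRTC's bit is assumed to follow its registration order in the mode_config crtc list, as the DRM core does:

    /*
     * Illustration only: "links" stands in for whatever generic code
     * ends up parsing out of the DT list of valid interconnections.
     */
    #include <linux/list.h>
    #include <drm/drmP.h>
    #include <drm/drm_crtc.h>

    struct pipe_link {
        struct drm_crtc *crtc;
        struct drm_encoder *encoder;
    };

    static void build_possible_crtcs(struct drm_device *dev,
                                     const struct pipe_link *links, int n)
    {
        int i;

        for (i = 0; i < n; i++) {
            struct drm_crtc *crtc;
            unsigned int idx = 0;

            /* possible_crtcs is a bitmask indexed by CRTC registration order */
            list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
                if (crtc == links[i].crtc) {
                    links[i].encoder->possible_crtcs |= 1 << idx;
                    break;
                }
                idx++;
            }
        }
    }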
I'm, without excuse, hopeless at devicetree, so there are probably some violations, but something like:
display-pipelines {
    required-elements = <&bridge-a &panel-a &encoder-x &encoder-y
                         &crtc-x &crtc-y>;

    pipe1 {
        bridge = <&bridge-a>;
        encoder = <&encoder-x>;
        crtc = <&crtc-y>;
    };

    pipe2 {
        encoder = <&encoder-x>;
        crtc = <&crtc-x>;
    };

    pipe3 {
        panel = <&panel-a>;
        encoder = <&encoder-y>;
        crtc = <&crtc-y>;
    };
};
I'm tempted to add connector to the pipe nodes as well, so it's obvious which connector should be used in cases where multiple entities in the pipe implement drm_connector. However, I'm not sure if that would be NACKed by dt people.
I'm also not sure if there are too many combinations for i915 and radeon to make this unreasonable. I suppose those devices could just use required-elements and leave the pipe nodes out.
Just to put my two cents in, as one of the people involved in "the device tree movement": I'd say that instead of creating artificial entities, such as display-pipelines and all of the pipeX nodes, the device tree should represent relations between nodes.
According to the generic DT bindings we already have for video-interfaces [1] your example connection layout would look as follows:
panel-a {
    /* Single input port */
    port {
        panel_a: endpoint@0 {
            remote-endpoint = <&encoder_y_out>;
        };
    };
};
bridge-a {
    ports {
        /* Input port */
        port@0 {
            bridge_a_in: endpoint@0 {
                remote-endpoint = <&encoder_x_out>;
            };
        };

        /*
         * Since it is a bridge, port@1 should probably be
         * present here as well...
         */
    };
};
encoder-x {
    ports {
        /* Input port */
        port@0 {
            encoder_x_in0: endpoint@0 {
                remote-endpoint = <&crtc_x>;
            };
            encoder_x_in1: endpoint@1 {
                remote-endpoint = <&crtc_y_out0>;
            };
        };

        /* Output port */
        port@1 {
            encoder_x_out: endpoint@0 {
                remote-endpoint = <&bridge_a_in>;
            };
        };
    };
};
encoder-y {
    ports {
        /* Input port */
        port@0 {
            encoder_y_in: endpoint@0 {
                remote-endpoint = <&crtc_y_out1>;
            };
        };

        /* Output port */
        port@1 {
            encoder_y_out: endpoint@0 {
                remote-endpoint = <&panel_a>;
            };
        };
    };
};
crtc-x {
    /* Single output port */
    port {
        crtc_x: endpoint@0 {
            remote-endpoint = <&encoder_x_in0>;
        };
    };
};
crtc-y {
    /* Single output port */
    port {
        crtc_y_out0: endpoint@0 {
            remote-endpoint = <&encoder_x_in1>;
        };
        crtc_y_out1: endpoint@1 {
            remote-endpoint = <&encoder_y_in>;
        };
    };
};
If I didn't make any typos, the nodes above should not only fully represent the layout of your sample video pipelines, but also map the connections between video entities onto their physical endpoints, allowing the respective drivers to properly configure any muxes based purely on DT data. Of course this also covers the dependencies between all display entities.
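As a sketch of that driver side, assuming OF-graph accessors along the lines of of_graph_get_next_endpoint() and of_graph_get_remote_port_parent() are available (at the time of this thread the equivalent helpers lived in the V4L2 OF support), a driver could walk its endpoints and discover what sits on the other side of each link:

    #include <linux/kernel.h>
    #include <linux/of.h>
    #include <linux/of_graph.h>

    /*
     * Walk every endpoint of "np" and find the device node on the other
     * side of each link.  What the driver then does with that (mux
     * routing, building its list of downstream entities, ...) is left
     * as a comment.
     */
    static void parse_display_links(struct device_node *np)
    {
        struct device_node *ep = NULL;

        while ((ep = of_graph_get_next_endpoint(np, ep))) {
            struct device_node *remote;

            remote = of_graph_get_remote_port_parent(ep);
            if (!remote)
                continue;

            pr_info("%s: endpoint %s connects to %s\n",
                    np->full_name, ep->full_name, remote->full_name);

            /* ...configure the corresponding input/output mux here... */

            of_node_put(remote);
        }
    }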
It's a very deeply nested structure; I'm not sure there's really a need for a ports {} subnode.
Also, I don't know if it makes sense to always name it remote-endpoint, or whether a more flexible name should be used depending on what is actually connected, over which bus, etc.
But overall this looks sane-ish.
I fully agree with you. Personally, I would have made somewhat different design decisions when designing these bindings, but here I'm just pointing to an already defined binding that should suit the needs described in this thread, to avoid reinventing the wheel.
Best regards, Tomasz