On Tue, Sep 23, 2014 at 02:52:24PM +0300, Laurent Pinchart wrote:
On Tuesday 23 September 2014 13:47:40 Andrzej Hajda wrote:
On 09/23/2014 01:23 PM, Laurent Pinchart wrote:
[...]
This becomes an issue even on Linux when considering video-related devices that can be part of either a capture pipeline or a display pipeline. If the link always goes in the data flow direction, then it would be easy to locate the downstream device (bridge or panel) from the display controller driver, but much more difficult to locate the same device from a camera driver, as the device would suddenly become an upstream device.
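To make the asymmetry concrete, here is a rough sketch in kernel C. The "remote" phandle property and both helper functions are made up for illustration (this is not an existing binding); the point is only that with links going in the data flow direction, the display controller can follow its own property, while the camera receiver has to search for whichever node points back at it.

#include <linux/of.h>

/*
 * Hypothetical binding where every link points in the data flow
 * direction through a "remote" phandle property.
 */

/* Display controller: the panel/bridge is downstream, one lookup away. */
static struct device_node *display_find_remote(struct device_node *disp)
{
        return of_parse_phandle(disp, "remote", 0);
}

/*
 * Camera receiver: the link lives in the sensor's node, so the receiver
 * has to scan for a node whose "remote" phandle points back at itself.
 */
static struct device_node *camera_find_sensor(struct device_node *cam)
{
        struct device_node *np;

        for_each_node_with_property(np, "remote") {
                struct device_node *remote = of_parse_phandle(np, "remote", 0);

                if (remote == cam) {
                        of_node_put(remote);
                        return np;
                }
                of_node_put(remote);
        }

        return NULL;
}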
Why?
If you have the graph: sensor --> camera
Then the camera registers itself in some framework as a destination device, and the sensor looks in that framework for the device identified by its remote endpoint. The sensor then tells the camera that it is connected to it, and voilà.
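A very rough sketch of that scheme, with entirely hypothetical helpers (media_dest_register(), media_dest_find() and media_dest_attach() are not real kernel APIs) and the same made-up "remote" phandle as above:

#include <linux/of.h>
#include <linux/platform_device.h>

/* Hypothetical framework API, not part of the kernel: */
extern int media_dest_register(struct device_node *np, struct device *dev);
extern struct device *media_dest_find(struct device_node *np);
extern int media_dest_attach(struct device *dest, struct device *src);

/* Destination (camera receiver): register so that sources can find us. */
static int camera_probe(struct platform_device *pdev)
{
        return media_dest_register(pdev->dev.of_node, &pdev->dev);
}

/* Source (sensor): follow the remote phandle, then attach to the camera. */
static int sensor_probe(struct platform_device *pdev)
{
        struct device_node *remote;
        struct device *dest;

        remote = of_parse_phandle(pdev->dev.of_node, "remote", 0);
        if (!remote)
                return -ENODEV;

        dest = media_dest_find(remote);
        of_node_put(remote);
        if (!dest)
                return -EPROBE_DEFER;   /* destination not registered yet */

        /* "the sensor tells the camera it is connected to it" */
        return media_dest_attach(dest, &pdev->dev);
}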
Except that both kernelspace and userspace deal with cameras the other way around: the master device is the camera receiver, not the camera sensor. DRM is architected the same way, with the component that performs DMA operations being the master device.
I don't see what's wrong with having the camera reference the sensor by phandle instead. That's much more natural in my opinion.
Thierry
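For contrast, a sketch of the alternative suggested above, again with a made-up property name ("sensor"): the camera receiver, i.e. the master device, holds the phandle and can locate its source directly.

#include <linux/of.h>

/* The master (camera receiver) references its source by phandle. */
static struct device_node *camera_get_sensor(struct device_node *cam)
{
        return of_parse_phandle(cam, "sensor", 0);
}

This puts the reference in the hands of the device that performs the DMA, which matches the master/slave relationship described for both V4L2 and DRM earlier in the thread.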