On Tue, Apr 1, 2014 at 10:42 AM, Ville Syrjälä ville.syrjala@linux.intel.com wrote:
On Tue, Apr 01, 2014 at 10:22:07AM -0400, Rob Clark wrote:
On Tue, Apr 1, 2014 at 10:12 AM, Ville Syrjälä ville.syrjala@linux.intel.com wrote:
On Tue, Apr 01, 2014 at 09:54:40AM -0400, Rob Clark wrote:
On Tue, Apr 1, 2014 at 9:40 AM, Daniel Vetter daniel@ffwll.ch wrote:
On Tue, Apr 01, 2014 at 08:40:54AM -0400, Rob Clark wrote:
No, not really. I was just trying to get away with pushing some complexity (for case #1) up to userspace instead of doing it in the kernel.
To clarify: I don't think it makes sense to fully abstract this away in the kernel, especially if userspace needs to be aware of the boundary between the crtcs so that it can correctly tile up the logical framebuffer. But I'm not sure whether trying to make that possible with a generic userspace driver is sensible, or whether having a bit of magic glue code in the ddx/wayland/hwc part for e.g. msm is the better option, at least in the short term.
Since if the set of useable planes actually changes we need to push that decision up the stack even further like wayland/hwc currently allow, and maybe there's some things we need to fix at that layer first. Once we've learned that lesson we can push things down again and add a neat little generic kernel interface. At least thus far we've always done a bit of prototyping with driver-specific code to learn a few lessons, e.g. the various pieces of non-standard plane/overlay in i915.
right, things like a 'STATUS' property for returning per-object status would start out driver-custom (and even 'SLAVE_CRTC'..). Userspace could look for certain property names in the same way that it looks for certain GL extension strings. But it should be semi-standardized, so other drivers which need the same thing use the same property names/values/behaviors as much as possible.. which was the point of starting the thread ;-)
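A minimal userspace sketch of that kind of discovery, assuming the 'SLAVE_CRTC' name proposed in this thread (neither it nor 'STATUS' is an upstream property today); it just walks a crtc's property list via libdrm, much like checking a GL extension string:

/* Probe a crtc for a driver-specific property by name.  Returns the
 * property id, or 0 if the driver doesn't expose it. */
#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

static uint32_t find_crtc_prop(int fd, uint32_t crtc_id, const char *name)
{
        drmModeObjectPropertiesPtr props;
        uint32_t prop_id = 0;
        uint32_t i;

        props = drmModeObjectGetProperties(fd, crtc_id, DRM_MODE_OBJECT_CRTC);
        if (!props)
                return 0;

        for (i = 0; i < props->count_props && !prop_id; i++) {
                drmModePropertyPtr prop = drmModeGetProperty(fd, props->props[i]);

                if (!prop)
                        continue;
                if (!strcmp(prop->name, name))
                        prop_id = prop->prop_id;
                drmModeFreeProperty(prop);
        }

        drmModeFreeObjectProperties(props);
        return prop_id;
}

/* e.g.: if (find_crtc_prop(fd, crtc_id, "SLAVE_CRTC")) ... */

If the property isn't there, userspace simply treats the crtcs as independent, same as it falls back when a GL extension is missing.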
What's the problem with just using two crtcs? With the atomic API you just shovel the state for both down into the driver in one ioctl. This is pretty much what we'll need to do to drive those 4k MST DP displays as well. The driver will then have to do its best to genlock the crtcs if the hardware doesn't do it fully. IIRC that's how we're going to have to do the MST stuff, ie. use the same clock source for both obviously, and try to start all the pipes as fast as possible so that the vblanks line up. And that's going to require more changes to our modesetting codepaths.
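A rough sketch of what that single-ioctl submission could look like from userspace, using the libdrm atomic helpers: one 3840x2160 logical framebuffer scanned out by two crtcs, each showing a 1920x2160 half via SRC_X offsets. The property ids are assumed to have been looked up beforehand, and the fixed 1920x2160 split is purely illustrative:

#include <errno.h>
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

struct plane_props {            /* standard atomic plane property ids */
        uint32_t fb_id, crtc_id;
        uint32_t src_x, src_y, src_w, src_h;
        uint32_t crtc_x, crtc_y, crtc_w, crtc_h;
};

static void add_half(drmModeAtomicReqPtr req, uint32_t plane,
                     const struct plane_props *p, uint32_t fb,
                     uint32_t crtc, uint32_t src_x_px)
{
        drmModeAtomicAddProperty(req, plane, p->fb_id, fb);
        drmModeAtomicAddProperty(req, plane, p->crtc_id, crtc);
        /* SRC_* are 16.16 fixed point: offset into the shared fb */
        drmModeAtomicAddProperty(req, plane, p->src_x, (uint64_t)src_x_px << 16);
        drmModeAtomicAddProperty(req, plane, p->src_y, 0);
        drmModeAtomicAddProperty(req, plane, p->src_w, (uint64_t)1920 << 16);
        drmModeAtomicAddProperty(req, plane, p->src_h, (uint64_t)2160 << 16);
        /* each crtc scans out a full 1920x2160 starting at 0,0 */
        drmModeAtomicAddProperty(req, plane, p->crtc_x, 0);
        drmModeAtomicAddProperty(req, plane, p->crtc_y, 0);
        drmModeAtomicAddProperty(req, plane, p->crtc_w, 1920);
        drmModeAtomicAddProperty(req, plane, p->crtc_h, 2160);
}

static int commit_split(int fd, const struct plane_props *p, uint32_t fb,
                        uint32_t left_plane, uint32_t left_crtc,
                        uint32_t right_plane, uint32_t right_crtc)
{
        drmModeAtomicReqPtr req = drmModeAtomicAlloc();
        int ret;

        if (!req)
                return -ENOMEM;

        add_half(req, left_plane, p, fb, left_crtc, 0);
        add_half(req, right_plane, p, fb, right_crtc, 1920);

        /* state for both crtcs goes down to the driver in one ioctl */
        ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);
        drmModeAtomicFree(req);
        return ret;
}

Whatever genlocking can't be done in hardware (shared clock source, starting the pipes back to back so the vblanks line up) stays inside the driver's commit path and is invisible at this level.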
well, two problems:
- it won't actually work (at least not without some overhaul of the kms core and helpers).. the encoder only has a single crtc pointer. And anyway, it is useful for the driver to differentiate which pipe/mixer is primary and which is the slave.
What does primary/slave mean here? That seems like a rather hardware specific notion.
it could be.. you might need to configure the mixers differently (like setting a MERGE bit/bitfield in one of them).
But it seems easier for a driver to ignore that differentiation if it doesn't have to care about it than the other way around.
The SLAVE_CRTC property essentially gives you that 2nd pointer you need.
Would seem easier to add the pointer. Or even better: just expose the display as two connectors and then you don't have to change anything. It's just like having multiple displays positioned next to each other today.
this is specifically for the case where you have two crtcs, one encoder. I don't want to make the driver jump through hoops with a dummy encoder/connector for this..
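For reference, a kernel-side sketch of what exposing such a SLAVE_CRTC property might look like; the property name and its placement on the crtc are only the proposal from this thread, not an existing upstream interface, and what the driver does with it (e.g. deciding which mixer gets a MERGE bit set) remains entirely hardware-specific:

/* Illustrative only: expose the proposed "SLAVE_CRTC" object property
 * on a crtc, so userspace can point the primary pipe at the pipe/mixer
 * that will scan out the other half of the display. */
#include <drm/drm_crtc.h>

static int example_attach_slave_crtc_prop(struct drm_device *dev,
                                           struct drm_crtc *crtc)
{
        struct drm_property *prop;

        /* object property that must reference another crtc */
        prop = drm_property_create_object(dev, 0, "SLAVE_CRTC",
                                          DRM_MODE_OBJECT_CRTC);
        if (!prop)
                return -ENOMEM;

        /* 0 == no slave: the crtc drives its display on its own */
        drm_object_attach_property(&crtc->base, prop, 0);
        return 0;
}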
- still, it would be nice to be able to drive 4k displays from x11.. and for the most part there isn't much compelling reason for most ddx's to migrate to the atomic ioctl.
Someone might argue that 4k support is a compelling reason ;)
well, yeah, we could put a semi-artificial restriction like that in place to force people to move to the new ioctl. But I'd rather not.
BR, -R
--
Ville Syrjälä
Intel OTC