On Wed, Oct 2, 2019 at 9:45 AM Ville Syrjälä <ville.syrjala@linux.intel.com> wrote:
On Tue, Oct 01, 2019 at 02:20:55PM -0700, Jeykumar Sankaran wrote:
On 2019-09-30 03:39, Ville Syrjälä wrote:
On Fri, Sep 27, 2019 at 06:28:51PM -0700, Jeykumar Sankaran wrote:
The mode_config max width/height values determine the maximum resolution the pixel reader can handle.
Not according to the docs I "fixed" a while ago.
But the same values are used to restrict the size of framebuffer creation. Hardware with scaling blocks can operate on framebuffers larger or smaller than the pixel reader resolution by scaling them down or up before rendering.
This change adds separate framebuffer max width/height fields to drm_mode_config to allow vendors to set them when they differ from the default max resolution values.
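For illustration, a minimal sketch of what the split might look like in a driver, assuming the new fields are named fb_max_width/fb_max_height (hypothetical names, not necessarily what the patch uses):

#include <drm/drm_mode_config.h>

static void vendor_mode_config_init(struct drm_device *dev)
{
        drm_mode_config_init(dev);

        /* Display (pixel reader) limits: bound the modes we can drive. */
        dev->mode_config.max_width = 2560;
        dev->mode_config.max_height = 2560;

        /*
         * Framebuffer limits: bound ADDFB/ADDFB2. Hypothetical fields;
         * a downscaler lets the hardware consume larger buffers.
         */
        dev->mode_config.fb_max_width = 5120;
        dev->mode_config.fb_max_height = 5120;
}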
If you're going to change the meaning of the old values you need to fix the drivers too.
Personally I don't see too much point in this, since you most likely want to validate all the other timings as well, and so likely need some kind of mode_valid implementation anyway. Hence, for validating modes, there's not much benefit in having global min/max values.
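I.e. something along these lines (a generic sketch; the limits are made-up example values, real drivers derive them from the hardware):

#include <drm/drm_modes.h>
#include <drm/drm_modeset_helper_vtables.h>

static enum drm_mode_status
vendor_connector_mode_valid(struct drm_connector *connector,
                            struct drm_display_mode *mode)
{
        if (mode->clock > 300000) /* kHz; example limit */
                return MODE_CLOCK_HIGH;
        if (mode->hdisplay > 2560)
                return MODE_BAD_HVALUE;
        if (mode->vdisplay > 2560)
                return MODE_BAD_VVALUE;

        return MODE_OK;
}

static const struct drm_connector_helper_funcs vendor_connector_helper_funcs = {
        .mode_valid = vendor_connector_mode_valid,
};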
I believe you are referring to this patch: https://patchwork.kernel.org/patch/10467155/
I am primarily interested in the scaling scenario mentioned here. MSM and a few other hardware have scaling blocks that are used both ways (see the sketch after this list):
- Where FB limits are larger than the display limits. Scaler blocks are used to downscale the framebuffers and render within the display limits. In this scenario, with your patch, are you suggesting that drivers maintain the display limits locally and use those values in fill_modes() / mode_valid() to filter out invalid modes explicitly, instead of mode_config.max_width/height?
- Where FB limits are smaller than the display limits. Enforced for performance reasons on low-tier hardware: it reduces the fetch bandwidth and uses a post-blending scaler block to scale the pixel stream up to match the display resolution.
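(Sketch referenced above. This is not the MSM driver's actual code, just one way both directions could be expressed as plane scaling limits in atomic_check; the 4x up/down factors are made-up example values.)

#include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h>

static int vendor_plane_atomic_check(struct drm_plane *plane,
                                     struct drm_plane_state *state)
{
        struct drm_crtc_state *crtc_state;

        if (!state->crtc)
                return 0;

        crtc_state = drm_atomic_get_new_crtc_state(state->state, state->crtc);
        if (WARN_ON(!crtc_state))
                return -EINVAL;

        /*
         * Scale factors are src/dst in 16.16 fixed point: min_scale
         * (1 << 16) / 4 allows up to 4x upscaling of a small FB,
         * max_scale 4 << 16 allows up to 4x downscaling of a large FB.
         */
        return drm_atomic_helper_check_plane_state(state, crtc_state,
                                                   (1 << 16) / 4,
                                                   4 << 16,
                                                   true, true);
}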
As Daniel mentioned in that discussion, your typical userspace assumes that it can use a single unscaled framebuffer with any advertised mode. Hence I believe limiting the mode list based on the max framebuffer size is pretty much required, unless you want to break existing userspace.
In i915 I went a bit further than that recently and now we filter the mode list based on the maximum plane size [1] (which can be less than the max fb size and less than the maximum crtc dimensions). And again that's because userspace assumes that it can just use a single unscaled fullscreen plane to cover the entire crtc.
These assumptions are also carved into the legacy setcrtc uapi, where you can't even specify multiple framebuffers. In theory a driver could internally use multiple planes to overcome some of the limitations, but in i915 at least we don't.
[1] https://cgit.freedesktop.org/drm/drm-intel/commit/?id=2d20411e25a3bf3d2914a2...
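The filtering has roughly this shape (a generic sketch, not the i915 code; the 4096x4096 plane limit is a made-up example):

#include <drm/drm_modes.h>

static enum drm_mode_status
vendor_mode_valid_max_plane_size(const struct drm_display_mode *mode)
{
        const int max_plane_width = 4096, max_plane_height = 4096;

        /* A single unscaled fullscreen plane must be able to cover the mode. */
        if (mode->hdisplay > max_plane_width)
                return MODE_BAD_HVALUE;
        if (mode->vdisplay > max_plane_height)
                return MODE_BAD_VVALUE;

        return MODE_OK;
}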
Any suggestions on how this topology can be handled with a single set of max/min values?
I think a safe way to relax these rules would be to either:
a) Add a client cap by which userspace can inform the kernel that it understands there are more complicated limits at play, and thus can't assume that everything will just work.
b) Maybe we could just tie that in with the atomic cap, since atomic clients are pretty much required to do the TEST_ONLY dance anyway, so one might hope they have a working fallback strategy. Though I suspect e.g. the modesetting ddx wouldn't like this. But we no longer allow atomic with X anyway, so that particular argument may not hold much weight anymore.
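For reference, the userspace half of that dance is roughly the following (a libdrm sketch; the object/property ids are placeholders the caller must have looked up beforehand):

#include <errno.h>
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

static int try_then_commit(int fd, uint32_t plane_id,
                           uint32_t prop_fb_id, uint32_t fb_id)
{
        drmModeAtomicReq *req = drmModeAtomicAlloc();
        int ret;

        if (!req)
                return -ENOMEM;

        drmModeAtomicAddProperty(req, plane_id, prop_fb_id, fb_id);

        /* Ask the kernel to validate the state without applying it. */
        ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_TEST_ONLY, NULL);
        if (ret == 0)
                ret = drmModeAtomicCommit(fd, req, 0, NULL);

        drmModeAtomicFree(req);
        return ret;
}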
What was the conclusion of the hack to not expose atomic to the modesetting ddx, due to the brokenness of its atomic use? I guess that could also make the modesetting case go away..
BR,
-R