Hello Ilia!
On 2015-04-23 16:32, Ilia Mirkin wrote:
On Thu, Apr 23, 2015 at 9:39 AM, Tobias Jakobi <tjakobi@math.uni-bielefeld.de> wrote:
Hello Ilia,
On 2015-04-21 21:15, Ilia Mirkin wrote:
I know it was immensely useful to me when I was adding YUV plane support to nouveau. Seemed to work as advertised at the time (1.5y ago) for YUYV, UYVY, and NV12.
-ilia
maybe you can help me with that question.
Let's consider a user of the DRM interface that wants to feed NV12 data to it. NV12 is bi-planar, so the user should provide two handles/pitches/offsets describing the luma and chroma planes respectively. However, most of the time the luma and chroma planes are contiguous in memory, with nothing in between.
I was wondering whether it is an allowed setup to request NV12 as the pixel format, but only provide _one_ handle/pitch/offset (implying that we are in the contiguous setting)?
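To make that concrete, here is a rough sketch (fd, bo_handle, width and height are just placeholders, error handling omitted) of how I currently describe the contiguous case through drmModeAddFB2, with the chroma plane given as an offset into the same handle; the question is whether the second entry could be dropped:

  #include <stdint.h>
  #include <xf86drm.h>
  #include <xf86drmMode.h>
  #include <drm_fourcc.h>

  /* Sketch only: one GEM buffer holding luma directly followed by chroma. */
  static int add_contiguous_nv12_fb(int fd, uint32_t bo_handle,
                                    uint32_t width, uint32_t height,
                                    uint32_t *fb_id)
  {
          uint32_t handles[4] = { bo_handle, bo_handle }; /* same BO for both planes */
          uint32_t pitches[4] = { width, width };         /* luma/chroma pitch in bytes */
          uint32_t offsets[4] = { 0, width * height };    /* chroma starts right after luma */

          return drmModeAddFB2(fd, width, height, DRM_FORMAT_NV12,
                               handles, pitches, offsets, fb_id, 0);
  }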
Uhm... I'm no authority on the matter, merely vouching for the usefulness of the modetest tool :) However, I was never aware of any contiguity assumptions for NV12; afaik the two planes are treated as entirely separate :) It could also cause issues if you had, say, a 32x30 image but whatever hw produced it wanted to make it 32x32. You'd end up with an offset between the two planes which wouldn't be specified. FWIW, on the (much older) NVIDIA GPUs that I added support for, the code assumes a separate offset:
nvif_wr32(dev, NV_PVIDEO_UVPLANE_OFFSET_BUFF(flip), nv_fb->nvbo->bo.offset + fb->offsets[1]);
Note that as far as the HW is concerned, it's an entirely separate memory location, not even an offset from the Y plane -- it could be 2 totally separate bo's for all it cares.
Thanks for the insight! That's kind of what I expected.
What confused me though is that the v4l2 API has this: http://www.hep.by/gnu/kernel/media/V4L2-PIX-FMT-NV12M.html
Maybe pixel formats are passed around differently in v4l2, but as far as I can see, the difference between v4l2's NV12 and NV12M doesn't exist in DRM land. As soon as NV12 is used, we always have two planes given explicitly.
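Just to illustrate where my confusion comes from, this is roughly how the two v4l2 formats are requested (sketch only, the VIDIOC_S_FMT ioctl and error handling omitted):

  #include <string.h>
  #include <linux/videodev2.h>

  struct v4l2_format fmt;

  /* NV12: both planes live in one contiguous buffer. */
  memset(&fmt, 0, sizeof(fmt));
  fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
  fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_NV12;
  fmt.fmt.pix.width  = 1280;
  fmt.fmt.pix.height = 720;

  /* NV12M: multi-planar API, luma and chroma in separate buffers. */
  memset(&fmt, 0, sizeof(fmt));
  fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
  fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_NV12M;
  fmt.fmt.pix_mp.width  = 1280;
  fmt.fmt.pix_mp.height = 720;
  fmt.fmt.pix_mp.num_planes = 2;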
Also, as another datapoint, the VP3 and newer video decoding units on NVIDIA cards (generally speaking GeForce 200+) have firmware that produces the Y and UV data as completely separate pieces of data as well. On VP2 they had to be in the same buffer, but you could provide an explicit offset to the UV bit.
OK, so the Exynos video processor kinda does the same here. It needs separate pointers to the luma and chroma planes.
-ilia
With best wishes, Tobias