On Fri, Sep 19, 2014 at 3:16 PM, Rob Clark <robdclark@gmail.com> wrote:
For the mesa part, it looks like there is a bit of work needed to teach EGL about multi-planar buffers, buffers where offset[n] != 0, etc. I'll start with patches to teach EGL how to import plain NV12 buffers. But once that is done, for it to be of much use to me I'll need NV12MT, which means adding a new gallium format and __DRI_IMAGE_FOURCC_NV12MT.
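Roughly what I'm picturing for the import path on the app side, as an untested sketch (it just uses EGL_EXT_image_dma_buf_import; the NV12MT variant would be the same thing with its own fourcc once that part is wired up):

/* untested sketch: import a two-plane NV12 dmabuf as an EGLImage via
 * EGL_EXT_image_dma_buf_import.  fd/offsets/pitches would come from the
 * decoder.
 */
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <drm_fourcc.h>

static EGLImageKHR
import_nv12(EGLDisplay dpy, int fd, int width, int height,
            int y_offset, int y_pitch, int uv_offset, int uv_pitch)
{
   const EGLint attrs[] = {
      EGL_WIDTH, width,
      EGL_HEIGHT, height,
      EGL_LINUX_DRM_FOURCC_EXT, DRM_FORMAT_NV12,
      /* plane 0: Y */
      EGL_DMA_BUF_PLANE0_FD_EXT, fd,
      EGL_DMA_BUF_PLANE0_OFFSET_EXT, y_offset,
      EGL_DMA_BUF_PLANE0_PITCH_EXT, y_pitch,
      /* plane 1: interleaved CbCr */
      EGL_DMA_BUF_PLANE1_FD_EXT, fd,
      EGL_DMA_BUF_PLANE1_OFFSET_EXT, uv_offset,
      EGL_DMA_BUF_PLANE1_PITCH_EXT, uv_pitch,
      EGL_NONE
   };
   PFNEGLCREATEIMAGEKHRPROC create_image =
      (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");

   return create_image(dpy, EGL_NO_CONTEXT, EGL_LINUX_DMA_BUF_EXT, NULL, attrs);
}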
Also, I'm still a bit undecided on how to represent multi-planar formats (i.e. a single pipe_resource encapsulating all of the planes? or a pipe_resource per plane, plus teaching pipe_sampler_view about textures which are backed by multiple pipe_resources, one per plane).
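To make the second option a bit more concrete, I'm imagining something roughly like this (purely hypothetical, names made up just to illustrate the shape of it), with pipe_sampler_view then needing to know it can be backed by more than one resource:

/* hypothetical, just to illustrate the per-plane option; none of these
 * names exist today
 */
struct planar_resource {
   unsigned nr_planes;               /* 2 for NV12: Y + interleaved CbCr */
   struct pipe_resource *planes[3];  /* one pipe_resource per plane */
   unsigned offset[3];               /* byte offset of each plane in the bo */
   unsigned pitch[3];                /* row stride of each plane */
};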
So, on the mesa end of things, pipe_video_buffer looks like it may be a better fit for an imported multi-planar external eglimage (since, at least on some hw, sampling a YUV buffer as a texture takes multiple texture sampler slots), other than the fact that we wouldn't have any codec in this case.. but does the mesa state tracker understand how to use a pipe_video_buffer as a sampler in a shader somehow? Right now I only see references to pipe_video_buffer from the gallium video stuff. I'd really prefer not to have to introduce an extra YUV->RGB blit just to get the video frame into a form that can be used from GL..
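Ie. something along these lines is what I'd like to end up with (untested sketch, assuming get_sampler_view_planes() on pipe_video_buffer is the right hook; whether the GL state tracker can be taught to do this is exactly what I'm not sure about):

/* untested sketch: bind the per-plane sampler views of a pipe_video_buffer
 * so a fragment shader can do the YUV->RGB conversion while sampling,
 * instead of going through a separate blit.  Assumes
 * get_sampler_view_planes() hands back one view per plane
 * (Y in [0], CbCr in [1] for NV12).
 */
#include "pipe/p_context.h"
#include "pipe/p_defines.h"
#include "pipe/p_video_codec.h"

static void
bind_video_planes(struct pipe_context *pipe, struct pipe_video_buffer *buf)
{
   struct pipe_sampler_view **views = buf->get_sampler_view_planes(buf);

   /* two sampler slots for NV12: luma + chroma */
   pipe->set_sampler_views(pipe, PIPE_SHADER_FRAGMENT, 0, 2, views);
}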
How does the connection between eglImage and the omx state tracker work? I'm probably getting at least a bit confused by the cpp macro hell in the bellagio headers..
BR, -R