[Mesa-dev] [RFC] OES_external_image for i965
Pohjolainen, Topi
topi.pohjolainen at intel.com
Tue Mar 5 01:02:17 PST 2013
> On Mon, Mar 04, 2013 at 10:55:07AM -0500, Kristian Høgsberg wrote:
> On Mon, Mar 4, 2013 at 10:11 AM, Pohjolainen, Topi
> <topi.pohjolainen at intel.com> wrote:
> >> On Mon, Mar 04, 2013 at 09:56:34AM -0500, Kristian Høgsberg wrote:
> >> On Mon, Mar 4, 2013 at 4:55 AM, Pohjolainen, Topi
> >> <topi.pohjolainen at intel.com> wrote:
> >> > On Fri, Mar 01, 2013 at 10:03:45AM -0500, Kristian Høgsberg wrote:
> >> >> On Fri, Mar 1, 2013 at 3:51 AM, Pohjolainen, Topi
> >> >> <topi.pohjolainen at intel.com> wrote:
> >> >> > On Tue, Feb 26, 2013 at 04:05:25PM +0000, Tom Cooksey wrote:
> >> >> >> Hi Topi,
> >> >> >>
> >> >> >> > The second, more or less questionable part is the support for creating YUV
> >> >> >> > buffers. In order to test YUV sampling one needs a way of providing them
> >> >> >> > to the EGL stack. Here I chose to augment the dri driver backing gbm as I
> >> >> >> > couldn't come up with anything better. It may be helpful to take a look at the
> >> >> >> > corresponding piglit test case and framework support I've written for it.
> >> >> >>
> >> >> >> You might want to take a look at EGL_EXT_image_dma_buf_import[i], which has been written
> >> >> >> specifically for this purpose. This does assume, though, that you have a driver which supports
> >> >> >> exporting a YUV buffer it has allocated with dma_buf, such as a v4l2 driver or even ion on Android.
> >> >> >>
> >> >> >
> >> >> > It certainly looks good, addressing not only the individual plane setup but
> >> >> > also allowing one to control the conversion coefficients and subsampling
> >> >> > position.
> >> >> > From a piglit testing point of view, do you have any ideas where to
> >> >> > allocate the buffers from? I guess people wouldn't be too happy seeing v4l2 tied
> >> >> > into piglit, for example.
> >> >>
> >> >> Since you're already using intel-specific ioctls to mmap the buffers,
> >> >> I'd suggest you just go all the way and allocate using intel-specific
> >> >> ioctls (like my simple-yuv.c example). I don't really see any other
> >> >> approach, but it's not pretty...
> >> >>
> >> >
> >> > I used gbm buffer objects in order to match the logic later in
> >> > 'dri2_drm_create_image_khr()', which expects the buffer to be of the type
> >> > 'gbm_dri_bo' (gbm_bo) for the target EGL_NATIVE_PIXMAP_KHR. Giving drm buffer
> >> > objects instead would require a new target, I guess?
> >>
> >> Right... I'd use the extension Tom suggests:
> >>
> >> http://www.khronos.org/registry/egl/extensions/EXT/EGL_EXT_image_dma_buf_import.txt
> >>
> >> which is mostly implemented by this patch:
> >>
> >> http://lists.freedesktop.org/archives/mesa-dev/2013-February/035429.html
> >>
> >> with just the EGL extension bits missing. That way, you're also not
> >> dependent on any specific window system. As it is, your test has to
> >> run under gbm; using the dma-buf import extension, it can run under any
> >> window system.
> >
> > Just to make sure I understood correctly: the actual creation of the buffer
> > (and dma_buf exporting) would still be via hardware-specific ioctls (in intel's
> > case GEM)? Your and Tom's material addresses only the importing side, or did I
> > miss something?
>
> Yes, that's correct. You'll need intel create and export-to-fd
> functions, but you are already mapping the bo using intel-specific
> ioctls. So I think it's cleaner to just have a chipset-specific
> function to create the bo and return an fd, stride, etc., and from there
> on it's generic code where you feed it into the dma_buf import
> function.
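[The split described above, a chipset-specific allocate-and-export step feeding a generic import step, could look roughly as sketched below. This is only an illustration: the token values are the ones the EGL_EXT_image_dma_buf_import spec assigns (normally they come from EGL/eglext.h), and build_import_attribs() is a hypothetical helper, not part of any existing API.]

```c
#include <stdint.h>

/* Token values assigned by the EGL_EXT_image_dma_buf_import spec;
 * redefined here only to keep the sketch self-contained (normally
 * they come from EGL/eglext.h). */
#define EGL_HEIGHT                    0x3056
#define EGL_WIDTH                     0x3057
#define EGL_LINUX_DRM_FOURCC_EXT      0x3271
#define EGL_DMA_BUF_PLANE0_FD_EXT     0x3272
#define EGL_DMA_BUF_PLANE0_OFFSET_EXT 0x3273
#define EGL_DMA_BUF_PLANE0_PITCH_EXT  0x3274
#define EGL_NONE                      0x3038

/* Hypothetical helper: fill the single-plane attribute list that would
 * be passed to eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
 * EGL_LINUX_DMA_BUF_EXT, NULL, attribs).  fd, offset and pitch are
 * whatever the chipset-specific create-and-export function returned. */
static void build_import_attribs(int fd, int width, int height,
                                 uint32_t fourcc, int offset, int pitch,
                                 int attribs[13])
{
    int i = 0;
    attribs[i++] = EGL_WIDTH;                     attribs[i++] = width;
    attribs[i++] = EGL_HEIGHT;                    attribs[i++] = height;
    attribs[i++] = EGL_LINUX_DRM_FOURCC_EXT;      attribs[i++] = (int)fourcc;
    attribs[i++] = EGL_DMA_BUF_PLANE0_FD_EXT;     attribs[i++] = fd;
    attribs[i++] = EGL_DMA_BUF_PLANE0_OFFSET_EXT; attribs[i++] = offset;
    attribs[i++] = EGL_DMA_BUF_PLANE0_PITCH_EXT;  attribs[i++] = pitch;
    attribs[i++] = EGL_NONE;  /* terminate the attribute list */
}
```

[Everything after the attribute list is window-system independent, which is the point Kristian makes above; only the code that produced the fd and pitch needs to know the chipset.]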
I have to admit that I've been thinking about the testing side with the Android
platform in mind. There the gralloc layer already provides a hardware-independent
interface for the CPU-writing, GPU-reading type of sharing, and hence I tied the
buffer handling into the platform (window system) instead. I was hoping to avoid
introducing hardware specifics into piglit by using the interface provided by the
platform.
Obviously I failed to do so in the case of GBM, as I was forced to use the intel-
specific ioctl. For the sake of argument, even this could be avoided by pushing it
into the platform: one would need to extend gbm_dri_bo_write() to accept
not only ARGB8888 but also YUV. The concern I had with not doing it already
was that this would exist only to support testing (but so does the very YUV support
for gbm that I introduced).
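[For what it's worth, the extra plane bookkeeping a YUV-aware gbm_dri_bo_write() would need is mostly arithmetic. Below is a hedged sketch of the offset/size computation for planar YUV420 (I420), assuming a chroma pitch of half the luma pitch and no extra padding; real drivers may well impose their own per-plane alignment.]

```c
#include <stddef.h>

/* Plane layout for planar YUV420 (I420): a full-size Y plane followed
 * by quarter-size U and V planes.  Illustrative only; actual gbm/dri
 * buffers may have per-plane alignment requirements. */
struct yuv420_layout {
    size_t y_offset, u_offset, v_offset;
    size_t y_pitch, uv_pitch;
    size_t total_size;
};

static struct yuv420_layout
yuv420_layout(size_t width, size_t height)
{
    struct yuv420_layout l;
    size_t chroma_rows = (height + 1) / 2;  /* round up for odd heights */

    l.y_pitch = width;
    l.uv_pitch = (width + 1) / 2;           /* round up for odd widths */
    l.y_offset = 0;
    l.u_offset = l.y_pitch * height;
    l.v_offset = l.u_offset + l.uv_pitch * chroma_rows;
    l.total_size = l.v_offset + l.uv_pitch * chroma_rows;
    return l;
}
```

[A write path extended this way would memcpy each plane at its computed offset instead of assuming a single packed ARGB8888 plane.]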
I'm not sure if you have looked at the piglit patches I have for the buffer
handling and test setup, but there I have pushed the platform specifics into the
framework logic, leaving the test itself generic.
This could also accommodate further testing using Tom's extension and hardware-
specific buffer handling: the platform code in the piglit framework could share the
logic for choosing the hardware, setting it up, and providing the buffers for
testing.
Topi