[RFC 0/2] New feature: Framebuffer processors

Daniel Vetter daniel at ffwll.ch
Thu Aug 25 12:14:50 UTC 2016


On Thu, Aug 25, 2016 at 08:45:25PM +0900, Inki Dae wrote:
> 
> 
> > On 2016-08-25 17:42, Daniel Vetter wrote:
> > On Thu, Aug 25, 2016 at 05:06:55PM +0900, Inki Dae wrote:
> >>
> >>
> >> On 2016-08-24 20:57, Daniel Vetter wrote:
> >>> On Wed, Aug 24, 2016 at 08:44:24PM +0900, Inki Dae wrote:
> >>>> Hi,
> >>>>
> >>>> On 2016-08-23 18:41, Daniel Stone wrote:
> >>>>> Hi,
> >>>>>
> >>>>> On 22 August 2016 at 16:23, Rob Clark <robdclark at gmail.com> wrote:
> >>>>>> I guess a lot comes down to 'how long before hw designers bolt a CP to
> >>>>>> the thing'..  at that point, I think you especially don't want a
> >>>>>> per-blit kernel interface.
> >>>>>
> >>>>> Regardless of whether or not we want it, we already _have_ it, in the
> >>>>> form of V4L2 M2M. There are already a few IP blocks working on that, I
> >>>>> believe. If V4L2 <-> KMS interop is painful, well, we need to fix that
> >>>>> anyway ...
> >>>>
> >>>> So we are trying this. Our experience is that using V4L2 and DRM
> >>>> together on a Linux platform makes things too complicated, while
> >>>> integrating M2M devices such as the 2D engine and the post processor
> >>>> into DRM simplifies them. So we have been trying to move the existing
> >>>> V4L2 based drivers into DRM, except for the HW video codec - called
> >>>> MFC - the camera sensor and related things.
> >>>> I think the V4L2 and DRM frameworks may now confuse many engineers,
> >>>> because the same devices can be controlled through both V4L2 and DRM -
> >>>> maybe we will need more efforts like Laurent's Live source[1] in the
> >>>> future.
> >>>
> >>> Can you pls explain in more detail where working with both v4l and drm
> >>> drivers and making them cooperate using dma-bufs poses problems? We should
> >>> definitely fix that.
> >>
> >> I think it would be most Linux platforms - Android, Chrome and Tizen -
> >> which use OpenMAX/GStreamer for multimedia and X or
> >> Wayland/SurfaceFlinger for display.
> >
> > Yes, that's the use case. Where is the problem in making this happen? v4l
> > can import dma-bufs, drm can export them, and there's plenty of devices
> > shipping (afaik) that make use of exactly this pipeline. Can you pls
> > explain what problem you've hit trying to make this work on exynos?
> 
> No problem, it just makes things complicated, as I mentioned above - the
> stream operations of V4L2 - S_FMT, REQBUFS, QUERYBUF, QBUF, STREAMON and
> DQBUF - will never be as simple as DRM.  Do you think M2M devices should
> be controlled through V4L2 interfaces? And even a 2D accelerator? As far
> as I know, the graphics card on a desktop contains all of these devices -
> 2D/3D GPU, HW video codec and display controller - and they are all
> controlled through DRM interfaces. So we - ARM Exynos - are trying to
> move these things into the DRM world and also trying to implement more
> convenient interfaces like Marek did.

This is a misconception: there's nothing in the drm world requiring that
everything sits under the same drm device. All the work we've done over the
past years (dma-buf, reservations, fences, prime, changes in X.org and
wayland) was done precisely to make it possible to have a gfx device
consisting of multiple drm/v4l/whatever-else nodes. Especially for an SoC,
moving back to fake-integrating everything really isn't a good idea I think.
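
Roughly, the interop path looks like the sketch below: the drm side
allocates a dumb buffer and exports it as a dma-buf via PRIME, and the v4l2
side imports that fd into its queue with V4L2_MEMORY_DMABUF - the same
S_FMT/REQBUFS/QBUF/STREAMON/DQBUF sequence you listed. This is only an
illustration, not code from any driver: the device paths, resolution, pixel
format and the plain capture queue type are assumptions, and all error
handling is left out.

/* Sketch only: export a drm dumb buffer as a dma-buf and hand it to v4l2.
 * Assumes /dev/dri/card0 and /dev/video0 exist and that the v4l2 device
 * supports DMABUF import on a capture queue; no error handling. */
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <xf86drm.h>
#include <drm/drm.h>
#include <drm/drm_mode.h>
#include <linux/videodev2.h>

int main(void)
{
	int drm_fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
	int v4l_fd = open("/dev/video0", O_RDWR | O_CLOEXEC);
	int dmabuf_fd;

	/* Allocate a linear buffer on the drm side. */
	struct drm_mode_create_dumb creq;
	memset(&creq, 0, sizeof(creq));
	creq.width = 1280;
	creq.height = 720;
	creq.bpp = 32;
	ioctl(drm_fd, DRM_IOCTL_MODE_CREATE_DUMB, &creq);

	/* Export the gem handle as a dma-buf fd (PRIME). */
	drmPrimeHandleToFD(drm_fd, creq.handle, DRM_CLOEXEC, &dmabuf_fd);

	/* Tell v4l2 what format to produce into that buffer. */
	struct v4l2_format fmt;
	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	fmt.fmt.pix.width = creq.width;
	fmt.fmt.pix.height = creq.height;
	fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_XBGR32;
	ioctl(v4l_fd, VIDIOC_S_FMT, &fmt);

	/* Set up a DMABUF queue and queue the imported buffer. */
	struct v4l2_requestbuffers req;
	memset(&req, 0, sizeof(req));
	req.count = 1;
	req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	req.memory = V4L2_MEMORY_DMABUF;
	ioctl(v4l_fd, VIDIOC_REQBUFS, &req);

	struct v4l2_buffer buf;
	memset(&buf, 0, sizeof(buf));
	buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	buf.memory = V4L2_MEMORY_DMABUF;
	buf.index = 0;
	buf.m.fd = dmabuf_fd;
	ioctl(v4l_fd, VIDIOC_QBUF, &buf);

	enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	ioctl(v4l_fd, VIDIOC_STREAMON, &type);

	/* Wait for the device to fill the buffer; the same dma-buf fd can
	 * then be scanned out or passed on to the next device, no copies. */
	ioctl(v4l_fd, VIDIOC_DQBUF, &buf);
	return 0;
}

For an m2m device you'd do the same dance on both the OUTPUT and CAPTURE
queues, but the point stands: the buffer crosses the subsystem boundary as
a plain fd, without any copies and without both blocks living under one drm
device.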

And wrt drm being simpler than v4l - I don't think drm is any simpler, at
least if you look at some of the more featureful render drivers.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

