DRM DMA Engine
Ilia Mirkin
imirkin at alum.mit.edu
Thu Jun 16 12:39:10 UTC 2016
On Thu, Jun 16, 2016 at 8:09 AM, Jose Abreu <Jose.Abreu at synopsys.com> wrote:
> Hi Daniel,
>
> Sorry to bother you again. I promise this is the last time :)
>
> On 15-06-2016 11:15, Daniel Vetter wrote:
>> On Wed, Jun 15, 2016 at 11:48 AM, Jose Abreu <Jose.Abreu at synopsys.com> wrote:
>>> On 15-06-2016 09:52, Daniel Vetter wrote:
>>>> On Tue, Jun 14, 2016 at 1:19 PM, Jose Abreu <Jose.Abreu at synopsys.com> wrote:
>>>>>> I assume that xilinx VDMA is the only way to feed pixel data into your
>>>>>> display pipeline. Under that assumption:
>>>>>>
>>>>>> drm_plane should map to Xilinx VDMA, and the drm_plane->drm_crtc link
>>>>>> would represent the dma channel. With atomic you can subclass
>>>>>> drm_plane/crtc_state structures to store all the runtime configuration in
>>>>>> there.
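As a rough illustration of that subclassing, here is a minimal sketch; the
struct, field and function names are invented for this example, and the
helper signatures are the ones from the kernels current at the time of this
thread:

#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/dma-mapping.h>
#include <drm/drm_atomic_helper.h>

/* Driver-private plane state wrapping drm_plane_state; the extra fields
 * hold the runtime VDMA configuration and are purely illustrative. */
struct xlnx_plane_state {
        struct drm_plane_state base;
        dma_addr_t paddr;       /* scanout address to program into the VDMA */
        u32 stride;             /* line stride for the VDMA channel */
};

static inline struct xlnx_plane_state *
to_xlnx_plane_state(struct drm_plane_state *state)
{
        return container_of(state, struct xlnx_plane_state, base);
}

/* .atomic_duplicate_state: copy the core state, then our private fields.
 * A matching .atomic_destroy_state hook would free the structure again. */
static struct drm_plane_state *
xlnx_plane_duplicate_state(struct drm_plane *plane)
{
        struct xlnx_plane_state *old = to_xlnx_plane_state(plane->state);
        struct xlnx_plane_state *copy;

        copy = kzalloc(sizeof(*copy), GFP_KERNEL);
        if (!copy)
                return NULL;

        __drm_atomic_helper_plane_duplicate_state(plane, &copy->base);
        copy->paddr = old->paddr;
        copy->stride = old->stride;

        return &copy->base;
}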
>>>>>>
>>>>>> The actual buffer itself would be represented by a drm_framebuffer,
>>>>>> which wraps either a shmem gem or a cma gem object.
>>>>>>
>>>>>> If you want to know about the callbacks used by the atomic helpers to push
>>>>>> out plane updates, look at the hooks drm_atomic_helper_commit_planes()
>>>>>> (and the related functions, see kerneldoc) calls.
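A rough sketch of what those hooks can look like for a CMA-backed plane
follows; the xlnx_* names are invented for this example, and the callback
signatures are the ones the helpers used around the kernels this thread
refers to:

#include <drm/drm_atomic_helper.h>
#include <drm/drm_plane_helper.h>
#include <drm/drm_fb_cma_helper.h>
#include <drm/drm_gem_cma_helper.h>

/* Validate the requested plane state before the commit phase. */
static int xlnx_plane_atomic_check(struct drm_plane *plane,
                                   struct drm_plane_state *state)
{
        /* a real driver would check the fb format, size and position
         * against what the VDMA engine can actually scan out */
        return 0;
}

/* Called by drm_atomic_helper_commit_planes() for each affected plane. */
static void xlnx_plane_atomic_update(struct drm_plane *plane,
                                     struct drm_plane_state *old_state)
{
        struct drm_plane_state *state = plane->state;
        struct drm_gem_cma_object *obj;
        dma_addr_t paddr;

        if (!state->fb)
                return;

        /* CMA-backed framebuffer: the DMA address can be handed straight
         * to the VDMA engine */
        obj = drm_fb_cma_get_gem_obj(state->fb, 0);
        paddr = obj->paddr + state->fb->offsets[0];

        /* program the VDMA channel with paddr, state->fb->pitches[0] and
         * the crtc_w/crtc_h from the state, then (re)start it */
}

static const struct drm_plane_helper_funcs xlnx_plane_helper_funcs = {
        .atomic_check  = xlnx_plane_atomic_check,
        .atomic_update = xlnx_plane_atomic_update,
};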
>>>>>>
>>>>>> I hope this helps a bit more.
>>>>>> -Daniel
>>>>> Thanks a lot! With your help I was able to implement all the
>>>>> needed logic. Sorry to bother you but I have one more question.
>>>>> Right now I can initialize and configure the vdma correctly but I
>>>>> can only send one frame. I guess when the dma completes
>>>>> transmission I need to ask drm for a new frame, right? Because
>>>>> the commit function starts the vdma correctly but then the dma
>>>>> halts waiting for a new descriptor.
>>>> DRM has a continuous scanout model, i.e. when userspace doesn't give
>>>> you a new frame you're supposed to keep scanning out the current one.
>>>> So before the next vblank period starts you need to re-arm your upload
>>>> code with the same drm_framebuffer if userspace hasn't supplied a new
>>>> one in the meantime.
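One way to picture that re-arming, as a hedged sketch only (the interrupt
handler, the xlnx_* structures and the submit helper are all hypothetical
and depend entirely on how the VDMA is driven):

#include <linux/interrupt.h>
#include <drm/drm_crtc.h>

/* Illustrative driver-private structure: just enough to reach the plane. */
struct xlnx_drm_private {
        struct drm_plane plane;
};

/* Stub standing in for whatever actually queues a descriptor for 'fb'
 * on the VDMA channel (hypothetical, driver specific). */
static void xlnx_vdma_submit_fb(struct xlnx_drm_private *priv,
                                struct drm_framebuffer *fb)
{
        /* build and submit a VDMA descriptor pointing at fb's memory */
}

/* Hypothetical frame-done interrupt from the VDMA engine. DRM expects the
 * current framebuffer to keep being scanned out until userspace commits a
 * new one, so if nothing new arrived we simply resubmit whatever the plane
 * currently points at. */
static irqreturn_t xlnx_vdma_frame_done(int irq, void *data)
{
        struct xlnx_drm_private *priv = data;
        struct drm_framebuffer *fb = priv->plane.state->fb;

        if (fb)
                xlnx_vdma_submit_fb(priv, fb);  /* re-queue the same frame */

        return IRQ_HANDLED;
}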
>>>>
>>>> This is different from v4l, where userspace has to supply each frame
>>>> (and the kernel gets angry and signals a queue underrun when there
>>>> aren't enough frames). This is because drm is geared towards desktops,
>>>> where it's perfectly normal to show the exact same frame for a long
>>>> time.
>>>> -Daniel
>>> Thanks, I was thinking this was similar to v4l. I am now able to
>>> send multiple frames, so it is finally working! I have one little
>>> implementation question: the controller that I am using supports
>>> deep color mode, but I am using the FB CMA helpers to create the
>>> framebuffer and I've seen that the supported bpp in these helpers
>>> only goes up to 32, right? Does this mean that with these helpers
>>> I can't use deep color? Can I implement this deep color mode
>>> (48bpp) using a custom fb, or do I also need custom gem allocation
>>> functions (right now I am using the GEM CMA helpers)?
>> Surprising that the cma helpers don't take pixel_format into account.
>> If this really doesn't work, please fix up the cma helpers rather than
>> rolling your own copypasta ;-)
>>
>> Note that the fbdev emulation itself (maybe that's what threw you off)
>> only supports legacy RGB formats up to 32 bits. But native KMS can
>> support anything; we just might need to add the DRM_FOURCC codes for
>> that.
>> -Daniel
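To make that last point concrete: on the KMS side the plane simply
advertises the fourcc codes it can scan out, so deep color is mostly a
matter of listing the right formats. A sketch with invented xlnx_* names;
DRM_FORMAT_XRGB2101010 already exists, while a 48bpp code is exactly the
kind of DRM_FOURCC entry mentioned above that might still need adding:

#include <linux/kernel.h>
#include <drm/drmP.h>
#include <drm/drm_crtc.h>

static const uint32_t xlnx_plane_formats[] = {
        DRM_FORMAT_XRGB8888,     /* 32bpp, what the fbdev emulation uses */
        DRM_FORMAT_XRGB2101010,  /* 30-bit deep color, already defined */
        /* a 48bpp (16 bits per channel) fourcc would be added to
         * include/uapi/drm/drm_fourcc.h if it doesn't exist yet */
};

static int xlnx_init_primary_plane(struct drm_device *drm,
                                   struct drm_plane *plane,
                                   const struct drm_plane_funcs *funcs)
{
        return drm_universal_plane_init(drm, plane, 1 /* CRTC 0 */, funcs,
                                        xlnx_plane_formats,
                                        ARRAY_SIZE(xlnx_plane_formats),
                                        DRM_PLANE_TYPE_PRIMARY, NULL);
}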
>
> So, I ended up using 32 bits and everything is working fine! I
> tested using [1] and [2], but now I have kind of a dumb question:
> I want to use the new driver that I created as a secondary output
> of my desktop so that I can play videos with mplayer, but I have
> not been able to do this. If I check my Linux display settings,
> only one display is detected, although both video cards are
> present in /dev/dri (the native one and the one I added). Does
> the driver need something additional to do this, or is it only a
> matter of my X configuration? I tried editing that configuration
> but it still doesn't work. I believe that because my driver is
> not being probed at runtime, the display is not being created by
> X. Is this correct?
Have a look at
https://nouveau.freedesktop.org/wiki/Optimus/
specifically the section titled "Using outputs on discrete GPU". If
you're still having trouble, please provide an Xorg log.
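(Roughly, that section amounts to finding the two providers with
"xrandr --listproviders", pointing the new device's outputs at the renderer
with "xrandr --setprovideroutputsource <sink> <source>", and then enabling
the output, e.g. with "xrandr --auto"; the exact provider names come from
the first command.)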