about mmap dma-buf and sync

Thomas Hellstrom thellstrom at vmware.com
Mon Aug 24 10:42:26 PDT 2015


On 08/24/2015 07:12 PM, Daniel Stone wrote:
> Hi,
>
> On 24 August 2015 at 18:10, Thomas Hellstrom <thellstrom at vmware.com> wrote:
>> On 08/24/2015 07:04 PM, Daniel Stone wrote:
>>> On 24 August 2015 at 17:56, Thomas Hellstrom <thellstrom at vmware.com> wrote:
>>>> On 08/24/2015 05:52 PM, Daniel Stone wrote:
> >>>>> I still don't think this obviates the need for batching: consider
>>>>> the case where you update two disjoint screen regions and want them
>>>>> both flushed. Either you issue two separate sync calls (which can be
>>>>> disadvantageous performance-wise on some hardware / setups), or you
>>>>> accumulate the regions and only flush later. So either two ioctls (one
>>>>> in the style of dirtyfb and one to perform the sync/flush; you can
>>>>> shortcut to assume the full buffer was damaged if the former is
>>>>> missing), or one like this:
>>>>>
>>>>> struct dma_buf_sync_2d {
>>>>>         enum dma_buf_sync_flags flags;
>>>>>
>>>>>         __u64 stride_bytes;
>>>>>         __u32 bytes_per_pixel;
>>>>>         __u32 num_regions;
>>>>>
>>>>>         struct {
>>>>>                 __u64 x;
>>>>>                 __u64 y;
>>>>>                 __u64 width;
>>>>>                 __u64 height;
>>>>>         } regions[];
>>>>> };
>>>> Fine with me, although perhaps bytes_per_pixel is a bit redundant?
>>> Redundant how? It's not implicit in stride.
>> For flushing purposes, isn't it possible to cover all cases by assuming
>> bytes_per_pixel=1? Not that it matters much.
> Sure, though in that case best to replace x with line_byte_offset or
> something, because otherwise I guarantee you everyone will get it
> wrong, and it'll be a pain to track down. Like how I managed to
> misread it now. :)

OK, yeah, you have a point. IMO let's go with your proposal.
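
For concreteness, here's a rough userspace sketch of what a batched flush with
that layout could look like, assuming bytes_per_pixel = 1 and x renamed to
line_byte_offset as discussed. The ioctl name and number, the flag value and
the helper function are placeholders for illustration only, not a merged ABI:

/*
 * Illustrative only: field names follow the dma_buf_sync_2d proposal
 * quoted above (flags simplified to a plain u64); the ioctl number,
 * DMA_BUF_IOCTL_SYNC_2D and flush_two_regions() are made-up placeholders.
 */
#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

struct dma_buf_sync_2d_region {
        uint64_t line_byte_offset;  /* "x", in bytes since bytes_per_pixel == 1 */
        uint64_t y;
        uint64_t width;             /* likewise in bytes */
        uint64_t height;
};

struct dma_buf_sync_2d {
        uint64_t flags;             /* DMA_BUF_SYNC_*-style flags; placeholder */
        uint64_t stride_bytes;
        uint32_t bytes_per_pixel;
        uint32_t num_regions;
        struct dma_buf_sync_2d_region regions[];
};

/* Hypothetical ioctl number -- nothing like this exists upstream. */
#define DMA_BUF_IOCTL_SYNC_2D _IOW('b', 0x7f, struct dma_buf_sync_2d)

/* Flush two disjoint damage regions with a single ioctl instead of two. */
static int flush_two_regions(int dmabuf_fd, uint64_t stride_bytes)
{
        size_t sz = sizeof(struct dma_buf_sync_2d) +
                    2 * sizeof(struct dma_buf_sync_2d_region);
        struct dma_buf_sync_2d *sync = calloc(1, sz);
        int ret;

        if (!sync)
                return -1;

        sync->flags = 0;            /* e.g. "CPU writes done"; placeholder value */
        sync->stride_bytes = stride_bytes;
        sync->bytes_per_pixel = 1;  /* regions expressed in byte coordinates */
        sync->num_regions = 2;

        /* 256x64-byte rectangle at the top-left corner. */
        sync->regions[0] = (struct dma_buf_sync_2d_region){
                .line_byte_offset = 0, .y = 0, .width = 256, .height = 64,
        };
        /* 128x32-byte rectangle starting 512 bytes into line 300. */
        sync->regions[1] = (struct dma_buf_sync_2d_region){
                .line_byte_offset = 512, .y = 300, .width = 128, .height = 32,
        };

        ret = ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC_2D, sync);
        free(sync);
        return ret;
}

The point of the sketch is just the batching: both damaged rectangles reach
the driver in one call, rather than one full sync per region.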

Tiago, is this OK with you?

/Thomas

> Cheers,
> Daniel