[PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map time
Thomas Hellström
thomas.hellstrom at linux.intel.com
Fri Jun 25 19:10:29 UTC 2021
On 6/25/21 9:07 PM, Ruhl, Michael J wrote:
>> -----Original Message-----
>> From: Thomas Hellström <thomas.hellstrom at linux.intel.com>
>> Sent: Friday, June 25, 2021 2:50 PM
>> To: Ruhl, Michael J <michael.j.ruhl at intel.com>; intel-
>> gfx at lists.freedesktop.org; dri-devel at lists.freedesktop.org
>> Cc: Auld, Matthew <matthew.auld at intel.com>
>> Subject: Re: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map
>> time
>>
>> Hi, Mike,
>>
>> On 6/25/21 7:57 PM, Ruhl, Michael J wrote:
>>>> -----Original Message-----
>>>> From: Thomas Hellström <thomas.hellstrom at linux.intel.com>
>>>> Sent: Friday, June 25, 2021 1:52 PM
>>>> To: Ruhl, Michael J <michael.j.ruhl at intel.com>; intel-
>>>> gfx at lists.freedesktop.org; dri-devel at lists.freedesktop.org
>>>> Cc: Auld, Matthew <matthew.auld at intel.com>
>>>> Subject: Re: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf
>> map
>>>> time
>>>>
>>>>
>>>> On 6/25/21 7:38 PM, Ruhl, Michael J wrote:
>>>>>> -----Original Message-----
>>>>>> From: Thomas Hellström <thomas.hellstrom at linux.intel.com>
>>>>>> Sent: Friday, June 25, 2021 12:18 PM
>>>>>> To: Ruhl, Michael J <michael.j.ruhl at intel.com>; intel-
>>>>>> gfx at lists.freedesktop.org; dri-devel at lists.freedesktop.org
>>>>>> Cc: Auld, Matthew <matthew.auld at intel.com>
>>>>>> Subject: Re: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf
>>>> map
>>>>>> time
>>>>>>
>>>>>> Hi, Michael,
>>>>>>
>>>>>> thanks for looking at this.
>>>>>>
>>>>>> On 6/25/21 6:02 PM, Ruhl, Michael J wrote:
>>>>>>>> -----Original Message-----
>>>>>>>> From: dri-devel <dri-devel-bounces at lists.freedesktop.org> On
>> Behalf
>>>> Of
>>>>>>>> Thomas Hellström
>>>>>>>> Sent: Thursday, June 24, 2021 2:31 PM
>>>>>>>> To: intel-gfx at lists.freedesktop.org; dri-devel at lists.freedesktop.org
>>>>>>>> Cc: Thomas Hellström <thomas.hellstrom at linux.intel.com>; Auld,
>>>>>> Matthew
>>>>>>>> <matthew.auld at intel.com>
>>>>>>>> Subject: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf
>> map
>>>>>> time
>>>>>>>> Until we support p2p dma or as a complement to that, migrate data
>>>>>>>> to system memory at dma-buf map time if possible.
>>>>>>>>
>>>>>>>> Signed-off-by: Thomas Hellström
>> <thomas.hellstrom at linux.intel.com>
>>>>>>>> ---
>>>>>>>> drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 9 ++++++++-
>>>>>>>> 1 file changed, 8 insertions(+), 1 deletion(-)
>>>>>>>>
>>>>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>>>>>>> b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>>>>>>> index 616c3a2f1baf..a52f885bc09a 100644
>>>>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>>>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>>>>>>> @@ -25,7 +25,14 @@ static struct sg_table *i915_gem_map_dma_buf(struct dma_buf_attachment *attachme
>>>>>>>> struct scatterlist *src, *dst;
>>>>>>>> int ret, i;
>>>>>>>>
>>>>>>>> - ret = i915_gem_object_pin_pages_unlocked(obj);
>>>>>>>> + ret = i915_gem_object_lock_interruptible(obj, NULL);
>>>>>>> Hmm, I believe that in most cases the caller should already be holding the
>>>>>>> lock (the object's dma-resv) on this object.
>>>>>> Yes, I agree, in particular for other instances of our own driver, at
>>>>>> least since the dma_resv introduction.
>>>>>>
>>>>>> But I also think that's a pre-existing bug, since
>>>>>> i915_gem_object_pin_pages_unlocked() will also take the lock.
>>>>> Ouch yes. Missed that.
>>>>>
>>>>>> I think we need to initially make the exporter dynamic-capable to
>>>>>> resolve this, and drop the locking here completely, as the dma-buf docs say
>>>>>> that we're then guaranteed to get called with the object lock held.
>>>>>>
>>>>>> I figure if we make the exporter dynamic, we need to migrate already at
>>>>>> dma_buf_pin time so we don't pin the object in the wrong location.
>>>>> Making the exporter dynamic (ops->pin is available) is optional, but a
>>>>> dynamic importer (ops->move_notify) is required.
>>>>>
>>>>> With that in mind, it would seem that there are three possible combinations
>>>>> for the migration to be attempted:
>>>>>
>>>>> 1) in the ops->pin function (export_dynamic != import_dynamic, during attach)
>>>>> 2) in the ops->pin function (export_dynamic and !CONFIG_DMABUF_MOVE_NOTIFY) during mapping
>>>>> 3) and possibly in ops->map_dma_buf (export_dynamic and CONFIG_DMABUF_MOVE_NOTIFY)
>>>>> Since one possibility has to be in the mapping function, it seems that if we
>>>>> can figure out the locking, the migration should probably be available here.
>>>>> Mike
>>>> So perhaps, just to initially fix the bug, we could implement NOP
>>>> pin() and unpin() callbacks, drop the locking in map_attach(), and
>>>> replace it with an assert_object_held();
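
Something like this, perhaps (rough, untested sketch just to make the idea
concrete; the callback names are placeholders, and per the dma-buf docs a
dynamic exporter is then guaranteed to be called with the object lock held):

static int i915_gem_dmabuf_pin(struct dma_buf_attachment *attach)
{
	/* No-op for now: only here to make the exporter dynamic-capable. */
	return 0;
}

static void i915_gem_dmabuf_unpin(struct dma_buf_attachment *attach)
{
	/* Nothing to undo. */
}

wired up as .pin / .unpin in i915_dmabuf_ops, and then in
i915_gem_map_dma_buf() roughly

-	ret = i915_gem_object_pin_pages_unlocked(obj);
+	assert_object_held(obj);
+	ret = i915_gem_object_pin_pages(obj);
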
>>> That is the sticky part of the move notify API.
>>>
>>> If you do the attach_dynamic you have to have an ops with move_notify.
>>>
>>> (https://elixir.bootlin.com/linux/v5.13-rc7/source/drivers/dma-buf/dma-buf.c#L730)
>>> If you don't have that, i.e. just the pin interface, the attach will be
>>> rejected, and you will not get the callbacks.
>> I understood it as: the move_notify requirement only applies if the
>> *importer* declares itself dynamic. A dynamic exporter could choose whether to
>> call move_notify() on eviction or to pin and never evict. If the
>> importer is non-dynamic, the core calls pin() and the only choice is to
>> pin and never evict.
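
The check behind the link above is on the importer's ops (quoting the dma-buf
core from memory, so double-check the exact line):

	/* in dma_buf_dynamic_attach() */
	if (WARN_ON(importer_ops && !importer_ops->move_notify))
		return ERR_PTR(-EINVAL);

so exporter-side pin()/unpin() alone shouldn't cause attaches to be rejected.
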
>>
>> So if we temporarily choose to pin and never evict for *everything* (as
>> the current code does now), I think we should be good for now, and then
>> we can implement all the fancy p2p and move_notify stuff on top of that.
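
In that pin-everything world the migration could then also move from the map
callback into pin() itself, instead of the NOP variant sketched further up, so
we never pin the object in the wrong location. Rough sketch only, assuming the
i915_gem_object_migrate() helper from earlier in this series (ww-context
plumbing hand-waved):

static int i915_gem_dmabuf_pin(struct dma_buf_attachment *attach)
{
	struct drm_i915_gem_object *obj = dma_buf_to_obj(attach->dmabuf);
	int err;

	/* The dma-buf core holds the object lock (dma-resv) around pin(). */
	assert_object_held(obj);

	/* Migrate to system memory first so we don't pin the object in LMEM. */
	err = i915_gem_object_migrate(obj, NULL, INTEL_REGION_SMEM);
	if (err)
		return err;

	return i915_gem_object_pin_pages(obj);
}
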
> /sigh.
>
> You are correct. I was mistakenly placing the pin API (dma_buf_ops) in the
> attach_ops. 😐 Must be Friday.
>
> Upon further reflection, I think that your path will work.
>
> However, is doing a pin (with no locking) from the dma_buf_mapping any different
> from using the pin API + export_dynamic?
>
> M
Yes, it differs only for dynamic importers that would otherwise never pin;
for those we could mistakenly evict the object without ever having
implemented the move_notify call. If we pin, we never evict.
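
To illustrate with a hypothetical helper (not actual i915 code), the page pin
count is the only thing the eviction side has to go by today:

/* Hypothetical illustration only. */
static bool exported_object_unprotected(struct drm_i915_gem_object *obj)
{
	/*
	 * A page pin (e.g. taken in the dma-buf pin() or map callback) is
	 * currently the only thing that keeps an exported object in place.
	 */
	if (i915_gem_object_has_pinned_pages(obj))
		return false;

	/*
	 * Unpinned: a dynamic importer relies on move_notify() being called
	 * before we move the object, and we don't implement that yet, so an
	 * eviction here would happen behind the importer's back.
	 */
	return true;
}
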
/Thomas
>> /Thomas
>>
>>
>>> So I think that the only thing we can do for now is to drop the locking
>>> and add the assert_object_held();
>>>
>>> M
>>
>>
>>>> /Thomas
>>>>