[PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map time
Ruhl, Michael J
michael.j.ruhl at intel.com
Fri Jun 25 17:38:07 UTC 2021
>-----Original Message-----
>From: Thomas Hellström <thomas.hellstrom at linux.intel.com>
>Sent: Friday, June 25, 2021 12:18 PM
>To: Ruhl, Michael J <michael.j.ruhl at intel.com>; intel-gfx at lists.freedesktop.org; dri-devel at lists.freedesktop.org
>Cc: Auld, Matthew <matthew.auld at intel.com>
>Subject: Re: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map time
>
>Hi, Michael,
>
>thanks for looking at this.
>
>On 6/25/21 6:02 PM, Ruhl, Michael J wrote:
>>> -----Original Message-----
>>> From: dri-devel <dri-devel-bounces at lists.freedesktop.org> On Behalf Of
>>> Thomas Hellström
>>> Sent: Thursday, June 24, 2021 2:31 PM
>>> To: intel-gfx at lists.freedesktop.org; dri-devel at lists.freedesktop.org
>>> Cc: Thomas Hellström <thomas.hellstrom at linux.intel.com>; Auld, Matthew <matthew.auld at intel.com>
>>> Subject: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map time
>>>
>>> Until we support p2p dma or as a complement to that, migrate data
>>> to system memory at dma-buf map time if possible.
>>>
>>> Signed-off-by: Thomas Hellström <thomas.hellstrom at linux.intel.com>
>>> ---
>>> drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 9 ++++++++-
>>> 1 file changed, 8 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>> b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>> index 616c3a2f1baf..a52f885bc09a 100644
>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>> @@ -25,7 +25,14 @@ static struct sg_table *i915_gem_map_dma_buf(struct dma_buf_attachment *attachme
>>> struct scatterlist *src, *dst;
>>> int ret, i;
>>>
>>> - ret = i915_gem_object_pin_pages_unlocked(obj);
>>> + ret = i915_gem_object_lock_interruptible(obj, NULL);
>> Hmm, I believe in most cases that the caller should be holding the
>> lock (object dma-resv) on this object already.
>
>Yes, I agree. In particular for other instances of our own driver, at
>least since the dma_resv introduction.
>
>But I also think that's a pre-existing bug, since
>i915_gem_object_pin_pages_unlocked() will also take the lock.
Ouch yes. Missed that.
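(For reference, roughly how I read the conflict now; this is a paraphrase,
as I recall the real helper runs the lock/pin under a ww transaction with
backoff rather than the simplified form here:

	/*
	 * Simplified sketch of what i915_gem_object_pin_pages_unlocked()
	 * does: the dma-buf caller may already hold the object's dma-resv,
	 * and this takes the same lock again.
	 */
	static int pin_pages_unlocked_sketch(struct drm_i915_gem_object *obj)
	{
		int err;

		err = i915_gem_object_lock_interruptible(obj, NULL);
		if (err)
			return err;

		err = i915_gem_object_pin_pages(obj);
		i915_gem_object_unlock(obj);
		return err;
	}

so taking the lock explicitly in the map function only makes that
double-lock more visible.)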
>I think we need to initially make the exporter dynamic-capable to
>resolve this, and drop the locking here completely, as the dma-buf docs
>say that we're then guaranteed to get called with the object lock held.
>
>I figure if we make the exporter dynamic, we need to migrate already at
>dma_buf_pin time so we don't pin the object in the wrong location.
Making the exporter dynamic (providing ops->pin) is optional, but a dynamic
importer (providing ops->move_notify) is required.
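To spell that out (rough sketch only; the field names are from
include/linux/dma-buf.h as I remember them, and the example_* functions are
made up for illustration):

#include <linux/dma-buf.h>

static int example_pin(struct dma_buf_attachment *attach)
{
	/* exporter pins (and, for us, could migrate) the backing store */
	return 0;
}

static void example_unpin(struct dma_buf_attachment *attach)
{
}

/* Exporter side: providing .pin/.unpin is what makes the exporter dynamic. */
static const struct dma_buf_ops example_exporter_ops = {
	.pin	= example_pin,
	.unpin	= example_unpin,
	/* .map_dma_buf, .unmap_dma_buf, .release, ... as before */
};

static void example_move_notify(struct dma_buf_attachment *attach)
{
	/* importer drops its mapping here and re-maps later */
}

/* Importer side: a dynamic importer must supply move_notify at attach time. */
static const struct dma_buf_attach_ops example_importer_ops = {
	.move_notify	= example_move_notify,
};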
With that in mind, it would seem that there are three possible places where
the migrate could be attempted:
1) in the ops->pin function (export_dynamic != import_dynamic), during attach
2) in the ops->pin function (export_dynamic and !CONFIG_DMABUF_MOVE_NOTIFY), during mapping
3) and possibly in ops->map_dma_buf (export_dynamic and CONFIG_DMABUF_MOVE_NOTIFY)
Since one of those possibilities has to be in the mapping function, it seems
that if we can figure out the locking, the migrate should probably be
available here as well (rough sketch of the pin-time variant below).
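Something along these lines is what I have in mind for the pin-time cases
(very rough sketch; i915_gem_dmabuf_pin is a made-up name, but the helpers
are the existing ones from this file and from your patch, as far as I can
tell):

static int i915_gem_dmabuf_pin(struct dma_buf_attachment *attach)
{
	struct drm_i915_gem_object *obj = dma_buf_to_obj(attach->dmabuf);
	int ret;

	/* dma-buf core calls this with the object (dma-resv) lock held */
	ret = i915_gem_object_migrate(obj, NULL, INTEL_REGION_SMEM);
	if (ret)
		return ret;

	return i915_gem_object_pin_pages(obj);
}

For case 3 the same migrate + pin pair would stay in i915_gem_map_dma_buf(),
just without taking the lock there, since the core then guarantees it is
already held.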
Mike
>/Thomas
>
>
>>
>> I know for the dynamic version of dma-buf, there is a check to make
>> sure that the lock is held when called.
>>
>> I think you will run into some issues if you try to get it here as well.
>>
>> Mike
>>
>>> + if (ret)
>>> + return ERR_PTR(ret);
>>> +
>>> + ret = i915_gem_object_migrate(obj, NULL, INTEL_REGION_SMEM);
>>> + if (!ret)
>>> + ret = i915_gem_object_pin_pages(obj);
>>> + i915_gem_object_unlock(obj);
>>> if (ret)
>>> goto err;
>>>
>>> --
>>> 2.31.1