[PATCH 1/6] drm/ttm: Add unmapping of the entire device address space
Daniel Vetter
daniel at ffwll.ch
Fri Jun 12 06:54:25 UTC 2020
On Thu, Jun 11, 2020 at 11:06:16AM -0400, Andrey Grodzovsky wrote:
>
> On 6/11/20 2:35 AM, Thomas Hellström (Intel) wrote:
> >
> > On 6/10/20 11:19 PM, Andrey Grodzovsky wrote:
> > >
> > > On 6/10/20 4:30 PM, Thomas Hellström (Intel) wrote:
> > > >
> > > > On 6/10/20 5:30 PM, Daniel Vetter wrote:
> > > > > On Wed, Jun 10, 2020 at 04:05:04PM +0200, Christian König wrote:
> > > > > > On 10.06.20 at 15:54, Andrey Grodzovsky wrote:
> > > > > > >
> > > > > > > On 6/10/20 6:15 AM, Thomas Hellström (Intel) wrote:
> > > > > > > >
> > > > > > > > On 6/9/20 7:21 PM, Koenig, Christian wrote:
> > > > > > > > >
> > > > > > > > > On 09.06.2020 18:37, "Grodzovsky, Andrey"
> > > > > > > > > <Andrey.Grodzovsky at amd.com> wrote:
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > On 6/5/20 2:40 PM, Christian König wrote:
> > > > > > > > > > On 05.06.20 at 16:29, Andrey Grodzovsky wrote:
> > > > > > > > > >>
> > > > > > > > > >> On 5/11/20 2:45 AM, Christian König wrote:
> > > > > > > > > >>> On 09.05.20 at 20:51, Andrey Grodzovsky wrote:
> > > > > > > > > >>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky at amd.com>
> > > > > > > > > >>>> ---
> > > > > > > > > >>>> drivers/gpu/drm/ttm/ttm_bo.c | 22 +++++++++++++++++++++-
> > > > > > > > > >>>> include/drm/ttm/ttm_bo_driver.h | 2 ++
> > > > > > > > > >>>> 2 files changed, 23 insertions(+), 1 deletion(-)
> > > > > > > > > >>>>
> > > > > > > > > >>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
> > > > > > > > > >>>> index c5b516f..eae61cc 100644
> > > > > > > > > >>>> --- a/drivers/gpu/drm/ttm/ttm_bo.c
> > > > > > > > > >>>> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> > > > > > > > > >>>> @@ -1750,9 +1750,29 @@ void ttm_bo_unmap_virtual(struct ttm_buffer_object *bo)
> > > > > > > > > >>>>  	ttm_bo_unmap_virtual_locked(bo);
> > > > > > > > > >>>>  	ttm_mem_io_unlock(man);
> > > > > > > > > >>>>  }
> > > > > > > > > >>>> +EXPORT_SYMBOL(ttm_bo_unmap_virtual);
> > > > > > > > > >>>> +void ttm_bo_unmap_virtual_address_space(struct ttm_bo_device *bdev)
> > > > > > > > > >>>> +{
> > > > > > > > > >>>> +	struct ttm_mem_type_manager *man;
> > > > > > > > > >>>> +	int i;
> > > > > > > > > >>>> -EXPORT_SYMBOL(ttm_bo_unmap_virtual);
> > > > > > > > > >>>
> > > > > > > > > >>>> +	for (i = 0; i < TTM_NUM_MEM_TYPES; i++) {
> > > > > > > > > >>>> +		man = &bdev->man[i];
> > > > > > > > > >>>> +		if (man->has_type && man->use_type)
> > > > > > > > > >>>> +			ttm_mem_io_lock(man, false);
> > > > > > > > > >>>> +	}
> > > > > > > > > >>>
> > > > > > > > > >>> You should drop that, it will just result in a
> > > > > > > > > >>> deadlock warning for Nouveau and has no effect at all.
> > > > > > > > > >>>
> > > > > > > > > >>> Apart from that looks good to me,
> > > > > > > > > >>> Christian.
> > > > > > > > > >>
> > > > > > > > > >>
> > > > > > > > > >> As I am considering re-including this in V2 of the
> > > > > > > > > >> patchset, can you please clarify why this will have no
> > > > > > > > > >> effect at all?
> > > > > > > > > >
> > > > > > > > > > The locks are exclusive for Nouveau to allocate/free
> > > > > > > > > > the io address space.
> > > > > > > > > >
> > > > > > > > > > Since we don't do this here we don't need the locks.
> > > > > > > > > >
> > > > > > > > > > Christian.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > So basically calling unmap_mapping_range doesn't require
> > > > > > > > > any extra locking around it, and whatever locks are taken
> > > > > > > > > within the function should be enough?
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > I think so, yes.
> > > > > > > > >
> > > > > > > > > Christian.
> > > > > > > > Yes, that's true. However, without the bo reservation,
> > > > > > > > nothing stops a PTE from being immediately re-faulted back
> > > > > > > > again, even while unmap_mapping_range() is running.
> > > > > > > >
> > > > > > > Can you explain this in more detail - specifically, which
> > > > > > > function reserves the BO, and why would BO reservation
> > > > > > > prevent a re-fault of the PTE?
> > > > > > >
> > > > > > Thomas is talking about ttm_bo_reserve()/ttm_bo_unreserve(),
> > > > > > but we don't need this here because we unmap everything since
> > > > > > the whole device is gone, rather than manipulating a single BO.
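> > > > > >
> > > > > > To illustrate, the whole-device variant can boil down to a
> > > > > > single call (untested sketch; holelen=0 means "to the end of
> > > > > > the file", and unmap_mapping_range() serializes against faults
> > > > > > internally via the i_mmap lock, so no per-manager io locks are
> > > > > > needed around it):
> > > > > >
> > > > > > void ttm_bo_unmap_virtual_address_space(struct ttm_bo_device *bdev)
> > > > > > {
> > > > > > 	/* Zap every CPU mapping of the device file in one go. */
> > > > > > 	unmap_mapping_range(bdev->dev_mapping, 0, 0, 1);
> > > > > > }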
> > > > > >
> > > > > > > > So the device-removed flag needs to be advertised before
> > > > > > > > this function is run,
> > > > > > > >
> > > > > > > I indeed intend to call this right after calling
> > > > > > > drm_dev_unplug from amdgpu_pci_remove, while adding
> > > > > > > drm_dev_enter/exit in ttm_bo_vm_fault (or in an amdgpu
> > > > > > > specific wrapper, since I don't see how I can access struct
> > > > > > > drm_device from ttm_bo_vm_fault). In my understanding this
> > > > > > > should stop a PTE from being re-faulted back, as you pointed
> > > > > > > out - so again I don't see how bo reservation would prevent
> > > > > > > it, so it looks like I am missing something...
> > > > > > >
> > > > > > >
> > > > > > > > (perhaps with a memory barrier pair).
> > > > > > > >
> > > > > > > drm_dev_unplug and drm_dev_enter/exit are RCU synchronized,
> > > > > > > so I don't think they require any extra memory barriers for
> > > > > > > visibility of the removed flag being set.
> > > > > > >
> > > > > > As far as I can see that should be perfectly sufficient.
> > > > > Only if you have a drm_dev_enter/exit pair in your fault
> > > > > handler. Otherwise you're still open to the races Thomas
> > > > > described. But aside from that, the drm_dev_unplug stuff has all
> > > > > the barriers and stuff to make sure nothing escapes.
> > > > >
> > > > > Failure to drm_dev_enter could then also trigger the special
> > > > > case where we put a dummy page in place.
> > > > > -Daniel
> > > >
> > > > Hmm, yes, indeed advertising the flag before the call to
> > > > unmap_mapping_range isn't enough, since there might be fault
> > > > handlers running that haven't picked up the flag when
> > > > unmap_mapping_range is launched.
> > >
> > >
> > > If you mean those fault handlers that were in progress when the
> > > flag (drm_dev_unplug) was set in amdgpu_pci_remove, then as long as
> > > I wrap the entire fault handler (probably using an amdgpu specific
> > > .fault hook around ttm_bo_vm_fault) with a drm_dev_enter/exit pair,
> > > drm_dev_unplug->synchronize_srcu will block until those in-progress
> > > faults have completed, and only after this will I call
> > > unmap_mapping_range. Should this be enough?
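> > >
> > > Untested sketch of the wrapper I mean (amdgpu_ttm_fault is a
> > > made-up name; it assumes the BO sits in vma->vm_private_data, that
> > > its embedded GEM object gives us the drm_device, and that
> > > ttm_bo_vm_fault is exported):
> > >
> > > static vm_fault_t amdgpu_ttm_fault(struct vm_fault *vmf)
> > > {
> > > 	struct ttm_buffer_object *bo = vmf->vma->vm_private_data;
> > > 	vm_fault_t ret;
> > > 	int idx;
> > >
> > > 	if (!drm_dev_enter(bo->base.dev, &idx))
> > > 		/* Device is gone: don't touch the hardware
> > > 		 * (or insert the dummy page here instead). */
> > > 		return VM_FAULT_SIGBUS;
> > >
> > > 	ret = ttm_bo_vm_fault(vmf);
> > >
> > > 	drm_dev_exit(idx);
> > > 	return ret;
> > > }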
> > >
> > > Andrey
> > >
> > >
> > Yes, I believe so. Although I suspect you might trip lockdep with a
> > reverse locking order against the mmap_sem, which is a constant pain
> > in fault handlers. If that's the case, you might be able to introduce
> > another srcu lock for this special purpose and synchronize just
> > before the address-space-wide unmap_mapping_range(). If it turns out
> > that an address-space srcu lock like this is really needed, we should
> > follow Daniel's suggestion and try to use it from drm-wide helpers.
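> >
> > Untested sketch of the dedicated srcu idea (all names invented; the
> > fault handler would additionally have to check the device-gone flag
> > under the srcu read lock, so a fault racing with removal either
> > completes before the synchronize or sees the flag and bails out):
> >
> > DEFINE_STATIC_SRCU(drm_unplug_srcu);
> >
> > static vm_fault_t ttm_bo_vm_fault_unplug_safe(struct vm_fault *vmf)
> > {
> > 	vm_fault_t ret;
> > 	int idx;
> >
> > 	idx = srcu_read_lock(&drm_unplug_srcu);
> > 	/* ... check the unplugged flag here and bail out early ... */
> > 	ret = ttm_bo_vm_fault(vmf);
> > 	srcu_read_unlock(&drm_unplug_srcu, idx);
> >
> > 	return ret;
> > }
> >
> > /* Removal path, after the unplugged flag is set: */
> > static void ttm_unmap_after_unplug(struct ttm_bo_device *bdev)
> > {
> > 	/* Wait for in-flight faults to drain ... */
> > 	synchronize_srcu(&drm_unplug_srcu);
> > 	/* ... then zap every CPU mapping of the device file. */
> > 	unmap_mapping_range(bdev->dev_mapping, 0, 0, 1);
> > }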
> >
> > /Thomas
>
>
> Does it make sense to prefault the entire VA range covered by the
> given VMA with the zero page on the first page fault after device
> disconnect, to save on other similar page faults?
Performance doesn't matter at all on hotunplug, as long as we eventually
finish. Timing out the PCI transactions will be so slow (while all the
hotunplug processing is going on) that serving a few billion minor faults
is peanuts. Simpler code wins here imo.
Also, if you want to prefault, you really want to prefault for the
normal use-case too, and that should be done (iirc it already is?) in
the ttm fault handler.
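
For reference, ttm_bo_vm_fault_reserved() already prefaults a batch of
pages (TTM_BO_VM_NUM_PREFAULT) around the faulting address, so the
normal path is covered. For the unplugged case, inserting a dummy page
from the wrapper could look roughly like this (untested; allocation and
lifetime of the shared dummy page are left out):

static vm_fault_t amdgpu_insert_dummy_page(struct vm_fault *vmf,
					   struct page *dummy_page)
{
	/* Map the shared dummy page at the faulting address; all
	 * further CPU access to the lost BO lands in this page. */
	return vmf_insert_pfn(vmf->vma, vmf->address,
			      page_to_pfn(dummy_page));
}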
-Daniel
>
> Andrey
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch