[PATCH 1/3] drm/amdgpu: add AMDGPU_GEM_CREATE_DISCARDABLE

Marek Olšák maraeo at gmail.com
Fri Jul 8 14:58:02 UTC 2022


Christian, should we set this flag for GDS too? Will it help with GDS OOM
failures?

Marek

On Fri., May 13, 2022, 07:26 Christian König, <
ckoenig.leichtzumerken at gmail.com> wrote:

> Exactly that's what we can't do.
>
> See, the kernel must always be able to either move things to GTT or
> discard them. So when you want to guarantee that something is in VRAM you
> must at the same time say that it can be discarded if it can't stay there.
>
> Christian.
>
> Am 13.05.22 um 10:43 schrieb Pierre-Eric Pelloux-Prayer:
> > Hi Marek, Christian,
> >
> > If the main feature for Mesa of AMDGPU_GEM_CREATE_DISCARDABLE is
> > getting the best placement, maybe we should have 2 separate flags:
> >   * AMDGPU_GEM_CREATE_DISCARDABLE: indicates to the kernel that it can
> > discard the content on eviction instead of preserving it
> >   * AMDGPU_GEM_CREATE_FORCE_BEST_PLACEMENT (or
> > AMDGPU_GEM_CREATE_NO_GTT_FALLBACK ? or AMDGPU_CREATE_GEM_AVOID_GTT?):
> > tells the kernel that this bo really needs to be in VRAM
> >
> >
> > Pierre-Eric
> >
> > On 13/05/2022 00:17, Marek Olšák wrote:
> >> Would it be better to set the VM_ALWAYS_VALID flag to have a greater
> >> guarantee that the best placement will be chosen?
> >>
> >> See, the main feature is getting the best placement, not being
> >> discardable. The best placement is a hw design requirement, because the
> >> memory is used for purposes that are expected to have performance
> >> similar to on-chip SRAMs. We need to make sure the best placement is
> >> guaranteed if it's VRAM.
> >>
> >> Marek
> >>
> >> On Thu., May 12, 2022, 03:26 Christian König,
> >> <ckoenig.leichtzumerken at gmail.com
> >> <mailto:ckoenig.leichtzumerken at gmail.com>> wrote:
> >>
> >>     Am 12.05.22 um 00:06 schrieb Marek Olšák:
> >>>     3rd question: Is it worth using this on APUs?
> >>
> >>     It makes memory management somewhat easier when we are really OOM.
> >>
> >>     E.g. it should also work for GTT allocations, and when the core
> >> kernel says "Hey, please free something up or I will start the
> >> OOM-killer", it's something we can easily throw away.
> >>
> >>     Not sure how many of those buffers we have, but marking
> >> everything which is temporary with that flag is probably a good idea.
> >>
> >>>
> >>>     Thanks,
> >>>     Marek
> >>>
> >>>     On Wed, May 11, 2022 at 5:58 PM Marek Olšák <maraeo at gmail.com
> >>> <mailto:maraeo at gmail.com>> wrote:
> >>>
> >>>         Will the kernel keep all discardable buffers in VRAM if VRAM
> >>> is not overcommitted by discardable buffers, or will other buffers
> >>> also affect the placement of discardable buffers?
> >>>
> >>
> >>     Regarding eviction pressure, the buffers will be handled like
> >> any other buffer, but instead of being preserved their content is
> >> just discarded on eviction.
> >>
> >>>
> >>>         Do evictions deallocate the buffer, or do they keep an
> >>> allocation in GTT and only the copy is skipped?
> >>>
> >>
> >>     It really deallocates the backing store of the buffer, just keeps
> >> a dummy page array around where all entries are NULL.
> >>
> >>     There is a patch set on the mailing list to make this a little
> >> bit more efficient, but even the dummy page array should only add a
> >> few bytes of overhead.
> >>
> >>     Regards,
> >>     Christian.
> >>
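A minimal sketch of what "discard instead of copy" could look like in the
driver's TTM move path, assuming the existing TTM helpers ttm_resource_free()
and ttm_bo_assign_mem() and the ttm_to_amdgpu_bo() accessor; this only
illustrates the idea Christian describes and is not the actual hunk from this
series:

    /* Illustration only: a discardable BO being evicted does not need its
     * contents copied -- the old backing store is freed and the BO is
     * simply assigned the new (empty) resource.
     */
    static int example_move_discardable(struct ttm_buffer_object *bo,
                                        struct ttm_resource *new_mem)
    {
            struct amdgpu_bo *abo = ttm_to_amdgpu_bo(bo);

            if (!(abo->flags & AMDGPU_GEM_CREATE_DISCARDABLE))
                    return -EAGAIN; /* placeholder: take the normal copy path */

            /* Drop the old backing store instead of preserving its content. */
            ttm_resource_free(bo, &bo->resource);
            ttm_bo_assign_mem(bo, new_mem);
            return 0;
    }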
> >>>
> >>>         Thanks,
> >>>         Marek
> >>>
> >>>         On Wed, May 11, 2022 at 3:08 AM Marek Olšák
> >>> <maraeo at gmail.com <mailto:maraeo at gmail.com>> wrote:
> >>>
> >>>             OK that sounds good.
> >>>
> >>>             Marek
> >>>
> >>>             On Wed, May 11, 2022 at 2:04 AM Christian König
> >>> <ckoenig.leichtzumerken at gmail.com
> >>> <mailto:ckoenig.leichtzumerken at gmail.com>> wrote:
> >>>
> >>>                 Hi Marek,
> >>>
> >>>                 Am 10.05.22 um 22:43 schrieb Marek Olšák:
> >>>>                 A better flag name would be:
> >>>>                 AMDGPU_GEM_CREATE_BEST_PLACEMENT_OR_DISCARD
> >>>
> >>>                 A bit long for my taste and I think the best
> >>> placement is just a side effect.
> >>>
> >>>>
> >>>>                 Marek
> >>>>
> >>>>                 On Tue, May 10, 2022 at 4:13 PM Marek Olšák
> >>>> <maraeo at gmail.com <mailto:maraeo at gmail.com>> wrote:
> >>>>
> >>>>                     Does this really guarantee VRAM placement? The
> >>>> code doesn't say anything about that.
> >>>>
> >>>
> >>>                 Yes, see the code here:
> >>>
> >>>>
> >>>>     diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> >>>>     index 8b7ee1142d9a..1944ef37a61e 100644
> >>>>     --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> >>>>     +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> >>>>     @@ -567,6 +567,7 @@ int amdgpu_bo_create(struct amdgpu_device *adev,
> >>>>                      bp->domain;
> >>>>              bo->allowed_domains = bo->preferred_domains;
> >>>>              if (bp->type != ttm_bo_type_kernel &&
> >>>>     +            !(bp->flags & AMDGPU_GEM_CREATE_DISCARDABLE) &&
> >>>>                  bo->allowed_domains == AMDGPU_GEM_DOMAIN_VRAM)
> >>>>                      bo->allowed_domains |= AMDGPU_GEM_DOMAIN_GTT;
> >>>>
> >>>
> >>>                 The only case where this could be circumvented is
> >>> when you try to allocate more than is physically available on an APU.
> >>>
> >>>                 E.g. if you only have something like 32 MiB of VRAM and
> >>> request 64 MiB, then the GEM code will catch the error and fall back
> >>> to GTT (IIRC).
> >>>
> >>>                 Regards,
> >>>                 Christian.
> >>>
> >>
>
>
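For reference, a minimal userspace sketch of how a buffer could be allocated
with the new flag through libdrm's amdgpu_bo_alloc(), assuming the flag is
exported in amdgpu_drm.h as in this series; the function name and alignment
value are illustrative:

    #include <stdint.h>
    #include <amdgpu.h>
    #include <amdgpu_drm.h>

    /* Sketch: ask for a VRAM-only placement and tell the kernel it may
     * throw the contents away on eviction instead of spilling to GTT.
     */
    static int alloc_discardable_vram(amdgpu_device_handle dev, uint64_t size,
                                      amdgpu_bo_handle *out_bo)
    {
            struct amdgpu_bo_alloc_request req = {
                    .alloc_size = size,
                    .phys_alignment = 4096,
                    .preferred_heap = AMDGPU_GEM_DOMAIN_VRAM,
                    .flags = AMDGPU_GEM_CREATE_DISCARDABLE,
            };

            return amdgpu_bo_alloc(dev, &req, out_bo);
    }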

