[PATCH libdrm] amdgpu: add a faster BO list API

Marek Olšák maraeo at gmail.com
Wed Jan 16 16:14:34 UTC 2019


On Wed, Jan 16, 2019 at 10:15 AM Bas Nieuwenhuizen <bas at basnieuwenhuizen.nl>
wrote:

> On Wed, Jan 16, 2019 at 3:38 PM Marek Olšák <maraeo at gmail.com> wrote:
> >
> >
> >
> > On Wed, Jan 16, 2019, 7:46 AM Bas Nieuwenhuizen <bas at basnieuwenhuizen.nl> wrote:
> >>
> >> So random questions:
> >>
> >> 1) In this discussion it was mentioned that some Vulkan drivers still
> >> use the bo_list interface. I think that implies radv as I think we're
> >> still using bo_list. Is there any other API we should be using? (Also,
> >> with VK_EXT_descriptor_indexing I suspect we'll be moving more towards
> >> a global bo list instead of a cmd buffer one, as we cannot know all
> >> the BOs referenced anymore, but not sure what end state here will be).
> >>
> >> 2) The other alternative mentioned was adding the buffers directly
> >> into the submit ioctl. Is this the desired end state (though as above
> >> I'm not sure how that works for vulkan)? If yes, what is the timeline
> >> for this that we need something in the interim?
> >
> >
> > Radeonsi already uses this.
> >
> >>
> >> 3) Did we measure any performance benefit?
> >>
> >> In general I'd like to ack the raw bo list creation function as
> >> this interface seems easier to use. The two arrays thing has always
> >> been kind of a pain when we want to use e.g. builtin sort functions to
> >> make sure we have no duplicate BOs, but have some comments below.
> >
> >
> > The reason amdgpu was slower than radeon was because of this inefficient bo list interface.
> >
> >>
> >> On Mon, Jan 7, 2019 at 8:31 PM Marek Olšák <maraeo at gmail.com> wrote:
> >> >
> >> > From: Marek Olšák <marek.olsak at amd.com>
> >> >
> >> > ---
> >> >  amdgpu/amdgpu-symbol-check |  3 ++
> >> >  amdgpu/amdgpu.h            | 56 +++++++++++++++++++++++++++++++++++++-
> >> >  amdgpu/amdgpu_bo.c         | 36 ++++++++++++++++++++++++
> >> >  amdgpu/amdgpu_cs.c         | 25 +++++++++++++++++
> >> >  4 files changed, 119 insertions(+), 1 deletion(-)
> >> >
> >> > diff --git a/amdgpu/amdgpu-symbol-check b/amdgpu/amdgpu-symbol-check
> >> > index 6f5e0f95..96a44b40 100755
> >> > --- a/amdgpu/amdgpu-symbol-check
> >> > +++ b/amdgpu/amdgpu-symbol-check
> >> > @@ -12,20 +12,22 @@ _edata
> >> >  _end
> >> >  _fini
> >> >  _init
> >> >  amdgpu_bo_alloc
> >> >  amdgpu_bo_cpu_map
> >> >  amdgpu_bo_cpu_unmap
> >> >  amdgpu_bo_export
> >> >  amdgpu_bo_free
> >> >  amdgpu_bo_import
> >> >  amdgpu_bo_inc_ref
> >> > +amdgpu_bo_list_create_raw
> >> > +amdgpu_bo_list_destroy_raw
> >> >  amdgpu_bo_list_create
> >> >  amdgpu_bo_list_destroy
> >> >  amdgpu_bo_list_update
> >> >  amdgpu_bo_query_info
> >> >  amdgpu_bo_set_metadata
> >> >  amdgpu_bo_va_op
> >> >  amdgpu_bo_va_op_raw
> >> >  amdgpu_bo_wait_for_idle
> >> >  amdgpu_create_bo_from_user_mem
> >> >  amdgpu_cs_chunk_fence_info_to_data
> >> > @@ -40,20 +42,21 @@ amdgpu_cs_destroy_semaphore
> >> >  amdgpu_cs_destroy_syncobj
> >> >  amdgpu_cs_export_syncobj
> >> >  amdgpu_cs_fence_to_handle
> >> >  amdgpu_cs_import_syncobj
> >> >  amdgpu_cs_query_fence_status
> >> >  amdgpu_cs_query_reset_state
> >> >  amdgpu_query_sw_info
> >> >  amdgpu_cs_signal_semaphore
> >> >  amdgpu_cs_submit
> >> >  amdgpu_cs_submit_raw
> >> > +amdgpu_cs_submit_raw2
> >> >  amdgpu_cs_syncobj_export_sync_file
> >> >  amdgpu_cs_syncobj_import_sync_file
> >> >  amdgpu_cs_syncobj_reset
> >> >  amdgpu_cs_syncobj_signal
> >> >  amdgpu_cs_syncobj_wait
> >> >  amdgpu_cs_wait_fences
> >> >  amdgpu_cs_wait_semaphore
> >> >  amdgpu_device_deinitialize
> >> >  amdgpu_device_initialize
> >> >  amdgpu_find_bo_by_cpu_mapping
> >> > diff --git a/amdgpu/amdgpu.h b/amdgpu/amdgpu.h
> >> > index dc51659a..5b800033 100644
> >> > --- a/amdgpu/amdgpu.h
> >> > +++ b/amdgpu/amdgpu.h
> >> > @@ -35,20 +35,21 @@
> >> >  #define _AMDGPU_H_
> >> >
> >> >  #include <stdint.h>
> >> >  #include <stdbool.h>
> >> >
> >> >  #ifdef __cplusplus
> >> >  extern "C" {
> >> >  #endif
> >> >
> >> >  struct drm_amdgpu_info_hw_ip;
> >> > +struct drm_amdgpu_bo_list_entry;
> >> >
> >> >  /*--------------------------------------------------------------------------*/
> >> >  /* --------------------------- Defines ------------------------------------ */
> >> >  /*--------------------------------------------------------------------------*/
> >> >
> >> >  /**
> >> >   * Define max. number of Command Buffers (IB) which could be sent to the single
> >> >   * hardware IP to accommodate CE/DE requirements
> >> >   *
> >> >   * \sa amdgpu_cs_ib_info
> >> > @@ -767,34 +768,65 @@ int amdgpu_bo_cpu_unmap(amdgpu_bo_handle buf_handle);
> >> >   *                            and no GPU access is scheduled.
> >> >   *                          1 GPU access is in fly or scheduled
> >> >   *
> >> >   * \return   0 - on success
> >> >   *          <0 - Negative POSIX Error code
> >> >   */
> >> >  int amdgpu_bo_wait_for_idle(amdgpu_bo_handle buf_handle,
> >> >                             uint64_t timeout_ns,
> >> >                             bool *buffer_busy);
> >> >
> >> > +/**
> >> > + * Creates a BO list handle for command submission.
> >> > + *
> >> > + * \param   dev                        - \c [in] Device handle.
> >> > + *                                See #amdgpu_device_initialize()
> >> > + * \param   number_of_buffers  - \c [in] Number of BOs in the list
> >> > + * \param   buffers            - \c [in] List of BO handles
> >> > + * \param   result             - \c [out] Created BO list handle
> >> > + *
> >> > + * \return   0 on success\n
> >> > + *          <0 - Negative POSIX Error code
> >> > + *
> >> > + * \sa amdgpu_bo_list_destroy_raw()
> >> > +*/
> >> > +int amdgpu_bo_list_create_raw(amdgpu_device_handle dev,
> >> > +                             uint32_t number_of_buffers,
> >> > +                             struct drm_amdgpu_bo_list_entry *buffers,
> >> > +                             uint32_t *result);
> >>
> >> So AFAIU drm_amdgpu_bo_list_entry takes a raw bo handle while we
> >> never get a raw bo handle from libdrm_amdgpu. How are we supposed to
> >> fill it in?
> >
> >
> > This function returns it.
>
> This function returns a bo_list handle right? I'm talking about the BO
> handles in `buffers`, where do we get them?
>

Query the KMS handles using the export function, amdgpu_bo_export().

Marek