<div class="moz-cite-prefix">Am 16.01.19 um 15:39 schrieb Marek
Olšák:<br>
</div>
<blockquote type="cite"
cite="mid:CAAxE2A4k8JtkrS2XfgRdmYY3NVR4ges=Yqfh-TH9O=LnaVv02g@mail.gmail.com">
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<div dir="auto">
<div><br>
<br>
<div class="gmail_quote">
<div dir="ltr">On Wed, Jan 16, 2019, 9:34 AM Koenig,
Christian <<a href="mailto:Christian.Koenig@amd.com"
moz-do-not-send="true">Christian.Koenig@amd.com</a>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF">
<div class="m_3799249069402976776moz-cite-prefix">Am
16.01.19 um 15:31 schrieb Marek Olšák:<br>
</div>
<blockquote type="cite">
<div dir="auto">
<div><br>
<br>
<div class="gmail_quote">
<div dir="ltr">On Wed, Jan 16, 2019, 7:55 AM
Christian König <<a
href="mailto:ckoenig.leichtzumerken@gmail.com"
target="_blank" rel="noreferrer"
moz-do-not-send="true">ckoenig.leichtzumerken@gmail.com</a>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0
0 0 .8ex;border-left:1px #ccc
solid;padding-left:1ex">
>>>> Well, if you ask me, we should have the following interface for
>>>> negotiating memory management with the kernel:
>>>>
>>>> 1. We have per-process BOs which can't be shared between processes.
>>>>
>>>> Those are always valid and don't need to be mentioned in any BO list
>>>> whatsoever.
>>>>
>>>> If we know that a per-process BO is currently not in use, we can
>>>> optionally tell the kernel so, to make memory management more efficient.
>>>>
>>>> In other words, instead of sending the kernel a list of everything that
>>>> is in use, we send it a list of what is no longer in use, and we do that
>>>> only when we know it is necessary, e.g. when a game or application
>>>> overcommits.
>>>
<div dir="auto">Radeonsi doesn't use this because
this approach caused performance degradation and
also drops BO priorities.</div>
</div>
</blockquote>
<br>
>> The performance degradation was mostly due to shortcomings in the LRU,
>> which have by now been fixed.
>>
>> BO priorities are a different topic, but they could be added to per-VM
>> BOs as well.
>
<div dir="auto">What's the minimum drm version that contains the
fixes?</div>
</div>
</blockquote>
<br>
I've pushed the last optimization this morning. No idea when it really
became useful, but the numbers from the closed source clients now look
much better.

We should probably test and bump the drm version when we are sure that
this now works as expected.

Christian.
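
For anyone skimming the archive, here is a rough sketch of how the raw path
added by Marek's patch (quoted below) would be used end to end. This is only
an illustration built from the prototypes in the patch; error handling is
minimal and building the CS chunks is left out:

  #include <stdint.h>
  #include <amdgpu.h>
  #include <amdgpu_drm.h>

  /* Sketch: build a raw BO list, submit one CS with it, destroy the list.
   * The bo_handle values in 'entries' are raw kernel handles; how a
   * libdrm_amdgpu user obtains those is the open question Bas raises
   * further down in the thread. */
  static int submit_with_raw_list(amdgpu_device_handle dev,
                                  amdgpu_context_handle ctx,
                                  struct drm_amdgpu_bo_list_entry *entries,
                                  uint32_t num_entries,
                                  struct drm_amdgpu_cs_chunk *chunks,
                                  int num_chunks,
                                  uint64_t *seq_no)
  {
          uint32_t bo_list;
          int r;

          /* One ioctl; the kernel consumes the entry array directly. */
          r = amdgpu_bo_list_create_raw(dev, num_entries, entries, &bo_list);
          if (r)
                  return r;

          /* Passing 0 instead of bo_list would mean "no BO list". */
          r = amdgpu_cs_submit_raw2(dev, ctx, bo_list, num_chunks, chunks,
                                    seq_no);

          amdgpu_bo_list_destroy_raw(dev, bo_list);
          return r;
  }

The point of the raw variants is that the BO_LIST ioctl gets the
drm_amdgpu_bo_list_entry array as-is, so libdrm no longer has to translate
handles and build a separate priority array on every list creation.
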
<blockquote type="cite"
cite="mid:CAAxE2A4k8JtkrS2XfgRdmYY3NVR4ges=Yqfh-TH9O=LnaVv02g@mail.gmail.com">
<div dir="auto">
<div dir="auto"><br>
</div>
<div dir="auto">Marek</div>
<div dir="auto"><br>
</div>
<div dir="auto">
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF">
<br>
Christian.<br>
<br>
<blockquote type="cite">
<div dir="auto">
<div dir="auto"><br>
</div>
<div dir="auto">Marek</div>
<div dir="auto"><br>
</div>
<div dir="auto">
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0
0 0 .8ex;border-left:1px #ccc
solid;padding-left:1ex">
>>>> 2. We have shared BOs which are used by more than one process.
>>>>
>>>> Those are rare and should be added to the per-CS list of BOs in use.
>>>>
>>>> The whole BO list interface Marek tries to optimize here should be
>>>> deprecated and not used any more.
>>>>
>>>> Regards,
>>>> Christian.
>>>>
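
As an aside for readers of the archive: the per-process case described above
corresponds to what amdgpu already calls per-VM ("local") BOs. A minimal
allocation sketch follows, assuming AMDGPU_GEM_CREATE_VM_ALWAYS_VALID is the
flag meant here; there is no interface yet for the "tell the kernel it is
idle" part of the proposal, which is exactly what is being discussed:

  #include <amdgpu.h>
  #include <amdgpu_drm.h>

  /* Allocate a per-VM ("local") BO.  With the (assumed)
   * AMDGPU_GEM_CREATE_VM_ALWAYS_VALID flag the kernel keeps the BO
   * permanently valid in this process' VM, so it never has to show up in
   * a per-CS BO list; the trade-off is that it cannot be shared. */
  static int alloc_local_bo(amdgpu_device_handle dev, uint64_t size,
                            amdgpu_bo_handle *bo)
  {
          struct amdgpu_bo_alloc_request req = {
                  .alloc_size = size,
                  .phys_alignment = 4096,
                  .preferred_heap = AMDGPU_GEM_DOMAIN_VRAM,
                  .flags = AMDGPU_GEM_CREATE_VM_ALWAYS_VALID,
          };

          return amdgpu_bo_alloc(dev, &req, bo);
  }
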
>>>> On 16.01.19 at 13:46, Bas Nieuwenhuizen wrote:
>>>>> So, random questions:
>>>>>
>>>>> 1) In this discussion it was mentioned that some Vulkan drivers still
>>>>> use the bo_list interface. I think that implies radv, as I think we're
>>>>> still using bo_list. Is there any other API we should be using? (Also,
>>>>> with VK_EXT_descriptor_indexing I suspect we'll be moving more towards
>>>>> a global BO list instead of a per-command-buffer one, as we cannot
>>>>> know all the referenced BOs anymore, but I'm not sure what the end
>>>>> state here will be.)
>>>>>
>>>>> 2) The other alternative mentioned was adding the buffers directly
>>>>> into the submit ioctl. Is this the desired end state (though, as above,
>>>>> I'm not sure how that works for Vulkan)? If yes, what is the timeline
>>>>> for this, given that we need something in the interim?
>>>>>
>>>>> 3) Did we measure any performance benefit?
>>>>>
>>>>> In general I'd like to ack the raw BO list creation function, as this
>>>>> interface seems easier to use. The two-arrays approach has always been
>>>>> kind of a pain when we want to use e.g. built-in sort functions to
>>>>> make sure we have no duplicate BOs, but I have some comments below.
>>>>>
>>>>> On Mon, Jan 7, 2019 at 8:31 PM Marek Olšák <maraeo@gmail.com> wrote:
>>>>>> From: Marek Olšák <marek.olsak@amd.com>
>>>>>>
>>>>>> ---
>>>>>>  amdgpu/amdgpu-symbol-check |  3 ++
>>>>>>  amdgpu/amdgpu.h            | 56 +++++++++++++++++++++++++++++++++++++-
>>>>>>  amdgpu/amdgpu_bo.c         | 36 ++++++++++++++++++++++++
>>>>>>  amdgpu/amdgpu_cs.c         | 25 +++++++++++++++++
>>>>>>  4 files changed, 119 insertions(+), 1 deletion(-)
>>>>>>
>>>>>> diff --git a/amdgpu/amdgpu-symbol-check b/amdgpu/amdgpu-symbol-check
>>>>>> index 6f5e0f95..96a44b40 100755
>>>>>> --- a/amdgpu/amdgpu-symbol-check
>>>>>> +++ b/amdgpu/amdgpu-symbol-check
>>>>>> @@ -12,20 +12,22 @@ _edata
>>>>>>  _end
>>>>>>  _fini
>>>>>>  _init
>>>>>>  amdgpu_bo_alloc
>>>>>>  amdgpu_bo_cpu_map
>>>>>>  amdgpu_bo_cpu_unmap
>>>>>>  amdgpu_bo_export
>>>>>>  amdgpu_bo_free
>>>>>>  amdgpu_bo_import
>>>>>>  amdgpu_bo_inc_ref
>>>>>> +amdgpu_bo_list_create_raw
>>>>>> +amdgpu_bo_list_destroy_raw
>>>>>>  amdgpu_bo_list_create
>>>>>>  amdgpu_bo_list_destroy
>>>>>>  amdgpu_bo_list_update
>>>>>>  amdgpu_bo_query_info
>>>>>>  amdgpu_bo_set_metadata
>>>>>>  amdgpu_bo_va_op
>>>>>>  amdgpu_bo_va_op_raw
>>>>>>  amdgpu_bo_wait_for_idle
>>>>>>  amdgpu_create_bo_from_user_mem
>>>>>>  amdgpu_cs_chunk_fence_info_to_data
>>>>>> @@ -40,20 +42,21 @@ amdgpu_cs_destroy_semaphore
>>>>>>  amdgpu_cs_destroy_syncobj
>>>>>>  amdgpu_cs_export_syncobj
>>>>>>  amdgpu_cs_fence_to_handle
>>>>>>  amdgpu_cs_import_syncobj
>>>>>>  amdgpu_cs_query_fence_status
>>>>>>  amdgpu_cs_query_reset_state
>>>>>>  amdgpu_query_sw_info
>>>>>>  amdgpu_cs_signal_semaphore
>>>>>>  amdgpu_cs_submit
>>>>>>  amdgpu_cs_submit_raw
>>>>>> +amdgpu_cs_submit_raw2
>>>>>>  amdgpu_cs_syncobj_export_sync_file
>>>>>>  amdgpu_cs_syncobj_import_sync_file
>>>>>>  amdgpu_cs_syncobj_reset
>>>>>>  amdgpu_cs_syncobj_signal
>>>>>>  amdgpu_cs_syncobj_wait
>>>>>>  amdgpu_cs_wait_fences
>>>>>>  amdgpu_cs_wait_semaphore
>>>>>>  amdgpu_device_deinitialize
>>>>>>  amdgpu_device_initialize
>>>>>>  amdgpu_find_bo_by_cpu_mapping
>>>>>> diff --git a/amdgpu/amdgpu.h b/amdgpu/amdgpu.h
>>>>>> index dc51659a..5b800033 100644
>>>>>> --- a/amdgpu/amdgpu.h
>>>>>> +++ b/amdgpu/amdgpu.h
>>>>>> @@ -35,20 +35,21 @@
>>>>>>  #define _AMDGPU_H_
>>>>>>
>>>>>>  #include <stdint.h>
>>>>>>  #include <stdbool.h>
>>>>>>
>>>>>>  #ifdef __cplusplus
>>>>>>  extern "C" {
>>>>>>  #endif
>>>>>>
>>>>>>  struct drm_amdgpu_info_hw_ip;
>>>>>> +struct drm_amdgpu_bo_list_entry;
>>>>>>
>>>>>>  /*--------------------------------------------------------------------------*/
>>>>>>  /* --------------------------- Defines ------------------------------------ */
>>>>>>  /*--------------------------------------------------------------------------*/
>>>>>>
>>>>>>  /**
>>>>>>   * Define max. number of Command Buffers (IB) which could be sent to the single
>>>>>>   * hardware IP to accommodate CE/DE requirements
>>>>>>   *
>>>>>>   * \sa amdgpu_cs_ib_info
>>>>>> @@ -767,34 +768,65 @@ int amdgpu_bo_cpu_unmap(amdgpu_bo_handle buf_handle);
>>>>>>   *            and no GPU access is scheduled.
>>>>>>   *          1 GPU access is in fly or scheduled
>>>>>>   *
>>>>>>   * \return   0 - on success
>>>>>>   *          <0 - Negative POSIX Error code
>>>>>>   */
>>>>>>  int amdgpu_bo_wait_for_idle(amdgpu_bo_handle buf_handle,
>>>>>>                              uint64_t timeout_ns,
>>>>>>                              bool *buffer_busy);
>>>>>>
>>>>>> +/**
>>>>>> + * Creates a BO list handle for command submission.
>>>>>> + *
>>>>>> + * \param   dev               - \c [in] Device handle.
>>>>>> + *                              See #amdgpu_device_initialize()
>>>>>> + * \param   number_of_buffers - \c [in] Number of BOs in the list
>>>>>> + * \param   buffers           - \c [in] List of BO handles
>>>>>> + * \param   result            - \c [out] Created BO list handle
>>>>>> + *
>>>>>> + * \return   0 on success\n
>>>>>> + *          <0 - Negative POSIX Error code
>>>>>> + *
>>>>>> + * \sa amdgpu_bo_list_destroy_raw()
>>>>>> +*/
>>>>>> +int amdgpu_bo_list_create_raw(amdgpu_device_handle dev,
>>>>>> +                              uint32_t number_of_buffers,
>>>>>> +                              struct drm_amdgpu_bo_list_entry *buffers,
>>>>>> +                              uint32_t *result);
>>>>> So AFAIU drm_amdgpu_bo_list_entry takes a raw bo handle, while we never
>>>>> get a raw bo handle from libdrm_amdgpu. How are we supposed to fill it
>>>>> in?
>>>>>
>>>>> What do we win by having the raw handle for the bo_list? If we would
>>>>> not return the raw handle, we would not need the submit_raw2.
>>>>>
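
One way the entry could be filled in with what libdrm_amdgpu exposes today,
sketched purely as an illustration (the patch itself does not prescribe
this): amdgpu_bo_export() with amdgpu_bo_handle_type_kms returns the kernel
handle, which appears to be what drm_amdgpu_bo_list_entry.bo_handle expects,
assuming that handle type is acceptable here:

  #include <amdgpu.h>
  #include <amdgpu_drm.h>

  /* Hypothetical helper: fill one raw list entry from an existing
   * amdgpu_bo_handle.  amdgpu_bo_export() with amdgpu_bo_handle_type_kms
   * hands back the kernel handle that drm_amdgpu_bo_list_entry carries. */
  static int fill_list_entry(amdgpu_bo_handle bo, uint32_t priority,
                             struct drm_amdgpu_bo_list_entry *entry)
  {
          uint32_t kms_handle;
          int r;

          r = amdgpu_bo_export(bo, amdgpu_bo_handle_type_kms, &kms_handle);
          if (r)
                  return r;

          entry->bo_handle = kms_handle;
          entry->bo_priority = priority;
          return 0;
  }
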
>>>>>> +
>>>>>> +/**
>>>>>> + * Destroys a BO list handle.
>>>>>> + *
>>>>>> + * \param   bo_list - \c [in] BO list handle.
>>>>>> + *
>>>>>> + * \return   0 on success\n
>>>>>> + *          <0 - Negative POSIX Error code
>>>>>> + *
>>>>>> + * \sa amdgpu_bo_list_create_raw(), amdgpu_cs_submit_raw2()
>>>>>> +*/
>>>>>> +int amdgpu_bo_list_destroy_raw(amdgpu_device_handle dev, uint32_t bo_list);
>>>>>> +
>>>>>>  /**
>>>>>>   * Creates a BO list handle for command submission.
>>>>>>   *
>>>>>>   * \param   dev                 - \c [in] Device handle.
>>>>>>   *                                See #amdgpu_device_initialize()
>>>>>>   * \param   number_of_resources - \c [in] Number of BOs in the list
>>>>>>   * \param   resources           - \c [in] List of BO handles
>>>>>>   * \param   resource_prios      - \c [in] Optional priority for each handle
>>>>>>   * \param   result              - \c [out] Created BO list handle
>>>>>>   *
>>>>>>   * \return   0 on success\n
>>>>>>   *          <0 - Negative POSIX Error code
>>>>>>   *
>>>>>> - * \sa amdgpu_bo_list_destroy()
>>>>>> + * \sa amdgpu_bo_list_destroy(), amdgpu_cs_submit_raw2()
>>>>>>  */
>>>>>>  int amdgpu_bo_list_create(amdgpu_device_handle dev,
>>>>>>                            uint32_t number_of_resources,
>>>>>>                            amdgpu_bo_handle *resources,
>>>>>>                            uint8_t *resource_prios,
>>>>>>                            amdgpu_bo_list_handle *result);
>>>>>>
>>>>>>  /**
>>>>>>   * Destroys a BO list handle.
>>>>>>   *
>>>>>> @@ -1580,20 +1612,42 @@ struct drm_amdgpu_cs_chunk;
>>>>>>  struct drm_amdgpu_cs_chunk_dep;
>>>>>>  struct drm_amdgpu_cs_chunk_data;
>>>>>>
>>>>>>  int amdgpu_cs_submit_raw(amdgpu_device_handle dev,
>>>>>>                           amdgpu_context_handle context,
>>>>>>                           amdgpu_bo_list_handle bo_list_handle,
>>>>>>                           int num_chunks,
>>>>>>                           struct drm_amdgpu_cs_chunk *chunks,
>>>>>>                           uint64_t *seq_no);
>>>>>>
>>>>>> +/**
>>>>>> + * Submit raw command submission to the kernel with a raw BO list handle.
>>>>>> + *
>>>>>> + * \param   dev            - \c [in] device handle
>>>>>> + * \param   context        - \c [in] context handle for context id
>>>>>> + * \param   bo_list_handle - \c [in] raw bo list handle (0 for none)
>>>>>> + * \param   num_chunks     - \c [in] number of CS chunks to submit
>>>>>> + * \param   chunks         - \c [in] array of CS chunks
>>>>>> + * \param   seq_no         - \c [out] output sequence number for submission.
>>>>>> + *
>>>>>> + * \return   0 on success\n
>>>>>> + *          <0 - Negative POSIX Error code
>>>>>> + *
>>>>>> + * \sa amdgpu_bo_list_create_raw(), amdgpu_bo_list_destroy_raw()
>>>>>> + */
>>>>>> +int amdgpu_cs_submit_raw2(amdgpu_device_handle dev,
>>>>>> +                          amdgpu_context_handle context,
>>>>>> +                          uint32_t bo_list_handle,
>>>>>> +                          int num_chunks,
>>>>>> +                          struct drm_amdgpu_cs_chunk *chunks,
>>>>>> +                          uint64_t *seq_no);
>>>>>> +
>>>>>>  void amdgpu_cs_chunk_fence_to_dep(struct amdgpu_cs_fence *fence,
>>>>>>                                    struct drm_amdgpu_cs_chunk_dep *dep);
>>>>>>  void amdgpu_cs_chunk_fence_info_to_data(struct amdgpu_cs_fence_info *fence_info,
>>>>>>                                          struct drm_amdgpu_cs_chunk_data *data);
>>>>>>
>>>>>>  /**
>>>>>>   * Reserve VMID
>>>>>>   * \param   context - \c [in] GPU Context
>>>>>>   * \param   flags - \c [in] TBD
>>>>>>   *
>>>>>> diff --git a/amdgpu/amdgpu_bo.c b/amdgpu/amdgpu_bo.c
>>>>>> index c0f42e81..21bc73aa 100644
>>>>>> --- a/amdgpu/amdgpu_bo.c
>>>>>> +++ b/amdgpu/amdgpu_bo.c
>>>>>> @@ -611,20 +611,56 @@ drm_public int amdgpu_create_bo_from_user_mem(amdgpu_device_handle dev,
>>>>>>          pthread_mutex_lock(&dev->bo_table_mutex);
>>>>>>          r = handle_table_insert(&dev->bo_handles, (*buf_handle)->handle,
>>>>>>                                  *buf_handle);
>>>>>>          pthread_mutex_unlock(&dev->bo_table_mutex);
>>>>>>          if (r)
>>>>>>                  amdgpu_bo_free(*buf_handle);
>>>>>>  out:
>>>>>>          return r;
>>>>>>  }
>>>>>>
>>>>>> +drm_public int amdgpu_bo_list_create_raw(amdgpu_device_handle dev,
>>>>>> +                                         uint32_t number_of_buffers,
>>>>>> +                                         struct drm_amdgpu_bo_list_entry *buffers,
>>>>>> +                                         uint32_t *result)
>>>>>> +{
>>>>>> +        union drm_amdgpu_bo_list args;
>>>>>> +        int r;
>>>>>> +
>>>>>> +        memset(&args, 0, sizeof(args));
>>>>>> +        args.in.operation = AMDGPU_BO_LIST_OP_CREATE;
>>>>>> +        args.in.bo_number = number_of_buffers;
>>>>>> +        args.in.bo_info_size = sizeof(struct drm_amdgpu_bo_list_entry);
>>>>>> +        args.in.bo_info_ptr = (uint64_t)(uintptr_t)buffers;
>>>>>> +
>>>>>> +        r = drmCommandWriteRead(dev->fd, DRM_AMDGPU_BO_LIST,
>>>>>> +                                &args, sizeof(args));
>>>>>> +        if (r)
>>>>>> +                return r;
>>>>>> +
>>>>>> +        *result = args.out.list_handle;
>>>>>> +        return 0;
>>>>>> +}
>>>>>> +
>>>>>> +drm_public int amdgpu_bo_list_destroy_raw(amdgpu_device_handle dev,
>>>>>> +                                          uint32_t bo_list)
>>>>>> +{
>>>>>> +        union drm_amdgpu_bo_list args;
>>>>>> +
>>>>>> +        memset(&args, 0, sizeof(args));
>>>>>> +        args.in.operation = AMDGPU_BO_LIST_OP_DESTROY;
>>>>>> +        args.in.list_handle = bo_list;
>>>>>> +
>>>>>> +        return drmCommandWriteRead(dev->fd, DRM_AMDGPU_BO_LIST,
>>>>>> +                                   &args, sizeof(args));
>>>>>> +}
>>>>>> +
>>>>>>  drm_public int amdgpu_bo_list_create(amdgpu_device_handle dev,
>>>>>>                                       uint32_t number_of_resources,
>>>>>>                                       amdgpu_bo_handle *resources,
>>>>>>                                       uint8_t *resource_prios,
>>>>>>                                       amdgpu_bo_list_handle *result)
>>>>>>  {
>>>>>>          struct drm_amdgpu_bo_list_entry *list;
>>>>>>          union drm_amdgpu_bo_list args;
>>>>>>          unsigned i;
>>>>>>          int r;
>>>>>> diff --git a/amdgpu/amdgpu_cs.c b/amdgpu/amdgpu_cs.c
>>>>>> index 3b8231aa..5bedf748 100644
>>>>>> --- a/amdgpu/amdgpu_cs.c
>>>>>> +++ b/amdgpu/amdgpu_cs.c
>>>>>> @@ -724,20 +724,45 @@ drm_public int amdgpu_cs_submit_raw(amdgpu_device_handle dev,
>>>>>>          r = drmCommandWriteRead(dev->fd, DRM_AMDGPU_CS,
>>>>>>                                  &cs, sizeof(cs));
>>>>>>          if (r)
>>>>>>                  return r;
>>>>>>
>>>>>>          if (seq_no)
>>>>>>                  *seq_no = cs.out.handle;
>>>>>>          return 0;
>>>>>>  }
>>>>>>
>>>>>> +drm_public int amdgpu_cs_submit_raw2(amdgpu_device_handle dev,
>>>>>> +                                     amdgpu_context_handle context,
>>>>>> +                                     uint32_t bo_list_handle,
>>>>>> +                                     int num_chunks,
>>>>>> +                                     struct drm_amdgpu_cs_chunk *chunks,
>>>>>> +                                     uint64_t *seq_no)
>>>>>> +{
>>>>>> +        union drm_amdgpu_cs cs = {0};
>>>>>> +        uint64_t *chunk_array;
>>>>>> +        int i, r;
>>>>>> +
>>>>>> +        chunk_array = alloca(sizeof(uint64_t) * num_chunks);
>>>>>> +        for (i = 0; i < num_chunks; i++)
>>>>>> +                chunk_array[i] = (uint64_t)(uintptr_t)&chunks[i];
>>>>>> +        cs.in.chunks = (uint64_t)(uintptr_t)chunk_array;
>>>>>> +        cs.in.ctx_id = context->id;
>>>>>> +        cs.in.bo_list_handle = bo_list_handle;
>>>>>> +        cs.in.num_chunks = num_chunks;
>>>>>> +        r = drmCommandWriteRead(dev->fd, DRM_AMDGPU_CS,
>>>>>> +                                &cs, sizeof(cs));
>>>>>> +        if (!r && seq_no)
>>>>>> +                *seq_no = cs.out.handle;
>>>>>> +        return r;
>>>>>> +}
>>>>>> +
>>>>>>  drm_public void amdgpu_cs_chunk_fence_info_to_data(struct amdgpu_cs_fence_info *fence_info,
>>>>>>                                          struct drm_amdgpu_cs_chunk_data *data)
>>>>>>  {
>>>>>>          data->fence_data.handle = fence_info->handle->handle;
>>>>>>          data->fence_data.offset = fence_info->offset * sizeof(uint64_t);
>>>>>>  }
>>>>>>
>>>>>>  drm_public void amdgpu_cs_chunk_fence_to_dep(struct amdgpu_cs_fence *fence,
>>>>>>                                       struct drm_amdgpu_cs_chunk_dep *dep)
>>>>>>  {
>>>>>> --
>>>>>> 2.17.1
>>>>>>
<pre class="moz-quote-pre" wrap="">_______________________________________________
amd-gfx mailing list
<a class="moz-txt-link-abbreviated" href="mailto:amd-gfx@lists.freedesktop.org">amd-gfx@lists.freedesktop.org</a>
<a class="moz-txt-link-freetext" href="https://lists.freedesktop.org/mailman/listinfo/amd-gfx">https://lists.freedesktop.org/mailman/listinfo/amd-gfx</a>
</pre>