[Intel-gfx] [PATCH 1/1] intel: align reuse buffer's size on page size instead
Chris Wilson
chris at chris-wilson.co.uk
Sat Mar 3 09:45:37 UTC 2018
Quoting James Xiong (2018-03-02 17:53:04)
> From: "Xiong, James" <james.xiong at intel.com>
>
> With gem_reuse enabled, when a buffer's size does not match one of
> the bucket sizes, it is aligned up to the next bucket's size, which
> means about 25% more memory than requested is allocated in the worst
> scenario. For example:
>
> Original size    Actual
> 32KB+1Byte       40KB
> ...
> 8MB+1Byte        10MB
> ...
> 96MB+1Byte       112MB
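As a back-of-the-envelope check on where that ~25% worst case comes from, here is a small standalone sketch; the bucket sizes are assumed to follow the init_cache_buckets() progression (4KB, 8KB, 12KB, then three intermediate sizes between each power of two):

/*
 * Back-of-the-envelope sketch of the worst-case overhead from rounding a
 * request up to the next cache bucket.  The bucket sizes below are assumed
 * to follow the init_cache_buckets() progression: 4KB, 8KB, 12KB, then
 * three intermediate sizes between each power of two.
 */
#include <stdio.h>

int main(void)
{
	unsigned long buckets[] = {
		4096, 8192, 12288, 16384, 20480, 24576, 28672,
		32768, 40960, 49152, 57344, 65536
	};
	unsigned long requested = 32 * 1024 + 1;	/* 32KB + 1 byte */
	unsigned long actual = 0;
	unsigned int i;

	/* Current behaviour: take the first bucket that is large enough. */
	for (i = 0; i < sizeof(buckets) / sizeof(buckets[0]); i++) {
		if (buckets[i] >= requested) {
			actual = buckets[i];
			break;
		}
	}

	printf("requested %lu, allocated %lu, overhead %.1f%%\n",
	       requested, actual,
	       100.0 * (actual - requested) / (double)requested);
	return 0;
}

Built with gcc, this prints "requested 32769, allocated 40960, overhead 25.0%", matching the 32KB+1Byte row above.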
>
> This is very expensive in memory and makes the reuse feature less
> attractive than it deserves to be.
>
> This commit aligns the reuse buffer size to the page size instead.
> A buffer whose size falls between bucket[n] and bucket[n+1] is put
> in bucket[n] when it is freed; and when searching for a cached
> buffer to reuse, the bucket's list of cached buffers is walked
> until a buffer whose size is larger than or equal to the requested
> size is found.
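For clarity, a minimal sketch of that lookup, using simplified stand-in types (cached_bo, bucket, bucket_find) rather than the real drm_intel_gem_bo_gem structures; the request is page-aligned first and the bucket's list is then walked for the first cached buffer that is big enough:

/*
 * Minimal sketch (not the patch itself) of the proposed reuse lookup,
 * using simplified stand-in types instead of the real libdrm structures.
 * The request is rounded up to the page size first, then the bucket's
 * list is walked for the first cached buffer that is big enough.
 */
#include <stddef.h>

#define PAGE_SIZE 4096UL

struct cached_bo {
	unsigned long size;		/* page-aligned size of the cached buffer */
	struct cached_bo *next;		/* next entry in the bucket's list */
};

struct bucket {
	unsigned long size;		/* lower bound for sizes kept in this bucket */
	struct cached_bo *head;		/* cached buffers of mixed, page-aligned sizes */
};

static unsigned long page_align(unsigned long size)
{
	return (size + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
}

/* Return the first cached buffer whose size covers the request, or NULL. */
static struct cached_bo *bucket_find(struct bucket *b, unsigned long request)
{
	unsigned long aligned = page_align(request);
	struct cached_bo *bo;

	for (bo = b->head; bo != NULL; bo = bo->next) {
		if (bo->size >= aligned)
			return bo;
	}

	return NULL;
}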
>
> Ran gfxbench tests; the performance and reuse ratios
> (reuse count / allocation count) were the same as before, and memory
> usage dropped by 1% ~ 7% (gl_manhattan: peak allocated memory size was
> reduced from 448401408 to 419078144 bytes).
Apart from the fact that GL doesn't use this... And what is the impact on !llc, where
buffer reuse is more important? (Every new buffer requires clflushing.)
> Signed-off-by: Xiong, James <james.xiong at intel.com>
> ---
>  intel/intel_bufmgr_gem.c | 180 +++++++++++++++++++++++++----------------------
>  libdrm_lists.h           |   6 ++
>  2 files changed, 101 insertions(+), 85 deletions(-)
>
> diff --git a/intel/intel_bufmgr_gem.c b/intel/intel_bufmgr_gem.c
> index 386da30..5b2d0d0 100644
> --- a/intel/intel_bufmgr_gem.c
> +++ b/intel/intel_bufmgr_gem.c
> @@ -402,11 +402,10 @@ drm_intel_gem_bo_bucket_for_size(drm_intel_bufmgr_gem *bufmgr_gem,
>  {
>  	int i;
>  
> -	for (i = 0; i < bufmgr_gem->num_buckets; i++) {
> -		struct drm_intel_gem_bo_bucket *bucket =
> -		    &bufmgr_gem->cache_bucket[i];
> -		if (bucket->size >= size) {
> -			return bucket;
> +	for (i = 0; i < bufmgr_gem->num_buckets - 1; i++) {
> +		if (size >= bufmgr_gem->cache_bucket[i].size &&
> +		    size < bufmgr_gem->cache_bucket[i+1].size) {
> +			return &bufmgr_gem->cache_bucket[i];
Are your buckets not ordered correctly?
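Assuming cache_bucket[] is kept sorted by ascending size (which the ordering question above is getting at), the upper-bound test against bucket[i+1] looks redundant. A rough sketch of the same lookup, not meant as the actual fix (bucket_for_size_sorted is just an illustrative name):

/*
 * Sketch only: with cache_bucket[] sorted by ascending size, walking until
 * the first bucket that is too big and taking the previous one picks the
 * same bucket as the two-sided comparison.  Sizes beyond the largest bucket
 * would still need the caller's existing "don't cache" handling.
 */
static struct drm_intel_gem_bo_bucket *
bucket_for_size_sorted(drm_intel_bufmgr_gem *bufmgr_gem, unsigned long size)
{
	int i;

	for (i = 0; i < bufmgr_gem->num_buckets; i++) {
		if (bufmgr_gem->cache_bucket[i].size > size)
			break;
	}

	return i > 0 ? &bufmgr_gem->cache_bucket[i - 1] : NULL;
}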
Please reduce this patch to a series of small functional changes required to
allow mixed, page-aligned bo->size within a bucket.
-Chris