[Mesa-dev] [PATCH] intel/tools: Remove hardcoded PADDING_SIZE from sanitizer
Danylo Piliaiev
danylo.piliaiev at gmail.com
Thu Oct 18 08:14:11 UTC 2018
On Wed, Oct 17, 2018 at 6:47 PM Rafael Antognolli <rafael.antognolli at intel.com> wrote:
> On Wed, Oct 17, 2018 at 06:08:34PM +0300, Danylo Piliaiev wrote:
> > Signed-off-by: Danylo Piliaiev <danylo.piliaiev at globallogic.com>
> > ---
> > src/intel/tools/intel_sanitize_gpu.c | 38 +++++++++++++++-------------
> > 1 file changed, 20 insertions(+), 18 deletions(-)
> >
> > diff --git a/src/intel/tools/intel_sanitize_gpu.c b/src/intel/tools/intel_sanitize_gpu.c
> > index 9b49b0bbf2..36c4725a2f 100644
> > --- a/src/intel/tools/intel_sanitize_gpu.c
> > +++ b/src/intel/tools/intel_sanitize_gpu.c
> > @@ -51,14 +51,6 @@ static int (*libc_fcntl)(int fd, int cmd, int param);
> >
> > #define DRM_MAJOR 226
> >
> > -/* TODO: we want to make sure that the padding forces
> > - * the BO to take another page on the (PP)GTT; 4KB
> > - * may or may not be the page size for the BO. Indeed,
> > - * depending on GPU, kernel version and GEM size, the
> > - * page size can be one of 4KB, 64KB or 2M.
> > - */
> > -#define PADDING_SIZE 4096
> > -
> > struct refcnt_hash_table {
> > struct hash_table *t;
> > int refcnt;
> > @@ -80,6 +72,8 @@ pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
> >
> > static struct hash_table *fds_to_bo_sizes = NULL;
> >
> > +static long padding_size = 0;
> > +
> > static inline struct hash_table*
> > bo_size_table(int fd)
> > {
> > @@ -166,7 +160,7 @@ padding_is_good(int fd, uint32_t handle)
> > struct drm_i915_gem_mmap mmap_arg = {
> > .handle = handle,
> > .offset = bo_size(fd, handle),
> > - .size = PADDING_SIZE,
> > + .size = padding_size,
> > .flags = 0,
> > };
> >
> > @@ -189,17 +183,17 @@ padding_is_good(int fd, uint32_t handle)
> > * if the bo is not cache coherent we likely need to
> > * invalidate the cache lines to get it.
> > */
> > - gen_invalidate_range(mapped, PADDING_SIZE);
> > + gen_invalidate_range(mapped, padding_size);
> >
> > expected_value = handle & 0xFF;
> > - for (uint32_t i = 0; i < PADDING_SIZE; ++i) {
> > + for (uint32_t i = 0; i < padding_size; ++i) {
> > if (expected_value != mapped[i]) {
> > - munmap(mapped, PADDING_SIZE);
> > + munmap(mapped, padding_size);
> > return false;
> > }
> > expected_value = next_noise_value(expected_value);
> > }
> > - munmap(mapped, PADDING_SIZE);
> > + munmap(mapped, padding_size);
> >
> > return true;
> > }
> > @@ -207,9 +201,9 @@ padding_is_good(int fd, uint32_t handle)
> > static int
> > create_with_padding(int fd, struct drm_i915_gem_create *create)
> > {
> > - create->size += PADDING_SIZE;
> > + create->size += padding_size;
> > int ret = libc_ioctl(fd, DRM_IOCTL_I915_GEM_CREATE, create);
> > - create->size -= PADDING_SIZE;
> > + create->size -= padding_size;
> >
> > if (ret != 0)
> > return ret;
> > @@ -218,7 +212,7 @@ create_with_padding(int fd, struct drm_i915_gem_create *create)
> > struct drm_i915_gem_mmap mmap_arg = {
> > .handle = create->handle,
> > .offset = create->size,
> > - .size = PADDING_SIZE,
> > + .size = padding_size,
> > .flags = 0,
> > };
> >
> > @@ -228,8 +222,8 @@ create_with_padding(int fd, struct drm_i915_gem_create *create)
> >
> > noise_values = (uint8_t*) (uintptr_t) mmap_arg.addr_ptr;
> > fill_noise_buffer(noise_values, create->handle & 0xFF,
> > - PADDING_SIZE);
> > - munmap(noise_values, PADDING_SIZE);
> > + padding_size);
> > + munmap(noise_values, padding_size);
> >
> > _mesa_hash_table_insert(bo_size_table(fd), (void*)(uintptr_t)create->handle,
> > (void*)(uintptr_t)create->size);
> > @@ -427,4 +421,12 @@ init(void)
> > libc_close = dlsym(RTLD_NEXT, "close");
> > libc_fcntl = dlsym(RTLD_NEXT, "fcntl");
> > libc_ioctl = dlsym(RTLD_NEXT, "ioctl");
> > +
> > + /* We want to make sure that the padding forces
> > + * the BO to take another page on the (PP)GTT.
> > + */
> > + padding_size = sysconf(_SC_PAGESIZE);
>
> I don't think this is the page size we want. This is the page size of
> CPU/system memory, which might be different from what the GPU is using
> to map pages. For instance, even if we are using 64K pages for GPU
> mapping, I think this call would still return 4K.
>
> Though I'm not sure if there's an interface to ask the kernel which
> page size we are using for the GPU...
>
>
I looked at the kernel code a second time and indeed I somehow missed this:
the page size may even differ between allocations, and can easily differ
from the CPU page size. My patch makes no sense, sorry.
> > + if (padding_size == -1) {
> > + unreachable("Bad page size");
> > + }
> > }
> > --
> > 2.18.0
> >
> > _______________________________________________
> > mesa-dev mailing list
> > mesa-dev at lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/mesa-dev
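
For reference, since there does not seem to be an interface to query the
(PP)GTT page size, one conservative fallback would be to pad by the largest
page size the removed TODO comment mentions (2MB). This is purely a sketch,
not part of the patch; choose_padding_size and MAX_GTT_PAGE_SIZE are
hypothetical names:

#include <unistd.h>

/* Hypothetical worst case, taken from the removed TODO comment:
 * depending on GPU, kernel version and GEM size, the (PP)GTT page
 * size can be 4KB, 64KB or 2MB. */
#define MAX_GTT_PAGE_SIZE (2l * 1024 * 1024)

static long
choose_padding_size(void)
{
   /* sysconf() only reports the CPU/system page size, which is the
    * problem Rafael points out above. */
   long cpu_page_size = sysconf(_SC_PAGESIZE);
   if (cpu_page_size == -1)
      cpu_page_size = 4096;

   /* Padding by the worst-case GPU page size guarantees the padding
    * spills onto a separate (PP)GTT page regardless of which page
    * size the kernel actually picked for the BO. */
   return cpu_page_size > MAX_GTT_PAGE_SIZE ? cpu_page_size
                                            : MAX_GTT_PAGE_SIZE;
}

The obvious downside is up to 2MB of wasted memory per BO, which is why an
actual query interface would still be preferable.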
>