[Intel-gfx] [PATCH 02/20] drm/i915: Switch to object allocations for page directories
Matthew Auld
matthew.william.auld at gmail.com
Mon Jul 6 19:06:38 UTC 2020
On Mon, 6 Jul 2020 at 07:19, Chris Wilson <chris at chris-wilson.co.uk> wrote:
>
> The GEM object is grossly overweight for the practicality of tracking
> large numbers of individual pages, yet it is currently our only
> abstraction for tracking DMA allocations. Since those allocations need
> to be reserved upfront before an operation, and that we need to break
> away from simple system memory, we need to ditch using plain struct page
> wrappers.
>
> In the process, we drop the WC mapping as we ended up clflushing
> everything anyway due to various issues across a wider range of
> platforms. Though in a future step, we need to drop the kmap_atomic
> approach which suggests we need to pre-map all the pages and keep them
> mapped.
>
> Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
<snip>
>
> -int setup_scratch_page(struct i915_address_space *vm, gfp_t gfp)
> +int setup_scratch_page(struct i915_address_space *vm)
> {
> unsigned long size;
>
> @@ -338,21 +174,22 @@ int setup_scratch_page(struct i915_address_space *vm, gfp_t gfp)
> */
> size = I915_GTT_PAGE_SIZE_4K;
> if (i915_vm_is_4lvl(vm) &&
> - HAS_PAGE_SIZES(vm->i915, I915_GTT_PAGE_SIZE_64K)) {
> + HAS_PAGE_SIZES(vm->i915, I915_GTT_PAGE_SIZE_64K))
> size = I915_GTT_PAGE_SIZE_64K;
> - gfp |= __GFP_NOWARN;
> - }
> - gfp |= __GFP_ZERO | __GFP_RETRY_MAYFAIL;
>
> do {
> - unsigned int order = get_order(size);
> - struct page *page;
> - dma_addr_t addr;
> + struct drm_i915_gem_object *obj;
>
> - page = alloc_pages(gfp, order);
> - if (unlikely(!page))
> + obj = vm->alloc_pt_dma(vm, size);
> + if (IS_ERR(obj))
> goto skip;
>
> + if (pin_pt_dma(vm, obj))
> + goto skip_obj;
> +
> + if (obj->mm.page_sizes.sg < size)
> + goto skip_obj;
> +
We should still check the alignment of the final dma address somewhere,
for the 64K case. I have definitely seen dma misalignment here before,
and the new path only checks the contiguous size, not the alignment.
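Something along these lines, purely as a sketch -- px_dma() here is
just a stand-in for however we end up reading back the scratch
object's dma address (sg_dma_address() on the first sg entry would
also do):

	/*
	 * The scratch entry is programmed as a 64K PTE, so the dma
	 * address backing it must itself be 64K aligned, not merely
	 * contiguous. Fall back to 4K if the allocator handed us a
	 * misaligned chunk.
	 */
	if (!IS_ALIGNED(px_dma(obj), size))
		goto skip_obj;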
> /*
> * Use a non-zero scratch page for debugging.
> *
> @@ -362,61 +199,28 @@ int setup_scratch_page(struct i915_address_space *vm, gfp_t gfp)
> * should it ever be accidentally used, the effect should be
> * fairly benign.
> */
> - poison_scratch_page(page, size);
> -
> - addr = dma_map_page_attrs(vm->dma,
> - page, 0, size,
> - PCI_DMA_BIDIRECTIONAL,
> - DMA_ATTR_SKIP_CPU_SYNC |
> - DMA_ATTR_NO_WARN);
> - if (unlikely(dma_mapping_error(vm->dma, addr)))
> - goto free_page;
> -
> - if (unlikely(!IS_ALIGNED(addr, size)))
> - goto unmap_page;
> -
> - vm->scratch[0].base.page = page;
> - vm->scratch[0].base.daddr = addr;
> - vm->scratch_order = order;
> + poison_scratch_page(obj);
Since this is now an internal object, which lacks proper clearing, do
we need to clear the page(s) somewhere, given that the scratch page is
visible to userspace? poison_scratch_page() only seems to run for
debug builds.
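One option, again just as a sketch, would be to make
poison_scratch_page() unconditional: write POISON_FREE under
CONFIG_DRM_I915_DEBUG_GEM and plain zeroes otherwise, so the scratch
never leaks stale internal-object contents either way. Roughly
(assuming the object's pages are still kmap-able at this point):

static void poison_scratch_page(struct drm_i915_gem_object *scratch)
{
	struct sgt_iter sgt;
	struct page *page;
	u8 val;

	/* Zero by default; only use a non-zero poison on debug builds */
	val = 0;
	if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM))
		val = POISON_FREE;

	for_each_sgt_page(page, sgt, scratch->mm.pages) {
		void *vaddr;

		vaddr = kmap(page);
		memset(vaddr, val, PAGE_SIZE);
		kunmap(page);
	}
}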
> +
> + vm->scratch[0] = obj;
> + vm->scratch_order = get_order(size);
> return 0;
>
> -unmap_page:
> - dma_unmap_page(vm->dma, addr, size, PCI_DMA_BIDIRECTIONAL);
> -free_page:
> - __free_pages(page, order);
> +skip_obj:
> + i915_gem_object_put(obj);
> skip:
> if (size == I915_GTT_PAGE_SIZE_4K)
> return -ENOMEM;
>
> size = I915_GTT_PAGE_SIZE_4K;
> - gfp &= ~__GFP_NOWARN;
> } while (1);
> }
>