[Intel-xe] [PATCH] drm/xe: Apply upper limit to sg element size

Thomas Hellström thomas.hellstrom at linux.intel.com
Wed May 17 06:01:08 UTC 2023


Hi, Niranjana

On 5/16/23 20:44, Niranjana Vishwanathapura wrote:
> Specify maximum segment size for sg elements by using
> sg_alloc_table_from_pages_segment() to allocate sg_table.
>
> Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura at intel.com>
> ---
>   drivers/gpu/drm/xe/xe_bo.c |  8 +++++---
>   drivers/gpu/drm/xe/xe_bo.h | 21 +++++++++++++++++++++
>   drivers/gpu/drm/xe/xe_vm.c |  8 +++++---
>   3 files changed, 31 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> index c82e995df779..21c5aca424dd 100644
> --- a/drivers/gpu/drm/xe/xe_bo.c
> +++ b/drivers/gpu/drm/xe/xe_bo.c
> @@ -251,9 +251,11 @@ static int xe_tt_map_sg(struct ttm_tt *tt)
>   	if (xe_tt->sg)
>   		return 0;
>   
> -	ret = sg_alloc_table_from_pages(&xe_tt->sgt, tt->pages, num_pages,
> -					0, (u64)num_pages << PAGE_SHIFT,
> -					GFP_KERNEL);
> +	ret = sg_alloc_table_from_pages_segment(&xe_tt->sgt, tt->pages,
> +						num_pages, 0,
> +						(u64)num_pages << PAGE_SHIFT,
> +						xe_sg_segment_size(xe_tt->dev),
> +						GFP_KERNEL);
>   	if (ret)
>   		return ret;
>   
> diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
> index 7e111332c35a..a1c51cc0ac3c 100644
> --- a/drivers/gpu/drm/xe/xe_bo.h
> +++ b/drivers/gpu/drm/xe/xe_bo.h
> @@ -296,6 +296,27 @@ void xe_bo_put_commit(struct llist_head *deferred);
>   
>   struct sg_table *xe_bo_get_sg(struct xe_bo *bo);
>   
> +/**
> + * xe_sg_segment_size() - Provide an upper limit for the sg segment size.
> + * @dev: device pointer
> + *
> + * Return: the maximum segment size for the 'struct scatterlist'
> + * elements.
> + */
> +static inline unsigned int xe_sg_segment_size(struct device *dev)
> +{
> +	size_t max = min_t(size_t, UINT_MAX, dma_max_mapping_size(dev));
> +
> +	/*
> +	 * The iommu_dma_map_sg() function ensures that the iova allocation
> +	 * doesn't cross a dma segment boundary. It does so by padding some
> +	 * sg elements. This padding can overflow sg->length, leaving it set
> +	 * to 0. Avoid this by capping the maximum segment size at half of
> +	 * 'max', rounded down to PAGE_SIZE.
> +	 */
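
If I read the failure mode right, it is roughly this (illustrative
sketch only, not code from the patch):

	/* sg->length is an unsigned int, so a segment sized close to
	 * UINT_MAX wraps once the IOMMU pads it up to the next
	 * segment boundary. */
	unsigned int len = UINT_MAX & PAGE_MASK; /* just below 4 GiB */

	len += PAGE_SIZE; /* boundary padding from iommu_dma_map_sg() */
	/* len has now wrapped around to 0 */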

Is this a bug in the IOMMU code? In any case, shouldn't the fix on
our side be to use

dma_set_seg_boundary() and
dma_set_max_seg_size()

to set reasonable values that avoid that problem?
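
Something along these lines at device init time, perhaps (untested
sketch; SZ_2G is just an example cap, not a tuned value):

	int err;

	/* Tell the DMA layer once, up front, how large a segment may
	 * be and which boundary it must not cross, instead of capping
	 * every sg_alloc_table_from_pages_segment() call site. */
	err = dma_set_max_seg_size(dev, SZ_2G);
	if (err)
		return err;

	err = dma_set_seg_boundary(dev, SZ_2G - 1);
	if (err)
		return err;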

/Thomas

> +	return round_down(max / 2, PAGE_SIZE);
> +}
> +
>   #if IS_ENABLED(CONFIG_DRM_XE_KUNIT_TEST)
>   /**
>    * xe_bo_is_mem_type - Whether the bo currently resides in the given
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 2aa5bf9cfee1..5c2fdfc0e836 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -117,9 +117,11 @@ int xe_vma_userptr_pin_pages(struct xe_vma *vma)
>   	if (ret)
>   		goto out;
>   
> -	ret = sg_alloc_table_from_pages(&vma->userptr.sgt, pages, pinned,
> -					0, (u64)pinned << PAGE_SHIFT,
> -					GFP_KERNEL);
> +	ret = sg_alloc_table_from_pages_segment(&vma->userptr.sgt, pages,
> +						pinned, 0,
> +						(u64)pinned << PAGE_SHIFT,
> +						xe_sg_segment_size(xe->drm.dev),
> +						GFP_KERNEL);
>   	if (ret) {
>   		vma->userptr.sg = NULL;
>   		goto out;

