[Intel-gfx] [RFC 06/13] drm/i915/svm: Page table mirroring support

Niranjan Vishwanathapura niranjana.vishwanathapura at intel.com
Sat Nov 23 04:44:18 UTC 2019


On Fri, Nov 22, 2019 at 11:33:12PM +0000, Jason Gunthorpe wrote:
>On Fri, Nov 22, 2019 at 12:57:27PM -0800, Niranjana Vishwanathapura wrote:
>
>> +static inline bool
>> +i915_range_done(struct hmm_range *range)
>> +{
>> +	bool ret = hmm_range_valid(range);
>> +
>> +	hmm_range_unregister(range);
>> +	return ret;
>> +}
>
>This needs to be updated to follow the new API, but this pattern is
>generally wrong, failure if hmm_range_valid should retry the
>range_fault and so forth. See the hmm.rst.
>

Thanks Jason for the feedback.
Yah, will update as per the new API in the next revision.
The caller of this function does retry if the range is not valid,
but I take the point. I made changes in my local build as per hmm.rst
and it is working fine; will include the change in the next revision.

Noticed that since hmm_range_fault() returns the number of valid pages,
the hmm.rst example should probably be updated to check for that instead
of a non-zero return value.
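
Roughly what I have in my local build, following the retry loop in hmm.rst
(svm->notifier and svm->mutex below are just placeholders for whatever
interval notifier and driver update lock the next revision ends up with,
and the exact hmm_range_fault() signature may still shift):

again:
	range.notifier_seq = mmu_interval_read_begin(&svm->notifier);
	down_read(&svm->mm->mmap_sem);
	ret = hmm_range_fault(&range, 0);
	up_read(&svm->mm->mmap_sem);
	if (ret <= 0) {
		/* check the page count, not just zero vs. non-zero */
		if (ret == -EBUSY)
			goto again;
		return ret ? ret : -EFAULT;
	}

	mutex_lock(&svm->mutex);
	if (mmu_interval_read_retry(&svm->notifier, range.notifier_seq)) {
		mutex_unlock(&svm->mutex);
		goto again;
	}

	/* use range.pfns[] to update the GPU page tables */
	mutex_unlock(&svm->mutex);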

>> +static int
>> +i915_range_fault(struct i915_svm *svm, struct hmm_range *range)
>> +{
>> +	long ret;
>> +
>> +	range->default_flags = 0;
>> +	range->pfn_flags_mask = -1UL;
>> +
>> +	ret = hmm_range_register(range, &svm->mirror);
>> +	if (ret) {
>> +		up_read(&svm->mm->mmap_sem);
>> +		return (int)ret;
>> +	}
>
>
>Using a temporary range is the pattern from nouveau, is it really
>necessary in this driver?

Yah, it is not required. In my local build I set proper default_flags
and pfn_flags_mask to 0, and it is working fine.
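
i.e. something like this when setting up the range in the bind path, which
also makes the per-pfn pre-fill loop quoted further down unnecessary
(readonly handling shown only as an illustration):

	/*
	 * Ask for all pages with at least read permission, and write
	 * permission unless the bind is read-only.  With pfn_flags_mask
	 * set to 0, default_flags alone decides what is requested.
	 */
	range.default_flags = range.flags[HMM_PFN_VALID];
	if (!(args->flags & I915_BIND_READONLY))
		range.default_flags |= range.flags[HMM_PFN_WRITE];
	range.pfn_flags_mask = 0;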

>
>> +static u32 i915_svm_build_sg(struct i915_address_space *vm,
>> +			     struct hmm_range *range,
>> +			     struct sg_table *st)
>> +{
>> +	struct scatterlist *sg;
>> +	u32 sg_page_sizes = 0;
>> +	u64 i, npages;
>> +
>> +	sg = NULL;
>> +	st->nents = 0;
>> +	npages = (range->end - range->start) / PAGE_SIZE;
>> +
>> +	/*
>> +	 * No need to dma map the host pages and later unmap them, as
>> +	 * the GPU is not allowed to access them with SVM. Hence, no need
>> +	 * for any intermediate data structure to hold the mappings.
>> +	 */
>> +	for (i = 0; i < npages; i++) {
>> +		u64 addr = range->pfns[i] & ~((1UL << range->pfn_shift) - 1);
>> +
>> +		if (sg && (addr == (sg_dma_address(sg) + sg->length))) {
>> +			sg->length += PAGE_SIZE;
>> +			sg_dma_len(sg) += PAGE_SIZE;
>> +			continue;
>> +		}
>> +
>> +		if (sg)
>> +			sg_page_sizes |= sg->length;
>> +
>> +		sg =  sg ? __sg_next(sg) : st->sgl;
>> +		sg_dma_address(sg) = addr;
>> +		sg_dma_len(sg) = PAGE_SIZE;
>> +		sg->length = PAGE_SIZE;
>> +		st->nents++;
>
>It is odd to build the range into a sgl.
>
>IMHO it is not a good idea to use the sg_dma_address like this, that
>should only be filled in by a dma map. Where does it end up being
>used?

The sgl is used to plug into the page table update function in i915.

For device memory on a discrete card, which is the case here, we don't
need a dma map; we don't expect any access to host memory, as mentioned
in the comment above.
For integrated gfx, if we need to use SVM with the iommu enabled, we do
need to dma map. That requires maintaining the dma address for each page
in some intermediate data structure; currently, this is TBD. A rough
sketch of what I have in mind is below.
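
If/when we do the integrated part, I would expect something along these
lines instead of writing the CPU physical address into the sgl directly
(i915_svm_map_sg is just a hypothetical name; vm->dma as the mapping
device and the lack of any coalescing are simplifications):

static int i915_svm_map_sg(struct i915_address_space *vm,
			   struct hmm_range *range,
			   struct sg_table *st)
{
	u64 i, npages = (range->end - range->start) / PAGE_SIZE;
	struct scatterlist *sg = st->sgl;

	/*
	 * Fill the sgl with the struct pages behind the HMM pfns and
	 * let the DMA API provide the addresses the GPU should use.
	 */
	for (i = 0; i < npages; i++, sg = __sg_next(sg))
		sg_set_page(sg, hmm_device_entry_to_page(range, range->pfns[i]),
			    PAGE_SIZE, 0);
	st->nents = npages;

	return dma_map_sg(vm->dma, st->sgl, st->nents, DMA_BIDIRECTIONAL) ?
	       0 : -ENOMEM;
}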

>
>> +	range.pfn_shift = PAGE_SHIFT;
>> +	range.start = args->start;
>> +	range.flags = i915_range_flags;
>> +	range.values = i915_range_values;
>> +	range.end = range.start + args->length;
>> +	for (i = 0; i < npages; ++i) {
>> +		range.pfns[i] = range.flags[HMM_PFN_VALID];
>> +		if (!(args->flags & I915_BIND_READONLY))
>> +			range.pfns[i] |= range.flags[HMM_PFN_WRITE];
>> +	}
>> +
>> +	down_read(&svm->mm->mmap_sem);
>> +	vma = find_vma(svm->mm, range.start);
>
>Why is find_vma needed?

Ok, looks like it is not needed; will remove it.

>
>> +	if (unlikely(!vma || vma->vm_end < range.end)) {
>> +		ret = -EFAULT;
>> +		goto vma_done;
>> +	}
>> +again:
>> +	ret = i915_range_fault(svm, &range);
>> +	if (unlikely(ret))
>> +		goto vma_done;
>> +
>> +	sg_page_sizes = i915_svm_build_sg(vm, &range, &st);
>>
>
>Keep in mind what can be done with the range values is limited until
>the lock is obtained. Reading the pfns and flags is OK though.
>

Yah, I either have to build the sgl here now and discard it later if the
range turns out not to be valid, or build it while holding the lock, which
lengthens the contention period. So I settled on doing it here, as that
seemed the better choice to me.
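
Roughly, with svm->mutex standing in for the driver update lock, and
i915_range_done() for whatever the validity check becomes with the new API:

	sg_page_sizes = i915_svm_build_sg(vm, &range, &st);

	mutex_lock(&svm->mutex);
	if (!i915_range_done(&range)) {
		/* invalidated meanwhile; the sgl is simply rebuilt on
		 * the next pass */
		mutex_unlock(&svm->mutex);
		goto again;
	}

	/* bind st into the GPU page tables while still holding the lock */
	mutex_unlock(&svm->mutex);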

>> +int i915_svm_bind_mm(struct i915_address_space *vm)
>> +{
>> +	struct i915_svm *svm;
>> +	struct mm_struct *mm;
>> +	int ret = 0;
>> +
>> +	mm = get_task_mm(current);
>
>I don't think the get_task_mm(current) is needed, the mmget is already
>done for current->mm ?

No, I don't think mmget is already done for current->mm here.
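
To spell out the intent here (the error code and the unbind placement are
only illustrative): take a reference on the process mm at bind time and
hold it for the lifetime of the mirror, dropping it with the matching
mmput():

	/* bind: pin the process address space for the mirror lifetime */
	mm = get_task_mm(current);
	if (!mm)
		return -EINVAL;
	svm->mm = mm;

	/* the unbind/release path drops the same reference */
	mmput(svm->mm);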

>
>Jason

