[PATCH 22/24] drm/amdkfd: Adding new IOCTL for scratch memory v2
Felix Kuehling
felix.kuehling at amd.com
Mon Aug 21 20:06:52 UTC 2017
On 2017-08-21 03:21 PM, Oded Gabbay wrote:
> On Mon, Aug 21, 2017 at 8:39 PM, Jerome Glisse <j.glisse at gmail.com> wrote:
>> On Tue, Aug 15, 2017 at 11:00:20PM -0400, Felix Kuehling wrote:
>>> From: Moses Reuben <moses.reuben at amd.com>
>>>
>>> v2:
>>> * Renamed ALLOC_MEMORY_OF_SCRATCH to SET_SCRATCH_BACKING_VA
>>> * Removed size parameter from the ioctl, it was unused
>>> * Removed hole in ioctl number space
>>> * No more call to write_config_static_mem
>>> * Return correct error code from ioctl
>> What kind of memory is supposed to back this virtual address
>> range? How big is the range supposed to be? Can it be any
>> valid virtual address?
>>
>> My worry here is to ascertain that one cannot abuse this
>> ioctl, say, to set the virtual address to some mmapped shared
>> library code/data section and write something malicious
>> there.
>>
>> I am assuming that if it has to go through ATS/PASID of the
>> IOMMUv2, then write protection will be asserted and we
>> will see proper COW (copy on write) due to the mmap PRIVATE flags.
>>
>>
>> Ideally this area should be a special vma, and the driver
>> should track its lifetime and cancel GPU jobs if it is
>> unmapped. But I am unsure how dynamic that scratch
>> memory is supposed to be (i.e., do you allocate new scratch
>> memory with every GPU job, or is it allocated once and reused
>> for every job).
>>
>> A bigger commit message would be nice too, as I had a lot of
>> (I believe valid) questions.
>>
>> Cheers,
>> Jérôme
> Hi Jerome,
> Great points.
>
> This is the explanation Felix gave a few days ago on this ioctl in a
> different email thread:
>
> "Scratch memory is private memory per work-item. It's used when a shader
> program has too few registers available. With HSA we use flat scratch
> addressing, where shaders can access private memory in a special scratch
> aperture using normal memory instructions. Using the same virtual
> address, each work item gets its own private piece of memory. The
> hardware does the address translation from the VA in the private
> aperture to a scratch-backing VA. The application is responsible for
> allocating the memory to back that scratch area and for mapping it
> somewhere in its virtual address space.
>
> This ioctl tells the hardware (or HWS firmware) the VA of the scratch
> backing memory."
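For reference, the v2 interface boils down to roughly the following
sketch. The struct and field names here are my assumption based on the
changelog above (VA only, no size parameter), not verbatim from the
patch:

  /* Hedged sketch of the v2 ioctl args: only the scratch backing VA is
   * passed (the size parameter was removed in v2), plus a GPU ID to
   * select the device. */
  struct kfd_ioctl_set_scratch_backing_va_args {
          __u64 va_addr;  /* scratch backing VA, to KFD */
          __u32 gpu_id;   /* to KFD */
          __u32 pad;
  };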
>
> From this explanation, I think the user-supplied VA should be
> tested to check that it's a valid writable area of the user's
> address space.
> How do you test for that? Could you point me to such code in the kernel?
> From looking at other drivers that pin host memory, like the Mellanox
> NICs, they don't do any special testing of the address they receive
> from the user.
>
> Felix,
> Is the scratch memory size fixed? I ask because I don't see a size
> argument in the ioctl.
The hardware needs a contiguous piece of virtual address space for
scratch backing. The base address of that contiguous region is
programmed into a per-VMID register (SH_HIDDEN_PRIVATE_BASE_VMID) by the
driver or the HWS (hardware scheduler) firmware. So the size is limited
by the amount of address space reserved for this purpose by user mode,
but the HW doesn't really care what that size is. The HSA runtime
manages the sub-allocation of scratch memory per queue. It knows the
size that it allocates and needs to make sure it sub-allocates within
that size.
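Concretely, user mode would reserve a contiguous range and hand its base
to KFD along these lines. This is a hedged sketch: the args struct
follows my sketch above, the ioctl name is taken from the changelog, and
error handling is mostly omitted:

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>

  /* Hypothetical user-mode flow: reserve a contiguous VA range for
   * scratch backing and tell KFD its base address. */
  static int set_scratch_backing(int kfd_fd, uint32_t gpu_id, size_t size)
  {
          void *base = mmap(NULL, size, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          if (base == MAP_FAILED)
                  return -1;

          struct kfd_ioctl_set_scratch_backing_va_args args = {
                  .va_addr = (uintptr_t)base,
                  .gpu_id  = gpu_id,
          };
          return ioctl(kfd_fd, AMDKFD_IOC_SET_SCRATCH_BACKING_VA, &args);
  }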
The queue setup data structures must contain, somewhere, information
about the offset from the start of scratch memory, plus some address
swizzling parameters that determine per-work-item or per-wavefront
addresses. I'm not familiar with the details of those data structures
or with the size limits in the hardware. Together they would determine
an upper limit on the size of the scratch backing.
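Purely to illustrate the kind of arithmetic involved (the real
swizzling parameters live in those queue setup structures and are not
spelled out here), a per-wavefront carve-up might look like:

  #include <stdint.h>

  /* Illustrative only: give each wavefront of a queue its own slice of
   * the scratch backing region. The HW's actual per-work-item swizzling
   * is more involved than this. */
  static uint64_t wave_scratch_va(uint64_t backing_base, /* ioctl VA */
                                  uint64_t queue_offset, /* runtime sub-allocation */
                                  uint32_t wave_id,
                                  uint32_t bytes_per_wave)
  {
          return backing_base + queue_offset +
                 (uint64_t)wave_id * bytes_per_wave;
  }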
BTW, on Vega10 the way scratch works has changed, and this ioctl
becomes basically a no-op. The SH_HIDDEN_PRIVATE_BASE_VMID register no
longer exists. Instead, the base address is provided to the shader
directly, without being programmed ahead of time by KFD or the HWS.
Setting up scratch memory per queue is entirely up to the runtime now.
Regards,
Felix
> If it is fixed, what is the size?
>
> Thanks,
> Oded