[RFC] drm/radeon: userfence IOCTL

Christian König deathsimple at vodafone.de
Mon Apr 13 08:47:35 PDT 2015


On 13.04.2015 17:39, Jerome Glisse wrote:
> On Mon, Apr 13, 2015 at 11:25:30AM -0400, Serguei Sagalovitch wrote:
>>> the BO to be kept in the same place while it is mapped inside the kernel
>>> page table
>> ...
>>> So this requires that we pin down the BO for the duration of the wait
>>> IOCTL.
>>
>> But my understanding is that it should not be the duration of the "wait"
>> IOCTL but the duration of command buffer execution.
>>
>> BTW: I would assume that this is not a new scenario.
>>
>>   This is the scenario:
>>      - User allocates a BO
>>      - User gets a CPU address for the BO
>>      - User submits a command buffer that writes to the BO
>>      - User can "poll" / "read" or "write" the BO data with the CPU
>>
>> So when TTM needs to move the BO to another location it should also update
>> the CPU "mapping" correctly so the user will always read / write the
>> correct data.
>>
>> Did I miss anything?
> No, this is how things work. But we want to avoid pinning the buffer.
> One use case for this userspace fence is, I assume, the same BO and
> user fence used across several command buffers. Given that the
> userspace wait fence ioctl has no way to know which command
> buffer it is really waiting on, the kernel has no knowledge
> of whether this user fence will signal at all. So a malicious user
> space (we always have to assume such a thing exists) could create
> a big VRAM BO and effectively pin it in VRAM, leading to a GPU
> DoS (denial of service).
>
> By the way Christian, I would add a timeout to this ioctl and
> return EAGAIN to userspace on timeout so that userspace can
> resubmit. That way a malicious userspace will just keep exhausting
> its own CPU timeslot.

Yeah, I know. I honestly haven't even tested the implementation; I just 
first wanted to check how far off the whole idea is.

On the one hand it is rather appealing, but on the other it's a completely 
different approach from what we have done so far. E.g. we can pretty much 
forget the whole kernel fence framework with that.

Regards,
Christian.

>
> Cheers,
> Jérôme


