[PATCH] drm/radeon: Add support for userspace fence waits

Christian König deathsimple at vodafone.de
Tue Jan 31 15:07:33 PST 2012


On 31.01.2012 20:59, Jerome Glisse wrote:
> On Tue, Jan 31, 2012 at 02:13:00PM -0500, Alex Deucher wrote:
>>> On Tue, Jan 31, 2012 at 2:07 PM, Jerome Glisse <j.glisse at gmail.com> wrote:
>>> On Tue, Jan 31, 2012 at 01:55:43PM -0500, David Airlie wrote:
>>>>
>>>> Some comments below.
>>>>
>>>>> +   struct radeon_device *rdev = dev->dev_private;
>>>>> +   struct drm_gem_object *gobj;
>>>>> +   struct radeon_bo *robj;
>>>>> +   void *buffer_data;
>>>>> +   uint32_t *fence_data;
>>>>> +   int r = 0;
>>>>> +   long timeout;
>>>>> +   int ring = RADEON_RING_TYPE_GFX_INDEX;
>>>>> +
>>>>> +   /* If you're implementing this for other rings, you'll want to
>>>>> share
>>>>> +      code with radeon_cs_get_ring in radeon_cs.c */
>>>>> +   if (args->ring != RADEON_CS_RING_GFX) {
>>>>> +           return -EINVAL;
>>>>> +   }
>>>>> +
>>>>> +   gobj = drm_gem_object_lookup(dev, filp, args->handle);
>>>>> +   if (gobj == NULL) {
>>>>> +           return -ENOENT;
>>>>> +   }
>>>>> +   robj = gem_to_radeon_bo(gobj);
>>>>> +
>>>>> +   if (gobj->size < args->offset) {
>>>>> +           r = -EINVAL;
>>>>> +           goto unreference;
>>>>> +   }
>>>>> +
>>>>> +   r = radeon_bo_reserve(robj, true);
>>>>> +   if (r) {
>>>>> +           goto unreference;
>>>>> +   }
>>>>> +
>>>>> +   r = radeon_bo_pin(robj, RADEON_GEM_DOMAIN_GTT, NULL);
>>>>> +   if (r) {
>>>>> +           goto unreserve;
>>>>> +   }
>>>>> +
>>>>> +   r = radeon_bo_kmap(robj, &buffer_data);
>>>>> +   if (r) {
>>>>> +           goto unpin;
>>>>> +   }
>>>>> +
>>>>
>>>> Do you need to pin it? I think if you have it reserved and you are in here you shouldn't need to (unless kmapping requires a pin).
>>> No, you don't need to pin. Actually it's bad to pin, since the buffer
>>> you might want to wait on could be in vram.
>>>
>>> Also you never account for the possibility that things go wrong and
>>> your value is never written to the buffer; in that case it will wait
>>> forever in the kernel.
>>>
>>> As I said in the other mail, I would rather add a wait-on-submitted-CS
>>> ioctl and some return value from the CS ioctl. I can't remember the
>>> outcome of the discussion we had when we were doing virtual memory
>>> support. Alex? Michel? Better memory than I do? :)
>>>
>> I vaguely recall the discussion, but I don't remember the details,
>> I'll have to look through my old mail.  Maybe a new CS ioctl flag
>> requesting EOP irqs for the CS would be a better approach than a
>> separate ioctl.
>>
>> Alex
> I think the idea was to return a u64 value for the specific ring and only
> enable the irq if the new wait-CS ioctl was called, a little bit like bo wait.
>
> Will try to dig through my mail too.
>
> Cheers,
> Jerome

As far as I remember we wanted the CS ioctl to return ring + fence number 
(expanded to 64 bit) and add another ioctl to wait for a ring to reach 
that fence number. That has the clear advantages that a) we no longer 
need a fence bo for each application and b) it also works for rings that 
can't write a fence value from a userspace IB.
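Just as an illustration, the interface described above could look roughly like the following sketch. All struct and field names here are hypothetical placeholders for the idea, not the actual radeon UAPI:

```c
#include <stdint.h>

/* Hypothetical sketch of the proposed interface: the CS ioctl returns a
 * 64-bit fence number for the ring it submitted to, and a separate wait
 * ioctl blocks until that ring reaches the given number.  Names are
 * invented for illustration only. */
struct drm_radeon_cs_result {
	uint32_t ring;     /* ring the command stream was submitted to */
	uint32_t pad;
	uint64_t fence;    /* 64-bit fence number of this submission */
};

struct drm_radeon_fence_wait {
	uint32_t ring;     /* ring to wait on */
	uint32_t pad;
	uint64_t fence;    /* fence number returned by the CS ioctl */
	uint64_t timeout;  /* timeout, so a lost fence can't block forever */
};
```

A timeout field like the one sketched here would also address Jerome's concern above about waiting forever in the kernel when the fence value is never written.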

I've started to implement that and at least had the 64-bit expansion of 
fence values and the returning of them in the CS ioctl working, but then 
got to work on other things first. I probably still have those patches 
lying around somewhere; on the other hand it should be pretty easy to 
implement...

Regards,
Christian.
