[PATCH 1/4] drm: add prime helpers

Aaron Plattner aplattner at nvidia.com
Fri Dec 7 12:33:37 PST 2012


On 12/07/2012 10:48 AM, Daniel Vetter wrote:
> On Fri, Dec 07, 2012 at 09:58:38AM -0800, Aaron Plattner wrote:
>> On 12/06/2012 01:46 PM, Daniel Vetter wrote:
>>> - To make it a first-class helper and remove any and all midlayer stints
>>>    from this (I truly loathe midlayers ...) maybe shovel this into a
>>>    drm_prime_helper.c file. Also, adding functions to the core vtable for
>>>    pure helpers is usually not too neat, since it makes it way too damn
>>>    easy to mix things up and transform the helper into a midlayer by
>>>    accident. E.g. the crtc helper functions all just use a generic void *
>>>    pointer for their vtables, other helpers either subclass an existing
>>>    struct or use their completely independent structs (which the driver can
>>>    both embed into its own object struct and then do the right upclassing
>>>    in the callbacks). tbh haven't looked too closely at the series to assess
>>>    what would make sense, but we have way too much gunk in the drm vtable
>>>    already ...
>>
>> I'm not sure I fully understand what you mean by this.  Are you
>> suggesting creating a separate struct for these functions and having a
>> void* pointer to that in struct drm_driver?
>
> Create a new struct like
>
> struct drm_prime_helper_funcs {
>         int (*gem_prime_pin)(struct drm_gem_object *obj);
>         struct sg_table *(*gem_prime_get_pages)(struct drm_gem_object *obj);
>         struct drm_gem_object *(*gem_prime_import_sg)(struct drm_device *dev,
>                                 size_t size, struct sg_table *sg);
>         void *(*gem_prime_vmap)(struct drm_gem_object *obj);
>         void (*gem_prime_vunmap)(struct drm_gem_object *obj);
> };
>
> struct drm_prime_helper_obj {
>         /* vmap information */
>         int vmapping_count;
>         void *vmapping_ptr;
>
> 	/* any other data the helpers need ... */
> 	struct drm_prime_helper_funcs *funcs;
> };

Ah, I see what you mean.  This would also need either a pointer to the
drm_device directly so the helpers can lock dev->struct_mutex, or a
helper function to convert from a drm_prime_helper_obj* to a
drm_gem_object* in a driver-specific way, since the helper doesn't know
the layout of the driver's bo structure.

So it would be something like

   struct drm_prime_helper_obj *helper_obj = dma_buf->priv;
   struct drm_prime_helper_funcs *funcs = helper_obj->funcs;
   struct drm_gem_object *obj = funcs->get_gem_obj(helper_obj);

   mutex_lock(&obj->dev->struct_mutex);
   funcs->whatever(obj);
   mutex_unlock(&obj->dev->struct_mutex);

and then for example, radeon_drm_prime_get_gem_obj would be

   struct radeon_bo *bo = container_of(helper_obj, struct radeon_bo,
                                       prime_helper);
   return &bo->gem_base;

?

That seems a little over-engineered to me, but I can code it up.
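
For reference, here's roughly what I'd picture one of the dma-buf
callbacks looking like on top of that (just a sketch, using the
hypothetical get_gem_obj callback from above and the vmapping fields in
your helper obj):

   static void *drm_prime_helper_dmabuf_vmap(struct dma_buf *dma_buf)
   {
           struct drm_prime_helper_obj *helper_obj = dma_buf->priv;
           struct drm_prime_helper_funcs *funcs = helper_obj->funcs;
           struct drm_gem_object *obj = funcs->get_gem_obj(helper_obj);
           void *ptr;

           mutex_lock(&obj->dev->struct_mutex);
           if (helper_obj->vmapping_count == 0) {
                   /* first user does the actual vmap through the driver */
                   helper_obj->vmapping_ptr = funcs->gem_prime_vmap(obj);
                   if (!helper_obj->vmapping_ptr) {
                           mutex_unlock(&obj->dev->struct_mutex);
                           return NULL;
                   }
           }
           helper_obj->vmapping_count++;
           ptr = helper_obj->vmapping_ptr;
           mutex_unlock(&obj->dev->struct_mutex);

           return ptr;
   }

with a matching vunmap that drops vmapping_count and only calls
funcs->gem_prime_vunmap(obj) when it hits zero.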

--
Aaron

> Drivers then fill out a func vtable and embedded the helper obj into their
> own gpu bo struct, i.e.
>
> struct radeon_bo {
> 	struct ttm_bo ttm_bo;
> 	struct gem_buffer_object gem_bo;
> 	struct drm_prime_helper_obj prime_bo;
>
> 	/* other stuff */
> };
>
> The prime helper callbacks in the driver then upcast the prime_bo to
> the radeon_bo, instead of upcasting the gem_bo to the radeon_bo.
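
(For what it's worth, I read "fill out a func vtable" as something like
the following on the radeon side. Just a sketch, with the get_gem_obj
callback I mentioned above added to the funcs struct, and the radeon_*
names made up:)

   static struct drm_prime_helper_funcs radeon_prime_helper_funcs = {
           .gem_prime_pin       = radeon_gem_prime_pin,
           .gem_prime_get_pages = radeon_gem_prime_get_pages,
           .gem_prime_import_sg = radeon_gem_prime_import_sg,
           .gem_prime_vmap      = radeon_gem_prime_vmap,
           .gem_prime_vunmap    = radeon_gem_prime_vunmap,
           .get_gem_obj         = radeon_drm_prime_get_gem_obj,
   };

with bo->prime_bo.funcs pointed at that when the bo is created.
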
>
> Upsides: Guaranteed to not interfere with drivers not using it, with an
> imo higher chance that the helper is cleanly separate from core stuff (no
> guarantee ofc) and so easier to refactor in the future. And drm is full
> of legacy stuff used by very few drivers, but highly intermingled with
> drm core used by other drivers (my little holy war here is to slowly
> refactor those things out and shovel them into drivers), hence why I
> prefer to have an as clean separation of optional stuff as possible ...
>
> Also, this way allows different drivers to use completely different helper
> libraries for the same task, without those two helper libraries ever
> interfering.
>
> Downside: Adds a pointer to each bo struct for drivers using the helpers.
> Given how big those tend to be, not too onerous. And maybe a bit of
> confusion in driver callbacks, but since the linux container_of is
> typechecked by the compiler, not too onerous either.
>
>>> Dunno whether this aligns with what you want to achieve here. But on a
> quick hunch this code feels rather inflexible and just refactors the
> identical copypasta code back into one copy. Anyway, just figured I'll
> drop my 2 cents here, feel free to ignore (maybe save for the last one).
>>
>> I think uncopypasta-ing the current code is beneficial on its own.
>> Do you think doing this now would make it harder to do the further
>> refactoring you suggest later?
>
> If we later on go with something like the above, it'll require going over
> all drivers again, doing similar mechanical changes. If no one yet managed
> to intermingle the helpers with the core in the meantime, that is ;-)
>
> Cheers, Daniel
>


