Subject: [PATCH] RFC: omapdrm DRM/KMS driver for TI OMAP platforms
From: Daniel Vetter <daniel at ffwll.ch>
Date: Sat Sep 3 11:53:55 PDT 2011
Hi Rob,
I've taken a look at your driver, some comments inline. Can't really
comment much on the kms side because of lack of expertise in that area.
I think most can be trivially addressed by dropping the plugin layer ;-)
Cheers, Daniel
On Fri, Sep 02, 2011 at 03:07:27PM -0500, Rob Clark wrote:
> [snip]
> +int omap_connector_sync(struct drm_connector *connector)
> +{
> + struct omap_connector *omap_connector = to_omap_connector(connector);
> + struct omap_dss_device *dssdev = omap_connector->dssdev;
> + struct omap_dss_driver *dssdrv = dssdev->driver;
> +
> + DBG("%s", omap_connector->dssdev->name);
> +
> + if (dssdrv->sync) {
> + return dssdrv->sync(dssdev);
> + }
> +
> + return -EINVAL;
> +}
> +EXPORT_SYMBOL(omap_connector_sync);
What is this for? There seem to be a few other EXPORT_SYMBOLs (besides
those to register plugins) that look a bit strange - especially since
there already is the dss layer to arbitrate between all the drivers (drm,
v4l, fbdev). Or is all of that stuff needed by your plugins? If so, better
to drop them until the first user shows up (and even then it's likely
easier if the glue code comes as a separate patch).
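Concretely, until some other module actually calls it, something like this
would do (same function from your patch, just with the export dropped -
rough sketch only):

static int omap_connector_sync(struct drm_connector *connector)
{
	struct omap_connector *omap_connector = to_omap_connector(connector);
	struct omap_dss_device *dssdev = omap_connector->dssdev;
	struct omap_dss_driver *dssdrv = dssdev->driver;

	DBG("%s", omap_connector->dssdev->name);

	/* defer to the dss driver, which already arbitrates between users */
	return dssdrv->sync ? dssdrv->sync(dssdev) : -EINVAL;
}
/* no EXPORT_SYMBOL until a second module shows up as a user */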
> [snip]
> +/** ensure backing pages are allocated */
> +/* NOTE: a bit similar to psb_gtt_attach_pages().. perhaps could be common?
> + * and maybe a bit similar to i915_gem_object_get_pages_gtt()
> + */
Actually see also i915_gem_object_get_pages_gtt for yet another copy of
this. This is a bit of a mess and I'm not sure what to do, short of slowly
evolving gem into something resembling ttm ...
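To illustrate what I mean (hypothetical helper, the name
drm_gem_get_shmem_pages is made up and nothing like it exists in the gem
core today): this loop could live in one place and psb/i915/omap would
just call it.

struct page **drm_gem_get_shmem_pages(struct drm_gem_object *obj, gfp_t gfpmask)
{
	struct address_space *mapping = obj->filp->f_path.dentry->d_inode->i_mapping;
	int i, npages = obj->size >> PAGE_SHIFT;
	struct page **pages, *p;

	pages = kcalloc(npages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return ERR_PTR(-ENOMEM);

	for (i = 0; i < npages; i++) {
		p = shmem_read_mapping_page_gfp(mapping, i, gfpmask);
		if (IS_ERR(p))
			goto fail;
		pages[i] = p;
	}
	return pages;

fail:
	/* drop the references taken so far */
	while (i--)
		page_cache_release(pages[i]);
	kfree(pages);
	return ERR_CAST(p);
}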
> +static int omap_gem_attach_pages(struct drm_gem_object *obj)
> +{
> + struct omap_gem_object *omap_obj = to_omap_bo(obj);
> + struct inode *inode;
> + struct address_space *mapping;
> + struct page *p;
> + int i, npages;
> + gfp_t gfpmask;
> +
> + WARN_ON(omap_obj->pages);
> +
> + /* This is the shared memory object that backs the GEM resource */
> + inode = obj->filp->f_path.dentry->d_inode;
> + mapping = inode->i_mapping;
> +
> + npages = obj->size >> PAGE_SHIFT;
> +
> + omap_obj->pages = kmalloc(npages * sizeof(struct page *), GFP_KERNEL);
> + if (omap_obj->pages == NULL)
> + return -ENOMEM;
> +
> + /* FIXME: review flags later */
> + gfpmask = __GFP_DMA32 | mapping_gfp_mask(mapping);
> +
> + for (i = 0; i < npages; i++) {
> + p = shmem_read_mapping_page_gfp(mapping, i, gfpmask);
> + if (IS_ERR(p))
> + goto fail;
> + omap_obj->pages[i] = p;
> + }
> +
> + return 0;
> +
> +fail:
> + while (i--) {
> + page_cache_release(omap_obj->pages[i]);
> + }
> + kfree(omap_obj->pages);
> + omap_obj->pages = NULL;
> + return PTR_ERR(p);
> +
> +
> +}
> +
> +/** release backing pages */
> +/* NOTE: a bit similar to psb_gtt_detatch_pages().. perhaps could be common? */
> +static void omap_gem_detach_pages(struct drm_gem_object *obj)
See i915_gem_object_put_pages_gtt (minus the madv logic for purgeable objects).
> +{
> + struct omap_gem_object *omap_obj = to_omap_bo(obj);
> + int i, npages;
> +
> + npages = obj->size >> PAGE_SHIFT;
> +
> + for (i = 0; i < npages; i++) {
> + /* FIXME: do we need to force dirty */
> + set_page_dirty(omap_obj->pages[i]);
> + /* Undo the reference we took when populating the table */
> + page_cache_release(omap_obj->pages[i]);
> + }
> +
> + kfree(omap_obj->pages);
> + omap_obj->pages = NULL;
> +}
> +
> [snip]
> +/* Get physical address for DMA.. if 'remap' is true, and the buffer is not
> + * already contiguous, remap it to pin in physically contiguous memory.. (ie.
> + * map in TILER)
> + */
> +int omap_gem_get_paddr(struct drm_gem_object *obj,
> + unsigned long *paddr, bool remap)
> +{
> + struct omap_gem_object *omap_obj = to_omap_bo(obj);
> +
> + if (!(omap_obj->flags & OMAP_BO_DMA)) {
> + /* TODO: remap to TILER */
> + return -ENOMEM;
> + }
> +
> + *paddr = omap_obj->paddr;
> +
> + return 0;
> +}
> +EXPORT_SYMBOL(omap_gem_get_paddr);
> +
> +/* Release physical address, when DMA is no longer being performed.. this
> + * could potentially unpin and unmap buffers from TILER
> + */
> +int omap_gem_put_paddr(struct drm_gem_object *obj)
> +{
> + /* do something here when remap to TILER is used.. */
> + return 0;
> +}
> +EXPORT_SYMBOL(omap_gem_put_paddr);
> +
> +/* acquire pages when needed (for example, for DMA where physically
> + * contiguous buffer is not required
> + */
> +int omap_gem_get_pages(struct drm_gem_object *obj, struct page ***pages)
> +{
> + struct omap_gem_object *omap_obj = to_omap_bo(obj);
> + /* TODO: we could attach/detach pages on demand */
> + int ret; // XXX below is common in _fault()..
> + if (obj->filp && !omap_obj->pages) {
> + ret = omap_gem_attach_pages(obj);
> + if (ret) {
> + dev_err(obj->dev->dev, "could not attach pages\n");
> + return ret;
> + }
> + }
> + /* TODO: even phys-contig.. we should have a list of pages */
> + *pages = omap_obj->pages;
> + return 0;
> +}
> +EXPORT_SYMBOL(omap_gem_get_pages);
> +
> +/* release pages when DMA no longer being performed */
> +int omap_gem_put_pages(struct drm_gem_object *obj)
> +{
> + /* do something here if we dynamically attach/detach pages.. at
> + * least they would no longer need to be pinned if everyone has
> + * released the pages..
> + */
> + return 0;
> +}
> +EXPORT_SYMBOL(omap_gem_put_pages);
Above set of get/put functions seem to do very little. Drop them for the
first round?
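If they do survive, having them at least track a pin count would give
put_pages() something to do and a natural place to hang the "detach on
demand" TODO - rough sketch only, the pin_count field is made up and not
in your patch:

int omap_gem_get_pages(struct drm_gem_object *obj, struct page ***pages)
{
	struct omap_gem_object *omap_obj = to_omap_bo(obj);
	int ret;

	if (obj->filp && !omap_obj->pages) {
		ret = omap_gem_attach_pages(obj);
		if (ret)
			return ret;
	}
	omap_obj->pin_count++;
	*pages = omap_obj->pages;
	return 0;
}

int omap_gem_put_pages(struct drm_gem_object *obj)
{
	struct omap_gem_object *omap_obj = to_omap_bo(obj);

	if (!WARN_ON(omap_obj->pin_count == 0))
		omap_obj->pin_count--;
	/* once pin_count drops to zero the pages could be detached again */
	return 0;
}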
> [snip]
> +/* Buffer Synchronization:
> + */
> +
> +struct omap_gem_sync_waiter {
> + struct list_head list;
> + struct omap_gem_object *omap_obj;
> + enum omap_gem_op op;
> + uint32_t read_target, write_target;
> + /* notify called w/ sync_lock held */
> + void (*notify)(void *arg);
> + void *arg;
> +};
> +
> +/* list of omap_gem_sync_waiter.. the notify fxn gets called back when
> + * the read and/or write target count is achieved which can call a user
> + * callback (ex. to kick 3d and/or 2d), wakeup blocked task (prep for
> + * cpu access), etc.
> + */
> +static LIST_HEAD(waiters);
> +
> +static inline bool is_waiting(struct omap_gem_sync_waiter *waiter)
> +{
> + struct omap_gem_object *omap_obj = waiter->omap_obj;
> + if ((waiter->op & OMAP_GEM_READ) &&
> + (omap_obj->sync->read_complete < waiter->read_target))
> + return true;
> + if ((waiter->op & OMAP_GEM_WRITE) &&
> + (omap_obj->sync->write_complete < waiter->write_target))
> + return true;
> + return false;
> +}
On a quick read this looks awfully like handrolled gpu sync objects, for
which we already have a fully-featured implementation in ttm - and another
handrolled variant in i915 (request tracking). Can we do better?
[ Looks like it's part of the plugin layer, so problem postponed. Phew ]
> [snip]
> +void omap_gem_set_sync_object(struct drm_gem_object *obj, void *syncobj)
> +{
> + struct omap_gem_object *omap_obj = to_omap_bo(obj);
> +
> + spin_lock(&sync_lock);
> +
> + if ((omap_obj->flags & OMAP_BO_EXT_SYNC) && !syncobj) {
> + /* clearing a previously set syncobj */
> + syncobj = kzalloc(sizeof(*omap_obj->sync), GFP_KERNEL|GFP_ATOMIC);
> + memcpy(syncobj, omap_obj->sync, sizeof(*omap_obj->sync));
> + omap_obj->flags &= ~OMAP_BO_EXT_SYNC;
> + omap_obj->sync = syncobj;
> + } else if (syncobj && !(omap_obj->flags & OMAP_BO_EXT_SYNC)) {
> + /* replacing an existing syncobj */
> + if (omap_obj->sync) {
> + memcpy(syncobj, omap_obj->sync, sizeof(*omap_obj->sync));
> + kfree(omap_obj->sync);
> + }
> + omap_obj->flags |= OMAP_BO_EXT_SYNC;
> + omap_obj->sync = syncobj;
> + }
> + spin_unlock(&sync_lock);
> +}
> +EXPORT_SYMBOL(omap_gem_set_sync_object);
Abstraction layers to adapt to some blob's interface (be it the Windows
kernel driver interface or a blob driver core) are a bit frowned upon
(actually I think some people would nuke them first and ask questions
later ;-). Maybe hide that in your own vendor patch?
> +int omap_gem_init_object(struct drm_gem_object *obj)
> +{
> + return -EINVAL; /* unused */
> +}
> +
> +/* don't call directly.. called from GEM core when it is time to actually
> + * free the object..
> + */
> +void omap_gem_free_object(struct drm_gem_object *obj)
> +{
> + struct omap_gem_object *omap_obj = to_omap_bo(obj);
> +
> + if (obj->map_list.map) {
> + drm_gem_free_mmap_offset(obj);
> + }
> +
> + /* don't free externally allocated backing memory */
> + if (!(omap_obj->flags & OMAP_BO_EXT_MEM)) {
Nobody sets OMAP_BO_EXT_MEM flag. What's this for? Better implement this
when it actually gets used ...
> + if (!obj->filp) {
> + /* TODO: is this the best way to check? */
> + omap_vram_free(omap_obj->paddr, obj->size);
> + }
> +
> + if (omap_obj->pages) {
> + omap_gem_detach_pages(obj);
> + }
> + }
> +
> + /* don't free externally allocated syncobj */
> + if (!(omap_obj->flags & OMAP_BO_EXT_SYNC)) {
> + kfree(omap_obj->sync);
> + }
> +
> + drm_gem_object_release(obj);
> +
> + kfree(obj);
> +}
> +
> +/* convenience method to construct a GEM buffer object, and userspace handle */
Inline with its only user above and scrap the forward decl?
> +int omap_gem_new_handle(struct drm_device *dev, struct drm_file *file,
> + size_t size, uint32_t flags, uint32_t *handle, uint64_t *offset)
> +{
> + struct drm_gem_object *obj;
> + int ret;
> +
> + obj = omap_gem_new(dev, size, flags);
> + if (!obj)
> + return -ENOMEM;
> +
> + ret = drm_gem_handle_create(file, obj, handle);
> + if (ret) {
> + drm_gem_object_release(obj);
> + kfree(obj); /* TODO isn't there a dtor to call? just copying i915 */
> + return ret;
> + }
> +
> + if (offset) {
> + *offset = omap_gem_mmap_offset(obj);
> + }
> +
> + /* drop reference from allocate - handle holds it now */
> + drm_gem_object_unreference_unlocked(obj);
> +
> + return 0;
> +}
> +/* common constructor body */
> +static struct drm_gem_object * omap_gem_new_impl(struct drm_device *dev,
> + size_t size, uint32_t flags, unsigned long paddr, struct page **pages,
> + struct omap_gem_vm_ops *ops)
> +{
> + struct omap_gem_object *omap_obj;
> + struct drm_gem_object *obj;
> + int ret;
> +
> + size = PAGE_ALIGN(size);
> +
> + omap_obj = kzalloc(sizeof(*omap_obj), GFP_KERNEL);
> + if (!omap_obj) {
> + dev_err(dev->dev, "could not allocate GEM object\n");
> + goto fail;
> + }
> +
> + obj = &omap_obj->base;
> +
> + if (paddr) {
> + flags |= OMAP_BO_DMA;
> + }
> +
> + omap_obj->paddr = paddr;
> + omap_obj->pages = pages;
> + omap_obj->flags = flags;
> + omap_obj->ops = ops;
> +
> + if (flags & (OMAP_BO_DMA|OMAP_BO_EXT_MEM)) {
> + ret = drm_gem_private_object_init(dev, obj, size);
> + } else {
> + ret = drm_gem_object_init(dev, obj, size);
> + }
> +
> + if (ret) {
> + /* Yikes! need to clean up! XXX */
> + goto fail;
> + }
> +
> + /* Make it mmapable */
> + ret = drm_gem_create_mmap_offset(obj);
> + if (ret) {
> + dev_err(dev->dev, "could not allocate mmap offset");
The drm mmap offset manager isn't the fastest beast in the world. Maybe
create mmap offsets on demand, only when userspace actually needs them?
I.e. in the ioctl_gem_info function.
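Roughly something like this (untested sketch; I'm assuming
omap_gem_mmap_offset() is where the fake offset gets read out, and locking
against concurrent creation is left out):

uint64_t omap_gem_mmap_offset(struct drm_gem_object *obj)
{
	/* create the fake offset lazily, only when userspace asks for it */
	if (!obj->map_list.map) {
		int ret = drm_gem_create_mmap_offset(obj);
		if (ret) {
			dev_err(obj->dev->dev, "could not allocate mmap offset\n");
			return 0;
		}
	}

	return (uint64_t)obj->map_list.hash.key << PAGE_SHIFT;
}

and then omap_gem_new_impl() wouldn't need to call
drm_gem_create_mmap_offset() at all.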
> + goto fail;
> + }
> +
> + return obj;
> +
> +fail:
> + if (omap_obj) {
> + kfree(omap_obj);
> + }
> + return NULL;
> +}
> +
> [snip]
--
Daniel Vetter
Mail: daniel at ffwll.ch
Mobile: +41 (0)79 365 57 48