[PATCH 5/5] drm: allocate kernel mode-object IDs in cyclic fashion

Daniel Vetter daniel at ffwll.ch
Wed Sep 9 06:03:19 PDT 2015


On Wed, Sep 09, 2015 at 02:21:33PM +0200, David Herrmann wrote:
> Now that we support connector hotplugging, user-space might see
> mode-object IDs coming and going asynchronously. Therefore, we must make
> sure not to re-use object IDs, so as not to confuse user-space or
> introduce races. Hence, all kernel-allocated objects will no longer
> recycle IDs. Instead, we use the cyclic idr allocator (which still
> performs fine for reasonable allocation schemes).
> 
> However, for user-space allocated objects like framebuffers, we don't
> want to risk allowing malicious users to screw with the ID space.
> Furthermore, those objects happen to not be subject to ID hotplug races,
> as they're allocated and freed explicitly. Hence, we still recycle IDs
> for these objects (which are just framebuffers so far).
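
For anyone skimming: the behavioural difference boils down to idr_alloc()
vs idr_alloc_cyclic(). A toy sketch (made-up names, dummy pointer payload,
not part of the patch) of what that means for a freed ID:

#include <linux/idr.h>
#include <linux/gfp.h>

static DEFINE_IDR(demo_idr);

static void demo(void)
{
	int a, b;

	/* first-fit allocator: a freed ID is handed out again right away */
	a = idr_alloc(&demo_idr, &demo_idr, 1, 0, GFP_KERNEL);        /* -> 1 */
	idr_remove(&demo_idr, a);
	b = idr_alloc(&demo_idr, &demo_idr, 1, 0, GFP_KERNEL);        /* -> 1 again */
	idr_remove(&demo_idr, b);

	/* cyclic allocator: the cursor keeps moving forward, so a freed ID
	 * is only reused once the ID space wraps around */
	a = idr_alloc_cyclic(&demo_idr, &demo_idr, 1, 0, GFP_KERNEL); /* -> 1 */
	idr_remove(&demo_idr, a);
	b = idr_alloc_cyclic(&demo_idr, &demo_idr, 1, 0, GFP_KERNEL); /* -> 2 */
	idr_remove(&demo_idr, b);
}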

Since atomic, blob properties can also be created by userspace. And there
has actually been trouble once with an attempt in SNA to cache the EDID
blob without realizing that blob IDs get reused. So working userspace
_always_ has to re-read the blob property to make sure it has the current
one anyway.
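
Roughly what that means on the client side, sketched with the libdrm
helpers (hypothetical helper, error handling omitted; edid_prop_id stands
in for whatever the client resolved the connector's "EDID" property to,
and the caller frees the result with drmModeFreePropertyBlob()):

#include <stdint.h>
#include <stddef.h>
#include <xf86drmMode.h>

/* Always fetch the blob by the ID currently reported on the connector,
 * instead of hanging on to a blob fetched earlier. */
static drmModePropertyBlobPtr get_current_edid(int fd,
					       drmModeConnectorPtr conn,
					       uint32_t edid_prop_id)
{
	int i;

	for (i = 0; i < conn->count_props; i++) {
		if (conn->props[i] == edid_prop_id)
			return drmModeGetPropertyBlob(fd,
					(uint32_t)conn->prop_values[i]);
	}

	return NULL;
}
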
-Daniel

> 
> For atomic modesetting, objects are looked up by the kernel without
> specifying an object type. Hence, we have a theoretical race where a
> framebuffer recycles a previous connector ID. However, user-allocated
> objects are never returned by drm_mode_object_find() (as they need
> separate ref-count handling), so this race cannot happen with the
> currently available objects. Even if we add ref-counting to other
> objects, all we need to ensure is that user-controlled objects are never
> looked up through the same function as kernel-controlled objects without
> specifying the type. This sounds highly unlikely to ever happen, so we
> should be safe anyway.
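
For reference, the typed lookup path rejects a recycled ID of the wrong
type anyway, roughly like this (paraphrased with a made-up function name,
not the exact drm_crtc.c code):

#include <drm/drmP.h>

static struct drm_mode_object *lookup_checked(struct drm_device *dev,
					      uint32_t id, uint32_t type)
{
	struct drm_mode_object *obj;

	mutex_lock(&dev->mode_config.idr_mutex);
	obj = idr_find(&dev->mode_config.crtc_idr, id);
	if (obj && type != DRM_MODE_OBJECT_ANY && obj->type != type)
		obj = NULL;	/* ID now belongs to another object type */
	mutex_unlock(&dev->mode_config.idr_mutex);

	return obj;
}
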
> 
> Signed-off-by: David Herrmann <dh.herrmann at gmail.com>
> ---
>  drivers/gpu/drm/drm_crtc.c | 37 ++++++++++++++++++++++++++++++++-----
>  1 file changed, 32 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
> index 33d877c..fd8a2e2 100644
> --- a/drivers/gpu/drm/drm_crtc.c
> +++ b/drivers/gpu/drm/drm_crtc.c
> @@ -269,18 +269,42 @@ const char *drm_get_format_name(uint32_t format)
>  EXPORT_SYMBOL(drm_get_format_name);
>  
>  /*
> + * Flags for drm_mode_object_get_reg():
> + * DRM_MODE_OBJECT_ID_UNLINKED:	Allocate the object ID, but do not store the
> + * 				object pointer. Hence, the object is not
> + * 				registered but needs to be inserted manually.
> + * 				This must be used for hotplugged objects.
> + * DRM_MODE_OBJECT_ID_RECYCLE:	Allow recycling previously allocated IDs. If
> + * 				not set, the ID space is allocated in a cyclic
> + * 				fashion. This should be the default for all
> + * 				kernel allocated objects, to not confuse
> + * 				user-space on hotplug. This must not be used
> + * 				for user-allocated objects, though.
> + */
> +enum {
> +	DRM_MODE_OBJECT_ID_UNLINKED		= (1U << 0),
> +	DRM_MODE_OBJECT_ID_RECYCLE			= (1U << 1),
> +};
> +
> +/*
>   * Internal function to assign a slot in the object idr and optionally
>   * register the object into the idr.
>   */
>  static int drm_mode_object_get_reg(struct drm_device *dev,
>  				   struct drm_mode_object *obj,
>  				   uint32_t obj_type,
> -				   bool register_obj)
> +				   unsigned int flags)
>  {
> +	void *ptr = (flags & DRM_MODE_OBJECT_ID_UNLINKED) ? NULL : obj;
>  	int ret;
>  
>  	mutex_lock(&dev->mode_config.idr_mutex);
> -	ret = idr_alloc(&dev->mode_config.crtc_idr, register_obj ? obj : NULL, 1, 0, GFP_KERNEL);
> +	if (flags & DRM_MODE_OBJECT_ID_RECYCLE)
> +		ret = idr_alloc(&dev->mode_config.crtc_idr, ptr, 1, 0,
> +				GFP_KERNEL);
> +	else
> +		ret = idr_alloc_cyclic(&dev->mode_config.crtc_idr, ptr, 1, 0,
> +				       GFP_KERNEL);
>  	if (ret >= 0) {
>  		/*
>  		 * Set up the object linking under the protection of the idr
> @@ -312,7 +336,7 @@ static int drm_mode_object_get_reg(struct drm_device *dev,
>  int drm_mode_object_get(struct drm_device *dev,
>  			struct drm_mode_object *obj, uint32_t obj_type)
>  {
> -	return drm_mode_object_get_reg(dev, obj, obj_type, true);
> +	return drm_mode_object_get_reg(dev, obj, obj_type, 0);
>  }
>  
>  static void drm_mode_object_register(struct drm_device *dev,
> @@ -414,7 +438,8 @@ int drm_framebuffer_init(struct drm_device *dev, struct drm_framebuffer *fb,
>  	fb->dev = dev;
>  	fb->funcs = funcs;
>  
> -	ret = drm_mode_object_get(dev, &fb->base, DRM_MODE_OBJECT_FB);
> +	ret = drm_mode_object_get_reg(dev, &fb->base, DRM_MODE_OBJECT_FB,
> +				      DRM_MODE_OBJECT_ID_RECYCLE);
>  	if (ret)
>  		goto out;
>  
> @@ -872,7 +897,9 @@ int drm_connector_init(struct drm_device *dev,
>  
>  	drm_modeset_lock_all(dev);
>  
> -	ret = drm_mode_object_get_reg(dev, &connector->base, DRM_MODE_OBJECT_CONNECTOR, false);
> +	ret = drm_mode_object_get_reg(dev, &connector->base,
> +				      DRM_MODE_OBJECT_CONNECTOR,
> +				      DRM_MODE_OBJECT_ID_UNLINKED);
>  	if (ret)
>  		goto out_unlock;
>  
> -- 
> 2.5.1
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

