[Intel-gfx] [PATCH 08/13] drm/i915: Allow a context to define its set of engines

Tvrtko Ursulin tvrtko.ursulin at linux.intel.com
Mon Mar 11 10:12:52 UTC 2019


On 11/03/2019 09:45, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2019-03-11 09:23:44)
>>
>> On 08/03/2019 16:47, Chris Wilson wrote:
>>> Quoting Tvrtko Ursulin (2019-03-08 16:27:22)
>>>>
>>>> On 08/03/2019 14:12, Chris Wilson wrote:
>>>>> +static int
>>>>> +set_engines(struct i915_gem_context *ctx,
>>>>> +         const struct drm_i915_gem_context_param *args)
>>>>> +{
>>>>> +     struct i915_context_param_engines __user *user;
>>>>> +     struct set_engines set = { .ctx = ctx };
>>>>> +     u64 size, extensions;
>>>>> +     unsigned int n;
>>>>> +     int err;
>>>>> +
>>>>> +     user = u64_to_user_ptr(args->value);
>>>>> +     size = args->size;
>>>>> +     if (!size)
>>>>> +             goto out;
>>>>
>>>> This prevents a hypothetical extension with empty map data.
>>>
>>> No... This is required for resetting and I think that's covered in what
>>> little docs there are. It's the set.nengine==0 test later
>>> that you mean to object to. But we can't do that as that's how we
>>> differentiate between modes at the moment.
>>>
>>> We could use ctx->nengine = 0 and ctx->engines = ZERO_SIZE_PTR.
>>
>> size == sizeof(struct i915_context_param_engines) could mean reset,
>> i.e. no map array provided.
> 
> Nah, size=sizeof() => 0 [], size=0 => default map.
>   
>> Meaning one could reset the map and still pass in extensions.
> 
> I missed that you were pointing out we didn't follow the extensions on
> resetting.
> 
> I'm not sure if that makes sense tbh. The extensions are written around
> the concept of applying to the new engines[], and if the user has
> explicitly removed the engines[] (distinct from defining a zero array),
> what extensions can apply? One hopes they end up -EINVAL. And since they
> should -EINVAL, I guess there is no harm done in applying them.

Yeah, it is hypothetical for now. Just future-proofing in case we add some 
engine map extension which makes sense after resetting the map.
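
To make the size-based split concrete, something along these lines is what 
I read into the above (a rough sketch only; set_default_engines(), 
set_empty_engines() and set_user_engines() are made-up names, not from the 
patch):

static int set_engines_dispatch(struct i915_gem_context *ctx,
                                const struct drm_i915_gem_context_param *args)
{
        /* size == 0: drop the map entirely, back to the default engines */
        if (!args->size)
                return set_default_engines(ctx);

        /* header only: an explicit zero-length engines[] (nengine == 0) */
        if (args->size == sizeof(struct i915_context_param_engines))
                return set_empty_engines(ctx);

        /* header + class/instance array */
        return set_user_engines(ctx, args);
}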

>>>>> +     BUILD_BUG_ON(!IS_ALIGNED(sizeof(*user), sizeof(*user->class_instance)));
>>>>> +     if (size < sizeof(*user) || size % sizeof(*user->class_instance))
>>>>
>>>> IS_ALIGNED for the second condition for consistency with the BUILD_BUG_ON?
>>>>
>>>>> +             return -EINVAL;
>>>>> +
>>>>> +     set.nengine = (size - sizeof(*user)) / sizeof(*user->class_instance);
>>>>> +     if (set.nengine == 0 || set.nengine > I915_EXEC_RING_MASK + 1)
>>>>
>>>> I would prefer we drop the size restriction since it doesn't apply to
>>>> the engine map per se.
>>>
>>> u64 is a limit that will be non-trivial to lift. Marking the limits of
>>> the kernel doesn't prevent them from being lifted later.
>>
>> My thinking is that the u64 limit applies to the load-balancing extension,
>> and the 64-engine limit applies to execbuf. The engine map itself is not
>> limited. But I guess it is a theoretical/pointless discussion at this point.
> 
> I know what you mean; I'm just noting that we use u64 around the
> uAPI for masks, and u8/unsigned long internally. So even going beyond
> BITS_PER_LONG is problematic.
> 
> I'm in two minds. Yes, the limit doesn't apply to engines[] itself, but
> for practical reasons there are limits, and until we can remove those,
> lifting the restriction here is immaterial :|

Ok.
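
And to spell out the earlier IS_ALIGNED point, the consistent form of the 
size check I had in mind is just this (a sketch; validate_engines_size() is 
a made-up wrapper around the hunk above):

static int validate_engines_size(const struct i915_context_param_engines __user *user,
                                 u64 size)
{
        /* relies on sizeof(*user->class_instance) being a power of two */
        BUILD_BUG_ON(!IS_ALIGNED(sizeof(*user), sizeof(*user->class_instance)));

        if (size < sizeof(*user) ||
            !IS_ALIGNED(size, sizeof(*user->class_instance)))
                return -EINVAL;

        return 0;
}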

> 
>>>>> +static int
>>>>> +get_engines(struct i915_gem_context *ctx,
>>>>> +         struct drm_i915_gem_context_param *args)
>>>>> +{
>>>>> +     struct i915_context_param_engines *local;
>>>>> +     unsigned int n, count, size;
>>>>> +     int err = 0;
>>>>> +
>>>>> +restart:
>>>>> +     count = READ_ONCE(ctx->nengine);
>>>>> +     if (count > (INT_MAX - sizeof(*local)) / sizeof(*local->class_instance))
>>>>> +             return -ENOMEM; /* unrepresentable! */
>>>>
>>>> Probably overly paranoid, since we can't actually end up in this state.
>>>
>>> And I thought you wanted many engines! Paranoia around kmalloc/user
>>> overflows is always useful, because you know someone will send a patch
>>> later (and smatch doesn't really care as it only checks the limits of
>>> types and local constraints).
>>
>> Put a comment on what it is checking then. Why INT_MAX and not U32_MAX btw?
> 
> Vague memories about what gets checked for overflow in drm_malloc_large.
> Nowadays, the in-trend is check_mul_overflow() with size_t.

Oh, that one... I was thinking about the args->size = size assignment (u32). 
Both are 32-bit, luckily.
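
For reference, the check_mul_overflow() style Chris mentions would look 
roughly like this (a sketch only; engines_copy_size() is a made-up helper, 
and the field names are taken from the quoted patch):

#include <linux/overflow.h>

static int engines_copy_size(unsigned int count, u32 *out)
{
        const struct i915_context_param_engines *local;
        size_t size;

        /* size = sizeof(*local) + count * sizeof(*local->class_instance) */
        if (check_mul_overflow((size_t)count,
                               sizeof(*local->class_instance), &size) ||
            check_add_overflow(size, sizeof(*local), &size))
                return -ENOMEM;

        /* args->size is u32, so the result must be representable there too */
        if (size > U32_MAX)
                return -E2BIG;

        *out = size;
        return 0;
}

IIRC struct_size() from the same header wraps the multiply-plus-header 
pattern as well.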

Regards,

Tvrtko
