[Intel-gfx] [PATCH 9/9] drm/i915: Expand subslice mask

Tvrtko Ursulin tvrtko.ursulin at linux.intel.com
Tue Sep 10 08:13:59 UTC 2019


On 10/09/2019 05:53, Summers, Stuart wrote:
> On Fri, 2019-09-06 at 19:13 +0100, Chris Wilson wrote:
>> Quoting Tvrtko Ursulin (2019-09-02 14:42:44)
>>>
>>> On 24/07/2019 14:05, Tvrtko Ursulin wrote:
>>>>
>>>> On 23/07/2019 16:49, Stuart Summers wrote:
>>>>> +u32 intel_sseu_get_subslices(const struct sseu_dev_info *sseu, u8 slice)
>>>>> +{
>>>>> +    int i, offset = slice * sseu->ss_stride;
>>>>> +    u32 mask = 0;
>>>>> +
>>>>> +    if (slice >= sseu->max_slices) {
>>>>> +        DRM_ERROR("%s: invalid slice %d, max: %d\n",
>>>>> +              __func__, slice, sseu->max_slices);
>>>>> +        return 0;
>>>>> +    }
>>>>> +
>>>>> +    if (sseu->ss_stride > sizeof(mask)) {
>>>>> +        DRM_ERROR("%s: invalid subslice stride %d, max: %lu\n",
>>>>> +              __func__, sseu->ss_stride, sizeof(mask));
>>>>> +        return 0;
>>>>> +    }
>>>>> +
>>>>> +    for (i = 0; i < sseu->ss_stride; i++)
>>>>> +        mask |= (u32)sseu->subslice_mask[offset + i] <<
>>>>> +            i * BITS_PER_BYTE;
>>>>> +
>>>>> +    return mask;
>>>>> +}
>>>>
>>>> Why do you actually need these complications when the plan from the
>>>> start was that the driver and user sseu representation structures
>>>> can be different?
>>>>
>>>> I only gave it a quick look so I might be wrong, but why not just
>>>> expand the driver representations of subslice mask up from u8?
>>>> Userspace API should be able to cope with strides already.
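
As an aside for anyone reading along outside the driver, a minimal
user-space model of the byte-gathering in the quoted
intel_sseu_get_subslices() hunk might look like the following; the
struct and field names are simplified stand-ins for sseu_dev_info,
not the real definitions:

#include <stdint.h>
#include <stdio.h>

#define BITS_PER_BYTE 8

/* Simplified stand-in for the driver's sseu_dev_info; not the real layout. */
struct sseu_model {
	uint8_t max_slices;
	uint8_t ss_stride;             /* bytes of subslice mask per slice */
	uint8_t subslice_mask[4 * 4];  /* max_slices * ss_stride bytes */
};

/*
 * Same byte-gathering as the quoted hunk: byte i of a slice's mask ends
 * up in bits [i*8, i*8+7] of the returned value.
 */
static uint32_t get_subslices(const struct sseu_model *sseu, uint8_t slice)
{
	uint32_t mask = 0;
	int offset = slice * sseu->ss_stride;
	int i;

	for (i = 0; i < sseu->ss_stride; i++)
		mask |= (uint32_t)sseu->subslice_mask[offset + i] <<
			i * BITS_PER_BYTE;

	return mask;
}

int main(void)
{
	struct sseu_model sseu = { .max_slices = 1, .ss_stride = 2 };

	sseu.subslice_mask[0] = 0xff;  /* subslices 0-7 present */
	sseu.subslice_mask[1] = 0x03;  /* subslices 8-9 present */

	printf("slice 0 mask: 0x%x\n", get_subslices(&sseu, 0)); /* prints 0x3ff */
	return 0;
}
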
>>>
>>> I never got an answer to this and the series was merged in the
>>> meantime.
> 
> Thanks for the note here Tvrtko and sorry for the missed response! For
> some reason I hadn't caught this comment earlier :(

Ok no worries.

>>>
>>> Maybe not much harm, but I still don't understand why all the
>>> complications, seemingly just to avoid bumping the *internal* ss
>>> mask up from u8. As long as the internal and ABI sseu info structs
>>> are well separated and the access points few and well controlled
>>> (I think they are), then I don't see why the internal side had to
>>> be converted to u8 and strides. But maybe I am missing something.
>>
>> I looked at it and thought it was open-coding bitmap.h as well. I
>> accepted it in good faith that it improved certain use cases and
>> should even make tidying up the code without regressing those easier.
> 
> The goal here is to make sure we have an infrastructure in place that
> always provides a consistent bit layout to userspace regardless of
> underlying architecture endianness. Perhaps this could have been made
> more clear in the commit message here.
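
To make the endianness point concrete: copying a multi-byte integer mask
into the user buffer directly would depend on host byte order, whereas
serialising it one byte at a time, as the stride-based code does, always
gives the same layout. A small, self-contained illustration with a
hypothetical 10-bit mask (plain C, not the actual query code):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	const uint16_t mask = 0x3ff;   /* hypothetical 10-bit subslice mask */
	uint8_t raw[2], per_byte[2];

	/*
	 * memcpy of the integer is endian-dependent:
	 * {0xff, 0x03} on little-endian hosts, {0x03, 0xff} on big-endian.
	 */
	memcpy(raw, &mask, sizeof(mask));

	/*
	 * Byte-wise serialisation always gives {0xff, 0x03}:
	 * bit N of the mask is bit N % 8 of byte N / 8.
	 */
	per_byte[0] = mask & 0xff;
	per_byte[1] = mask >> 8;

	printf("raw copy: %02x %02x, per byte: %02x %02x\n",
	       raw[0], raw[1], per_byte[0], per_byte[1]);
	return 0;
}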

My point was that the internal and userspace representations do not have
to match, and that the code probably would have been much simpler if that
principle had been kept. We already had a split between the internal and
ABI sseu structs, and whereas the latter already understands the concept
of a stride, the former could simply have had its subslice mask field
expanded from u8 to u16, or whatever. But anyway, at this point I don't
even remember all the details of your series, and given it's merged I
won't be re-reading it.
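
For illustration only, the alternative described above might look roughly
like this; the struct and helper names are made up and this is a sketch,
not the merged code:

#include <stdint.h>

/* Hypothetical internal representation: one plain integer per slice. */
struct sseu_internal {
	uint8_t max_slices;
	uint16_t subslice_mask[3];     /* no per-byte stride handling */
};

/*
 * The stride would then only appear at the ABI boundary, when filling
 * the query buffer: byte i of each slice's entry carries mask bits
 * [i*8, i*8+7], regardless of host endianness.
 */
static void fill_abi_subslice_mask(const struct sseu_internal *sseu,
				   uint8_t *out, int ss_stride)
{
	int s, i;

	for (s = 0; s < sseu->max_slices; s++)
		for (i = 0; i < ss_stride; i++)
			out[s * ss_stride + i] =
				(sseu->subslice_mask[s] >> (i * 8)) & 0xff;
}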

Regards,

Tvrtko

