[PATCH 1/2] drm/xe/guc: Use proper definitions while processing G2H events

Michal Wajdeczko michal.wajdeczko at intel.com
Wed Jan 10 17:58:03 UTC 2024



On 10.01.2024 01:44, Matthew Brost wrote:
> On Wed, Jan 10, 2024 at 12:00:14AM +0100, Michal Wajdeczko wrote:
>> While dispatching G2H events we should use the HXG EVENT definitions;
>> there is no need to rely on the outer CTB layer definitions, which force
>> us to use shifted offsets:
>>
>> 	FIELD_GET(GUC_HXG_MSG_0_xxx, msg[1])
>> vs
>> 	FIELD_GET(GUC_HXG_MSG_0_xxx, hxg[0])
>>
>> Signed-off-by: Michal Wajdeczko <michal.wajdeczko at intel.com>
>> Cc: Matthew Brost <matthew.brost at intel.com>
>> ---
>>  drivers/gpu/drm/xe/xe_guc_ct.c | 15 +++++++++------
>>  1 file changed, 9 insertions(+), 6 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c
>> index c29f095aa1b9..9d1d855da229 100644
>> --- a/drivers/gpu/drm/xe/xe_guc_ct.c
>> +++ b/drivers/gpu/drm/xe/xe_guc_ct.c
>> @@ -923,18 +923,21 @@ static int parse_g2h_msg(struct xe_guc_ct *ct, u32 *msg, u32 len)
>>  	return ret;
>>  }
>>  
>> -static int process_g2h_msg(struct xe_guc_ct *ct, u32 *msg, u32 len)
>> +static int process_g2h_msg(struct xe_guc_ct *ct, u32 *hxg, u32 len)
> 
> I can't say I love this change as we pass around ct->msg as an argument

IMO passing ct->msg together with the ct pointer is also questionable

> to a bunch of other functions, and changing this one to hxg makes it
> incongruent. Maybe change this patch to assign hxg from msg first
> and then parse the fields from the hxg variable? Also maybe add a
> msg_to_hxg helper and clean up all the msg[1] usage in this file too.
> This would make everything consistent.

Since I also like consistency, let's start with a local variable:

	u32 *hxg = msg_to_hxg(msg);
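For reference, a minimal sketch of what such a helper could look like
(hypothetical implementation, assuming the HXG message simply follows the
GUC_CTB_MSG_MIN_LEN-dword CTB header, matching the offset used in the last
hunk of this patch):

	/*
	 * Hypothetical helper: the HXG message is embedded right after the
	 * CTB message header, so skip GUC_CTB_MSG_MIN_LEN dwords to reach it.
	 */
	static u32 *msg_to_hxg(u32 *msg)
	{
		return msg + GUC_CTB_MSG_MIN_LEN;
	}

With that, process_g2h_msg() could keep taking msg/len and derive the HXG
view locally, e.g.:

	u32 *hxg = msg_to_hxg(msg);
	u32 hxg_len = len - GUC_CTB_MSG_MIN_LEN;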

> 
> Matt
> 
>>  {
>>  	struct xe_device *xe = ct_to_xe(ct);
>>  	struct xe_guc *guc = ct_to_guc(ct);
>> -	u32 action = FIELD_GET(GUC_HXG_EVENT_MSG_0_ACTION, msg[1]);
>> -	u32 *payload = msg + GUC_CTB_HXG_MSG_MIN_LEN;
>> -	u32 adj_len = len - GUC_CTB_HXG_MSG_MIN_LEN;
>> +	u32 action, adj_len;
>> +	u32 *payload;
>>  	int ret = 0;
>>  
>> -	if (FIELD_GET(GUC_HXG_MSG_0_TYPE, msg[1]) != GUC_HXG_TYPE_EVENT)
>> +	if (FIELD_GET(GUC_HXG_MSG_0_TYPE, hxg[0]) != GUC_HXG_TYPE_EVENT)
>>  		return 0;
>>  
>> +	action = FIELD_GET(GUC_HXG_EVENT_MSG_0_ACTION, hxg[0]);
>> +	payload = hxg + GUC_HXG_EVENT_MSG_MIN_LEN;
>> +	adj_len = len - GUC_HXG_EVENT_MSG_MIN_LEN;
>> +
>>  	switch (action) {
>>  	case XE_GUC_ACTION_SCHED_CONTEXT_MODE_DONE:
>>  		ret = xe_guc_sched_done_handler(guc, payload, adj_len);
>> @@ -1145,7 +1148,7 @@ static int dequeue_one_g2h(struct xe_guc_ct *ct)
>>  	if (unlikely(ret < 0))
>>  		return ret;
>>  
>> -	ret = process_g2h_msg(ct, ct->msg, len);
>> +	ret = process_g2h_msg(ct, ct->msg + GUC_CTB_MSG_MIN_LEN, len - GUC_CTB_MSG_MIN_LEN);
>>  	if (unlikely(ret < 0))
>>  		return ret;
>>  
>>
>> base-commit: 39df1f6b1259816cc42b5f2451ca5092fad340ce
>> -- 
>> 2.25.1
>>

