[PATCH v5 1/4] drm/sched: add device name to the drm_sched_process_job event

Christian König christian.koenig at amd.com
Mon Jun 17 12:59:26 UTC 2024


On 17.06.24 at 14:54, Pierre-Eric Pelloux-Prayer wrote:
> Hi,
>
> On 14/06/2024 at 11:08, Christian König wrote:
>>
>>
>> On 14.06.24 at 10:16, Pierre-Eric Pelloux-Prayer wrote:
>>> Until the switch from kthread to workqueue, a userspace application
>>> could determine the source device from the pid of the thread sending
>>> the event.
>>>
>>> With workqueues this is not possible anymore, so the event needs to
>>> contain the dev_name() to identify the device.
>>>
>>> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer at amd.com>
>>> ---
>>>   drivers/gpu/drm/scheduler/gpu_scheduler_trace.h | 10 ++++++++--
>>>   1 file changed, 8 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler_trace.h b/drivers/gpu/drm/scheduler/gpu_scheduler_trace.h
>>> index c75302ca3427..1f9c5a884487 100644
>>> --- a/drivers/gpu/drm/scheduler/gpu_scheduler_trace.h
>>> +++ b/drivers/gpu/drm/scheduler/gpu_scheduler_trace.h
>>> @@ -42,6 +42,7 @@ DECLARE_EVENT_CLASS(drm_sched_job,
>>>                    __field(uint64_t, id)
>>>                    __field(u32, job_count)
>>>                    __field(int, hw_job_count)
>>> +                 __string(dev, dev_name(sched_job->sched->dev))
>>>                    ),
>>>           TP_fast_assign(
>>> @@ -52,6 +53,7 @@ DECLARE_EVENT_CLASS(drm_sched_job,
>>>                  __entry->job_count = spsc_queue_count(&entity->job_queue);
>>>                  __entry->hw_job_count = atomic_read(&sched_job->sched->credit_count);
>>> +               __assign_str(dev);
>>>                  ),
>>>           TP_printk("entity=%p, id=%llu, fence=%p, ring=%s, job count:%u, hw job count:%d",
>>>                 __entry->entity, __entry->id,
>>> @@ -64,9 +66,13 @@ DEFINE_EVENT(drm_sched_job, drm_sched_job,
>>>           TP_ARGS(sched_job, entity)
>>>   );
>>> -DEFINE_EVENT(drm_sched_job, drm_run_job,
>>> +DEFINE_EVENT_PRINT(drm_sched_job, drm_run_job,
>>>           TP_PROTO(struct drm_sched_job *sched_job, struct drm_sched_entity *entity),
>>> -        TP_ARGS(sched_job, entity)
>>> +        TP_ARGS(sched_job, entity),
>>> +        TP_printk("dev=%s, entity=%p id=%llu, fence=%p, ring=%s, job count:%u, hw job count:%d",
>>> +              __get_str(dev), __entry->entity, __entry->id,
>>> +              __entry->fence, __get_str(name),
>>> +              __entry->job_count, __entry->hw_job_count)
>>
>> Why not do that in the TP_printk() of the drm_sched_job event class?
>>
>
> I didn't because drm_sched_job + drm_run_job usually work in pairs, so
> the ring+device information will be available anyway.
>
> But maybe you're right and it's better to add it in case the job never
> gets scheduled.
>
> For the other events it's not necessary IMO: the first event will
> always be drm_sched_job, so it's possible to figure out the device/ring
> for a given job.

Yeah, but we have that string in the trace event even when we don't 
print it.

I would just go ahead and always print it even when it is available 
somewhere else.
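
Something like this in the event class should do it (just a sketch,
untested, fields abbreviated to what's visible in the hunk above):

    DECLARE_EVENT_CLASS(drm_sched_job,
            TP_PROTO(struct drm_sched_job *sched_job, struct drm_sched_entity *entity),
            TP_ARGS(sched_job, entity),
            TP_STRUCT__entry(
                             /* existing entity/fence/name/id/count fields unchanged */
                             __string(dev, dev_name(sched_job->sched->dev))
                             ),
            TP_fast_assign(
                           /* existing assignments unchanged */
                           __assign_str(dev);
                           ),
            /* dev= printed unconditionally for every event of the class */
            TP_printk("dev=%s, entity=%p, id=%llu, fence=%p, ring=%s, job count:%u, hw job count:%d",
                      __get_str(dev), __entry->entity, __entry->id,
                      __entry->fence, __get_str(name),
                      __entry->job_count, __entry->hw_job_count)
    );

Then drm_run_job doesn't need the DEFINE_EVENT_PRINT any more and can
stay a plain DEFINE_EVENT.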

Regards,
Christian.

>
> Regards,
> Pierre-eric
>
>> The device should now be available for all trace events of that class.
>>
>> Regards,
>> Christian.
>>
>>>   );
>>>   TRACE_EVENT(drm_sched_process_job,


