[Mesa-dev] Perfetto CPU/GPU tracing

Tamminen, Eero T eero.t.tamminen at intel.com
Thu Feb 18 19:26:15 UTC 2021


Hi,

(This isn't that related to Mesa any more, but maybe it's still of
interest.)

On Thu, 2021-02-18 at 16:40 +0100, Primiano Tucci wrote:

> On 18/02/2021 14:35, Tamminen, Eero T wrote:
[...]
> > It doesn't require executable code to be writable from user-space,
> > library code can remain read-only because kernel can toggle relevant
> > page writable for uprobe breakpoint setup and back.
> 
> The problem is not who rewrites the .text pages (although, yes, I agree 
> that the kernel doing this is better than userspace doing it). The 
> problem is:
> 
> 1. Losing the ability to verify the integrity of system executables: 
> you can't tell if some malware/rootkit altered them or uprobes did. 
> Effectively you lose the ability to verify the full chain of bootloader 
> -> system image -> file integrity.

Why would you lose it?

Integrity checks will succeed when no tracepoints are enabled, and
tracepoints should be enabled only when you start tracing, so you know
what is causing the integrity check failures (especially when they
start passing again once you disable the tracepoints)...


> 2. In general, a mechanism that allows dynamic rewriting of code is a 
> wide attack surface, not welcome on production devices, and for the 
> same reason very unlikely to fly for non-dev images IMHO. Many system 
> processes contain too sensitive information, like the cookie jar, 
> OAuth2 tokens etc.

Isn't there some kind of dev mode which would be required to enable
things that are normally disallowed?

(like the kernel modifying RO-mapped user-space process memory pages)
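
For reference, below is a minimal sketch of the kind of dynamic
instrumentation being discussed, using the BCC Python bindings (this
assumes BCC is installed and root privileges; probing libc's malloc is
just an illustrative choice).  The kernel installs the breakpoint
itself, so the library file on disk stays untouched and user-space
never needs write access to the text pages:

#!/usr/bin/env python3
from bcc import BPF

prog = r"""
int trace_malloc(struct pt_regs *ctx) {
    bpf_trace_printk("malloc() called\n");
    return 0;
}
"""

b = BPF(text=prog)
# The kernel patches a breakpoint into the mapped libc text for this
# probe; the on-disk library stays unmodified.
b.attach_uprobe(name="c", sym="malloc", fn_name="trace_malloc")
b.trace_print()  # stream probe hits from the kernel trace pipe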


> 
[...]
> > Yes, if you need more context, or handle really frequent events,
> > static breakpoints are a better choice.
> > 
> > 
> > In case of more frequent events, on Linux one might consider using
> > some BPF program to process dynamic tracepoint data so that a much
> > smaller amount needs to be transferred to user-space.  But I'm not
> > sure whether support for attaching BPF to tracepoints is in the
> > upstream Linux kernel yet.
> 
> eBPF, which you can use in recent kernels with tracepoints, solves a 
> different problem. It solves e.g. (1) dynamic filtering or (2) 
> computing aggregations from hi-freq events. It doesn't solve problems 
> like "I want to see all scheduling events and all frame-related 
> userspace instrumentation points, but given that sched events are so 
> hi-traffic I want to put them in a separate buffer, so they don't 
> clobber all the rest". Turning scheduling events into a histogram 
> (something you can do with eBPF+tracepoints) doesn't really solve cases 
> where you want to follow the full scheduling block/wake chain while 
> some userspace event is taking unexpectedly long.

You could e.g. filter out all sched events except the ones for the
process you're interested in.  That should already provide a huge
reduction in the amount of data, for use-cases where scheduling of the
rest of the processes is of less interest (a sketch of such in-kernel
filtering follows below).
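
Something along these lines, again using the BCC Python bindings (the
target PID is hypothetical, and this assumes root privileges): attach
to the sched:sched_switch tracepoint and drop, already in the kernel,
every context switch that doesn't involve the process of interest:

#!/usr/bin/env python3
from bcc import BPF

TARGET_PID = 1234  # hypothetical: the process you're investigating

prog = r"""
TRACEPOINT_PROBE(sched, sched_switch) {
    /* Drop, in-kernel, switches that don't involve the target. */
    if (args->prev_pid != FILTER_PID && args->next_pid != FILTER_PID)
        return 0;
    bpf_trace_printk("%d -> %d\n", args->prev_pid, args->next_pid);
    return 0;
}
"""

b = BPF(text=prog.replace("FILTER_PID", str(TARGET_PID)))
b.trace_print()  # only the surviving events reach user-space

Only the matching events get copied out of the kernel, which is where
the data reduction comes from.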

However, I think high-frequency kernel tracing is a different use-case
from user-space tracing, one which requires its own tooling [1] (and
just a few user-space trace points to provide context for the traced
kernel activity; see the sketch below).
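
One cheap way to provide that context on Linux is the ftrace marker
file; a minimal sketch (assuming tracefs is mounted at
/sys/kernel/tracing and the process may write there; the message is
made up):

def trace_marker(msg: str) -> None:
    # Each write becomes one timestamped event in the ftrace buffer,
    # ordered inline with the kernel's own tracepoint events.
    # (On older kernels the path is /sys/kernel/debug/tracing/.)
    with open("/sys/kernel/tracing/trace_marker", "w") as f:
        f.write(msg)

trace_marker("frame 42: start rendering")  # hypothetical app event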


	- Eero

[1] In a corporate setting I would expect this kind of latency
investigation to actually be HW assisted, as otherwise the tracing
itself disturbs the system too much.  Ultimately it could use
instruction branch tracing to catch *everything*, as both ARM and x86
have HW support for that.

(Instruction branch tracing doesn't include context, but that can be
injected separately into the data stream.  Because it catches
everything, one can also infer some of the context from the trace
itself.  I don't think there are any good Open Source post-processing /
visualization tools for such data though; a capture sketch using perf
follows.)
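
To give an idea of the capture side, a minimal sketch on x86 (this
assumes a CPU and perf build with Intel PT support, and a hypothetical
target PID; ARM would use its CoreSight ETM equivalent):

import subprocess

pid = "1234"  # hypothetical PID of the process under investigation

# Record user-space branch activity for ~5 seconds into perf.data.
subprocess.run(
    ["perf", "record", "-e", "intel_pt//u", "-p", pid, "--", "sleep", "5"],
    check=True,
)

# Synthesize one sample per branch from the PT stream (can be huge).
subprocess.run(["perf", "script", "--itrace=b"], check=True)

Decoding is the hard part: perf script gives raw per-branch samples,
but turning those into something visual is where tooling is lacking.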


