<div dir="ltr">Hi,<br><div class="gmail_extra"><br></div><div class="gmail_extra">2013/10/25 Pekka Paalanen <span dir="ltr"><<a href="mailto:pq@iki.fi" target="_blank">pq@iki.fi</a>></span><br><div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div class="im"><br></div>Just curious, how do you detect interesting instructions to<br>
instrument from uninteresting instructions that do not access mmio<br>
areas?<br>
<br></blockquote><div><br></div><div>As I currently use this for data race detection in general, there is no need to separate accesses to mmio areas from accesses to other memory. The tool tracks all of them except accesses to data on the stack (when it can tell for sure, from the address of the memory area, that the data are on the stack). Such accesses are usually not interesting for data race detection in the kernel anyway.</div>
<div><br></div><div>So, yes, almost all instructions that may access memory (except some special instructions as well as MMX, SSE, AVX, ...) are instrumented. For some instructions, it is easy to determine in advance whether they access memory, so I enhanced the instruction decoder from Kprobes to provide that info. For other instructions (e.g. CMPXCHG and conditional MOVs), it is determined at runtime whether they actually access memory and whether the event should be reported.</div>
<div><br></div><div>So, currently, it does not handle mmio areas in any special way. I am just evaluating whether it could be useful to create a tool based on the same technique for these purposes.<br></div><div><br></div><div>
mmio areas can be obtained by a driver through a few kernel functions. The set of areas obtained this way at any given moment could be used to filter the accesses and decide whether to report them. So, yes, basically, it is "instrument everything, filter before reporting to user space".</div>
<div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">I guess to be sure your approach does not miss anything, we'd still<br>
need the page faulting setup as a safety net to know when or if<br>
something is missed, right? And somehow have the instrumented code<br>
circumvent it.<br></blockquote><div><br></div><div>Page faulting as a safety net... I haven't thought that through yet.</div><div><br></div><div>I suppose I'll look at the code first when I have time and try to understand at least the common ways a driver accesses mmio areas. It will then be clearer how to make sure we do not miss anything, and whether it is possible at all with the techniques KernelStrider uses.</div>
<div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<br>
We could use some comments from the real reverse-engineers. I used<br>
to be mostly a tool writer.<br></blockquote><div><br></div><div>Yes, if some experts could share their knowledge on this matter, it would be most welcome!</div><div><br></div><div>Regards,</div><div><br></div><div>Eugene</div>
<div><br></div><div>P.S. If you are interested, more info about KernelStrider can be found in my recent talk at LinuxCon Europe. The slides and the notes for them are available in the "Talks and slides" section of the project page (<a href="https://code.google.com/p/kernel-strider/">https://code.google.com/p/kernel-strider/</a>). This is mostly about data races, though.</div>
</div></div></div>