[Mesa-dev] [PATCH 2/5] gallivm: print out how long it takes to optimize shader IR.

sroland at vmware.com sroland at vmware.com
Mon May 12 19:02:38 PDT 2014


From: Roland Scheidegger <sroland at vmware.com>

Enabled with GALLIVM_DEBUG=perf (which up to now was only used to print
warnings for unoptimized code).
The output will look like this:
optimizing fs88_variant0_partial took 13541 msec
optimizing draw_llvm_shader took 21 msec
optimizing draw_llvm_shader_elts took 21 msec
optimizing fs89_variant0_partial took 39 msec
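To see it, run any GL app on llvmpipe in a debug build with the flag
set, e.g. (the exact app doesn't matter):

   GALLIVM_DEBUG=perf LIBGL_ALWAYS_SOFTWARE=1 glxgears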
There are currently problems with some shaders taking WAY too much time
to compile (with all the time spent inside LLVM's
DominatorTree::dominates() function). While this does not fix the issue,
it makes it easier to spot performance problems caused by shader
compilation, though at least for now it is only available in debug
builds (which are not always suitable for such analysis). And since this
uses wall-clock system time, it might not be all that accurate
(llvmpipe's own rasterization threads, or just other tasks, might be
running at the same time). (llvmpipe also has LP_DEBUG=counters, which
reports the total and average compile time per shader, but that counts
more than just IR optimization.)
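The timing itself is just a wall-clock delta around the pass manager
run: os_time_get() returns microseconds, and the delta is converted to
milliseconds for printing. A minimal standalone sketch of the same
pattern, using POSIX clock_gettime() as a stand-in for os_time_get()
(the wall_time_usec() helper below is made up for illustration):

#include <stdio.h>
#include <stdint.h>
#include <time.h>

/* Stand-in for os_time_get(): wall-clock time in microseconds. */
static int64_t
wall_time_usec(void)
{
   struct timespec ts;
   clock_gettime(CLOCK_MONOTONIC, &ts);
   return (int64_t)ts.tv_sec * 1000000 + ts.tv_nsec / 1000;
}

int
main(void)
{
   int64_t time_begin = wall_time_usec();

   /* ... the work being measured, e.g. running the pass manager ... */

   int64_t time_end = wall_time_usec();
   int time_msec = (int)((time_end - time_begin) / 1000);
   printf("optimizing took %d msec\n", time_msec);
   return 0;
}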
---
 src/gallium/auxiliary/gallivm/lp_bld_init.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/src/gallium/auxiliary/gallivm/lp_bld_init.c b/src/gallium/auxiliary/gallivm/lp_bld_init.c
index ba992f5..45c29f1 100644
--- a/src/gallium/auxiliary/gallivm/lp_bld_init.c
+++ b/src/gallium/auxiliary/gallivm/lp_bld_init.c
@@ -32,6 +32,7 @@
 #include "util/u_debug.h"
 #include "util/u_memory.h"
 #include "util/u_simple_list.h"
+#include "os/os_time.h"
 #include "lp_bld.h"
 #include "lp_bld_debug.h"
 #include "lp_bld_misc.h"
@@ -572,14 +573,23 @@ static void
 gallivm_optimize_function(struct gallivm_state *gallivm,
                           LLVMValueRef func)
 {
-   if (0) {
-      debug_printf("optimizing %s...\n", LLVMGetValueName(func));
+   int64_t time_begin = 0;
+
+   if (gallivm_debug & GALLIVM_DEBUG_PERF) {
+      time_begin = os_time_get();
    }
 
    assert(gallivm->passmgr);
 
    /* Apply optimizations to LLVM IR */
    LLVMRunFunctionPassManager(gallivm->passmgr, func);
+
+   if (gallivm_debug & GALLIVM_DEBUG_PERF) {
+      int64_t time_end = os_time_get();
+      int time_msec = (int)((time_end - time_begin) / 1000);
+      debug_printf("optimizing %s took %d msec\n",
+                   LLVMGetValueName(func), time_msec);
+   }
 }
 
 
-- 
1.9.1

